CN116610935A - Mechanical fault detection method based on engine vibration signal multi-mode analysis - Google Patents


Info

Publication number
CN116610935A
Authority
CN
China
Prior art keywords: feature, dimensional, module, characteristic, network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310554203.4A
Other languages
Chinese (zh)
Inventor
刘翔鹏
李文杰
袁非牛
张相芬
王心怡
安康
管西强
张会
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Normal University
Original Assignee
Shanghai Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Normal University filed Critical Shanghai Normal University
Priority to CN202310554203.4A priority Critical patent/CN116610935A/en
Publication of CN116610935A publication Critical patent/CN116610935A/en
Legal status: Pending

Classifications

    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/23 Clustering techniques
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/82 Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06F2218/08 Feature extraction (aspects of pattern recognition specially adapted for signal processing)


Abstract

The invention relates to a mechanical fault detection method based on multi-modal analysis of engine vibration signals. For an extracted diesel engine vibration signal, a multi-modal feature extraction network first extracts image features related to abnormal signals from one-dimensional amplitude data and two-dimensional image data. A mixed channel feature fusion detection network then splits the feature map into two groups and applies a spatial attention mechanism and a channel attention mechanism, respectively, to weight the feature map in the spatial domain and the channel domain; the weighted feature maps are regrouped and merged to obtain multi-dimensionally weighted feature maps. Finally, a multi-scale detector detects the three feature maps simultaneously to judge whether the signal in the period has an abnormal state. Compared with the prior art, the invention has the advantages of high accuracy and good noise resistance.

Description

Mechanical fault detection method based on engine vibration signal multi-mode analysis
Technical Field
The invention relates to the technical field of machine learning, in particular to a mechanical fault detection method based on engine vibration signal multi-mode analysis.
Background
The fault detection of the diesel engine can prolong the service life of the diesel engine, enhance the use safety and have important economic value and social benefit.
However, the existing fault detection methods cannot cope with the strong-noise environments of practical applications and lack universality under multiple working conditions. Meanwhile, existing fault diagnosis models adopt a single-data detection scheme, for example using only one-dimensional data as input for anomaly analysis and detection. Such schemes lack consideration of the inherent correlation and distribution gap of the same data source under different forms, which limits the exploitation of multi-source data.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a mechanical fault detection method based on multi-mode analysis of engine vibration signals.
The aim of the invention can be achieved by the following technical scheme:
a mechanical fault detection method based on engine vibration signal multi-mode analysis comprises the following steps:
collecting vibration signals of the engine;
inputting the vibration signals into a multi-modal feature extraction network to obtain a plurality of pieces of feature information;
selecting p pieces of feature information and inputting them into a mixed channel feature fusion detection network for secondary feature processing and detection, and outputting a detection result.
The multi-modal feature extraction network comprises a shaping network, a convolution module, a full connection module and a multi-modal Transformer module;
the shaping network is used for converting an input engine vibration signal into a two-dimensional picture, and the convolution module is used for extracting features based on the two-dimensional picture to obtain a two-dimensional feature picture;
the full connection module is used for extracting one-dimensional feature vectors from the input engine vibration signals; the multi-modal Transformer module is used for integrating the one-dimensional feature vector and the two-dimensional feature picture;
the mixed channel feature fusion detection network comprises a feature aggregation module, a feature mixing module and a multi-scale detection module;
the feature aggregation module is used for aggregating the p pieces of feature information to obtain an aggregate feature map FM;
the feature mixing module is used for carrying out feature mixing on the aggregate feature map FM to obtain a plurality of feature maps with different sizes;
the multi-scale detection module is used for detecting the feature images after the mixing, and judging whether the diesel engine cylinder has abnormal vibration signals in the current period by detecting whether the feature images have abnormal and irregular texture areas.
Further, the multi-modal feature extraction network comprises q layers, each layer comprising a convolution module, a full connection module and a multi-modal Transformer module.
Further, the multi-modal feature extraction network performs feature extraction on the vibration signal to obtain a plurality of pieces of feature information through the following steps:
S1, inputting the vibration signal into the shaping network and the full connection module respectively; the shaping network outputs a two-dimensional picture, which is input into the convolution module for feature extraction;
S2, the full connection module outputs a one-dimensional feature vector;
S3, the convolution module outputs a two-dimensional feature picture;
S4, the one-dimensional feature vector and the two-dimensional feature picture are input into the multi-modal Transformer module to obtain integrated feature information;
S5, the integrated feature information and the two-dimensional feature picture are input into the convolution module of the next layer, and the one-dimensional feature vector is input into the full connection module of the next layer;
S6, steps S2-S5 are repeated until the q-th layer of the multi-modal feature extraction network is reached, and layer-by-layer feature extraction yields a plurality of pieces of feature information.
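The layer-by-layer loop S1-S6 can be sketched at the level of tensor shapes. All dimensions, stand-in modules and function names below are illustrative assumptions, not values from the patent.

```python
# Hypothetical shape-level sketch of the q-layer extraction loop (S1-S6).

def reshape_net(signal_len, width):
    """S1: fold a 1-D signal of length signal_len into an (H, W) picture."""
    return (signal_len // width, width)

def conv_module(h, w, c):
    """S3: toy stand-in for ConvNet - halve spatial size, double channels."""
    return (h // 2, w // 2, c * 2)

def fc_module(dim):
    """S2: toy stand-in for FCNet - keep the 1-D feature dimension."""
    return dim

def transformer_module(feat_2d, feat_1d):
    """S4: the integrated feature keeps the 2-D feature-map shape."""
    return feat_2d

def extract(signal_len=4096, width=64, q=3):
    h, w = reshape_net(signal_len, width)
    feat_2d, feat_1d = (h, w, 1), signal_len
    features = []
    for _ in range(q):                                # S6: repeat S2-S5 per layer
        feat_1d = fc_module(feat_1d)                  # S2
        feat_2d = conv_module(*feat_2d)               # S3
        fused = transformer_module(feat_2d, feat_1d)  # S4
        features.append(fused)                        # S5: feed the next layer
    return features
```

With the toy sizes above, a 4096-sample signal becomes a 64x64 picture whose feature maps shrink spatially and widen in channels at each of the q layers.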
Further, the multi-modal Transformer module comprises two multi-head attention networks, which respectively model the long-range relational interaction between the one-dimensional feature vector and the two-dimensional feature picture.
Further, the multi-modal Transformer module integrates the one-dimensional feature vector and the two-dimensional feature picture through the following steps:
dividing the one-dimensional feature vector into n token sub-vectors;
equally dividing the two-dimensional feature picture into n feature blocks and flattening each block into a token sub-vector;
in the first multi-head attention network, the tokens corresponding to the two-dimensional feature picture are projected into the query matrix Q, and the tokens corresponding to the one-dimensional feature vector are projected into the key matrix K and the value matrix V; a matching degree is computed between the query matrix Q carrying the picture feature information and the key matrix K carrying the one-dimensional amplitude feature information, and the obtained matching degrees are assigned as weights over the corresponding value matrix V, mapping the image features onto the amplitude features;
in the second multi-head attention network, the roles are exchanged: the tokens corresponding to the one-dimensional feature vector are projected into the query matrix Q, and the tokens corresponding to the two-dimensional feature picture are projected into the key matrix K and the value matrix V, completing the operation of mapping the amplitude features onto the image features;
wherein, in the two multi-head attention networks, the two corresponding groups of Q, K, V vectors are computed from six projection matrices (a query, key and value projection for each network);
and the output one-dimensional feature vectors of the two multi-head attention networks are merged, the merged features are activated by a full connection layer and a ReLU activation function, and finally the obtained 1×C vector is reshaped to obtain a feature map of H×W×C dimensions.
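The matching-degree calculation described above is scaled dot-product attention. A minimal single-head, pure-Python sketch follows; the actual module uses multiple heads and learned projection matrices, so this is illustrative only.

```python
import math

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    m = max(row)
    e = [math.exp(x - m) for x in row]
    s = sum(e)
    return [x / s for x in e]

def cross_attention(queries, keys, values):
    """softmax(Q K^T / sqrt(d)) V: queries from one modality, keys/values from the other."""
    d = len(queries[0])
    scores = matmul(queries, list(map(list, zip(*keys))))      # Q K^T
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, values)                             # weighted value vectors
```

When a query closely matches one key, the output is dominated by that key's value vector, which is exactly the "assign the matching degree onto the value matrix" behaviour described above.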
Further, the feature aggregation module aggregates the feature maps FM₁, FM₂ and FM₃ output by the 3 convolution modules at the bottom layer of the multi-modal feature extraction network, comprising the following steps:
the feature information of each feature map is enhanced;
FM₃ is deconvolved to double the feature-map size and compress the number of channels to one half of the original, and its features are assigned to FM₂ through add fusion to obtain FM′₂;
FM′₂ is upsampled and channel-compressed, and fused with the features of the FM₁ feature map to obtain FM′₁;
a concat layer merges the three feature maps FM′₁, FM′₂ and FM₃; in the merge operation FM′₁ is downsampled and FM₃ is upsampled, and the aggregation yields the feature map FM.
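The aggregation path can be checked at the level of (H, W, C) shape tuples. The concrete sizes below are illustrative assumptions, not values from the patent.

```python
def upsample2x_halve(shape):
    """Deconvolution stand-in: double H and W, compress channels to one half."""
    h, w, c = shape
    return (h * 2, w * 2, c // 2)

def aggregate(fm1, fm2, fm3):
    """Return the shape of the aggregated map FM (same spatial size as FM'2)."""
    fm2p = upsample2x_halve(fm3)
    assert fm2p == fm2, "add fusion: deconv(FM3) must match FM2's shape"
    fm1p = upsample2x_halve(fm2p)
    assert fm1p == fm1, "add fusion: upsampled FM'2 must match FM1's shape"
    # concat at FM'2 resolution: FM'1 downsampled, FM3 upsampled, channels stacked
    h, w, _ = fm2
    return (h, w, fm1[2] + fm2[2] + fm3[2])
```

For example, input shapes (32, 32, 64), (16, 16, 128) and (8, 8, 256) aggregate to a single map at the middle resolution with all three channel counts concatenated.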
Further, the feature mixing module performs feature mixing on the aggregated feature map FM through the following steps:
a 1×1 convolution layer compresses the number of FM feature channels to match FM′₂, and a grouped convolution then divides FM into two feature maps, FM_1 and FM_2, with the same size as FM and one half of its channels;
a feature extraction operation is performed on the grouped feature maps FM_1 and FM_2 to obtain FM_1′ and FM_2′;
identical parallel convolution modules extract feature maps of different sizes and channel numbers from FM_1′ and FM_2′, forming pairwise paired groups;
a fusion module combining a concat layer and a 1×1 convolution layer merges and mixes the feature maps in each paired group, obtaining three feature maps of different sizes: FM1, FM2 and FM3.
Further, spatial attention calculation is applied to the feature map FM_1 to enhance the morphological feature information corresponding to abnormal signals, yielding FM_1′; channel attention calculation is applied to FM_2 to enhance the semantic feature weights of abnormal signals, yielding FM_2′;
the paired-group feature maps extracted from FM_1′ carry spatial-attention-enhanced feature values, and those extracted from FM_2′ carry channel-attention-enhanced feature values.
Further, the multi-scale detection module detects the feature maps output by the feature mixing module; the classification module of the detector is tuned with a cross-entropy classification loss, while the position loss is computed based on the CIoU position evaluation relationship.
Compared with the prior art, the invention has the following beneficial effects:
the invention takes the multi-mode of the vibration signal of the generator into consideration, extracts a one-dimensional amplitude vector and a two-dimensional feature vector in the vibration signal of the engine through a multi-mode feature extraction network, carries out feature secondary processing and detection on the extracted features by a mixed channel feature fusion detection network, comprises a space and channel two-dimensional attention calculation mechanism in a mixed channel feature fusion module, carries out weighting calculation on a space domain and a channel domain on a feature map so as to realize suppression of vibration noise, ensures that the network can resist the influence of environmental noise and operation condition change on a final detection result, is suitable for a strong noise environment in practical application, and has universality under the condition of multiple working conditions;
the accuracy and the noise resistance of the invention are obviously superior to those of the prior art, and on four data sets constructed under different working conditions, even if the signal to noise ratio is-4 dB, the accuracy of the invention reaches at least 99.008 percent, which is far higher than that of other methods.
Drawings
Fig. 1 is a schematic structural diagram of a multi-modal feature extraction network according to the present invention.
Fig. 2 is a two-dimensional graph of waveform signals under normal and abnormal operation conditions of a cylinder according to an embodiment of the present invention, wherein (2a) represents the normal operation condition and (2b) the abnormal operation condition.
FIG. 3 is a schematic diagram of a multi-modal Transformer module according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a mixed channel feature fusion detection network according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of an MITDCNN network structure according to an embodiment of the present invention.
FIG. 6 is a representative time-domain signal of a diesel engine single-cylinder misfire at an operating speed of 1800 rpm in an embodiment of the present invention.
FIG. 7 is a schematic representation of the diesel engine single-cylinder misfire time-domain signal converted into a two-dimensional image at an operating speed of 1800 rpm in an embodiment of the present invention.
Fig. 8 is a loss iteration curve of the network 3 (MITDCNN) according to an embodiment of the present invention.
Fig. 9 is a graph illustrating an iteration of the network 3 (MITDCNN) AP in accordance with an embodiment of the present invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. The present embodiment is implemented on the premise of the technical scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following examples.
In order to solve the problems in the prior art, the invention provides a convolutional neural network based on multi-modal Transformer feature extraction (MITDCNN), used for diesel engine misfire diagnosis under strong environmental noise and different working conditions.
In this embodiment, vibration signals of the engine cylinder head at different speeds are collected experimentally. One-dimensional amplitude vector features and two-dimensional image features are extracted from the vibration signals and input into the multi-modal feature extraction network, and the mixed channel feature fusion detection network performs secondary feature processing and detection on the extracted features; a spatial and channel two-dimensional attention calculation mechanism inside the mixed channel feature fusion module suppresses vibration noise, so that the network can resist the influence of environmental noise and operating-condition changes on the final detection result. The embodiment verifies the effectiveness of the proposed method on experimentally collected data sets and through comparison with existing representative algorithms. The results show that the accuracy and noise resistance of MITDCNN are significantly superior to existing algorithms: on four data sets constructed under different working conditions, the accuracy reaches at least 99.008% even at a signal-to-noise ratio of -4 dB, far higher than other methods.
In order to strengthen the sensitivity of the network to abnormal signals, the invention designs a multi-mode feature extraction network for detecting the abnormal signals of the diesel engine. Compared with a single-mode feature extraction network, the network designed by the invention simultaneously performs feature extraction on one-dimensional and two-dimensional data, improves the richness of the extracted feature information, and has the overall network structure shown in figure 1.
As shown in fig. 1, the data extracted from the diesel generator cylinder is a waveform signal. First, the shaping network (ReshapeNet) converts the waveform signal from one-dimensional data into a two-dimensional picture according to time and amplitude; second, the corresponding amplitudes are extracted from the waveform signal in time units to form a one-dimensional vector, yielding two different types of data for the same time period.
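The ReshapeNet-style fold can be illustrated in a few lines; the row width is a hypothetical parameter, and the real network may pad or normalize rather than drop tail samples.

```python
def waveform_to_image(signal, width):
    """Fold a 1-D waveform into rows of `width` samples, producing a 2-D
    time-by-amplitude picture; tail samples that do not fill a row are dropped."""
    rows = len(signal) // width
    return [signal[r * width:(r + 1) * width] for r in range(rows)]
```

Each row then covers one span of `width` consecutive samples, so periodic structure in the signal appears as repeating texture in the image.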
The waveform signal output by a cylinder under normal operation has a certain regularity, so the converted two-dimensional image shows a certain texture rule; when the working state of the cylinder is abnormal, irregular areas appear in the converted image. The two-dimensional images obtained under normal and abnormal operation are shown in fig. 2.
Similarly, for the one-dimensional vector data, irregular transformations of its values occur when a cylinder abnormality arises. As shown in fig. 1, the network extracts and aggregates features along the x-axis and y-axis directions. Along the x-axis, the waveform signal is shaped into a two-dimensional image; a convolution module (ConvNet) extracts features to obtain a two-dimensional feature picture, which is input into the Multi-Modal Transformer module. The output features of the multi-modal Transformer module are aggregated with the output of the convolution module, mapping the multi-modal features into the feature map; 6 such structures are stacked and combined in the backbone network to form the multi-modal feature extraction network. Along the y-axis, on the other hand, a full connection module (FCNet) extracts features from the one-dimensional vector to obtain a one-dimensional feature vector capturing the relationship between time sequence and amplitude; the obtained feature vector is input into the multi-modal Transformer module, where it complements the feature information of the two-dimensional image to learn the feature types of abnormal waveforms.
In summary, the multi-modal feature extraction network takes the extraction of two-dimensional image feature information as its main part and adds the extraction and fusion of one-dimensional time-sequence features as a supplement, amplifying the feature richness of the network. The core unit of the network is the multi-modal Transformer module, responsible for constructing the association between the one-dimensional and two-dimensional input features.
Compared with a recurrent neural network, the multi-modal Transformer module provided by the invention has a better ability to resolve long-range dependencies in one-dimensional time-sequence features and can acquire feature information over longer time sequences. When used to extract image features, the multi-modal Transformer module can also extract the associations between the features of each region of the picture, i.e. global features. In the invention, to fuse the one-dimensional and two-dimensional input information, a multi-modal Transformer module is designed that performs multi-head attention calculation on the two types of input and fuses the extracted feature information; its structure is shown in fig. 3.
As shown in fig. 3, the multi-modal Transformer module contains two multi-head attention networks (Multi-Head Attention), which respectively model the long-range relational interaction between the one-dimensional feature vector and the two-dimensional feature picture.
The two types of data input into the multi-modal Transformer module are first preprocessed as follows: after the one-dimensional amplitude vector is extracted into a one-dimensional feature vector by the full connection layer, the one-dimensional feature vector is divided into n token sub-vectors; after the two-dimensional feature picture is extracted by the convolution module, the feature picture is equally divided into n feature blocks, which are then flattened into n one-dimensional token vectors.
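The two tokenization steps can be sketched as follows; the even divisibility of the dimensions is an assumption for illustration.

```python
def vector_to_tokens(vec, n):
    """Split a 1-D feature vector into n token sub-vectors of equal length."""
    step = len(vec) // n
    return [vec[i * step:(i + 1) * step] for i in range(n)]

def image_to_tokens(img, blocks_per_side):
    """Cut an HxW feature picture into blocks_per_side^2 patches, flattening each
    patch row-by-row into a one-dimensional token vector."""
    h, w = len(img), len(img[0])
    bh, bw = h // blocks_per_side, w // blocks_per_side
    tokens = []
    for bi in range(blocks_per_side):
        for bj in range(blocks_per_side):
            tokens.append([img[bi * bh + i][bj * bw + j]
                           for i in range(bh) for j in range(bw)])
    return tokens
```

Both modalities thus arrive at the attention networks as lists of token vectors, which is what makes the cross-modal Q/K/V projections possible.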
In the two Transformer multi-head attention networks, the two corresponding groups of Q, K, V vectors are computed from six projection matrices (a query, key and value projection for each network).
First, in the first Transformer module (Transformer-1 Multi-Head Attention), the tokens corresponding to the two-dimensional feature picture of the converted image are projected into the query matrix Q, and the tokens corresponding to the one-dimensional amplitude vector are projected into the key matrix K and the value matrix V; a matching degree is computed between the query matrix Q carrying the picture feature information and the key matrix K carrying the one-dimensional amplitude feature information, and the obtained matching degrees are assigned as weights over the corresponding value matrix V, completing the operation of mapping the image features onto the amplitude features;
similarly, in the second Transformer module (Transformer-2 Multi-Head Attention), the tokens corresponding to the one-dimensional amplitude features are projected into the query matrix Q, and the tokens corresponding to the image features are projected into the key matrix K and the value matrix V, mapping the amplitude features into the feature map.
The output one-dimensional feature vectors of the two multi-head attention networks are then merged, and a full connection layer (FCLayer) with a ReLU activation function performs activation calculation on the merged features to enhance their nonlinearity. Finally, the obtained 1×C vector is reshaped to obtain a feature map of H×W×C dimensions.
After the multi-modal feature extraction is completed, the invention designs a mixed channel feature fusion network to combine network features of different layers and improve feature comprehensiveness. Because abnormal signals occur in different time periods, the abnormal image blocks displayed in the converted images differ in size, so a multi-scale detection scheme is adopted to detect the abnormal-signal images: the network is designed to detect feature maps of different sizes, and its structure is shown in fig. 4.
The whole mixed channel feature fusion network can be divided into: the system comprises a feature aggregation module, a feature mixing module and a multi-scale detection module, wherein the workflow and the function of each part are as follows:
firstly, feature graphs output by three convolution modules at the bottommost layer of a multi-mode feature extraction network are aggregated in a feature aggregation module, as shown in fig. 4, three feature graphs with different sizes are input into the feature aggregation module and are respectively named as FM 1 、FM 2 FM 3 (Feature Map, FM). From the characteristic dimension analysis, FM 1 To FM 3 In a semantic feature progressive enhancement state, so that the feature information of each feature map is firstly promoted (upsampled) and FM is carried out during the aggregation operation 3 The deconvolution is used for expanding the size of the characteristic diagram by two times and compressing the number of channels to be one half of the original number, and the add fusion mode is used for assigning the characteristics to FM 2 On the above, FM 'is obtained' 2 The method comprises the steps of carrying out a first treatment on the surface of the FM after fusion of the same pair 2 Feature map up-sampling and channel compression, and FM 1 The features of the feature map are fused to obtain FM' 1 . This is followed by the use of a concat layer pair FM' 1 、FM′ 2 FM 3 Combining the three feature maps, and performing FM 'combination operation' 1 Downsampling (downsampling), FM 3 Upsampling, and polymerizing to obtain a feature map FM with the same size as FM' 2 The feature map contains three levels of feature information, and therefore is superior to FM in feature information richness 1 Three feature maps.
Secondly, the aggregated feature map FM is feature-mixed by the feature mixing module. A 1×1 convolution layer compresses the number of FM feature channels to match FM′₂, and a grouped convolution then divides FM into two feature maps, FM_1 and FM_2, with the same size as FM and one half of its channels. A feature extraction operation is performed on the grouped feature maps to strengthen the weight of the effective features in the feature maps. In this enhancement operation, spatial attention calculation (Spatial Attention model) is applied to FM_1 to enhance the morphological feature information corresponding to abnormal signals, yielding FM_1′; channel attention calculation (Channel Attention model) is applied to FM_2 to enhance the semantic feature weights of abnormal signals, yielding FM_2′. Identical parallel convolution modules then extract feature maps of different sizes and channel numbers from FM_1′ and FM_2′. Taking FM_1′ as an example: a 3×3 convolution with stride 2 yields FM_1′₁, with the same size as FM₁ and half its channels; a 3×3 convolution with stride 1 yields FM_1′₂, with the same size as FM₂ and half its channels; finally a 3×3 deconvolution yields FM_1′₃, with the same size as FM₃ and half its channels. Similarly, the FM_2′ feature map is processed by a parallel convolution module to obtain FM_2′₁, FM_2′₂ and FM_2′₃.
The feature-map pairs with matching parameters, FM_1′₁ and FM_2′₁, FM_1′₂ and FM_2′₂, FM_1′₃ and FM_2′₃, are called paired groups. Each paired group contains one spatial-attention-enhanced feature map and one channel-attention-enhanced feature map; a fusion module combining a concat layer and a 1×1 convolution layer merges and mixes the two feature maps in each paired group, obtaining three feature maps of different sizes: FM1, FM2 and FM3.
Finally, the multi-scale detection module detects the three mixed feature maps and judges whether the diesel engine cylinder has abnormal vibration signals in the current period by detecting whether abnormal, irregular texture areas exist in the feature maps. The classification loss and regression loss used by the detector are as follows. The detection task of the invention only judges whether the converted image area contains an abnormal signal, i.e. a binary classification, so the classification module of the detector is tuned with a cross-entropy classification loss, with the loss function:
Loss=-y*log(p)-(1-y)*log(1-p)
In the position loss calculation, the image block areas generated by the abnormal signals are regular rectangles without complex boundary information, so the position loss is computed from the CIoU position evaluation relationship, and the loss formula can be expressed as:
Loss=1-CIoU
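Both losses can be written out directly. The sketch below is an assumed reference implementation: the cross-entropy formula follows the equation above, and the CIoU term follows the standard definition (IoU minus a normalized center-distance penalty minus an aspect-ratio consistency term); the patent itself only names CIoU without restating its formula.

```python
import math

def bce_loss(y, p, eps=1e-12):
    """Cross-entropy classification loss: Loss = -y*log(p) - (1-y)*log(1-p)."""
    return -y * math.log(p + eps) - (1 - y) * math.log(1 - p + eps)

def ciou(box_a, box_b):
    """CIoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared distance between box centers
    rho2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v + 1e-12)
    return iou - rho2 / c2 - alpha * v

def ciou_loss(box_a, box_b):
    """Position loss: Loss = 1 - CIoU."""
    return 1.0 - ciou(box_a, box_b)
```

For identical boxes the CIoU loss is zero, and it grows as the predicted box drifts from the ground truth in overlap, center position, or aspect ratio.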
In summary, to detect diesel engine misfire signals more accurately, the invention constructs the MITDCNN network from a multi-mode feature extraction network and a mixed channel feature fusion detection network; the overall network structure is shown in fig. 5. The network works as follows: for the extracted diesel engine vibration signal, the multi-mode feature extraction network first extracts image features related to abnormal signals from the one-dimensional amplitude data and the two-dimensional image data. The mixed channel feature fusion detection network then splits the feature maps into two groups and applies a spatial attention mechanism and a channel attention mechanism to weight the spatial domain and the channel domain of the feature maps, respectively; the weighted feature maps are regrouped and combined to obtain multi-dimensionally weighted feature maps. Finally, a multi-scale detector detects the three feature maps simultaneously and judges whether the signal in this period is abnormal. Details of the constructed network are given in table 1 below:
Table 1 MITDCNN network layer structure composition
Based on the above, this embodiment studies misfire conditions of a diesel engine at three operating speeds: 1300rpm, 1800rpm and 2200rpm, simulating low-speed, medium-speed and high-speed working conditions, respectively. Since misfire in three or more cylinders causes severe engine vibration that an operator can observe with the naked eye without fault detection, this embodiment focuses on single-cylinder and double-cylinder (mixed cylinder) misfires. As shown in Table 2, single-cylinder misfire detection was performed at 1300rpm, 1800rpm and 2200rpm, and double-cylinder misfire detection was performed at 1800rpm. In addition to the misfire faults, each group includes the normal operating condition as a comparative reference. Common engine misfire failures are essentially covered by the misfire types listed in Table 2, and because the fault-related data were collected comprehensively during the experiments, the data set employed in this embodiment is general.
Table 2 Misfire faults of the diesel engine at different operating speeds
In this embodiment, vibration signals are collected at a sampling frequency of 25.6kHz; the sampling time for each misfire type is 41s and covers at least 900 working cycles, giving 1,049,600 vibration sequence points in total. A typical time-domain signal of a single-cylinder misfire at different operating speeds is shown in FIG. 6. Because the differences between the time-domain signals of different misfire types under the same working condition are very small, it is difficult to diagnose the specific misfire type directly from the time-domain signal. It is therefore necessary to analyze the "texture" of the signal by means of computer vision to determine whether it is abnormal. A sample obtained by converting the vibration signal into a two-dimensional image is shown in fig. 7.
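The signal-to-image conversion can be sketched as below. The segment length (4096 points, giving a 64x64 image) and the min-max normalization to 8-bit gray levels are assumptions for illustration; the patent does not state the exact segment size or normalization used.

```python
import numpy as np

def signal_to_image(segment, side=64):
    """Reshape a 1-D vibration segment into a square grayscale 'texture' image."""
    seg = np.asarray(segment[: side * side], dtype=np.float64)
    lo, hi = seg.min(), seg.max()
    gray = (seg - lo) / (hi - lo + 1e-12) * 255.0   # min-max normalize to [0, 255]
    return gray.reshape(side, side).astype(np.uint8)

rng = np.random.default_rng(0)
vibration = rng.standard_normal(4096)               # stand-in vibration segment
img = signal_to_image(vibration)
print(img.shape, img.dtype)                         # (64, 64) uint8
```

Abnormal combustion then shows up as irregular regions in this gray-point texture, which is what the downstream detector looks for.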
The data set partitioning results under the four conditions are shown in table 3. Data sets A, B and C correspond to single-cylinder misfire conditions at 1300rpm, 1800rpm and 2200rpm, respectively, and data set D corresponds to the mixed-cylinder misfire condition at 1800rpm. Since 1,049,600 vibration sequence points are collected for each misfire fault at each speed, 511 samples are obtained per label; each data set has the five labels I, II, III, IV and V, giving 2555 samples per data set. Finally, the training set, test set and validation set account for 80%, 10% and 10% of each data set, respectively.
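The per-dataset split arithmetic can be reproduced with a simple shuffled index split; the 80/10/10 rounding below (2044/255/256 samples) is one plausible reading of the stated percentages, since 2555 is not evenly divisible.

```python
import random

def split_indices(n_samples, seed=0):
    """Shuffle sample indices and split them 80/10/10 into train/test/validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_train = int(n_samples * 0.8)
    n_test = int(n_samples * 0.1)
    train = idx[:n_train]
    test = idx[n_train:n_train + n_test]
    val = idx[n_train + n_test:]
    return train, test, val

# 5 labels x 511 samples = 2555 samples per data set
train, test, val = split_indices(5 * 511)
print(len(train), len(test), len(val))   # 2044 255 256
```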
TABLE 3 data set partitioning under different operating conditions
Next, the environment used in this embodiment is shown in Table 4.
Table 4 experimental environment
In this embodiment, the hardware environment uses an Intel i3-12100 as the central processor and an Nvidia RTX 3080 as the graphics processor, which has 8704 CUDA cores, 184 Tensor cores and 10GB of video memory. In terms of software, PyTorch 1.11.0 is adopted as the deep learning framework API, the CUDA computing platform version is 11.3, and the cuDNN acceleration library version is 8.2.1.
The following evaluation indexes are adopted in this embodiment to judge the performance of the network model:
(1) Precision: evaluates the accuracy of the model in judging whether a signal is abnormal. The calculation formula is: Precision = TP/(TP + FP)
(2) Recall: evaluates the sensitivity of the model to abnormal signals. The calculation formula is: Recall = TP/(TP + FN)
(3) F1-Score: the harmonic mean of the Precision and Recall indexes. The calculation formula is: F1-Score = 2 × Precision × Recall/(Precision + Recall)
(4) Average Precision (AP): evaluates the average detection precision of the model by computing the area under the Precision-Recall curve, reflecting the comprehensive performance of the model. The calculation formula is: AP = ∫₀¹ Precision(Recall) d(Recall)
In the above formula, TP, FP, FN are elements in the confusion matrix, and each element represents a meaning as follows:
TP: detecting the number of correct targets, namely correctly detecting the number of abnormal signal image areas;
FP: detecting the number of false targets, namely judging the image area of the normal signal as the number of abnormal signals;
FN: the number of missed detection targets, i.e., the number of image areas in which an abnormal signal is not detected.
In the ablation experiment, the MITDCNN network designed by the invention is first decomposed into the following three networks:
network 1: only a convolution module is adopted in the feature extraction network to extract the features of the two-dimensional image, a mixed channel feature fusion network is not used, and only a detection network is matched;
network 2: the multi-mode feature extraction network is adopted in the feature extraction network to extract one-dimensional and two-dimensional features, a mixed channel feature fusion network is not used, and only a detection network is matched;
network 3: and adopting a multi-mode feature extraction network and a mixed channel feature fusion detection network, namely a MITDCNN network.
The three networks were tested on the engine vibration signal data sets at the different working speeds using the above evaluation indexes; the comparative test results are shown in Tables 5-8.
Table 5 comparison of the single cylinder low speed operating mode dataset at 1300rpm
The comparative testing of the different networks on data set A is shown in Table 5. The Precision, Recall, F1-Score and AP indexes of network 1 are 83.128%, 84.035%, 83.579% and 88.268%, respectively, all in the range [80%, 90%]. The Precision, Recall, F1-Score and AP indexes of network 2 all exceed 90%; compared with network 1, they improve by 9.229%, 11.017%, 10.106% and 6.589%, respectively. In network 3, the Precision, Recall, F1-Score and AP indexes are 99.738%, 99.888%, 99.812% and 99.926%, respectively, very close to 100%, with the standard deviation controlled within 1%.
Table 6 Comparison of the single cylinder medium speed operating mode dataset at 1800rpm
The comparative testing of the different networks on data set B is shown in Table 6: the Precision of network 1 is 83.571% and its Recall is 84.268%. The F1-Score and AP indexes are 93.724% and 94.968% for network 2, and 99.735% and 99.853% for network 3, all exceeding 90%.
Table 7 comparison of the single cylinder high speed operating mode dataset at 2200rpm
The comparative testing of the different networks on data set C is shown in Table 7. The Precision and Recall of network 1 are 84.287% and 85.937%, respectively, higher than its corresponding values on data sets A and B. The Recall, F1-Score and AP indexes of network 2 are all higher than 95%, at 96.872%, 95.201% and 95.587%, respectively. In network 3, the Precision, Recall, F1-Score and AP indexes are 99.184%, 99.179%, 99.181% and 99.217%, respectively, which are 14.897%, 13.242%, 14.077% and 11.292% higher than those of network 1.
Table 8 Comparison of the mixed cylinder operating mode dataset at 1800rpm
The comparative testing of the different networks on data set D is shown in Table 8. The Precision, Recall, F1-Score and AP indexes of network 1 are 81.918%, 84.268%, 83.076% and 86.687%, respectively, with standard deviations within 4%. Compared with network 1, the Precision, Recall, F1-Score and AP indexes of network 2 increase by 8.619%, 11.225%, 9.872% and 4.571%. In network 3, the Precision, Recall, F1-Score and AP indexes are all higher than 99%, which is 8.473%, 4.089%, 6.347% and 8.481% higher than network 2.
Summarizing the comparisons in Tables 5-8: network 1 only uses a convolution module to extract features from the converted images, and the differences in the gray-point texture maps obtained from signal conversion cannot clearly indicate whether a signal is normal, so the Precision and Recall indexes obtained on the four data sets are low, as are the comprehensive F1-Score and AP indexes. Network 2 replaces this with multi-modal feature extraction: after the backbone fuses the one-dimensional amplitude features with the two-dimensional image features, feature richness is greatly improved, and the Transformer module extracts global features that complement the local features of the convolution module, giving a more complete feature hierarchy. Accordingly, all indexes of network 2 improve substantially over network 1 in the tables. Finally, the main difference of network 3 over network 2 is the added mixed channel feature fusion detection network: by mixing and regrouping the extracted features of different levels and weighting them in both the spatial and channel domains, the saliency of abnormal signal features is improved. The test results show that every index of network 3 is optimal, with a large improvement over network 2, indicating that the mixed channel feature fusion detection network also helps to improve network performance.
Secondly, to simulate misfire signal detection performance in different noisy environments, noise with different signal-to-noise ratios is added to the original data set. Let P_signal and P_noise be the signal energy and the noise energy, respectively; the signal-to-noise ratio SNR is then defined as: SNR = 10 × log₁₀(P_signal/P_noise) (dB)
In the noise-environment ablation experiment, Gaussian white noise with different signal-to-noise ratios (-4, -2, 0, 2, 4, 6, 8 and 10 dB) is added to the original signals for the different operating speeds and cylinder numbers of the engine, and the detection performance of each network on abnormal signals under the different working conditions and noise environments is compared; the test results of the three networks are shown in Tables 9-11, respectively.
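Adding Gaussian white noise at a prescribed SNR follows directly from the definition SNR = 10·log10(P_signal/P_noise): scale zero-mean Gaussian noise so its power equals the signal power divided by 10^(SNR/10). The sketch below is a minimal stdlib-only illustration.

```python
import math
import random

def add_awgn(signal, snr_db, seed=0):
    """Add white Gaussian noise so that 10*log10(P_signal/P_noise) = snr_db."""
    rng = random.Random(seed)
    p_signal = sum(x * x for x in signal) / len(signal)
    p_noise = p_signal / (10 ** (snr_db / 10.0))
    sigma = math.sqrt(p_noise)
    return [x + rng.gauss(0.0, sigma) for x in signal]

# Demo: corrupt a sine "vibration" segment at 0 dB (noise power = signal power)
sig = [math.sin(2 * math.pi * 50 * t / 1000.0) for t in range(10000)]
noisy = add_awgn(sig, snr_db=0.0, seed=1)
```

The measured SNR of the output fluctuates slightly around the target because the noise power is only matched in expectation.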
Table 9 Detection performance of network 1 under various operating environments at different signal-to-noise ratios
Table 9 gives the detection performance indexes of network 1 on the different data sets under noise interference of different intensities. The average Precision, Recall, F1-Score and average precision (AP) indexes on data set A are 76.636%, 78.058%, 77.340% and 82.038%. The Precision values on data sets B and C are very close, at 75.616% and 75.278%, respectively. The Precision value on data set D is 73.350%, the lowest of the four data sets. Under -4dB noise interference, the averages of Precision, Recall, F1-Score and AP for network 1 across the four data sets fluctuate in the range [70%, 80%], at 70.760%, 72.757%, 71.743% and 76.995%, respectively.
Table 10 Detection performance of network 2 under various operating environments at different signal-to-noise ratios
The detection performance indexes of network 2 under the different data sets and noise interference intensities are shown in Table 10. The Recall value on data set A is higher than 90%, at 91.786%, which is 13.728% higher than that of network 1 on data set A. The average AP values on data sets B, C and D are 92.482%, 92.965% and 89.345%, respectively, with the largest standard deviation, 1.414%, on data set C. Under -2dB noise interference, the averages of Recall, F1-Score and AP for network 2 across the four data sets are all higher than 90%, at 91.940%, 90.845% and 92.225%, respectively, which differ by 1.934%, 1.587% and 1.455% from the corresponding averages under 8dB noise interference.
Table 11 Detection performance of network 3 under various operating environments at different signal-to-noise ratios
Table 11 gives the detection performance indexes of network 3 under the different data sets and noise interference of different intensities. The average Precision, Recall, F1-Score and AP indexes on data set A are all higher than 99.7%, at 99.711%, 99.754%, 99.732% and 99.856%, respectively. The average Precision, Recall, F1-Score and AP indexes on data sets B and C all fluctuate within the range [99.1%, 99.9%] and are extremely close to 100%. The average Precision, Recall, F1-Score and AP indexes on data set D are 98.995%, 99.304%, 99.149% and 99.699%, far higher than the corresponding averages of networks 1 and 2 on data set D.
Comparing the test results of the three networks under the different working conditions and noise environments leads to the following conclusions. The features of network 1 are extracted only by the convolution module, and the image texture features obtained after conversion are strongly disturbed by the added noise, so its performance indexes fluctuate considerably in the test results. Network 2 adopts the multi-modal feature extraction scheme; adding one-dimensional features improves the noise immunity of the network to some extent. Compared with the other two networks, network 3 adds the mixed channel feature fusion module, whose spatial attention and channel attention calculations suppress noise in both the spatial and channel domains, so its indexes fluctuate little under noise and it shows strong robustness. The iteration curves of the loss function value and the precision value during training of the network 3 (MITDCNN) model are shown in figs. 8 and 9.
In this embodiment, the following optimal hyperparameter settings were obtained through repeated experiments: the number of epochs is 200, the initial learning rate is set to 0.001 and gradually converges to 0.00001 during training, the momentum parameter is set to 0.954, and the weight decay rate is set to 0.0013. As can be seen from FIGS. 8-9, after 100 epochs both the loss and the model accuracy remain stable. The loss function value decreases and converges quickly while the AP value rises rapidly, indicating that the network learns the target features well.
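The stated endpoints pin down the learning-rate trajectory only loosely; the sketch below assumes an exponential decay from 0.001 to 0.00001 across the 200 epochs (the schedule shape is not given in the patent) and collects the remaining stated hyperparameters.

```python
def learning_rate(epoch, n_epochs=200, lr0=1e-3, lr_end=1e-5):
    """Assumed exponential decay matching the stated endpoints 0.001 -> 0.00001."""
    return lr0 * (lr_end / lr0) ** (epoch / (n_epochs - 1))

# Remaining hyperparameters as stated in the embodiment
hyperparams = {"epochs": 200, "momentum": 0.954, "weight_decay": 0.0013}

print(learning_rate(0), learning_rate(199))
```

In a PyTorch setup these values would typically be passed to an SGD optimizer and a per-epoch scheduler; the exact optimizer is not named in the text.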
The foregoing describes preferred embodiments of the present invention in detail. It should be understood that a person of ordinary skill in the art can make numerous modifications and variations according to the concept of the invention without creative effort. Therefore, all technical solutions that a person skilled in the art can obtain by logical analysis, reasoning or limited experiments based on the prior art and the inventive concept shall fall within the scope of protection defined by the claims.

Claims (10)

1. The mechanical fault detection method based on the engine vibration signal multi-mode analysis is characterized by comprising the following steps of:
collecting vibration signals of an engine;
inputting the vibration signals into a multi-mode feature extraction network to obtain a plurality of feature information;
selecting p pieces of characteristic information and inputting them into a mixed channel characteristic fusion detection network for characteristic secondary processing and detection, and outputting a detection result;
the multi-mode feature extraction network comprises a shaping network, a convolution module, a full-connection module and a multi-mode Transformer module;
the shaping network is used for converting an input engine vibration signal into a two-dimensional picture, and the convolution module is used for extracting features based on the two-dimensional picture to obtain a two-dimensional feature picture;
the full connection module is used for extracting one-dimensional feature vectors of input engine vibration signals; the multi-mode Transformer module is used for integrating the one-dimensional feature vector and the two-dimensional feature picture;
the mixed channel feature fusion detection network comprises a feature aggregation module, a feature mixing module and a multi-scale detection module;
the feature aggregation module is used for aggregating the p pieces of feature information to obtain an aggregate feature map FM;
the feature mixing module is used for carrying out feature mixing on the aggregate feature map FM to obtain a plurality of feature maps with different sizes;
the multi-scale detection module is used for detecting the feature images after the mixing, and judging whether the diesel engine cylinder has abnormal vibration signals in the current period by detecting whether the feature images have abnormal and irregular texture areas.
2. The method for detecting mechanical faults based on multi-modal analysis of engine vibration signals according to claim 1, wherein the multi-modal feature extraction network comprises a q-layer structure, and each layer structure comprises a convolution module, a full connection module and a multi-mode Transformer module.
3. The method for detecting mechanical faults based on multi-modal analysis of engine vibration signals according to claim 2, wherein the multi-modal feature extraction network performs feature extraction on the vibration signals to obtain a plurality of feature information, and the method comprises the following steps:
s1, respectively inputting the vibration signals into a shaping network and a full-connection module, outputting by the shaping network to obtain a two-dimensional picture, and inputting the two-dimensional picture into a convolution module to extract characteristics;
s2, outputting by the full-connection module to obtain a one-dimensional feature vector;
s3, outputting by the convolution module to obtain a two-dimensional characteristic picture;
s4, inputting the one-dimensional feature vector and the two-dimensional feature picture into a multi-mode transducer module to obtain integrated feature information;
s5, inputting the integrated characteristic information and the two-dimensional characteristic picture into a convolution module of a next layer, and inputting the one-dimensional characteristic vector into a full-connection module of the next layer;
s6, repeating the steps S2-S5 until the q-th layer structure of the multi-mode feature extraction network is reached, and carrying out layer-by-layer feature extraction to obtain a plurality of feature information.
4. The method for detecting mechanical faults based on multi-modal analysis of engine vibration signals according to claim 1, wherein the multi-mode Transformer module comprises two multi-head attention networks, which respectively handle the long-distance relation interaction between the one-dimensional feature vectors and the two-dimensional feature pictures.
5. The method for detecting mechanical failure based on multi-modal analysis of engine vibration signals according to claim 4, wherein the multi-mode Transformer module integrates the one-dimensional feature vector and the two-dimensional feature picture, comprising the following steps:
dividing the one-dimensional feature vector into n token sub-vectors;
equally dividing the two-dimensional characteristic picture into n characteristic blocks, and extending the characteristic blocks into n token sub-vectors;
in the first multi-head attention network, the tokens corresponding to the two-dimensional feature picture and the tokens corresponding to the one-dimensional feature vector are mapped into query, key and value matrices; the matching degree between the query matrix Q carrying the picture feature information and the key matrix K carrying the one-dimensional amplitude feature information is calculated, and the obtained matching degrees are assigned to the corresponding value matrix V, completing the operation of mapping the image features onto the amplitude features;
in the second multi-head attention network, the roles are exchanged: the matching degree between the query matrix Q carrying the one-dimensional amplitude feature information and the key matrix K carrying the picture feature information is calculated, and the obtained matching degrees are assigned to the corresponding value matrix V, completing the operation of mapping the amplitude features onto the image features;
wherein, in the two multi-head attention networks, the two corresponding groups of Q, K and V matrices are computed from the token vectors through learned projection matrices;
and combining the one-dimensional feature vectors output by the two multi-head attention networks, performing activation calculation on the combined features with a full connection layer and a ReLU activation function, and finally reshaping the obtained 1×C vector into a feature map of dimensions H×W×C.
6. The method for detecting mechanical failure based on multi-modal analysis of engine vibration signals according to claim 1, wherein the feature aggregation module aggregates the feature maps FM₁, FM₂ and FM₃ output by the 3 convolution modules at the bottom of the multi-mode feature extraction network, comprising the following steps:
enhancing the feature information of each feature map;
applying deconvolution to FM₃ to double the feature map size and compress the number of channels to half, and assigning the features of FM₃ to FM₂ by add fusion to obtain FM₂′;
upsampling and channel-compressing the FM₂′ feature map and fusing its features with those of the FM₁ feature map to obtain FM₁′;
combining the three feature maps FM₁′, FM₂′ and FM₃ with a concat layer, during which FM₁′ is downsampled and FM₃ is upsampled, and aggregating them to obtain the feature map FM.
7. The method for detecting mechanical failure based on multi-modal analysis of engine vibration signals according to claim 6, wherein the feature mixing module performs feature mixing on the aggregated feature map FM, comprising the following steps:
compressing the number of channels of FM with a 1x1 convolution layer to be the same as that of FM₂′, and dividing FM equally by grouped convolution into two feature maps FM_1 and FM_2, each the same size as FM with half of its channels;
performing the feature enhancement operation on the grouped feature maps FM_1 and FM_2 to obtain FM_1′ and FM_2′;
extracting feature maps of different sizes and channel numbers from FM_1′ and FM_2′ with identical parallel convolution modules to form paired sets;
combining and mixing the feature maps in each paired set with a fusion module combining a concat layer and a 1x1 convolution layer to obtain three feature maps of different sizes, FM1, FM2 and FM3.
8. The method for detecting mechanical faults based on multi-modal analysis of engine vibration signals according to claim 7, wherein spatial attention calculation is applied to the feature map FM_1 to enhance the morphological feature information FM_1′ corresponding to abnormal signals, and channel attention calculation is applied to FM_2 to enhance the semantic feature weights FM_2′ of abnormal signals;
the paired-set members extracted from FM_1′ contain spatial-attention-enhanced feature values, and those extracted from FM_2′ contain channel-attention-enhanced feature values.
9. The method for detecting mechanical faults based on multi-modal analysis of engine vibration signals according to claim 1, wherein the multi-scale detection module is used for detecting a feature map output by the feature blending module, and a cross entropy classification loss is used for adjusting a classification module of a detector.
10. The mechanical fault detection method based on engine vibration signal multi-mode analysis according to claim 1, wherein the multi-scale detection module is adopted to detect a feature map output by the feature mixing module, and position loss calculation is performed based on a CIoU position evaluation relationship.
CN202310554203.4A 2023-05-17 2023-05-17 Mechanical fault detection method based on engine vibration signal multi-mode analysis Pending CN116610935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310554203.4A CN116610935A (en) 2023-05-17 2023-05-17 Mechanical fault detection method based on engine vibration signal multi-mode analysis


Publications (1)

Publication Number Publication Date
CN116610935A true CN116610935A (en) 2023-08-18

Family

ID=87681143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310554203.4A Pending CN116610935A (en) 2023-05-17 2023-05-17 Mechanical fault detection method based on engine vibration signal multi-mode analysis

Country Status (1)

Country Link
CN (1) CN116610935A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152156A (en) * 2023-10-31 2023-12-01 通号通信信息集团有限公司 Railway anomaly detection method and system based on multi-mode data fusion
CN117152156B (en) * 2023-10-31 2024-02-13 通号通信信息集团有限公司 Railway anomaly detection method and system based on multi-mode data fusion

Similar Documents

Publication Publication Date Title
CN111028146B (en) Image super-resolution method for generating countermeasure network based on double discriminators
CN108613802B (en) A kind of mechanical failure diagnostic method based on depth mixed network structure
Mayer et al. Exposing fake images with forensic similarity graphs
Jia et al. GTFE-Net: A gramian time frequency enhancement CNN for bearing fault diagnosis
CN116610935A (en) Mechanical fault detection method based on engine vibration signal multi-mode analysis
CN114169377A (en) G-MSCNN-based fault diagnosis method for rolling bearing in noisy environment
Zhang et al. Imbalanced data enhancement method based on improved DCGAN and its application
CN116403032A (en) Breaker fault evaluation method based on multi-domain information fusion and deep learning
Wang et al. Using artificial intelligence methods to classify different seismic events
CN115733673B (en) Data anomaly detection method based on multi-scale residual error classifier
CN115356599B (en) Multi-mode urban power grid fault diagnosis method and system
Zhiyong et al. Fault identification method of diesel engine in light of pearson correlation coefficient diagram and orthogonal vibration signals
CN116541771A (en) Unbalanced sample bearing fault diagnosis method based on multi-scale feature fusion
CN116340817A (en) Intelligent fault identification method for hydraulic piston pump
CN113537010B (en) Fifteen-phase asynchronous motor rolling bearing fault diagnosis method based on single-channel diagram data enhancement and migration training residual error network
CN115525866A (en) Deep learning rolling bearing fault diagnosis method and system
Zhao et al. Dimension reduction graph‐based sparse subspace clustering for intelligent fault identification of rolling element bearings
Zhu et al. A novel visual transformer for long-distance pipeline pattern recognition in complex environment
Gunawan et al. Classification of Japanese fagaceae wood based on microscopic image analysis
Li et al. Feature extraction for engine fault diagnosis utilizing the generalized S-transform and non-negative tensor factorization
Guo et al. A Neural Network Method for Bearing Fault Diagnosis
Yu et al. GAN-Based Day and Night Image Cross-Domain Conversion Research and Application
Rajapaksha et al. Sensitivity analysis of SVM kernel functions in machinery condition classification
CN117725419A (en) Small sample unbalanced rotor fault diagnosis method and system
CN115358274A (en) DCGAN-CNN-based fault classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination