CN110415709B - Transformer working state identification method based on voiceprint identification model - Google Patents

Transformer working state identification method based on voiceprint identification model

Info

Publication number
CN110415709B
CN110415709B
Authority
CN
China
Prior art keywords
sound
gray level
transformer
detected
neural network
Prior art date
Legal status
Active
Application number
CN201910561468.0A
Other languages
Chinese (zh)
Other versions
CN110415709A (en)
Inventor
张欣
吕启深
党晓婧
刘顺桂
王丰华
周东旭
解颖
Current Assignee
Shenzhen Power Supply Co ltd
Original Assignee
Shenzhen Power Supply Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Power Supply Co ltd
Priority to CN201910561468.0A
Publication of CN110415709A
Application granted
Publication of CN110415709B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01H - MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H17/00 - Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification techniques
    • G10L17/04 - Training, enrolment or model building
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - characterised by the analysis technique
    • G10L25/30 - using neural networks
    • G10L25/48 - specially adapted for particular use
    • G10L25/51 - for comparison or discrimination
    • G10L25/57 - for processing of video signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a transformer working state identification method based on a voiceprint identification model. The recognition method takes a plurality of sound gray level images as input parameters of a convolutional neural network and a plurality of pieces of working state information, corresponding one to one to the sound gray level images, as output parameters of the convolutional neural network, trains the convolutional neural network, and establishes a voiceprint recognition model. Compared with the prior art, the sound gray level image amplifies the time-frequency characteristics of the sound signal of the transformer and improves the distinguishability of the sound to be detected. When the to-be-detected sound gray level image of the to-be-identified transformer is used as the input parameter of the voiceprint identification model, the voiceprint identification model can accurately identify the working state of the transformer. The identification method can therefore effectively extract the time-frequency characteristics of the sound signals of the transformer and improves the accuracy of transformer working state identification.

Description

Transformer working state identification method based on voiceprint identification model
Technical Field
The application relates to the technical field of detection, in particular to a transformer working state identification method based on a voiceprint identification model.
Background
Power transformers play an important role in voltage conversion and power transmission. Transformers in the power system are used in large numbers, come in many capacity grades and specifications, and operate for long periods, so the accident rate rises correspondingly. Once a transformer fails, it can cause huge economic losses to the power grid and endanger the personal safety of operation and maintenance personnel. Therefore, effectively monitoring the working state of the transformer and discovering potential fault hazards as early as possible has become a problem of major concern to researchers in the power industry.
The vibration of a transformer in operation mainly comprises winding vibration, iron core vibration and cooling system vibration. Mechanical waves generated by the vibration are radiated outwards through media such as the solid structural parts of the transformer, the insulating oil and the air, forming sound wave signals. The sound wave signal contains a large amount of information about the working state of the transformer. The range of sound frequencies audible to the human ear is 20 Hz to 20 kHz, and an experienced substation worker can judge whether the state of a running transformer is normal simply by listening to it. For a transformer in operation, the sound signal contains abundant equipment state information: when the transformer enters an abnormal working state such as overload, iron core loosening, direct current magnetic bias or ferromagnetic resonance, the characteristics of the sound signal it emits change correspondingly.
Time-frequency analysis is a common approach in the field of acoustic signal processing. However, the acoustic signal of a running transformer is inevitably affected by load current, noise interference and the like, so the signal monitored at different times varies and presents broadband, non-stationary characteristics. Its time-frequency characteristics are therefore rather complex, and it is difficult to distinguish the different working states of the transformer by analysing the acoustic signal directly. How to improve the accuracy of transformer working state identification is thus an urgent problem to be solved.
Disclosure of Invention
Therefore, it is necessary to provide a transformer operating state identification method based on a voiceprint identification model for solving the problem of how to improve the accuracy of transformer operating state identification.
A transformer working state identification method based on a voiceprint identification model comprises the following steps:
s100, selecting a plurality of sound gray level images of the transformer in various working states to train a convolutional neural network, and establishing a voiceprint recognition model, wherein the sound gray level images are used as input parameters of the convolutional neural network, and a plurality of working state information which corresponds to the sound gray level images one to one is used as output parameters of the convolutional neural network.
And S200, inputting the to-be-detected sound gray level image of the to-be-detected transformer into the voiceprint recognition model, and acquiring working state information corresponding to the to-be-detected sound gray level image.
In one embodiment, the step S100 includes:
s110, randomly selecting the sound gray level images under various working states of the transformer, dividing the sound gray level images into a training sample set and a testing sample set, and correspondingly setting the various working states as a plurality of working state information corresponding to the sound gray level images one by one.
And S120, taking the sound gray images in the training sample set as the input of the convolutional neural network, taking the working state information as the output of the convolutional neural network, and training the convolutional neural network.
In one embodiment, after the step S120, the identification method further includes:
s130, inputting the sound gray level images in the test sample set into the trained convolutional neural network.
And S140, recording a plurality of test working state information which is output by the voiceprint recognition model and corresponds to the plurality of sound gray level images one by one, and calculating the recognition rate of the trained convolutional neural network according to the plurality of test working state information.
S150, if the change rate of the recognition rate of the convolutional neural network is smaller than a set value, establishing the voiceprint recognition model according to the trained convolutional neural network.
In one embodiment, after the step S140, the identification method further includes:
s141, if a rate of change of the recognition rate of the convolutional neural network is greater than a set value, performing the S120 to the 150.
In one embodiment, before the step S100, the identification method further includes:
and S01, collecting a plurality of first sound signals of the plurality of working states, wherein each working state corresponds to a plurality of first sound signals, and processing the plurality of first sound signals to obtain a plurality of sound gray level images corresponding to the plurality of first sound signals one to one.
In one embodiment, in the step S01, the step of processing the plurality of first sound signals includes:
and S010, setting the sampling frequency and the sampling duration of the sound signal of the transformer, and acquiring a plurality of first sound signals in various working states.
And S020, performing segmented windowing on each first sound signal respectively to obtain a plurality of second sound signals.
And S030, respectively carrying out Fourier transform on the plurality of second sound signals to obtain the frequency spectrum distribution of the plurality of second sound signals.
And S040, performing wavelet transform on the second sound signal according to the second sound signal frequency spectrum distribution to obtain a plurality of wavelet coefficient matrixes corresponding to the second sound signals one by one.
And S050, performing gray level transformation on the wavelet coefficient matrixes respectively to obtain a plurality of sound gray level images in various working states.
In one embodiment, the step S040 includes:
and S041, respectively selecting a plurality of wavelet basis functions corresponding to the plurality of second sound signals one to one.
And S042, dividing the frequency bandwidths of the second sound signals at equal intervals to obtain a plurality of frequency band subintervals, and constructing a plurality of two-dimensional grids which correspond to the second sound signals one by one according to the frequency band subintervals.
And S043, performing wavelet transformation on the second sound signals according to the plurality of wavelet basis functions and the plurality of two-dimensional grids, and obtaining a plurality of wavelet coefficient matrixes corresponding to the plurality of second sound signals one by one.
In one embodiment, after the step S043, the identification method further includes:
and S044, respectively calculating the Shannon entropies of the wavelet coefficient matrixes, and determining a plurality of optimal wavelet basis functions according to the Shannon entropies.
And S045, performing wavelet transformation on the plurality of second sound signals corresponding to the plurality of optimal wavelet basis functions one to one according to the plurality of optimal wavelet basis functions, and obtaining a plurality of optimized wavelet coefficient matrixes.
In one embodiment, the step S050 includes:
s051, respectively carrying out normalization processing on the wavelet coefficient matrixes according to columns to obtain a plurality of normalized wavelet coefficient matrixes.
And S052, performing gray scale transformation on the plurality of normalized wavelet coefficient matrixes respectively to obtain a plurality of initial gray scale images.
And S053, performing smoothing filtering processing on the plurality of initial gray level images respectively to obtain a plurality of smoothed initial gray level images.
And S054, respectively carrying out sharpening processing and gray correction on the initial gray level images after the plurality of smoothing processing to obtain a plurality of sound gray level images.
In one embodiment, in the step S053, a gaussian blur operator is used to perform a smoothing filtering process on the plurality of initial grayscale images respectively.
In one embodiment, before the step S200, the identification method further includes:
and S02, collecting the sound signal to be detected of the transformer to be identified, and processing the sound signal to be detected to obtain the gray level image of the sound to be detected corresponding to the sound signal to be detected.
In one embodiment, the step S02 includes:
and S021, acquiring the to-be-detected sound signal of the to-be-identified transformer according to the set sampling frequency and the set sampling duration.
S022, performing segmented windowing on the sound signal to be detected to obtain a plurality of segmented sound signals to be detected.
S023, performing fourier transform on the plurality of segmented voice signals to be detected to obtain a plurality of frequency spectrum distributions of the segmented voice signals to be detected, which correspond to the plurality of segmented voice signals to be detected one by one.
S024, performing wavelet transformation on the segmented to-be-detected sound signals according to the frequency spectrum distribution of the segmented to-be-detected sound signals to obtain a plurality of to-be-detected wavelet coefficient matrixes corresponding to the segmented to-be-detected sound signals one by one.
And S025, performing gray level transformation on the wavelet coefficient matrixes to be tested respectively to obtain a plurality of gray level images of the sound to be tested under various working states.
According to the transformer working state identification method based on the voiceprint recognition model, a plurality of sound gray level images are used as input parameters of the convolutional neural network, a plurality of pieces of working state information corresponding one to one to the sound gray level images are used as output parameters of the convolutional neural network, the convolutional neural network is trained, and the voiceprint recognition model is established. Compared with the prior art, the sound gray level image amplifies the time-frequency characteristics of the sound signal of the transformer and improves the distinguishability of the sound to be detected. When the to-be-detected sound gray level image of the to-be-identified transformer is used as the input parameter of the voiceprint identification model, the voiceprint identification model can accurately identify the working state of the transformer. The identification method can therefore effectively extract the time-frequency characteristics of the sound signals of the transformer and improves the accuracy of transformer working state identification.
Drawings
Fig. 1 is a schematic flowchart of a transformer operating state identification method based on a voiceprint recognition model according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a transformer operating state identification method based on a voiceprint recognition model according to another embodiment of the present application;
fig. 3 is a schematic flowchart of a transformer operating state identification method based on a voiceprint recognition model according to another embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the present application are described in detail below with reference to the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. This application is capable of embodiments in many different forms than those described herein and those skilled in the art will be able to make similar modifications without departing from the spirit of the application and it is therefore not intended to be limited to the embodiments disclosed below.
The numbering of the components as such, e.g., "first", "second", etc., is used herein only to distinguish the objects described and does not have any sequential or technical meaning. The terms "connected" and "coupled", when used in this application, include both direct and indirect connections (couplings) unless otherwise indicated. In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description, do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation, and are therefore not to be considered as limiting the present application.
In this application, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intervening medium. Also, a first feature being "on", "over" or "above" a second feature may mean that the first feature is directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature being "under", "below" or "beneath" a second feature may mean that the first feature is directly or obliquely under the second feature, or may simply indicate that the first feature is at a lower level than the second feature.
Referring to fig. 1, an embodiment of the present application provides a transformer operating state identification method based on a voiceprint identification model, including:
s100, selecting a plurality of sound gray level images of the transformer in various working states to train a convolutional neural network, and establishing a voiceprint recognition model, wherein the sound gray level images are used as input parameters of the convolutional neural network, and a plurality of working state information which corresponds to the sound gray level images one to one is used as output parameters of the convolutional neural network.
S200, inputting the to-be-detected sound gray level image of the to-be-detected transformer into the voiceprint recognition model, and acquiring the working state information corresponding to the to-be-detected sound gray level image.
The embodiment of the present application provides a transformer working state identification method based on a voiceprint identification model. The recognition method takes a plurality of sound gray level images as input parameters of the convolutional neural network and a plurality of pieces of working state information, corresponding one to one to the sound gray level images, as output parameters of the convolutional neural network, trains the convolutional neural network, and establishes the voiceprint recognition model. Compared with the prior art, the sound gray level image amplifies the time-frequency characteristics of the sound signal of the transformer and improves the distinguishability of the sound to be detected. When the to-be-detected sound gray level image of the to-be-identified transformer is used as the input parameter of the voiceprint identification model, the voiceprint identification model can accurately identify the working state of the transformer. The identification method can therefore effectively extract the time-frequency characteristics of the sound signals of the transformer and improves the accuracy of transformer working state identification.
The step S100 is used to train the convolutional neural network and establish the voiceprint recognition model. The step S200 is used for identifying the working state of the transformer.
Referring to fig. 2, in an embodiment, the step S100 includes:
s110, randomly selecting the sound gray level images under various working states of the transformer, dividing the sound gray level images into a training sample set and a testing sample set, and correspondingly setting the various working states as a plurality of working state information corresponding to the sound gray level images one by one.
In one embodiment, the gray level images of the transformer in the M working states are randomly selected as the input of the convolutional neural network, and respectively constitute a training sample set I and a test sample set I' of the convolutional neural network. The working states of the transformers to be identified are M, and expected output Y of the convolutional neural network is formed. The training sample set I and the test sample set I' are respectively as follows:
$$I = \bigl\{\, I_n^{(m)} \;\big|\; n = 1, \dots, N_x;\ m = 1, \dots, M \,\bigr\}$$

$$I' = \bigl\{\, I_n'^{(m)} \;\big|\; n = 1, \dots, N_y;\ m = 1, \dots, M \,\bigr\}$$

wherein N_x and N_y are the numbers of sound signal gray level images of the transformer in a given state assigned to the training sample set and to the test sample set respectively, with N_x + N_y = N and N_y < N_x, and I_n^{(m)} is the n-th sound gray level image of the m-th working state in the training sample set I.
The convolutional neural network is used for image recognition. The convolutional neural network comprises an input layer, a convolutional layer, an excitation layer, a pooling layer, a full-link layer, an output layer and the like. The input layer is for receiving an input image. The convolutional layer is used for extracting local information of the image. The excitation layer is used for performing regularization processing on the convolutional layer output so as to facilitate network training. The pooling layer is used for simplifying image information and extracting main image information so as to reduce data volume and improve the operation performance of the neural network. The full-connection layer fully utilizes image information and achieves the required output characteristic through network training. The output layer is used for outputting the working state of the transformer to be identified.
And S120, taking the sound gray level images in the training sample set as the input of a convolutional neural network, taking the working state information as the output of the convolutional neural network, and training the convolutional neural network.
The convolutional neural network training process comprises:
s121, inputting the sound gray level images of the training sample into the input layer.
And S122, in the convolution layer of the convolution neural network, respectively adopting c convolution kernels with the size of h x h, carrying out convolution operation on the plurality of sound gray level images of the training sample output by the input layer by step length S, and inputting a calculation result to the excitation layer.
And S123, in the excitation layer, converting the output result of the convolution layer by adopting an activation function sigma, and inputting the calculation result into the pooling layer.
And S124, in the pooling layer, resampling the output result of the excitation layer to reduce the data dimension of the output result of the excitation layer, and outputting the resampling result to the full-link layer.
And S125, the full connection layer comprises a connection layer and a softmax classification layer. The connection layer is a forward neural network comprising l layers of neurons, the output of the pooling layer being the first layer of the forward neural network, and the inputs and outputs of two adjacent layers being connected through weights. The output of the forward neural network is the input of the softmax classification layer, which processes it through a softmax function to obtain the classification output. The output of the softmax classification layer is compared with the expected output Y to update the connection weights of the forward neural network. The expression of the softmax function is as follows:
$$y_m = \frac{e^{Q_m}}{\sum_{k=1}^{M} e^{Q_k}}$$

wherein Q_m is the m-th element of the array Q output by the connection layer, and y_m is the m-th output of the softmax classification layer.
In one embodiment, after the step S120, the identification method further includes:
s130, inputting the sound gray level images in the test sample set into the trained convolutional neural network.
And S140, recording a plurality of test working state information which is output by the voiceprint recognition model and corresponds to the plurality of sound gray level images one by one, and calculating the recognition rate of the trained convolutional neural network according to the plurality of test working state information.
S150, if the change rate of the recognition rate of the convolutional neural network is smaller than a set value, establishing the voiceprint recognition model according to the trained convolutional neural network.
In one embodiment, after the step S140, the identification method further includes:
s141, if a rate of change of the recognition rate of the convolutional neural network is greater than a set value, performing the S120 to the S150.
In step S141 and step S150, the change rate of the recognition rate of the convolutional neural network refers to the difference between the recognition rates obtained when the same test sample set is input to the same convolutional neural network in two successive training rounds.
In one embodiment, the obtaining of the rate of change of the recognition rate comprises:
and S1, taking the sound gray level images in the training sample set as the input of a convolutional neural network, taking the working state information as the output of the convolutional neural network, and training the convolutional neural network.
And S2, inputting the sound gray level images in the test sample set into the trained convolutional neural network.
And S3, recording a plurality of test working state information which is output by the voiceprint recognition model and corresponds to the plurality of sound gray level images one by one, and calculating the first recognition rate of the trained convolutional neural network according to the plurality of test working state information.
S4, training the convolutional neural network by using the plurality of sound gray scale images in the training sample set (which is the same as the training sample set in step S1) as inputs of the convolutional neural network, and using the plurality of operation state information as outputs of the convolutional neural network, respectively.
S5, inputting the plurality of sound gray scale images in the test sample set (the same as the test sample set in step S2) into the trained convolutional neural network.
And S6, recording a plurality of test working state information which is output by the voiceprint recognition model and corresponds to the plurality of sound gray level images one by one, and calculating a second recognition rate of the trained convolutional neural network according to the plurality of test working state information.
And S7, calculating the difference value between the first recognition rate and the second recognition rate to obtain the change rate of the recognition rate.
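By way of non-limiting illustration, the stopping criterion of steps S1 to S7 can be sketched in Python as follows; the training and evaluation helpers passed as arguments are assumptions, and delta corresponds to the set value (1% in the embodiment described later):

```python
def train_until_stable(model, train_set, test_set, train_one_round, evaluate, delta=0.01):
    """Sketch of S1-S7: retrain on the same training sample set and re-test on the same
    test sample set until the recognition rate changes by less than delta between two
    consecutive rounds. train_one_round(model, train_set) and
    evaluate(model, test_set) -> accuracy are assumed helper functions."""
    prev_acc = None
    while True:
        train_one_round(model, train_set)      # S1 / S4: train on the training sample set
        acc = evaluate(model, test_set)        # S2-S3 / S5-S6: recognition rate on the test set
        if prev_acc is not None and abs(acc - prev_acc) < delta:
            return model, acc                  # S150: change rate below the set value
        prev_acc = acc                         # S141: otherwise repeat steps S120 to S150
```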
Referring to fig. 3, in an embodiment, before the step S100, the identification method further includes:
and S01, acquiring a plurality of first sound signals of the transformer under multiple working states, wherein each working state of the transformer corresponds to a plurality of first sound signals, and processing the plurality of first sound signals to obtain a plurality of sound gray level images corresponding to the plurality of first sound signals one to one.
In one embodiment, the step S01 includes:
and S010, setting the sampling frequency and the sampling duration of the sound signal of the transformer, and acquiring a plurality of first sound signals of the transformer under various working states.
And S020, performing segmented windowing on each first sound signal respectively to obtain a plurality of second sound signals.
And S030, performing Fourier transform on the plurality of second sound signals respectively to obtain the frequency spectrum distribution of the plurality of second sound signals corresponding to the plurality of second sound signals one by one.
And S040, performing wavelet transform on the second sound signal according to the second sound signal frequency spectrum distribution to obtain a plurality of wavelet coefficient matrixes corresponding to the second sound signals one by one.
And S050, performing gray level transformation on the wavelet coefficient matrixes respectively to obtain a plurality of sound gray level images in various working states.
In one embodiment, the step S040 includes:
and S041, respectively selecting a plurality of wavelet basis functions corresponding to the plurality of second sound signals one to one. The wavelet basis functions are:
$$\psi_i(t) = \frac{1}{\sqrt{\pi f_{bi}}}\, e^{-t^{2}/f_{bi}}\, e^{\,\mathrm{j}2\pi f_{ci} t}$$

wherein x_i(t) is the i-th segment of the second sound signal, ψ_i(t) is the wavelet basis function of the i-th segment of the sound signal, f_{bi} is the bandwidth of the wavelet basis function, and f_{ci} is the center frequency.
The choice of wavelet basis function determines the soundness of the wavelet transform. A wavelet basis function has two important parameters, the bandwidth and the center frequency. The bandwidth affects the size of the frequency range analysed, and the target frequency to be analysed is generally taken as the center frequency. In the present application an exponentially decaying cosine function is taken as the wavelet basis function: its bandwidth is moderate, so the frequencies to be analysed do not overlap, and, since it converges, an accurate frequency response is obtained.
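By way of non-limiting illustration, one common realisation of such an exponentially decaying cosine basis is the complex Morlet wavelet; the normalisation factor and the example parameter values below are assumptions, as the patent does not give the formula in plain text:

```python
import numpy as np

def cmor_wavelet(t, fb, fc):
    """Complex Morlet wavelet psi_i(t): an exponentially decaying complex sinusoid with
    bandwidth parameter fb and centre frequency fc (assumed form of the S041 basis)."""
    return (1.0 / np.sqrt(np.pi * fb)) * np.exp(-t ** 2 / fb) * np.exp(2j * np.pi * fc * t)

# Example: evaluate the basis function on a short time axis
t = np.linspace(-0.05, 0.05, 2001)          # 100 ms support, assumed
psi = cmor_wavelet(t, fb=1e-3, fc=100.0)    # decaying envelope, 100 Hz centre frequency
```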
And S042, dividing the frequency bandwidths of the second sound signals at equal intervals to obtain a plurality of frequency band subintervals, and constructing a plurality of two-dimensional grids which correspond to the second sound signals one by one according to the frequency band subintervals.
The frequency band range of the i-th segment of the second sound signal is [f_{i min}, f_{i max}], where i = 1, 2, …, N. This frequency band range is divided at equal intervals of length Δf, giving B_i frequency band subintervals, and a two-dimensional grid of size B_i × B_i is constructed, where B_i is calculated as:

$$B_i = \left[\frac{f_{i\max} - f_{i\min}}{\Delta f}\right]$$

wherein "[ ]" denotes rounding.
The step S042 optimizes the bandwidth and the center frequency by using a grid method of the two-dimensional grid.
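A minimal sketch of the grid construction of step S042, assuming the band limits and the interval Δf of one signal segment are known; the function name is illustrative:

```python
import numpy as np

def build_parameter_grid(f_min, f_max, delta_f):
    """S042: divide [f_min, f_max] at equal intervals delta_f and build the two-dimensional
    grid of candidate (bandwidth, centre frequency) pairs; B_i = round((f_max - f_min) / delta_f)."""
    b = int(round((f_max - f_min) / delta_f))
    candidates = f_min + np.arange(b + 1) * delta_f            # g = 1 ... B_i + 1
    fb_grid, fc_grid = np.meshgrid(candidates, candidates, indexing="ij")
    return fb_grid, fc_grid                                    # rows: bandwidths, columns: centre frequencies
```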
And S043, performing wavelet transformation on the second sound signals according to the plurality of wavelet basis functions and the plurality of two-dimensional grids, and obtaining a plurality of wavelet coefficient matrixes corresponding to the plurality of second sound signals one by one.
All nodes (g1, g2) of the B_i × B_i two-dimensional grid are traversed row by row, with g1 = 1, 2, …, B_i+1 and g2 = 1, 2, …, B_i+1. The nodes of each row of the grid are assigned the bandwidth of the wavelet basis function of the i-th segment of the second sound signal, the bandwidth at the node in row g1 and column g2 being

$$f_{bi}^{(g1,g2)} = f_{i\min} + (g1-1)\,\Delta f,$$

and the nodes of each column are assigned the center frequency of the wavelet basis function, the center frequency at the node in row g1 and column g2 being

$$f_{ci}^{(g1,g2)} = f_{i\min} + (g2-1)\,\Delta f.$$

The wavelet transform of the i-th segment of the second sound signal x_i(t) is then calculated as:

$$W_i^{(g1,g2)}(j,k) = \frac{1}{\sqrt{a_j}} \int_{-\infty}^{+\infty} x_i(t)\, \psi_i^{*}\!\left(\frac{t-k}{a_j}\right)\mathrm{d}t$$

$$j = 1,\dots,H;\quad k = 1,\dots,L;\quad g1 = 1,2,\dots,B_i+1;\quad g2 = 1,2,\dots,B_i+1$$

wherein W_i^{(g1,g2)}(j,k) is the element in the j-th row and k-th column of the wavelet coefficient matrix W_i of the i-th segment of the second sound signal, ψ_i is the wavelet basis function of the i-th segment of the second sound signal, H is the number of rows of the wavelet coefficient matrix W_i, and L is the number of columns of the wavelet coefficient matrix W_i.

The scale factor a_j can be expressed as:

$$a_j = \frac{f_{ci}\, f_s}{f_{aj}}$$

The value of the scale factor should ensure that the corresponding actual frequency f_{aj} covers the frequency band range [f_{i min}, f_{i max}] of the sound signal.
The bandwidth parameter and the center frequency are optimized by using a grid method on the two-dimensional grid. Each point of the constructed grid acts as one parameter combination of bandwidth and center frequency; the continuous wavelet transform is performed for this combination, and the optimal set of parameters is selected according to the objective function set forth below.
In one embodiment, after the step S043, the identification method further includes:
and S044, respectively calculating the Shannon entropies of the wavelet coefficient matrixes, and determining a plurality of optimal wavelet basis functions according to the Shannon entropies.
The Shannon entropy of each wavelet coefficient matrix W_i^{(g1,g2)} is calculated. The bandwidth and center frequency corresponding to the two-dimensional grid node (g1, g2) at which the Shannon entropy reaches its minimum are selected as the bandwidth f_{bi}^{opt} and center frequency f_{ci}^{opt} of the optimal wavelet basis function. The Shannon entropy is calculated as:

$$S_i(g1,g2) = -\sum_{j=1}^{H} p_j \ln p_j$$

$$p_j = \frac{\sum_{k=1}^{L}\bigl|W_i^{(g1,g2)}(j,k)\bigr|^{2}}{\sum_{j=1}^{H}\sum_{k=1}^{L}\bigl|W_i^{(g1,g2)}(j,k)\bigr|^{2}}$$

wherein S_i(g1, g2) is the Shannon entropy and p_j is the ratio of the energy of the j-th row elements of the wavelet coefficient matrix W_i^{(g1,g2)} to the total energy of the matrix.
And S045, performing wavelet transformation on the plurality of second sound signals corresponding to the plurality of optimal wavelet basis functions one to one according to the plurality of optimal wavelet basis functions, and obtaining a plurality of optimized wavelet coefficient matrixes.
Performing the wavelet transform on the second sound signals of the transformer allows the time-frequency characteristics of the signals, i.e. the change of their frequency-domain characteristics over time, to be extracted effectively. The analysis effect of the wavelet transform depends on a reasonable choice of the wavelet basis function and of the analysis bands. The present application therefore extracts the time-frequency characteristics of the acoustic signals with an optimized continuous wavelet transform, which balances accuracy in the time and frequency domains and improves the accuracy of transformer working state identification.
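By way of non-limiting illustration, steps S043 to S045 can be sketched as follows. The wavelet form, its support and the scale vector are assumptions, and fb_grid / fc_grid are the candidate grids from the sketch after step S042:

```python
import numpy as np

def cmor(t, fb, fc):
    """Assumed complex Morlet basis (same form as the sketch after S041)."""
    return (1.0 / np.sqrt(np.pi * fb)) * np.exp(-t ** 2 / fb) * np.exp(2j * np.pi * fc * t)

def cwt_matrix(x, fs, fb, fc, scales):
    """S043 for a single grid node: continuous wavelet transform of one signal segment,
    one row per scale and one column per sample (the H x L coefficient matrix)."""
    t = np.arange(-512, 513) / fs                     # assumed finite support of the mother wavelet
    rows = []
    for a in scales:
        psi = cmor(t / a, fb, fc) / np.sqrt(a)
        rows.append(np.convolve(x, np.conj(psi[::-1]), mode="same"))
    return np.array(rows)

def shannon_entropy(W):
    """S044: Shannon entropy of a coefficient matrix from its row-energy ratios p_j."""
    energy = np.abs(W) ** 2
    p = energy.sum(axis=1) / energy.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def optimise_basis(x, fs, fb_grid, fc_grid, scales):
    """S043-S045 sketch: evaluate every (bandwidth, centre frequency) node, keep the pair
    whose coefficient matrix has minimum entropy, and return its optimised matrix."""
    best = (np.inf, None, None)
    for fb, fc in zip(fb_grid.ravel(), fc_grid.ravel()):
        W = cwt_matrix(x, fs, fb, fc, scales)
        s = shannon_entropy(W)
        if s < best[0]:
            best = (s, (fb, fc), W)
    return best[1], best[2]
```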
In one embodiment, the step S050 includes:
s051, respectively carrying out normalization processing on the wavelet coefficient matrixes according to columns to obtain a plurality of normalized wavelet coefficient matrixes.
The column-wise normalization of the wavelet coefficient matrix of the i-th segment of the second sound signal is calculated as:

$$U_i(k) = \frac{1}{H}\sum_{j=1}^{H} W_i(j,k)$$

$$\delta_i(k) = \frac{1}{H}\sum_{j=1}^{H}\bigl(W_i(j,k) - U_i(k)\bigr)^{2}$$

$$\overline{W}_i(j,k) = \frac{W_i(j,k) - U_i(k)}{\delta_i(k)}$$

wherein U_i(k) is the mean value of the k-th column of the wavelet coefficient matrix of the i-th segment of the second sound signal, δ_i(k) is the variance of the k-th column of the wavelet coefficient matrix of the i-th segment of the second sound signal, and the last expression gives the normalized wavelet coefficient matrix.
And S052, performing gray scale transformation on the plurality of normalized wavelet coefficient matrixes respectively to obtain a plurality of initial gray scale images.
The gray scale transformation that produces the initial gray image G'_i of the i-th segment of the second sound signal is applied column by column:

$$G'_i(j,k) = \mathrm{ceil}\!\left(\frac{\overline{W}_i(j,k) - \min_j \overline{W}_i(j,k)}{\max_j \overline{W}_i(j,k) - \min_j \overline{W}_i(j,k)}\,\bigl(2^{p}-1\bigr)\right)$$

wherein G'_i(j, k) is the gray level of the element in the j-th row and k-th column of the initial gray image G'_i of the i-th segment of the second sound signal, the ceil function denotes rounding up, p is the gray bit depth, and the k-th column elements of the normalized wavelet coefficient matrix of the i-th segment of the second sound signal are mapped together.
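A minimal sketch of the gray scale transformation of step S052, assuming a column-wise min-max mapping onto 2^p gray levels with p = 8:

```python
import numpy as np

def to_gray_image(W_norm, p=8):
    """S052: map a normalized wavelet coefficient matrix column by column onto integer
    gray levels 0 ... 2**p - 1 using the ceil (round up) function."""
    col_min = W_norm.min(axis=0, keepdims=True)
    col_max = W_norm.max(axis=0, keepdims=True)
    scaled = (W_norm - col_min) / (col_max - col_min + 1e-12)
    return np.ceil(scaled * (2 ** p - 1)).astype(np.uint8)
```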
And S053, performing smoothing filtering processing on the plurality of initial gray level images respectively to obtain a plurality of smoothed initial gray level images.
In one embodiment, in the step S053, a gaussian blur operator is used to perform a smoothing filtering process on the plurality of initial grayscale images respectively.
The smoothing filtering of the initial gray image G'_i of the i-th segment of the second sound signal is a convolution with the Gaussian blur operator F over a rectangular area of c_1 × c_2 elements:

$$G'_{1i}(j,k) = \sum_{u}\sum_{v} F(u,v)\, G'_i(j-u,\, k-v)$$

$$F(u,v) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{u^{2}+v^{2}}{2\sigma^{2}}}$$

wherein G'_{1i} is the gray image obtained by smoothing and filtering the initial gray image G'_i, F is the Gaussian blur operator, and c_1 and c_2 are the length and width of the rectangular area.
And S054, respectively carrying out sharpening processing and gray correction on the initial gray level images after the plurality of smoothing processing to obtain a plurality of sound gray level images.
The smoothed gray image G'_{1i} of the i-th segment of the second sound signal is sharpened to obtain the sharpened gray image G'_{2i}. The gray scale (gamma) correction applied to the sharpened gray image G'_{2i} of the i-th segment of the second sound signal is:

$$G'_{3i}(j,k) = \bigl(G'_{2i}(j,k)\bigr)^{\gamma}$$

where γ is a gamma correction coefficient, and usually γ < 1.
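Steps S053 and S054 can be sketched together as follows; the Gaussian sigma and the Laplacian-based sharpening are assumptions, while γ = 0.5 and the small neighbourhood follow the values given in the embodiment below:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_gray_image(G, sigma=1.0, gamma=0.5):
    """S053-S054: Gaussian smoothing over a small neighbourhood, Laplacian-style
    sharpening and gamma correction of a sound gray level image."""
    g = G.astype(np.float64) / max(float(G.max()), 1.0)         # work in [0, 1]
    smoothed = gaussian_filter(g, sigma=sigma)                   # S053: Gaussian blur
    laplace = (np.roll(smoothed, 1, 0) + np.roll(smoothed, -1, 0) +
               np.roll(smoothed, 1, 1) + np.roll(smoothed, -1, 1) - 4.0 * smoothed)
    sharpened = np.clip(smoothed - laplace, 0.0, 1.0)            # S054: emphasise edges
    corrected = sharpened ** gamma                               # gamma correction, gamma < 1
    return (corrected * 255).astype(np.uint8)
```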
In one embodiment, before the step S200, the identification method further includes:
and S02, collecting the sound signal to be detected of the transformer to be identified, and processing the sound signal to be detected to obtain the gray level image of the sound to be detected corresponding to the sound signal to be detected. The processing method of the sound signal to be measured refers to the step S01 and its related steps included therein.
In one embodiment, the step S02 includes:
and S021, acquiring the to-be-detected sound signal of the to-be-identified transformer according to the set sampling frequency and the set sampling duration.
S022, performing segmented windowing on the sound signal to be detected to obtain a plurality of segmented sound signals to be detected.
S023, performing fourier transform on the plurality of segmented voice signals to be detected to obtain a plurality of frequency spectrum distributions of the segmented voice signals to be detected, which correspond to the plurality of segmented voice signals to be detected one by one.
S024, performing wavelet transformation on the segmented to-be-detected sound signals according to the frequency spectrum distribution of the segmented to-be-detected sound signals to obtain a plurality of to-be-detected wavelet coefficient matrixes corresponding to the segmented to-be-detected sound signals one by one.
And S025, performing gray level transformation on the wavelet coefficient matrixes to be tested respectively to obtain a plurality of gray level images of the sound to be tested under various working states.
In one embodiment, in step S010, the sampling frequency is f_s and the acquisition time of the sound signal in each state is T; the number of different working states is denoted M. Here, f_s is 50 kHz, T is 10 min and M is 4, the working states of the transformer being normal, winding loosening, iron core loosening and current overload. The working states are denoted numerically: "normal" is denoted by "1", "winding loose" by "2", "core loose" by "3" and "current overload" by "4".
In step S020, the first sound signal x(t) is segmented and windowed to obtain N segments of the second sound signal. Each segment of the second sound signal has a length L, adjacent segments overlap by a length O, and each segment of length L can be regarded as a stationary signal. Here, N is 800, L is 51200 and O is 10240.
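A minimal sketch of the segmentation and windowing of step S020 with the values of this embodiment; the Hann window is an assumption, as the window type is not specified in the patent:

```python
import numpy as np

def segment_and_window(x, seg_len=51200, overlap=10240):
    """S020: split the first sound signal into overlapping segments of length L = 51200
    with an overlap O = 10240 between adjacent segments, and window each segment."""
    hop = seg_len - overlap
    window = np.hanning(seg_len)
    n_segments = 1 + (len(x) - seg_len) // hop
    return np.stack([x[i * hop:i * hop + seg_len] * window for i in range(n_segments)])
```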
In step S053, c_1 and c_2 are the length and width of the rectangular area, with c_1 = c_2 = 3.
In step S054, γ is 0.5.
In step S122, c is 8, h is 4, and S is 2.
In step S123, the activation function is the ReLU function:

$$\sigma(x) = \max(0, x)$$
in step S124, the pooling layer of the convolutional neural network performs resampling processing on the output result of the excitation layer to reduce the data dimension of the output result of the excitation layer, and uses the data dimension as the output of the pooling layer. Here, a max pooling process is used to replace the original region with the maximum of the elements in the 4 x 4 region.
In step S125, the connection layer is a forward neural network including l layers of neurons, where l is 4.
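By way of non-limiting illustration, the network of this embodiment (8 convolution kernels of size 4 × 4 with stride 2, ReLU, 4 × 4 max pooling, a 4-layer fully connected part and a softmax output over the M = 4 working states) can be sketched in PyTorch; the input image size and the widths of the hidden fully connected layers are assumptions:

```python
import torch
import torch.nn as nn

class VoiceprintCNN(nn.Module):
    """Sketch of the convolutional neural network of the embodiment."""
    def __init__(self, img_size=128, n_states=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=4, stride=2),   # S122: c = 8, h = 4, s = 2
            nn.ReLU(),                                   # S123: ReLU activation
            nn.MaxPool2d(4),                             # S124: 4 x 4 max pooling
        )
        feat = self.features(torch.zeros(1, 1, img_size, img_size)).numel()
        self.classifier = nn.Sequential(                 # S125: fully connected part (l = 4)
            nn.Linear(feat, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 16), nn.ReLU(),
            nn.Linear(16, n_states),                     # softmax applied by the loss / at inference
        )

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```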
In step S150, the set value δ is 1%.
The step S200 includes:
s210, inputting the to-be-identified sound gray level image of the to-be-identified transformer into the voiceprint identification model to obtain a plurality of numbers corresponding to the working state.
S220, selecting the maximum value of the numbers, and obtaining the working state corresponding to the maximum value, namely the working state of the transformer to be identified.
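A minimal usage sketch of steps S210 and S220, assuming a trained model (for example the VoiceprintCNN sketch above) and a preprocessed gray level image tensor of the sound to be detected:

```python
import torch

def identify_state(model, gray_image):
    """S210-S220: feed the to-be-detected sound gray level image to the voiceprint
    recognition model and map the largest output to the numeric state labels of the
    embodiment (1 normal, 2 winding loose, 3 core loose, 4 current overload)."""
    state_names = {1: "normal", 2: "winding loose", 3: "core loose", 4: "current overload"}
    model.eval()
    with torch.no_grad():
        scores = torch.softmax(model(gray_image.unsqueeze(0).unsqueeze(0)), dim=1)
    return state_names[int(scores.argmax().item()) + 1]
```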
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-described examples merely represent several embodiments of the present application and are not to be construed as limiting the scope of the claims. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (7)

1. A transformer working state identification method based on a voiceprint identification model is characterized by comprising the following steps:
s100, selecting a plurality of sound gray level images of a transformer in various working states to train a convolutional neural network, and establishing a voiceprint recognition model, wherein the sound gray level images are used as input parameters of the convolutional neural network, and a plurality of pieces of working state information which corresponds to the sound gray level images one to one are used as output parameters of the convolutional neural network;
s200, inputting a to-be-detected sound gray image of the to-be-detected transformer into the voiceprint recognition model, and acquiring working state information corresponding to the to-be-detected sound gray image;
the step S100 further includes:
s01, acquiring a plurality of first sound signals under the plurality of working states, where each working state corresponds to a plurality of first sound signals, and processing the plurality of first sound signals to obtain a plurality of sound gray level images corresponding to the first sound signals one to one;
in the step S01, the step of processing the plurality of first sound signals includes:
s010, setting the sampling frequency and the sampling duration of the sound signal of the transformer, and collecting a plurality of first sound signals under various working states;
s020, performing segmented windowing on each first sound signal respectively to obtain a plurality of second sound signals;
s030, performing fourier transform on the plurality of second sound signals, respectively, to obtain spectral distributions of the plurality of second sound signals;
s040, performing wavelet transform on the second sound signal according to the frequency spectrum distribution of the second sound signal to obtain a plurality of wavelet coefficient matrixes corresponding to the second sound signal one by one;
s050, carrying out gray level transformation on the wavelet coefficient matrixes respectively to obtain a plurality of sound gray level images in the plurality of working states;
the step S040 includes:
s041, respectively selecting a plurality of wavelet basis functions corresponding to the second sound signals one to one, where the wavelet basis functions are:
$$\psi_i(t) = \frac{1}{\sqrt{\pi f_{bi}}}\, e^{-t^{2}/f_{bi}}\, e^{\,\mathrm{j}2\pi f_{ci} t}$$

wherein ψ_i(t) is the wavelet basis function of the i-th segment of the sound signal, f_{bi} is the bandwidth of the wavelet basis function, and f_{ci} is the center frequency;
s042, dividing the frequency bandwidths of the second sound signals at equal intervals to obtain a plurality of frequency band subintervals, and constructing a plurality of two-dimensional grids which correspond to the second sound signals one by one according to the frequency band subintervals;
s043, performing wavelet transform on the second sound signal according to the plurality of wavelet basis functions and the plurality of two-dimensional grids, and obtaining a plurality of wavelet coefficient matrices corresponding to the second sound signal one to one, where the wavelet transform calculation formula is:
$$W_i^{(g1,g2)}(j,k) = \frac{1}{\sqrt{a_j}}\int_{-\infty}^{+\infty} x_i(t)\,\psi_i^{*}\!\left(\frac{t-k}{a_j}\right)\mathrm{d}t$$

$$j = 1,\dots,H;\quad k = 1,\dots,L;\quad g1 = 1,2,\dots,B_i+1;\quad g2 = 1,2,\dots,B_i+1$$

wherein W_i^{(g1,g2)}(j,k) is the element in the j-th row and k-th column of the wavelet coefficient matrix W_i of the i-th segment of the second sound signal, ψ_i is the wavelet basis function of the i-th segment of the second sound signal, H is the number of rows of the wavelet coefficient matrix W_i of the i-th segment of the second sound signal, L is the number of columns of the wavelet coefficient matrix W_i, B_i is the number of frequency band subintervals, and a_j is the scale factor;
the step S043 is followed by:
s044, respectively calculating the Shannon entropies of the wavelet coefficient matrixes, and determining a plurality of optimal wavelet basis functions according to the Shannon entropies, wherein the calculation formula of the Shannon entropies is as follows:
$$S_i(g1,g2) = -\sum_{j=1}^{H} p_j \ln p_j$$

$$p_j = \frac{\sum_{k=1}^{L}\bigl|W_i^{(g1,g2)}(j,k)\bigr|^{2}}{\sum_{j=1}^{H}\sum_{k=1}^{L}\bigl|W_i^{(g1,g2)}(j,k)\bigr|^{2}}$$

wherein S_i(g1, g2) is the Shannon entropy and p_j is the ratio of the energy of the j-th row elements of the wavelet coefficient matrix W_i^{(g1,g2)} to the total energy of the wavelet coefficient matrix;
s045, performing wavelet transformation on a plurality of second sound signals corresponding to the optimal wavelet basis functions one by one according to the optimal wavelet basis functions, and obtaining a plurality of optimized wavelet coefficient matrixes;
the step S050 includes:
s051, respectively carrying out normalization processing on the wavelet coefficient matrixes according to columns to obtain a plurality of normalized wavelet coefficient matrixes;
s052, performing gray scale transformation on the normalized wavelet coefficient matrices to obtain initial gray scale images;
s053, performing smoothing filtering processing on the plurality of initial gray level images respectively to obtain a plurality of smoothed initial gray level images;
and S054, respectively carrying out sharpening processing and gray correction on the initial gray level images after the smoothing processing to obtain a plurality of sound gray level images.
2. The transformer operating state recognition method based on the voiceprint recognition model according to claim 1, wherein the step S100 comprises:
s110, randomly selecting the sound gray level images under various working states of the transformer, dividing the sound gray level images into a training sample set and a testing sample set, and correspondingly setting the various working states as a plurality of working state information corresponding to the sound gray level images one by one;
and S120, taking the sound gray images in the training sample set as the input of the convolutional neural network, taking the working state information as the output of the convolutional neural network, and training the convolutional neural network.
3. The method for recognizing the operating condition of the transformer based on the voiceprint recognition model according to claim 2, wherein after the step S120, the method further comprises:
s130, inputting the plurality of sound gray level images in the test sample set into the trained convolutional neural network;
s140, recording a plurality of test working state information which is output by the voiceprint recognition model and corresponds to the plurality of sound gray level images one by one, and calculating the recognition rate of the trained convolutional neural network according to the plurality of test working state information;
s150, if the change rate of the recognition rate of the convolutional neural network is smaller than a set value, establishing the voiceprint recognition model according to the trained convolutional neural network.
4. The method for recognizing the operating state of the transformer based on the voiceprint recognition model according to claim 3, wherein after the step S140, the method further comprises:
s141, if a rate of change of the recognition rate of the convolutional neural network is greater than a set value, performing the S120 to the S150.
5. The transformer operating state identification method based on the voiceprint recognition model according to claim 1, wherein in step S053, a gaussian blur operator is adopted to perform smoothing filtering processing on the plurality of initial gray level images respectively.
6. The method for recognizing the operating state of the transformer based on the voiceprint recognition model according to claim 1, wherein the step S200 is preceded by:
and S02, collecting the sound signal to be detected of the transformer to be identified, and processing the sound signal to be detected to obtain the gray level image of the sound to be detected corresponding to the sound signal to be detected.
7. The method for recognizing the operating state of the transformer based on the voiceprint recognition model according to claim 6, wherein the step S02 comprises:
s021, collecting the sound signal to be detected of the transformer to be identified according to the set sampling frequency and the set sampling duration;
s022, performing segmented windowing on the sound signal to be detected to obtain a plurality of segmented sound signals to be detected;
s023, performing Fourier transform on the segmented to-be-detected sound signals to obtain the frequency spectrum distribution of the segmented to-be-detected sound signals corresponding to the segmented to-be-detected sound signals one by one;
s024, performing wavelet transformation on the segmented to-be-detected sound signals according to the frequency spectrum distribution of the segmented to-be-detected sound signals to obtain a plurality of to-be-detected wavelet coefficient matrixes corresponding to the segmented to-be-detected sound signals one by one;
and S025, performing gray level transformation on the wavelet coefficient matrixes to be tested respectively to obtain a plurality of gray level images of the sound to be tested under various working states.
CN201910561468.0A 2019-06-26 2019-06-26 Transformer working state identification method based on voiceprint identification model Active CN110415709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910561468.0A CN110415709B (en) 2019-06-26 2019-06-26 Transformer working state identification method based on voiceprint identification model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910561468.0A CN110415709B (en) 2019-06-26 2019-06-26 Transformer working state identification method based on voiceprint identification model

Publications (2)

Publication Number Publication Date
CN110415709A CN110415709A (en) 2019-11-05
CN110415709B true CN110415709B (en) 2022-01-25

Family

ID=68359737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910561468.0A Active CN110415709B (en) 2019-06-26 2019-06-26 Transformer working state identification method based on voiceprint identification model

Country Status (1)

Country Link
CN (1) CN110415709B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222285A (en) * 2019-12-31 2020-06-02 国网安徽省电力有限公司 Transformer high active value prediction method based on voiceprint and neural network
CN111579056A (en) * 2020-05-19 2020-08-25 北京快鱼电子股份公司 Transformer direct-current magnetic bias prediction method and system
CN111735533B (en) * 2020-06-08 2022-05-13 贵州电网有限责任公司 Transformer direct-current magnetic bias judgment method based on vibration signal wavelet energy spectrum characteristics
CN111929542B (en) * 2020-07-03 2023-05-26 北京国网富达科技发展有限责任公司 Power equipment diagnosis method and system
CN112420055A (en) * 2020-09-22 2021-02-26 甘肃同兴智能科技发展有限公司 Substation state identification method and device based on voiceprint characteristics
CN112735436A (en) * 2021-01-21 2021-04-30 国网新疆电力有限公司信息通信公司 Voiceprint recognition method and voiceprint recognition system
CN113985156A (en) * 2021-09-07 2022-01-28 绍兴电力局柯桥供电分局 Intelligent fault identification method based on transformer voiceprint big data
CN115728612A (en) * 2022-12-01 2023-03-03 广州广电计量检测股份有限公司 Transformer discharge fault diagnosis method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6839673B1 (en) * 1999-03-29 2005-01-04 Markany Inc. Digital watermarking method and apparatus for audio data
JP2008281898A (en) * 2007-05-14 2008-11-20 Univ Of Tokyo Signal processing method and device
CN107330405A (en) * 2017-06-30 2017-11-07 上海海事大学 Remote sensing images Aircraft Target Recognition based on convolutional neural networks
CN108846323A (en) * 2018-05-28 2018-11-20 哈尔滨工程大学 A kind of convolutional neural networks optimization method towards Underwater Targets Recognition
CN109740523A (en) * 2018-12-29 2019-05-10 国网陕西省电力公司电力科学研究院 A kind of method for diagnosing fault of power transformer based on acoustic feature and neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6839673B1 (en) * 1999-03-29 2005-01-04 Markany Inc. Digital watermarking method and apparatus for audio data
JP2008281898A (en) * 2007-05-14 2008-11-20 Univ Of Tokyo Signal processing method and device
CN107330405A (en) * 2017-06-30 2017-11-07 上海海事大学 Remote sensing images Aircraft Target Recognition based on convolutional neural networks
CN108846323A (en) * 2018-05-28 2018-11-20 哈尔滨工程大学 A kind of convolutional neural networks optimization method towards Underwater Targets Recognition
CN109740523A (en) * 2018-12-29 2019-05-10 国网陕西省电力公司电力科学研究院 A kind of method for diagnosing fault of power transformer based on acoustic feature and neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Analysis of Transformer Vibration Signals Based on Convolutional Neural Networks" (《基于卷积神经网络的变压器振动信号分析》), Su Shiwei et al., Guangdong Electric Power (《广东电力》), vol. 31, no. 6, June 2018, pp. 127-132 *
"Research on Acoustic Fault Diagnosis of Rolling Bearings Based on Adaptive Morlet Wavelet Transform" (《基于自适应Morlet小波变换滚动轴承声学故障诊断的研究》), Li Jingjiao et al., Journal of Shijiazhuang Tiedao University (Natural Science Edition) (《石家庄铁道大学学报(自然科学版)》), vol. 30, no. 3, September 2017, pp. 29-32, 47 *

Also Published As

Publication number Publication date
CN110415709A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110415709B (en) Transformer working state identification method based on voiceprint identification model
CN107909118B (en) Power distribution network working condition wave recording classification method based on deep neural network
CN109884459B (en) Intelligent online diagnosis and positioning method for winding deformation of power transformer
CN110398647B (en) Transformer state monitoring method
CN111160171B (en) Radiation source signal identification method combining two-domain multi-features
CN108761287B (en) Transformer partial discharge type identification method
CN106443379A (en) Transformer partial discharge fault type identifying method and transformer partial discharge fault type identifying device
CN108154223B (en) Power distribution network working condition wave recording classification method based on network topology and long time sequence information
CN109142851A (en) A kind of novel power distribution network internal overvoltage recognition methods
CN102279358A (en) MCSKPCA based neural network fault diagnosis method for analog circuits
CN102982351A (en) Porcelain insulator vibrational acoustics test data sorting technique based on back propagation (BP) neural network
CN111914705A (en) Signal generation method and device for improving health state evaluation accuracy of reactor
CN114881093B (en) Signal classification and identification method
CN115728612A (en) Transformer discharge fault diagnosis method and device
CN112599134A (en) Transformer sound event detection method based on voiceprint recognition
CN112182961A (en) Large-scale fading modeling prediction method for wireless network channel of converter station
CN117110744A (en) Transformer fault diagnosis method and system based on voiceprint analysis
CN112486137A (en) Method and system for constructing fault feature library of active power distribution network and fault diagnosis method
CN111079647A (en) Circuit breaker defect identification method
CN105842588A (en) Method and system for correcting ultrasonic partial discharge detection
CN112735468A (en) MFCC-based automobile seat motor abnormal noise detection method
CN116189711B (en) Transformer fault identification method and device based on acoustic wave signal monitoring
CN112686182A (en) Partial discharge mode identification method and terminal equipment
CN117219124A (en) Switch cabinet voiceprint fault detection method based on deep neural network
CN110825583B (en) Energy efficiency qualitative assessment technology for multi-index fusion of cloud data center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant