CN114882901A - Shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback - Google Patents

Shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback

Info

Publication number
CN114882901A
Authority
CN
China
Prior art keywords
frequency
time
spectrum
feedback
frequency domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210446792.XA
Other languages
Chinese (zh)
Inventor
曹天宇
朱彩云
赵晓群
杨一晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202210446792.XA priority Critical patent/CN114882901A/en
Publication of CN114882901A publication Critical patent/CN114882901A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention relates to a shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback. S1: perform complementary ensemble empirical mode decomposition on the shrimp acoustic signal to obtain several intrinsic mode function components, perform Hilbert spectral analysis on these components, and normalize the result to obtain a time-frequency spectrogram of the signal; S2: construct a frequency domain convolution vector according to the actual situation and apply frequency domain convolution to the normalized spectrogram to obtain a frequency-smoothed time-frequency spectrogram; S3: compute the marginal spectrum and, using its ability to reflect the frequency characteristics of the time-frequency spectrum, apply primary and secondary feedback to the frequency-smoothed spectrogram to obtain optimized spectrograms under different numbers of marginal spectrum feedback passes; S4: compare and analyze the optimized spectrograms under the different feedback counts and combine their representations to obtain the time-frequency features of the shrimp acoustic signal.

Description

Shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback
Technical Field
The invention relates to the field of shrimp acoustic signal processing, in particular to a shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback.
Background
Shrimp acoustic signals arise mainly from feeding and communication behaviors and carry rich biological information. In practical production, analysis of these signals can inform studies of shrimp habits and support high-density intelligent aquaculture. Because shrimp acoustic signals are non-stationary, a single global time-domain or frequency-domain analysis such as the Fourier transform cannot fully reveal their characteristics, so joint time-frequency analysis is required. In addition, shrimp acoustic signals are typically short and abrupt, so the analysis method must also provide sufficient frequency resolution.
The most common time-frequency analysis methods are the short-time Fourier transform, the wavelet transform and the Hilbert-Huang transform. The short-time Fourier transform uses a fixed window and therefore adapts poorly to signals whose content shifts between high and low frequencies; the wavelet transform offers multi-resolution analysis but, like the Fourier transform, is bound by the uncertainty principle, so its frequency resolution is limited. The Hilbert-Huang transform, by contrast, steps outside the Fourier framework: as an adaptive transform driven by the signal itself it requires no choice of basis functions and achieves high frequency resolution, so the frequency resolution in densely populated regions of the time-frequency spectrum is markedly higher than that of the short-time Fourier transform or the wavelet transform.
However, the original Hilbert-Huang transform algorithm still suffers from mode mixing and time-frequency spectrum information redundancy. Intermittent components, often caused by abnormal events in the signal, change the distribution of the signal's extrema during the empirical mode decomposition stage of the Hilbert-Huang transform, so local envelopes of those events appear in the fitted envelope and the resulting intrinsic mode function components exhibit mode mixing, that is, erroneous components. At the same time, the high frequency resolution of the Hilbert-Huang transform makes the time-frequency spectrum overly detailed: it contains so much information that the spectrum becomes redundant and the signal's time-frequency features are difficult to extract from it directly. To better extract the time-frequency features of shrimp acoustic signals, the mode mixing and information redundancy of the original Hilbert-Huang transform must both be addressed.
Disclosure of Invention
The invention aims to provide a shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback that solves the mode mixing and time-frequency spectrum information redundancy problems of the Hilbert-Huang transform algorithm.
The purpose of the invention can be realized by the following technical scheme:
A shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback is characterized in that frequency domain convolution processing and marginal spectrum feedback are applied to analyze and extract features from the time-frequency spectrum of the shrimp acoustic signal; the method comprises the following steps:
S1: perform complementary ensemble empirical mode decomposition on the shrimp acoustic signal to obtain several intrinsic mode function components, perform Hilbert spectral analysis on these components, and normalize the result to obtain a time-frequency spectrogram of the shrimp acoustic signal;
S2: construct a frequency domain convolution vector according to the actual situation and apply frequency domain convolution to the normalized spectrogram so as to smooth it along the frequency axis, obtaining a frequency-smoothed time-frequency spectrogram;
S3: compute the marginal spectrum and, using its ability to reflect the frequency characteristics of the time-frequency spectrum, apply primary and secondary feedback to the frequency-smoothed spectrogram to obtain optimized spectrograms under different numbers of marginal spectrum feedback passes;
S4: compare and analyze the optimized spectrograms obtained under the different feedback counts and combine their spectrogram representations to obtain the time-frequency features of the shrimp acoustic signal (a minimal code sketch of steps S1-S4 is given below).
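For orientation, the following minimal sketch (Python, assuming NumPy is available) strings steps S1-S4 together. The helper names ceemd, hilbert_spectrum, smooth_frequency_axis and marginal_spectrum_feedback are hypothetical placeholders for the operations detailed in the steps below, not functions defined by the patent, and the parameter values are illustrative only.

```python
import numpy as np

def extract_time_frequency_features(x, fs, kernel=(1, 2, 4, 2, 1)):
    """Hedged end-to-end sketch of steps S1-S4 (helper functions are sketched later)."""
    # S1: complementary ensemble EMD + Hilbert spectral analysis, then normalization
    imfs = ceemd(x, noise_std=0.15 * np.std(x), trials=50)   # see S11-S14
    H = hilbert_spectrum(imfs, fs)                           # see S16 (already normalized)
    # S2: smooth the time-frequency spectrogram along the frequency axis
    H_smooth = smooth_frequency_axis(H, kernel)              # see S21-S23
    # S3: primary and secondary marginal spectrum feedback
    H_fb1 = marginal_spectrum_feedback(H_smooth)             # see S31-S32
    H_fb2 = marginal_spectrum_feedback(H_fb1)                # see S33
    # S4: the analyst compares and combines both optimized spectrograms
    return H_fb1, H_fb2
```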
Further, the shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback is characterized in that step S1 comprises, in its preliminary stage, complementary ensemble empirical mode decomposition and Hilbert spectral analysis; optionally, the time-frequency spectrum obtained from the Hilbert spectral analysis is normalized.
Further, the shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback is characterized in that in step S1 the shrimp acoustic signal is decomposed by complementary ensemble empirical mode decomposition into intrinsic mode function components, which are then subjected to Hilbert spectral analysis to obtain the normalized time-frequency spectrum. The complementary ensemble empirical mode decomposition and Hilbert spectral analysis comprise the following steps:
S11: let the original signal be x(t) and add a pair of complementary (positive and negative) white noise sequences to x(t):

x_{b1}(t) = x(t) + (-1)^{b} \sigma n(t)

S12: decompose x_{b1}(t) by empirical mode decomposition into n intrinsic mode function components and a remainder R(t):

x_{b1}(t) = \sum_{i=1}^{n} \mathrm{IMF}_{i}^{(b,1)}(t) + R^{(b,1)}(t)

S13: repeat steps S11 and S12 m times with different white noise sequences; the j-th decomposition result can be expressed as

x_{bj}(t) = \sum_{i=1}^{n} \mathrm{IMF}_{i}^{(b,j)}(t) + R^{(b,j)}(t)

S14: sum and average the corresponding intrinsic mode function components of all decompositions to obtain the final intrinsic mode functions:

\mathrm{IMF}_{i}(t) = \frac{1}{2m} \sum_{b=1}^{2} \sum_{j=1}^{m} \mathrm{IMF}_{i}^{(b,j)}(t)

where b = 1, 2; n(t) is a white noise sequence with zero mean and unit variance; n is the number of IMFs; m is the number of ensemble trials; and σ is the standard deviation of the added noise, generally taken as 0.1 to 0.2 times the standard deviation of the signal. This completes the complementary ensemble empirical mode decomposition; the method then proceeds to the Hilbert spectral analysis step.
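As an illustration of S11-S14, the sketch below pairs each white noise realization with its negation, decomposes every noisy copy with a standard EMD, and averages the corresponding IMFs. It assumes the PyEMD package (distributed as EMD-signal) supplies the inner empirical mode decomposition; the function name ceemd and the default parameters are assumptions of this sketch rather than part of the patent.

```python
import numpy as np
from PyEMD import EMD  # assumed dependency: the EMD-signal package

def ceemd(x, noise_std=None, trials=50, max_imf=8, seed=0):
    """Complementary ensemble EMD sketch: average IMFs over +/- white-noise pairs."""
    rng = np.random.default_rng(seed)
    if noise_std is None:
        noise_std = 0.15 * np.std(x)              # 0.1-0.2 times the signal std, per S14
    emd = EMD()
    stacks = []
    for _ in range(trials):                       # m ensemble trials
        n = rng.standard_normal(len(x))           # zero-mean, unit-variance noise n(t)
        for sign in (1.0, -1.0):                  # complementary pair: +sigma*n and -sigma*n
            imfs = emd.emd(x + sign * noise_std * n, max_imf=max_imf)[:max_imf]
            if imfs.shape[0] < max_imf:           # pad so every trial yields max_imf rows
                pad = np.zeros((max_imf - imfs.shape[0], len(x)))
                imfs = np.vstack([imfs, pad])
            stacks.append(imfs)
    return np.mean(stacks, axis=0)                # averaged IMFs, shape (max_imf, len(x))
```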
S16: the generated intrinsic mode function component is analyzed by a Hilbert spectrum to obtain a Hilbert amplitude spectrum, and the expression is
Figure BDA0003617241060000034
Wherein w is the angular frequency,
Figure BDA0003617241060000035
y i (t) is IMF i (t) a Hilbert transform value,
Figure BDA0003617241060000036
p is the value of the cauchy principle,
Figure BDA0003617241060000041
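To illustrate S16, the sketch below builds a discrete Hilbert amplitude spectrum: the analytic signal of each IMF (via scipy.signal.hilbert) gives an instantaneous amplitude a_i(t) and instantaneous frequency, and the amplitudes are accumulated onto a frequency-time grid. The function name hilbert_spectrum, the number of frequency bins and the final normalization are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum(imfs, fs, n_freq_bins=256):
    """Accumulate each IMF's instantaneous amplitude onto a (frequency, time) grid."""
    n_samples = imfs.shape[1]
    H = np.zeros((n_freq_bins, n_samples))
    f_max = fs / 2.0
    for imf in imfs:
        analytic = hilbert(imf)                              # IMF_i(t) + j*y_i(t)
        amp = np.abs(analytic)                               # a_i(t)
        phase = np.unwrap(np.angle(analytic))                # theta_i(t)
        inst_freq = np.gradient(phase) * fs / (2 * np.pi)    # instantaneous frequency in Hz
        inst_freq = np.clip(inst_freq, 0.0, f_max)
        rows = np.minimum((inst_freq / f_max * (n_freq_bins - 1)).astype(int),
                          n_freq_bins - 1)
        H[rows, np.arange(n_samples)] += amp                 # bin the amplitude at each instant
    return H / (H.max() + 1e-12)                             # normalized time-frequency spectrogram
```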
Further, the shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback is characterized in that in step S2 a frequency domain convolution vector is constructed according to the actual requirements and used to convolve the time-frequency spectrum along the frequency axis, thereby smoothing it; optionally, different frequency domain convolution vectors can be constructed for the task at hand, each yielding a correspondingly different smoothed representation of the time-frequency spectrum. The frequency domain convolution process comprises the following steps:
s21: the hilbert amplitude spectrum obtained in step S16 may be rewritten into a discrete form, and its expression is:
Figure BDA0003617241060000042
in the formula s ij ,i∈[1,k],j∈[1,l]Representing amplitude values of corresponding points in a time-frequency domain; give a
Figure BDA0003617241060000043
And
Figure BDA0003617241060000044
constructs a frequency domain convolution vector K, denoted as:
Figure BDA0003617241060000045
s22: according to given
Figure BDA0003617241060000046
Zero-filling S to obtain matrix S 0 ,S 0 The expression of (a) is:
Figure BDA0003617241060000047
s23: convolving a matrix S with a frequency domain convolution vector K 0 . Center point of K is from s 11 Slide one by one to s lk . Is s' pq ,p∈[1,k],q∈[1,l]Indicating that it was originally at S0
Figure BDA0003617241060000048
Row and q column points, the convolution value of which is:
Figure BDA0003617241060000049
the new value of the convolved point is:
Figure BDA0003617241060000051
the time spectrum after the frequency domain convolution process can be expressed as:
Figure BDA0003617241060000052
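A minimal sketch of the frequency-axis smoothing of S21-S23: each time column of the spectrogram is convolved with the kernel K along the frequency dimension, with zero padding at the band edges, and the result is divided by the kernel sum so amplitudes stay on the original scale. The function name smooth_frequency_axis and the normalization choice are assumptions of this sketch.

```python
import numpy as np

def smooth_frequency_axis(H, kernel):
    """Convolve a (frequency, time) spectrogram with vector K along the frequency axis."""
    k = np.asarray(kernel, dtype=float).ravel()
    if k.size % 2 == 0:
        raise ValueError("kernel length should be odd so it has a center point")
    pad = k.size // 2
    H0 = np.pad(H, ((pad, pad), (0, 0)))        # zero-fill above and below the band (S22)
    out = np.empty_like(H, dtype=float)
    for q in range(H.shape[1]):                 # slide the kernel down every time column (S23)
        out[:, q] = np.convolve(H0[:, q], k, mode="valid")
    return out / k.sum()                        # weighted-mean smoothing of the frequency axis
```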
Further, the shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback is characterized in that in step S3 the marginal spectrum is computed and, exploiting the fact that the marginal spectrum reflects the frequency characteristics of the time-frequency spectrum, primary and secondary feedback are applied to the frequency-smoothed time-frequency spectrogram to obtain optimized spectrograms under different numbers of marginal spectrum feedback passes. The feedback process comprises the following steps:
S31: compute the marginal spectrum of the Hilbert time-frequency spectrum of step S16:

h(w) = \int_{0}^{T} H(w,t)\, dt

where w is the signal angular frequency (f the corresponding signal frequency) and T is the signal time span. The marginal spectrum can be viewed as a frequency spectrum of the signal, reflecting the intensity of its different frequency components. Feeding the marginal spectrum values back into the time-frequency spectrum therefore increases the contrast between high-intensity and low-intensity frequency components, so that instantaneous frequency points of different strengths can be distinguished more clearly.
S32: in order to avoid the loss of strong and weak energy discrimination caused by too large feedback amplitude, the average marginal spectrum is used as a feedback factor. The time frequency spectrum expression after the first feedback optimization is as follows:
Figure BDA0003617241060000054
the discrete expression form is as follows:
Figure BDA0003617241060000055
in which l is S p K is S p The number of rows of (a) to (b),
Figure BDA0003617241060000061
is S p (ii) a Hilbert marginal spectrum of;
s33: in order to highlight the strong frequency component, performing secondary feedback processing on the time frequency spectrum by using the marginal frequency, wherein the obtained optimized time frequency spectrum expression is as follows:
Figure BDA0003617241060000062
whose discrete expression is
Figure BDA0003617241060000063
In which l is S z K is S z Number of lines of (A) z Is S z The Hilbert marginal spectrum of (A) is,
Figure BDA0003617241060000064
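A sketch of the feedback in S31-S33: the marginal spectrum is obtained by summing the spectrogram over time, each frequency row is then weighted by its marginal value divided by the average marginal value, and running the same pass twice yields the primary and secondary feedback results. The function names and the exact multiplicative weighting are assumptions consistent with the description above, not a verbatim transcription of the patent's formulas.

```python
import numpy as np

def marginal_spectrum(H):
    """Hilbert marginal spectrum h(w): integrate the (frequency, time) spectrogram over time."""
    return H.sum(axis=1)

def marginal_spectrum_feedback(H, eps=1e-12):
    """One feedback pass: weight every frequency row by h(w) / mean(h)."""
    h = marginal_spectrum(H)
    factor = h / (h.mean() + eps)       # average marginal spectrum used as the feedback factor
    return H * factor[:, np.newaxis]

# Primary (S32) and secondary (S33) feedback:
# H_fb1 = marginal_spectrum_feedback(H_smooth)
# H_fb2 = marginal_spectrum_feedback(H_fb1)
```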
Further, the shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback is characterized in that in step S4 the optimized time-frequency spectrograms obtained under different numbers of marginal spectrum feedback passes are compared and analyzed, and the time-frequency features of the shrimp acoustic signal are obtained by combining the spectrogram representations. The primary marginal spectrum feedback improves the discrimination between different frequency components in the time-frequency spectrum and makes the result more intuitive; the secondary feedback processes the primary result again, highlighting the key frequency components of the spectrum and filtering out the others, so the time-frequency features of the shrimp acoustic signal can be extracted by jointly analyzing the primary-feedback and secondary-feedback spectrograms.
The invention thus discloses a shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback, characterized by complementary ensemble empirical mode decomposition, Hilbert spectral analysis, frequency domain convolution processing of the time-frequency spectrum, and primary and secondary frequency domain marginal spectrum feedback on the time-frequency spectrum, with the shrimp acoustic signal processed through these steps.
Compared with the prior art, the invention has the following advantages:
the method introduces complementary set empirical mode decomposition to decompose the signal, effectively avoids the mode aliasing problem which is often caused by empirical mode decomposition of shrimp acoustic signals, improves the discrimination between different frequency components by using frequency domain convolution vectors and primary feedback of a marginal spectrum to a time frequency spectrum, reduces the spectrum information redundancy brought by the high frequency resolution of the original Hilbert spectrum analysis, emphasizes the strong frequency components of the time frequency spectrum of the signal by secondary feedback of the marginal spectrum to the time frequency spectrum, reflects the strong frequency components to the extraction of the time frequency characteristics of the shrimp acoustic signals, can effectively improve the intuition of the time frequency spectrum and the convenience of the extraction of the time frequency characteristics, and improves the accuracy of the characteristic extraction by the integration of primary feedback and secondary feedback results.
Drawings
FIG. 1 is a time domain waveform of a valid shrimp acoustic signal sample;
FIG. 2 is a signal time-frequency spectrum of a Hilbert-Huang transform algorithm using complementary ensemble empirical mode decomposition according to an embodiment;
FIG. 3 is the signal time-frequency spectrogram obtained with the Hilbert-Huang transform algorithm plus frequency domain convolution processing, using the frequency domain convolution vector [1, 1, 1, 1, 1]^T;
FIG. 4 is the signal time-frequency spectrogram obtained with the Hilbert-Huang transform algorithm plus frequency domain convolution processing, using the frequency domain convolution vector [1, 2, 4, 2, 1]^T;
FIG. 5 is the signal time-frequency spectrogram obtained with the Hilbert-Huang transform algorithm plus frequency domain convolution (vector [1, 1, 1, 1, 1]^T) and primary marginal spectrum feedback;
FIG. 6 is the signal time-frequency spectrogram obtained with the Hilbert-Huang transform algorithm plus frequency domain convolution (vector [1, 1, 1, 1, 1]^T) and secondary marginal spectrum feedback;
FIG. 7 is a flowchart of a shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback according to the present invention; in the figure, t is time and f is frequency.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
As shown in FIG. 7, the embodiment implements a shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback, which applies frequency domain convolution processing and marginal spectrum feedback to analyze and extract features from the time-frequency spectrum of the shrimp acoustic signal.
the shrimp acoustic signals belong to non-stationary signals, and conventional single-domain transformation such as Fourier transformation cannot completely show signal characteristics, so time-frequency double-domain analysis is needed. In this embodiment, taking the sound signal of the penaeus vannamei boone as an example, the hydrophone is used to acquire a series of data, and then preprocessing operations such as sound discrimination, short-time fourier analysis and interception are performed on the data to obtain a sample sound signal fragment, which is provided to step S1.
The method comprises the following steps:
S1: perform complementary ensemble empirical mode decomposition on the shrimp acoustic signal to obtain several intrinsic mode function components, perform Hilbert spectral analysis on these components, and normalize the result to obtain a time-frequency spectrogram of the shrimp acoustic signal;
S2: construct a frequency domain convolution vector according to the actual situation and apply frequency domain convolution to the normalized spectrogram so as to smooth it along the frequency axis, obtaining a frequency-smoothed time-frequency spectrogram;
S3: compute the marginal spectrum and, using its ability to reflect the frequency characteristics of the time-frequency spectrum, apply primary and secondary feedback to the frequency-smoothed spectrogram to obtain optimized spectrograms under different numbers of marginal spectrum feedback passes;
S4: compare and analyze the optimized spectrograms obtained under the different feedback counts and combine their spectrogram representations to obtain the time-frequency features of the shrimp acoustic signal.
Each portion is described in detail below.
1. Signal decomposition and Hilbert spectral analysis
S11: a series of recordings is acquired with a hydrophone and preprocessed by sound discrimination, short-time Fourier analysis and segmentation to obtain a valid signal segment, whose time-domain waveform is shown in FIG. 1; this segment serves as the sample signal for extracting the time-frequency features of the shrimp acoustic signal.
S12: complementary ensemble empirical mode decomposition is performed on the signal x(t) to obtain the intrinsic mode functions:

\mathrm{IMF}_{i}(t) = \frac{1}{2m} \sum_{b=1}^{2} \sum_{j=1}^{m} \mathrm{IMF}_{i}^{(b,j)}(t)

where the j-th decomposition is expressed as

x_{bj}(t) = x(t) + (-1)^{b} \sigma n(t) = \sum_{i=1}^{n} \mathrm{IMF}_{i}^{(b,j)}(t) + R^{(b,j)}(t)

and where b = 1, 2; n(t) is a white noise sequence with zero mean and unit variance; n is the number of IMFs; m is the number of ensemble trials; and σ is the standard deviation of the added noise, generally taken as 0.1 to 0.2 times the standard deviation of the signal.
S13: Hilbert spectral analysis of the resulting intrinsic mode function components yields the Hilbert amplitude spectrum:

H(w,t) = \mathrm{Re} \sum_{i=1}^{n} a_{i}(t)\, e^{\,j \int w_{i}(t)\, dt}

where w is the angular frequency of the signal,

w_{i}(t) = \frac{d\theta_{i}(t)}{dt}, \qquad \theta_{i}(t) = \arctan \frac{y_{i}(t)}{\mathrm{IMF}_{i}(t)}

y_{i}(t) is the Hilbert transform of IMF_{i}(t), expressed as

y_{i}(t) = \frac{P}{\pi} \int_{-\infty}^{+\infty} \frac{\mathrm{IMF}_{i}(\tau)}{t - \tau}\, d\tau

P is the Cauchy principal value, and

a_{i}(t) = \sqrt{\mathrm{IMF}_{i}^{2}(t) + y_{i}^{2}(t)}

That is, the time-frequency features of the signal are extracted with a Hilbert-Huang transform algorithm based on complementary ensemble empirical mode decomposition, and the resulting time-frequency spectrogram is shown in FIG. 2.
2. Frequency domain convolution processing of the signal time-frequency spectrum
The time-frequency spectrum in FIG. 2 is not very intuitive, and effective time-frequency features of the shrimp acoustic signal cannot be extracted from it directly. A mean frequency domain convolution vector K = [1, 1, 1, 1, 1]^T is therefore constructed and used to convolve the time-frequency spectrum along the frequency axis; the resulting spectrogram, produced by the Hilbert-Huang transform algorithm with added frequency domain convolution processing, is shown in FIG. 3. A center-weighted frequency domain convolution vector K = [1, 2, 4, 2, 1]^T is then constructed and applied in the same way; the resulting spectrogram is shown in FIG. 4.
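For illustration, the two kernels of this embodiment can be applied with the smooth_frequency_axis sketch given earlier (the spectrogram H and the helper itself are the hypothetical objects defined in that sketch):

```python
H_mean     = smooth_frequency_axis(H, [1, 1, 1, 1, 1])   # mean kernel, cf. FIG. 3
H_weighted = smooth_frequency_axis(H, [1, 2, 4, 2, 1])   # center-weighted kernel, cf. FIG. 4
```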
3. Frequency domain marginal spectrum feedback of the signal time-frequency spectrum
S31: comparing FIG. 3 and FIG. 4, FIG. 3 presents the information more completely while FIG. 4 offers higher contrast, so FIG. 3 is selected for the subsequent frequency domain marginal spectrum feedback to ensure that enough information is retained. The marginal spectrum of the time-frequency spectrum of FIG. 3 is computed as

h(w) = \int_{0}^{T} H(w,t)\, dt

where w is the signal angular frequency (f the corresponding signal frequency) and T is the signal time span. The marginal spectrum can be viewed as a frequency spectrum of the signal, reflecting the intensity of its different frequency components; feeding its values back into the time-frequency spectrum therefore increases the contrast between high-intensity and low-intensity frequency components, so that instantaneous frequency points of different strengths are distinguished more clearly.
S32: to avoid losing the discrimination between strong and weak energy that an excessively large feedback amplitude would cause, the average marginal spectrum is used as the feedback factor. The time-frequency spectrum after the first feedback optimization is

S_{p}'(w,t) = S_{p}(w,t) \cdot \frac{h_{p}(w)}{\overline{h_{p}}}

with discrete form

s_{p}'(i,j) = s_{p}(i,j) \cdot \frac{h_{p}(i)}{\frac{1}{k} \sum_{i=1}^{k} h_{p}(i)}, \quad i \in [1,k],\ j \in [1,l]

where l is the number of columns of S_{p}, k is the number of rows of S_{p}, S_{p} is the frequency-smoothed time-frequency spectrum, and h_{p} is its Hilbert marginal spectrum. The time-frequency spectrogram produced by this primary marginal spectrum feedback, i.e. the signal spectrogram obtained with the Hilbert-Huang transform algorithm plus frequency domain convolution and primary feedback processing, is shown in FIG. 5.
S33: to highlight the strong frequency components, secondary feedback is applied to the time-frequency spectrum using the marginal spectrum, giving the optimized time-frequency spectrum

S_{z}'(w,t) = S_{z}(w,t) \cdot \frac{h_{z}(w)}{\overline{h_{z}}}

whose discrete form is

s_{z}'(i,j) = s_{z}(i,j) \cdot \frac{h_{z}(i)}{\frac{1}{k} \sum_{i=1}^{k} h_{z}(i)}, \quad i \in [1,k],\ j \in [1,l]

where l is the number of columns of S_{z}, k is the number of rows of S_{z}, S_{z} is the spectrum produced by the first feedback pass, and h_{z} is its Hilbert marginal spectrum. The time-frequency spectrogram produced by this secondary marginal spectrum feedback, i.e. the signal spectrogram obtained with the Hilbert-Huang transform algorithm plus frequency domain convolution and secondary feedback processing, is shown in FIG. 6.
4. Shrimp acoustic signal time-frequency feature extraction
Reading FIGS. 5 and 6, with the analyzed frequency band limited to 0-24 kHz, the duration of the Penaeus vannamei Boone sound signal is about 5.5 ms and its main frequency lies within 12-18 kHz. The signal energy is strong in the first 0.6 ms, during which it continuously outputs strong energy for about 0.06 ms in the range of roughly 14.9-15.3 kHz. The signal then has a small tail, which is not analyzed owing to equipment limitations.
The preferred embodiment of the invention is described in detail above. The method introduces complementary ensemble empirical mode decomposition, which effectively resolves the mode mixing that empirical mode decomposition of shrimp acoustic signals suffers from; the frequency domain convolution vector and the primary marginal spectrum feedback improve the discrimination between different frequency components and reduce the spectrum information redundancy caused by the high frequency resolution of Hilbert spectral analysis; and the secondary marginal spectrum feedback highlights the strong frequency components of the signal's time-frequency spectrum, which benefits the extraction of the shrimp acoustic signal's time-frequency features. Together these steps improve the intuitiveness of the time-frequency spectrum and the convenience of feature extraction, and combining the primary and secondary feedback results improves the accuracy of the extracted features. It should be understood that those skilled in the art could devise numerous modifications and variations in light of the present teachings without departing from the inventive concept. Therefore, technical solutions that those skilled in the art can reach through logical analysis, reasoning and limited experiments based on the prior art and the concept of the present invention fall within the scope of protection defined by the claims.

Claims (5)

1. A shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback, characterized in that the method applies frequency domain convolution processing and marginal spectrum feedback to analyze and extract features from the time-frequency spectrum of a shrimp acoustic signal;
the method specifically comprises the following steps:
S1: perform complementary ensemble empirical mode decomposition on the shrimp acoustic signal to obtain several intrinsic mode function components, perform Hilbert spectral analysis on these components, and normalize the result to obtain a time-frequency spectrogram of the shrimp acoustic signal;
S2: construct a frequency domain convolution vector according to the actual situation and apply frequency domain convolution to the normalized spectrogram so as to smooth it along the frequency axis, obtaining a frequency-smoothed time-frequency spectrogram;
S3: compute the marginal spectrum and, using its ability to reflect the frequency characteristics of the time-frequency spectrum, apply primary and secondary feedback to the frequency-smoothed spectrogram to obtain optimized spectrograms under different numbers of marginal spectrum feedback passes;
S4: compare and analyze the optimized spectrograms obtained under the different feedback counts and combine their spectrogram representations to obtain the time-frequency features of the shrimp acoustic signal.
2. The shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback as claimed in claim 1, wherein the preprocessing of step S1 comprises complementary ensemble empirical mode decomposition and Hilbert spectral analysis; optionally, the time-frequency spectrum obtained from the Hilbert spectral analysis is normalized.
3. The shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback as claimed in claim 2, wherein in step S1 the shrimp acoustic signal is decomposed by complementary ensemble empirical mode decomposition into intrinsic mode function components, which are then subjected to Hilbert spectral analysis to obtain the normalized time-frequency spectrum;
the complementary ensemble empirical mode decomposition and Hilbert spectral analysis comprise the following steps:
S11: let the original signal be x(t) and add a pair of complementary (positive and negative) white noise sequences to x(t):

x_{b1}(t) = x(t) + (-1)^{b} \sigma n(t)

S12: decompose x_{b1}(t) by empirical mode decomposition into n intrinsic mode function components and a remainder R(t):

x_{b1}(t) = \sum_{i=1}^{n} \mathrm{IMF}_{i}^{(b,1)}(t) + R^{(b,1)}(t)

S13: repeat steps S11 and S12 m times with different white noise sequences; the j-th decomposition result is expressed as

x_{bj}(t) = \sum_{i=1}^{n} \mathrm{IMF}_{i}^{(b,j)}(t) + R^{(b,j)}(t)

S14: sum and average the corresponding intrinsic mode function components of all decompositions to obtain the final intrinsic mode functions:

\mathrm{IMF}_{i}(t) = \frac{1}{2m} \sum_{b=1}^{2} \sum_{j=1}^{m} \mathrm{IMF}_{i}^{(b,j)}(t)

wherein b = 1, 2; n(t) is a white noise sequence with zero mean and unit variance; n is the number of IMFs; m is the number of ensemble trials; and σ is the standard deviation of the added noise, generally taken as 0.1 to 0.2 times the standard deviation of the signal;
after the complementary ensemble empirical mode decomposition is finished, the Hilbert spectral analysis step is carried out;
S16: Hilbert spectral analysis is applied to the generated intrinsic mode function components to produce the Hilbert amplitude spectrum, whose expression is

H(w,t) = \mathrm{Re} \sum_{i=1}^{n} a_{i}(t)\, e^{\,j \int w_{i}(t)\, dt}

wherein w is the angular frequency,

w_{i}(t) = \frac{d\theta_{i}(t)}{dt}, \qquad \theta_{i}(t) = \arctan \frac{y_{i}(t)}{\mathrm{IMF}_{i}(t)}

y_{i}(t) is the Hilbert transform of IMF_{i}(t),

y_{i}(t) = \frac{P}{\pi} \int_{-\infty}^{+\infty} \frac{\mathrm{IMF}_{i}(\tau)}{t - \tau}\, d\tau

P is the Cauchy principal value, and

a_{i}(t) = \sqrt{\mathrm{IMF}_{i}^{2}(t) + y_{i}^{2}(t)}
4. The shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback as claimed in claim 1, wherein in step S2 a frequency domain convolution vector is constructed according to the actual requirements and used to convolve the time-frequency spectrum along the frequency axis, thereby smoothing it; optionally, different frequency domain convolution vectors are constructed for the task at hand, each yielding a correspondingly different smoothed representation of the time-frequency spectrum;
the frequency domain convolution process comprises the following steps:
S21: the Hilbert amplitude spectrum obtained in step S16 is rewritten in discrete form as the matrix

S = \begin{bmatrix} s_{11} & s_{12} & \cdots & s_{1l} \\ s_{21} & s_{22} & \cdots & s_{2l} \\ \vdots & \vdots & & \vdots \\ s_{k1} & s_{k2} & \cdots & s_{kl} \end{bmatrix}

where s_{ij}, i ∈ [1, k], j ∈ [1, l], is the amplitude at the corresponding time-frequency point; given an odd kernel length 2a+1 and kernel weights k_{1}, k_{2}, \ldots, k_{2a+1}, construct the frequency domain convolution vector K, written as

K = [k_{1}, k_{2}, \ldots, k_{2a+1}]^{T}

S22: according to the given kernel half-width a, zero-pad S along the frequency dimension to obtain the matrix S_{0}:

S_{0} = \begin{bmatrix} 0_{a \times l} \\ S \\ 0_{a \times l} \end{bmatrix}

S23: convolve the matrix S_{0} with the frequency domain convolution vector K; the center point of K slides point by point from s_{11} to s_{kl}; let s'_{pq}, p ∈ [1, k], q ∈ [1, l], denote the point that originally sat in row p+a and column q of S_{0}, whose convolution value is

c_{pq} = \sum_{d=1}^{2a+1} k_{d}\, s_{0}(p+d-1,\, q)

and the new value of the convolved point is

s'_{pq} = \frac{c_{pq}}{\sum_{d=1}^{2a+1} k_{d}}

the time-frequency spectrum after frequency domain convolution processing is expressed as

S' = \begin{bmatrix} s'_{11} & s'_{12} & \cdots & s'_{1l} \\ \vdots & \vdots & & \vdots \\ s'_{k1} & s'_{k2} & \cdots & s'_{kl} \end{bmatrix}
5. The shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback as claimed in claim 1, wherein in step S3 the marginal spectrum is computed and, exploiting the fact that the marginal spectrum reflects the frequency characteristics of the time-frequency spectrum, primary and secondary feedback are applied to the frequency-smoothed time-frequency spectrogram to obtain optimized spectrograms under different numbers of marginal spectrum feedback passes; the feedback process comprises the following steps:
S31: compute the marginal spectrum of the Hilbert time-frequency spectrum of step S16:

h(w) = \int_{0}^{T} H(w,t)\, dt

where w is the signal angular frequency, f is the signal frequency, and T is the signal time span; the marginal spectrum is regarded as a frequency spectrum of the signal, reflecting the intensities of its different frequency components;
S32: to avoid losing the discrimination between strong and weak energy that an excessively large feedback amplitude would cause, the average marginal spectrum is used as the feedback factor; the time-frequency spectrum after the first feedback optimization is

S_{p}'(w,t) = S_{p}(w,t) \cdot \frac{h_{p}(w)}{\overline{h_{p}}}

with discrete form

s_{p}'(i,j) = s_{p}(i,j) \cdot \frac{h_{p}(i)}{\frac{1}{k} \sum_{i=1}^{k} h_{p}(i)}, \quad i \in [1,k],\ j \in [1,l]

where l is the number of columns of S_{p}, k is the number of rows of S_{p}, S_{p} denotes the frequency-smoothed time-frequency spectrum, and h_{p} is its Hilbert marginal spectrum;
S33: secondary feedback is applied to the time-frequency spectrum using the marginal spectrum, giving the optimized time-frequency spectrum

S_{z}'(w,t) = S_{z}(w,t) \cdot \frac{h_{z}(w)}{\overline{h_{z}}}

whose discrete form is

s_{z}'(i,j) = s_{z}(i,j) \cdot \frac{h_{z}(i)}{\frac{1}{k} \sum_{i=1}^{k} h_{z}(i)}, \quad i \in [1,k],\ j \in [1,l]

where l is the number of columns of S_{z}, k is the number of rows of S_{z}, S_{z} denotes the spectrum produced by the first feedback pass, and h_{z} is its Hilbert marginal spectrum.
CN202210446792.XA 2022-04-26 2022-04-26 Shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback Pending CN114882901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210446792.XA CN114882901A (en) 2022-04-26 2022-04-26 Shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210446792.XA CN114882901A (en) 2022-04-26 2022-04-26 Shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback

Publications (1)

Publication Number Publication Date
CN114882901A true CN114882901A (en) 2022-08-09

Family

ID=82670768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210446792.XA Pending CN114882901A (en) 2022-04-26 2022-04-26 Shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback

Country Status (1)

Country Link
CN (1) CN114882901A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117939726A (en) * 2024-03-25 2024-04-26 深圳市能波达光电科技有限公司 Air purifying method and system based on LED sterilizing lamp
CN117939726B (en) * 2024-03-25 2024-05-24 深圳市能波达光电科技有限公司 Air purifying method and system based on LED sterilizing lamp

Similar Documents

Publication Publication Date Title
US8041136B2 (en) System and method for signal processing using fractal dimension analysis
CN115326783B (en) Raman spectrum preprocessing model generation method, system, terminal and storage medium
CN105279379B (en) Tera-hertz spectra feature extracting method based on convex combination Kernel principal component analysis
CN116701845B (en) Aquatic product quality evaluation method and system based on data processing
Torres-Castillo et al. Neuromuscular disorders detection through time-frequency analysis and classification of multi-muscular EMG signals using Hilbert-Huang transform
Sonavane et al. Classification and segmentation of brain tumor using Adaboost classifier
CN108567418A (en) A kind of pulse signal inferior health detection method and detecting system based on PCANet
CN114081508B (en) Spike detection method based on fusion of deep neural network and CCA (common cancer cell) characteristics
CN114882901A (en) Shrimp acoustic signal time-frequency feature extraction method based on frequency domain convolution and marginal spectrum feedback
CN117116290B (en) Method and related equipment for positioning defects of numerical control machine tool parts based on multidimensional characteristics
Naranjo-Alcazar et al. On the performance of residual block design alternatives in convolutional neural networks for end-to-end audio classification
Meng et al. Gaussian mixture models of ECoG signal features for improved detection of epileptic seizures
CN116211322A (en) Depression recognition method and system based on machine learning electroencephalogram signals
Zhang et al. Slight crack identification of cottonseed using air-coupled ultrasound with sound to image encoding
CN115944305B (en) Electrocardiogram abnormality detection method, system, equipment and medium without heart beat segmentation
CN112697270A (en) Fault detection method and device, unmanned equipment and storage medium
Khatar et al. Advanced detection of cardiac arrhythmias using a three-stage CBD filter and a multi-scale approach in a combined deep learning model
CN113940638B (en) Pulse wave signal identification and classification method based on frequency domain dual-feature fusion
CN112807000B (en) Method and device for generating robust electroencephalogram signals
Hua Improving YANGsaf F0 Estimator with Adaptive Kalman Filter.
Nejad et al. An adaptive FECG extraction and analysis method using ICA, ICEEMDAN and wavelet shrinkage
Gunjan et al. An effective user interface image processing model for classification of Brain MRI to provide prolific healthcare
US10539655B1 (en) Method and apparatus for rapid acoustic analysis
CN115541021A (en) Method for locating characteristic peak of Raman spectrum, electronic device and storage medium
CN109793511A (en) Electrocardiosignal noise detection algorithm based on depth learning technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination