CN117440307A - Intelligent earphone detection method and system - Google Patents

Intelligent earphone detection method and system Download PDF

Info

Publication number
CN117440307A
CN117440307A CN202311753909.XA CN202311753909A CN117440307A CN 117440307 A CN117440307 A CN 117440307A CN 202311753909 A CN202311753909 A CN 202311753909A CN 117440307 A CN117440307 A CN 117440307A
Authority
CN
China
Prior art keywords
audio data
earphone
period
data points
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311753909.XA
Other languages
Chinese (zh)
Other versions
CN117440307B (en
Inventor
林凤梅
方韶劻
郭峰
吴义胡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aangsi Science & Technology Co ltd
Original Assignee
Shenzhen Aangsi Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Aangsi Science & Technology Co ltd filed Critical Shenzhen Aangsi Science & Technology Co ltd
Priority to CN202311753909.XA priority Critical patent/CN117440307B/en
Publication of CN117440307A publication Critical patent/CN117440307A/en
Application granted granted Critical
Publication of CN117440307B publication Critical patent/CN117440307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R29/00Monitoring arrangements; Testing arrangements

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Headphones And Earphones (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to the technical field of intelligent earphone detection, in particular to an intelligent earphone detection method and system. The method comprises the following steps: acquiring an original generated signal and left and right earphone audio signals; acquiring a characteristic vector of a real point of the earphone audio signal according to a starting point and adjacent data points of the earphone audio signal, and acquiring data consistency according to the difference between the characteristic vectors and the amplitude difference of the audio data points, thereby acquiring a period; acquiring the period change rate according to the data consistency of the data points among the periods, and acquiring the distortion weight of each data point based on the period change rate; and acquiring the frequency response flatness of the earphone audio signals, and acquiring a detection score according to the synchronism and response flatness difference between the earphone audio signals and the original generated signals to finish earphone detection. The invention improves the quality detection efficiency and accuracy of the intelligent earphone.

Description

Intelligent earphone detection method and system
Technical Field
The invention relates to the technical field of intelligent earphone detection, in particular to an intelligent earphone detection method and system.
Background
The earphone is used as an external device of a computer system or mobile equipment, is provided with the functions of converting a system digital signal into an analog signal, playing and collecting sound data within a certain range to the system, and can form a complete data signal communication system by the computer or mobile equipment and the second level, and the intelligent earphone is used for detecting through comparing data of a transmitting end and a receiving end by taking sound as a medium and air as a channel.
In this process, the audio data is affected by various factors in the transmission, processing and output processes, and finally, a small difference exists between the audio data and the audio data. For example, when the generated audio signal is transmitted to the earphone through a wire or wireless, the transmission process may be disturbed or lost, and the final output effect is affected. Therefore, when the quality detection of the earphone is finished by directly comparing the audio data generated by the software with the audio data received by the earphone, the detection effect of the earphone has deviation and certain limitation.
Disclosure of Invention
In order to solve the technical problem that the detection effect is deviated, the invention provides an intelligent earphone detection method and system, and the adopted technical scheme is as follows:
in a first aspect, the present invention provides an intelligent earphone detection method, which includes the following steps:
acquiring an original generated signal and left and right earphone audio signals;
for any one earphone audio signal, corresponding to a waveform diagram, acquiring a feature vector of a starting point according to the starting point of the waveform diagram and adjacent audio data points after time sequence; acquiring a characteristic vector of each audio data point according to the mode of acquiring the characteristic vector of the starting point, and acquiring the data consistency of the starting point and the audio data point according to the amplitude value difference between the starting point and the audio data point, the characteristic vector included angle and the difference of characteristic vector modes; acquiring the period of the earphone audio signal according to the data consistency;
recording any period as a target period, and acquiring the period change rate of the target period according to the difference of the number of the audio data points of the target period and the rest periods and the data consistency of the audio data points; acquiring matching audio data points of the target period in different periods, and acquiring distortion weights of the audio data points according to amplitude value differences, data consistency and period change rate of the target period between the audio data points and the matching audio data points;
acquiring a spectrogram of the earphone audio signal, wherein the ratio of the maximum response value and the minimum response value difference value of the spectrogram to the response value variance is used as the frequency response flatness of the spectrogram; acquiring synchronicity of two earphones according to the frequency response flatness difference between the two earphone audio signals, the amplitude value difference of the audio data points and the period change rate difference; obtaining earphone detection scores according to the synchronicity of the two earphones, the distortion weight of the audio data points in the earphone audio signals and the amplitude value difference of the response audio data points of the earphone audio signals and the original generated signals;
and finishing intelligent earphone detection according to the earphone detection scores.
Preferably, the method for obtaining the feature vector of the starting point according to the starting point of the waveform diagram and the adjacent time sequence of the audio data points comprises the following steps:
after the starting point time sequence is acquired, a preset number of audio data points are recorded as adjacent audio data points, the starting point is taken as a starting point, the adjacent audio data points are taken as end point constituent vectors, for the first adjacent audio data point of the starting point to be taken as the starting point, the subsequent adjacent audio data points are taken as end point constituent vectors, for the second adjacent audio data point of the starting point to be taken as the starting point, the subsequent adjacent audio data points are taken as end point constituent vectors, and so on, the vectors when the starting point and each adjacent audio data point are taken as the starting point are acquired, and the vectors are recorded as characteristic vectors of the starting point.
Preferably, the method for obtaining the data consistency between the starting point and the audio data point according to the difference between the amplitude values of the starting point and the audio data point, the included angle of the feature vector and the difference of the feature vector modes comprises the following steps:
in the method, in the process of the invention,amplitude value representing starting point, +.>Amplitude value representing a-th audio data point, < +.>Modulo representing the i-th feature vector of the starting point, +.>Modulo representing the ith eigenvector of the a-th audio data point,/->Representing the included angle between the ith feature vector of the starting point and the ith feature vector of the a audio data point, wherein L represents the number of corresponding feature vectors of the audio data point, < >>Represents an exponential function based on natural constants, < ->Representing the minimum positive number, & lt & gt>Representing the data consistency of the starting point and the a-th audio data point.
Preferably, the method for obtaining the period of the earphone audio signal according to the data consistency comprises the following steps:
setting a preset threshold, if the data consistency is greater than the preset threshold, retaining the audio data points, otherwise, eliminating, and then calculating the data consistency of the adjacent audio data points of the starting point and the adjacent audio data points of the retained audio data points, if the data consistency of each adjacent audio data point is also greater than the preset threshold, retaining the audio data points, otherwise eliminating; the audio data point of the last filtering is recorded as a period audio data point of a starting point, and the audio data point between two adjacent period audio data points is regarded as one period.
Preferably, the method for obtaining the period change rate of the target period according to the difference of the number of the audio data points and the data consistency of the audio data points between the target period and the rest periods comprises the following steps:
in the method, in the process of the invention,number of audio data points representing target period, +.>A number of audio data points representing the v-th period, < >>Maximum value representing data consistency of jth data point and all data points of the jth period of the target period, +.>Representing the number of cycles>Representing the rate of change of the period of the target period.
Preferably, the method for acquiring the matching audio data points of the target period in different periods comprises the following steps:
and recording the audio data point of the target period as a target audio data point, calculating data consistency between the target audio data point and all the audio data points of the other period, and taking the audio data point corresponding to the maximum value of the data consistency as a matching audio data point of the target audio data point.
Preferably, the method for obtaining the distortion weight of the audio data point according to the amplitude value difference, the data consistency and the period change rate of the target period between the audio data point and the matched audio data point comprises the following steps:
the amplitude values of the target audio data point and the matching audio data points of the rest periods are differenced, the ratio of the absolute value of the difference value to the data consistency of the target audio data point and the matching audio data point in the period is used as the change rate of each period, and the change rates obtained according to the target audio data point and the matching audio data points in all the periods are accumulated to obtain the data mutation rate of the target audio data point;
taking the product of the data mutation rate of the audio data point and the period change rate of the period as the distortion weight of the audio data point.
Preferably, the method for obtaining the synchronicity of the two headphones according to the difference of the frequency response flatness between the two headphone audio signals, the difference of the amplitude values of the audio data points and the difference of the period change rate comprises the following steps:
in the method, in the process of the invention,representing the flatness of the frequency response of earphone A, +.>Representing the flatness of the frequency response of earphone B, +.>Represents the period change rate of the v-th period of earphone A,>period change rate of matching period representing the v-th period of earphone B, +.>Representing the number of matching cycles, +.>Representing the number of audio data points corresponding to earphone A, < >>Represents the number of audio data points corresponding to earphone B, +.>Amplitude value of a-th audio data point representing earphone A, +.>Amplitude value of a-th audio data point representing earphone B, +.>Representing a minimum function, +.>Represents an exponential function based on natural constants, < ->Indicating the synchronicity of the two headphones.
Preferably, the method for obtaining the earphone detection score according to the synchronicity of the two earphones, the distortion weight of the audio data points in the earphone audio signals and the amplitude value difference of the response audio data points of the earphone audio signals and the original generated signals comprises the following steps:
and taking the difference between the amplitude values of the audio data points of the earphone audio signals and the original generated signals under the same time sequence, accumulating the product of the absolute value of the difference value and the distortion weight of the audio data points of the earphone audio data under the time sequence to obtain the distortion rate of each earphone audio signal, and normalizing the ratio of the synchronicity corresponding to the two earphones to the maximum value of the distortion rate of the earphone audio signals to be used as an earphone detection score.
In a second aspect, an embodiment of the present invention further provides a smart headset detection system, including a memory, a processor, and a computer program stored in the memory and running on the processor, where the processor implements the steps of any one of the foregoing smart headset detection methods when the processor executes the computer program.
The invention has the following beneficial effects: according to the invention, the audio signal is adaptively generated through software, the audio signal is periodically analyzed, the signals received by the earphone are adaptively divided, the data consistency and the period change rate of the audio signal are adaptively constructed through the change of the corresponding audio signal in different periods, the construction of the data distortion rate weight is further completed, the quality of the audio data received by the earphone is represented through the data distortion rate weight, the difference of the corresponding audio data and the synchronism of the intelligent earphone, and the quality detection efficiency and the accuracy of the intelligent earphone are improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for detecting an intelligent earphone according to an embodiment of the present invention;
fig. 2 is a flowchart of an implementation of a method for detecting a smart headset according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of a method and a system for detecting an intelligent earphone according to the invention, which are specific embodiments, structures, features and effects thereof, with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
An intelligent earphone detection method and system embodiment:
the following specifically describes a specific scheme of the intelligent earphone detection method provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for detecting an intelligent earphone according to an embodiment of the invention is shown, and the method includes the following steps:
step S001, acquiring an original generated signal and left and right headphone audio signals.
In the detection process, the audio signals of the speakers at the left side and the right side of the earphone are respectively acquired by using sound acquisition equipment; the sound signal received by the intelligent earphone is generated by software in the computer system, such as MATLAB, audacity, python library, etc., and the generated audio signal is a waveform with a repetition period, and in this embodiment, the frequency range of the audio signal is set to 20hz-20khz.
Thus, the original generated signal and the left and right headphone audio signals are acquired.
Step S002, for any earphone audio signal, obtaining a waveform diagram of the earphone audio signal, and obtaining a feature vector of the starting point according to the starting point of the waveform diagram and the adjacent audio data points after time sequence; acquiring a characteristic vector of each audio data point according to the mode of acquiring the characteristic vector of the starting point, and acquiring the data consistency of the starting point and the audio data point according to the amplitude value difference between the starting point and the audio data point, the characteristic vector included angle and the difference of characteristic vector modes; and acquiring the period of the earphone audio signal according to the data consistency.
And according to the steps, the acquisition of the original generated signals and the acquisition of the left and right audio signals received by the intelligent earphone are completed. And analyzing the audio signals collected by any earphone of the left earphone and the right earphone. The acquired audio signal typically exhibits a waveform over a time domain, wherein the abscissa represents time and the ordinate represents amplitude values of the audio signal.
While the original generated signal generated by the software is not exactly the same as the audio signal actually received by the headphones. The reason is that the original generated signal is affected by various factors in the transmission, processing and output processes, and finally, a small difference exists between the two. For example, when the originally generated signal is transmitted to the earphone through a wire or wirelessly, the transmission process may be interfered or lost, so that the final output effect is affected. Therefore, when the quality detection of the earphone is finished by directly comparing an original generated signal generated by software with an audio signal received by the earphone, the detection effect of the earphone has deviation and certain limitation.
Analyzing one of the left and right earphone audio signals, wherein the original generated signal generated by software has a certain periodicity, analyzing a waveform diagram of the earphone audio signal received by the earphone, recording a first audio data point of the earphone audio signal as a starting point, acquiring a plurality of adjacent audio data points for the starting point, acquiring 5 audio data points behind a starting point time sequence in the embodiment, recording the acquired audio data points as adjacent audio data points, taking the starting point as the starting point, each adjacent audio data point of the starting point as an end point to form a feature vector, for the first adjacent audio data point of the starting point, taking the first adjacent audio data point as the starting point, each adjacent audio data point of the first adjacent audio data point as the end point to form a feature vector, taking the second adjacent audio data point as the starting point, each adjacent audio data point behind the second adjacent audio data point as the end point to form a feature vector, analogizing, acquiring a plurality of feature vectors of each adjacent audio data point, recording the feature vectors acquired by the starting point and the feature vectors of the adjacent audio data points as the feature vectors of the starting point, and sequencing the feature vectors according to the acquired feature vector sequence. For example: there are 6 points, respectively 1, 2, 3, 4, 5, 6,1 as a starting point, one feature vector exists for each of 1 and the remaining 5 points, one feature vector exists for each of 2 and 3, 4, 5, 6, one feature vector exists for each of 3 and 4, 5, 6, one feature vector exists for each of 4 5, 6, one feature vector exists for each of 5 and 6, and thus the number of feature vectors of the starting point is 5+4+3+2+1=15.
For any one audio data point in the earphone audio signal, the feature vector sequence of the audio data point is obtained in the same way, and it is worth to say that if the number of adjacent audio data points after the time sequence of the audio data points is less than 5, the feature vector is not obtained, and the data consistency between the starting point and each audio data point is obtained according to the differences of the included angle and the modulus between the feature vectors of the starting point and the audio data point and the amplitude value difference of the starting point and the audio data point, wherein the formula is as follows:
in the method, in the process of the invention,amplitude value representing starting point, +.>Amplitude value representing a-th audio data point, < +.>Modulo representing the i-th feature vector of the starting point, +.>Modulo representing the ith eigenvector of the a-th audio data point,/->Representing the included angle between the ith feature vector of the starting point and the ith feature vector of the a audio data point, wherein L represents the number of corresponding feature vectors of the audio data point, < >>Represents an exponential function based on natural constants, < ->Representing the minimum positive number, & lt & gt>Representing the data consistency of the starting point and the a-th audio data point.
The larger the data consistency is, the more consistent the subsequent data changes of the two audio data points are, the more likely the audio data points are the periodic characteristic points corresponding to the starting points, the smaller the data consistency is, the more different the subsequent data changes of the two audio data points are, and the more the audio data points are not the periodic characteristic points corresponding to the starting points.
And calculating the data consistency of the starting point and each audio data point, when the data consistency is larger than a preset threshold value, reserving the audio data points, wherein the preset threshold value is 0.8 in the embodiment, then calculating the data consistency of the adjacent audio data points of the starting point and the reserved adjacent audio data points, and when the data consistency of each adjacent audio data point is also larger than the preset threshold value, screening the audio data points once again, and recording the audio data points screened last time as periodic audio data points of the starting point. It should be noted that the number of adjacent audio data points of different audio data points is the same, so that when the data consistency is calculated, the adjacent audio data points with the same relative positions are calculated.
And taking the period audio data point which is screened out and is closest to the starting point as a first period audio data point, and taking all the audio data points in the starting point and the first period audio data point as a first period. The data between two adjacent periodic audio data points is taken as one period.
To this end, several cycles of the earphone audio signal are acquired.
Step S003, marking any period as a target period, and acquiring the period change rate of the target period according to the difference of the number of the audio data points of the target period and the rest periods and the data consistency of the audio data points; and acquiring matching audio data points of the target period in different periods, and acquiring distortion weights of the audio data points according to amplitude value differences, data consistency and period change rate of the target period.
And analyzing each period, and constructing a period change rate U through the change of the audio signals in different periods, wherein the larger the period change rate U is, the larger the fluctuation of the audio signals received by the earphone is, the more unstable the tone quality is, and the lower the quality corresponding to the earphone is.
For each audio data point in the earphone audio signal, acquiring data consistency between any two audio data points, recording any one period as a target period, and acquiring a period change rate according to the data consistency between the audio data points of different periods and the number difference of the audio data points of different periods, wherein the formula is as follows:
in the method, in the process of the invention,number of audio data points representing target period, +.>A number of audio data points representing the v-th period, < >>Maximum value representing data consistency of jth data point and all data points of the jth period of the target period, +.>Representing the number of cycles>Representing the rate of change of the period of the target period.
The larger the difference of the number of the audio data points in different periods is, the larger the period change rate is, the smaller the data consistency of the different audio data points in different periods is, and the larger the period change rate is.
According to the steps, the cycle change rate of each cycle is obtained, namely the whole cycle is analyzed, and then the local change of the data in a single cycle is analyzed. For each audio data point of the target period, taking the audio data point corresponding to the maximum value of the data consistency of the audio data point and all the audio data points of the other period as a matched audio data point, thus obtaining the matched audio data point of each audio data point of the target period, recording any one data point of the target period as the target audio data point, obtaining the data mutation rate of the audio data point according to the amplitude difference of the matched audio data point of the target period data point in different periods, and obtaining the distortion weight of the audio data point based on the data mutation rate and the period change rate, wherein the formula is as follows:
in the method, in the process of the invention,amplitude value representing target audio data point, +.>Amplitude value of matching data point representing the target audio data point at the v-th period, +.>Representing the data consistency of the target audio data point with its matching audio data point at the v-th period,/>Representing the number of cycles>A data mutation rate representing a target audio data point; />Data mutation Rate representing a-th Audio data point,/->A period change rate representing the period in which the a-th audio data point is located,/->Representing the distortion weight of the a-th audio data point.
Thus, a distortion weight for each audio data point is obtained.
Step S004, obtaining a spectrogram of the earphone audio signal, wherein the ratio of the difference value of the maximum response value and the minimum response value of the spectrogram to the variance of the response value is used as the frequency response flatness of the spectrogram; acquiring synchronicity of two earphones according to the frequency response flatness difference between the two earphone audio signals, the amplitude value difference of the audio data points and the period change rate difference; and obtaining earphone detection scores according to the synchronicity of the two earphones, the distortion weight of the audio data points in the earphone audio signals and the amplitude value difference of the response audio data points of the earphone audio signals and the original generated signals.
Because the audio data generated by the software are sound signals under different frequencies, the sound gain or attenuation degree of the earphone under different frequencies is relatively close, the spectrogram of the audio signal is obtained, the abscissa of the spectrogram is the frequency and gradually increases and decreases from low frequency to high frequency, the unit is Hz, the vertical axis is the gain or attenuation degree of the sound, the unit is dB, the curve on the spectrogram is the sound response of the earphone under different frequencies, the spectrogram corresponding to the earphone is flat, namely, the similar response is realized in the whole frequency range, so that the earphone can accurately transmit the sound of each frequency, and no remarkable gain or attenuation occurs. And marking the longitudinal sitting as a response value, and enabling the ratio of the difference value between the maximum response value and the minimum response value of the frequency in the spectrogram and the variance of the response value to be used as the frequency response flatness of the spectrogram.
Because the intelligent earphone is usually composed of a left earphone and a right earphone, the data such as the corresponding period change rate and the like can be obtained according to the steps for the sound collected by the other earphone. The sounds obtained by the two headphones of the intelligent headphones should be identical, and if the sounds have large differences, it is often indicated that there are problems inside the headphones, such as a failure of a connection line, damage of internal elements, or abnormality of a signal processing part.
Therefore, in this embodiment, the audio signals of the left and right earphones are analyzed to obtain the synchronicity corresponding to the earphones; each earphone has a corresponding waveform diagram, and if the number of data points of which the abscissa of one period is the same as that of the other period is the largest in the audio signals of the two earphones, the two periods are matched with each other and recorded as matched periods of the periods.
According to the difference of the frequency response flatness of the two earphones, the difference of the period change rate of the period and the matching period thereof and the difference of the amplitude values of the audio data points, the synchronicity corresponding to the two earphones is obtained, and the formula is as follows:
in the method, in the process of the invention,representing the flatness of the frequency response of earphone A, +.>Representing the flatness of the frequency response of earphone B, +.>Represents the period change rate of the v-th period of earphone A,>period change rate of matching period representing the v-th period of earphone B, +.>Representing the number of matching cycles, +.>Representing the number of audio data points corresponding to earphone A, < >>Represents the number of audio data points corresponding to earphone B, +.>Amplitude value of a-th audio data point representing earphone A, +.>Amplitude value of a-th audio data point representing earphone B, +.>Representing a minimum function, +.>Represents an exponential function based on natural constants, < ->Indicating the synchronicity of the two headphones. The timings of the two headphones are the same, so that the difference in amplitude values of the two headphones for the a-th audio data point is the difference in amplitude values of the two audio data points at the same timing. The greater the synchronicity of the headphones, the less likely it is that two headphones of the smart headphones will fail. The smaller the synchronicity, the greater the probability of the smart headset failing, the lower its quality detection score.
The left and right audio signals of the earphone are obtained through original generated signals, the distortion rate of the earphone is obtained through the amplitude value difference and the distortion weight of the audio data points on the same time sequence, and the earphone detection score is obtained according to the synchronicity of the distortion rate of the two earphones and the corresponding two earphones, wherein the formula is as follows:
in the method, in the process of the invention,amplitude value representing a-th audio data point, < +.>Representing the amplitude value of the a-th audio data point in the original generated signal,/and>representing the distortion weight of the a-th audio data point, < ->Representing the number of audio data points in the headphone audio signal, < >>Representing the distortion rate of the earphone, < >>Representing the distortion rate of earphone A, < >>Representing the distortion rate of earphone B, +.>Represents a maximum function>Indicating synchronicity between headphones->Representing a linear normalization function, ++>Representing the detection score of the headset. If the number of audio data points on one side is small, the small part is supplemented with 0 difference. The greater the synchronicity between the earphones, the greater the detection score, the smaller the distortion rate of the earphones, the greater the detection score, and the calculation is performed by the earphone with the largest distortion rate.
So far, the detection score of the earphone is obtained.
And step S005, completing intelligent earphone detection according to the earphone detection scores.
The larger the detection score of the earphone is, the better the effect quality of the intelligent earphone for receiving the audio data is, and the smaller the detection score is, the worse the effect quality of the intelligent earphone for receiving the audio data is. If the detection score of the earphone is greater than or equal to the score threshold, the quality of the intelligent earphone meets the production requirement, the detection of the intelligent earphone is completed, and the implementation flow chart for completing the earphone detection is shown in fig. 2.
The embodiment provides an intelligent earphone detection system, which comprises a memory, a processor and a computer program stored in the memory and running on the processor, wherein the processor executes the computer program to realize the methods of the steps S001 to S005.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. The intelligent earphone detection method is characterized by comprising the following steps of:
acquiring an original generated signal and left and right earphone audio signals;
for any one earphone audio signal, corresponding to a waveform diagram, acquiring a feature vector of a starting point according to the starting point of the waveform diagram and adjacent audio data points after time sequence; acquiring a characteristic vector of each audio data point according to the mode of acquiring the characteristic vector of the starting point, and acquiring the data consistency of the starting point and the audio data point according to the amplitude value difference between the starting point and the audio data point, the characteristic vector included angle and the difference of characteristic vector modes; acquiring the period of the earphone audio signal according to the data consistency;
recording any period as a target period, and acquiring the period change rate of the target period according to the difference of the number of the audio data points of the target period and the rest periods and the data consistency of the audio data points; acquiring matching audio data points of the target period in different periods, and acquiring distortion weights of the audio data points according to amplitude value differences, data consistency and period change rate of the target period between the audio data points and the matching audio data points;
acquiring a spectrogram of the earphone audio signal, wherein the ratio of the maximum response value and the minimum response value difference value of the spectrogram to the response value variance is used as the frequency response flatness of the spectrogram; acquiring synchronicity of two earphones according to the frequency response flatness difference between the two earphone audio signals, the amplitude value difference of the audio data points and the period change rate difference; obtaining earphone detection scores according to the synchronicity of the two earphones, the distortion weight of the audio data points in the earphone audio signals and the amplitude value difference of the response audio data points of the earphone audio signals and the original generated signals;
and finishing intelligent earphone detection according to the earphone detection scores.
2. The method for detecting a smart earphone as claimed in claim 1, wherein the method for acquiring the feature vector of the starting point according to the starting point of the waveform diagram and the adjacent audio data points after the time sequence comprises:
after the starting point time sequence is acquired, a preset number of audio data points are recorded as adjacent audio data points, the starting point is taken as a starting point, the adjacent audio data points are taken as end point constituent vectors, for the first adjacent audio data point of the starting point to be taken as the starting point, the subsequent adjacent audio data points are taken as end point constituent vectors, for the second adjacent audio data point of the starting point to be taken as the starting point, the subsequent adjacent audio data points are taken as end point constituent vectors, and so on, the vectors when the starting point and each adjacent audio data point are taken as the starting point are acquired, and the vectors are recorded as characteristic vectors of the starting point.
3. The method for detecting the intelligent earphone according to claim 1, wherein the method for obtaining the data consistency between the starting point and the audio data point according to the difference between the amplitude values of the starting point and the audio data point, the included angle of the feature vector and the difference between the feature vector modes is as follows:
in the method, in the process of the invention,amplitude value representing starting point, +.>Amplitude value representing a-th audio data point, < +.>Modulo representing the i-th feature vector of the starting point, +.>Modulo representing the ith eigenvector of the a-th audio data point,/->Representing the included angle between the ith feature vector of the starting point and the ith feature vector of the a audio data point, wherein L represents the number of corresponding feature vectors of the audio data point, < >>Represents an exponential function based on natural constants, < ->Representing the minimum positive number, & lt & gt>Representing the data consistency of the starting point and the a-th audio data point.
4. The method for detecting intelligent headphones according to claim 1, wherein the method for acquiring the period of the headphone audio signal according to the data consistency comprises the steps of:
setting a preset threshold, if the data consistency is greater than the preset threshold, retaining the audio data points, otherwise, eliminating, and then calculating the data consistency of the adjacent audio data points of the starting point and the adjacent audio data points of the retained audio data points, if the data consistency of each adjacent audio data point is also greater than the preset threshold, retaining the audio data points, otherwise eliminating; the audio data point of the last filtering is recorded as a period audio data point of a starting point, and the audio data point between two adjacent period audio data points is regarded as one period.
5. The method for detecting a smart headset according to claim 1, wherein the method for obtaining the period change rate of the target period according to the difference between the number of audio data points and the consistency of the data of the audio data points between the target period and the rest periods comprises:
in the method, in the process of the invention,number of audio data points representing target period, +.>The number of audio data points representing the v-th period,maximum value representing data consistency of jth data point and all data points of the jth period of the target period, +.>Representing the number of cycles>Representing the rate of change of the period of the target period.
6. The method for detecting a smart headset according to claim 1, wherein the method for acquiring matching audio data points of the target period in different periods is as follows:
and recording the audio data point of the target period as a target audio data point, calculating data consistency between the target audio data point and all the audio data points of the other period, and taking the audio data point corresponding to the maximum value of the data consistency as a matching audio data point of the target audio data point.
7. The method for detecting intelligent headphones as recited in claim 6, wherein the method for obtaining distortion weights of the audio data points according to the amplitude value difference, the data consistency and the period change rate of the target period comprises the steps of:
the amplitude values of the target audio data point and the matching audio data points of the rest periods are differenced, the ratio of the absolute value of the difference value to the data consistency of the target audio data point and the matching audio data point in the period is used as the change rate of each period, and the change rates obtained according to the target audio data point and the matching audio data points in all the periods are accumulated to obtain the data mutation rate of the target audio data point;
taking the product of the data mutation rate of the audio data point and the period change rate of the period as the distortion weight of the audio data point.
8. The method for detecting intelligent headphones according to claim 1, wherein the method for obtaining the synchronicity of the two headphones according to the difference in frequency response flatness between the two headphone audio signals, the difference in amplitude values of the audio data points, and the difference in periodic variation is as follows:
in the method, in the process of the invention,representing the flatness of the frequency response of earphone A, +.>Representing the flatness of the frequency response of earphone B, +.>Represents the period change rate of the v-th period of earphone A,>period change rate of matching period representing the v-th period of earphone B, +.>Representing the number of matching cycles, +.>Representing the number of audio data points corresponding to earphone A, < >>Represents the number of audio data points corresponding to earphone B, +.>Amplitude value of a-th audio data point representing earphone A, +.>Amplitude value of a-th audio data point representing earphone B, +.>Representing a minimum function, +.>Represents an exponential function based on natural constants, < ->Indicating the synchronicity of the two headphones.
9. The method for detecting intelligent headphones according to claim 1, wherein the method for obtaining the headphone detection score according to the synchronicity of two headphones, the distortion weight of the audio data points in the headphone audio signal, and the amplitude value difference between the headphone audio signal and the original generated signal in response to the audio data points comprises the following steps:
and taking the difference between the amplitude values of the audio data points of the earphone audio signals and the original generated signals under the same time sequence, accumulating the product of the absolute value of the difference value and the distortion weight of the audio data points of the earphone audio data under the time sequence to obtain the distortion rate of each earphone audio signal, and normalizing the ratio of the synchronicity corresponding to the two earphones to the maximum value of the distortion rate of the earphone audio signals to be used as an earphone detection score.
10. A smart headset detection system comprising a memory, a processor and a computer program stored in the memory and running on the processor, characterized in that the processor implements the steps of a smart headset detection method according to any of claims 1-9 when the computer program is executed by the processor.
CN202311753909.XA 2023-12-20 2023-12-20 Intelligent earphone detection method and system Active CN117440307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311753909.XA CN117440307B (en) 2023-12-20 2023-12-20 Intelligent earphone detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311753909.XA CN117440307B (en) 2023-12-20 2023-12-20 Intelligent earphone detection method and system

Publications (2)

Publication Number Publication Date
CN117440307A true CN117440307A (en) 2024-01-23
CN117440307B CN117440307B (en) 2024-03-22

Family

ID=89553821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311753909.XA Active CN117440307B (en) 2023-12-20 2023-12-20 Intelligent earphone detection method and system

Country Status (1)

Country Link
CN (1) CN117440307B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180115815A1 (en) * 2016-10-24 2018-04-26 Avnera Corporation Headphone off-ear detection
US20200162808A1 (en) * 2017-06-26 2020-05-21 Ecole De Technologie Superieure System, Device and Method for Assessing a Fit Quality of an Earpiece
US20210014597A1 (en) * 2019-07-08 2021-01-14 Apple Inc. Acoustic detection of in-ear headphone fit
CN114143646A (en) * 2020-09-03 2022-03-04 Oppo广东移动通信有限公司 Detection method, detection device, earphone and readable storage medium
CN114697849A (en) * 2020-12-31 2022-07-01 Oppo广东移动通信有限公司 Earphone wearing detection method and device, earphone and storage medium
CN115175081A (en) * 2022-06-30 2022-10-11 深圳市歌尔泰克科技有限公司 Earphone detection method, device, equipment and computer readable storage medium
CN115243183A (en) * 2022-06-29 2022-10-25 上海勤宽科技有限公司 Audio detection method, device and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180115815A1 (en) * 2016-10-24 2018-04-26 Avnera Corporation Headphone off-ear detection
US20200162808A1 (en) * 2017-06-26 2020-05-21 Ecole De Technologie Superieure System, Device and Method for Assessing a Fit Quality of an Earpiece
US20210014597A1 (en) * 2019-07-08 2021-01-14 Apple Inc. Acoustic detection of in-ear headphone fit
CN114143646A (en) * 2020-09-03 2022-03-04 Oppo广东移动通信有限公司 Detection method, detection device, earphone and readable storage medium
CN114697849A (en) * 2020-12-31 2022-07-01 Oppo广东移动通信有限公司 Earphone wearing detection method and device, earphone and storage medium
CN115243183A (en) * 2022-06-29 2022-10-25 上海勤宽科技有限公司 Audio detection method, device and storage medium
CN115175081A (en) * 2022-06-30 2022-10-11 深圳市歌尔泰克科技有限公司 Earphone detection method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN117440307B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN103229238B (en) System and method for producing an audio signal
CN103000184B (en) Noise reduction apparatus, audio input apparatus, wireless communication apparatus, and noise reduction method
KR101260131B1 (en) Audio source proximity estimation using sensor array for noise reduction
CN104424953A (en) Speech signal processing method and device
CN102918588A (en) A spatial audio processor and a method for providing spatial parameters based on an acoustic input signal
CN106612482A (en) Method for adjusting audio parameter and mobile terminal
CN104155644B (en) Ranging method based on sound sensor and system thereof
US11102569B2 (en) Methods and apparatus for a microphone system
US10536191B1 (en) Maintaining consistent audio setting(s) between wireless headphones
CN115798502B (en) Audio denoising method for Bluetooth headset
CN115775562B (en) Sound leakage detection method for Bluetooth headset
CN111901737A (en) Hearing aid parameter self-adaption method based on intelligent terminal
CN111142066A (en) Direction-of-arrival estimation method, server, and computer-readable storage medium
CN110191397B (en) Noise reduction method and Bluetooth headset
CN116405823A (en) Intelligent audio denoising enhancement method for bone conduction earphone
CN110111802A (en) Adaptive dereverberation method based on Kalman filtering
CN107592600B (en) Pickup screening method and pickup device based on distributed microphones
CN117440307B (en) Intelligent earphone detection method and system
CN113573212A (en) Sound amplification system and microphone channel data selection method
CN110931034B (en) Pickup noise reduction method for built-in earphone of microphone
WO2023051622A1 (en) Method for improving far-field speech interaction performance, and far-field speech interaction system
CN115243183A (en) Audio detection method, device and storage medium
CN112235679B (en) Signal equalization method and processor suitable for earphone and earphone
CN112672265B (en) Method and system for detecting microphone consistency and computer readable storage medium
CN112235675B (en) Active noise reduction method and chip of earphone

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant