CN111415682A - Intelligent evaluation method for musical instrument - Google Patents

Intelligent evaluation method for musical instrument

Info

Publication number
CN111415682A
Authority
CN
China
Prior art keywords
module
data
voice data
transmitted
processing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010257405.9A
Other languages
Chinese (zh)
Inventor
倪卫娟
罗景文
夏威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lehe Data Information Technology Jiangsu Co ltd
Original Assignee
Beijing Yuejiele Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuejiele Technology Co ltd filed Critical Beijing Yuejiele Technology Co ltd
Priority to CN202010257405.9A priority Critical patent/CN111415682A/en
Publication of CN111415682A publication Critical patent/CN111415682A/en
Pending legal-status Critical Current

Classifications

    • G10L 25/24: Speech or voice analysis techniques characterised by the type of extracted parameters, the extracted parameters being the cepstrum
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G10H 1/00: Details of electrophonic musical instruments
    • G10L 25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L 25/69: Speech or voice analysis techniques specially adapted for evaluating synthetic or decoded voice signals
    • H04L 67/10: Network protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

The invention discloses an intelligent evaluation method for musical instruments, comprising the following steps: S1, feature data burning; S2, signal data acquisition; S3, voice data generation; S4, musical tone judgment; S5, voice data recording; S6, voice data transmission; S7, voice data processing; S8, voice data rating; S9, frequency drift detection; and S10, report generation. The intelligent evaluation method manages instrument performance, improves the detection precision of the instrument, and is widely applicable.

Description

Intelligent evaluation method for musical instrument
Technical Field
The invention relates to the technical field of intelligent management of musical instruments, in particular to an intelligent evaluation method for musical instruments.
Background
In the prior art, the quality of a musical instrument performance is generally judged manually, so the result is strongly influenced by human factors. In addition, an instrument's intonation drifts over time as it is played; wooden instruments in particular are sensitive to temperature and humidity, because changes in air temperature and humidity alter the tightness of the wood and thus the instrument's intonation. No existing product provides comprehensive, intelligent management of musical instruments.
Application No. 201610859490.X discloses a universal real-time musical instrument performance evaluation system comprising a main control module, a database, a music selection module, an electronic score analysis module, a score composing and display module, an audio acquisition module, a score tracking module, a performance-correctness evaluation module and a score calculation module. By analysing the electronic score and performing real-time score tracking and automatic transcription of the performance, a correctness evaluation result is obtained and displayed in real time, and a score is calculated for the user's performance level.
Application No. 201520553923.X discloses a wireless intelligent device and a wireless monitoring system for musical instruments. The wireless intelligent device comprises a temperature sensor for detecting temperature information, a humidity sensor for detecting humidity information, a processing chip for processing the detected information, and a wireless transmission module for sending the detected information to an intelligent terminal; the wireless transmission module, the temperature sensor and the humidity sensor are all connected to the processing chip. The detected temperature and humidity information is transferred via the processing chip and the wireless transmission module to the intelligent terminal, which stores the data; when the detected temperature or humidity exceeds a set threshold, the terminal prompts the user to perform the corresponding cooling or drying operation, avoiding excessive temperature or humidity in the instrument.
Disclosure of Invention
The invention aims to solve the technical problem that the prior art cannot realise comprehensive intelligent management of musical instruments, and provides an intelligent evaluation method that manages instrument performance, improves the detection precision of the instrument, and is widely applicable.
The invention provides an intelligent evaluation method for a musical instrument, which comprises the following steps:
S1, feature data burning: burning the MFCC characteristic coefficients of the musical instrument into a flash access module;
S2, signal data acquisition: starting the intelligent management system; a microphone in the intelligent hardware device acquires the analog signal of the musical instrument in real time and transmits it to an audio codec in real time;
S3, voice data generation: the audio codec converts the analog signal into a digital signal in real time, filters and amplifies the digital signal to generate voice data, and transmits the voice data to an ARM processing unit;
S4, musical tone judgment: the ARM processing unit judges whether the voice data is a musical tone; if yes, proceed to step S5, otherwise return to step S2;
S5, voice data recording: the ARM processing unit sends a driving instruction to the SD card high-capacity access module, which starts recording the music;
S6, voice data transmission: after recording is finished, the ARM processing unit transmits the voice data to the cloud server through the communication module;
S7, voice data processing: a data processing module in the cloud server compresses the voice data, reduces noise and identifies the track; the processed voice data and the identification result are transmitted to a rating module, and the processed voice data is also transmitted to a frequency drift detection module;
S8, voice data rating: the rating module rates the voice data according to the musical instrument rating standard and transmits the rating result to the report generation module;
S9, frequency drift detection: the frequency drift detection module performs frequency drift detection on the voice data according to the musical instrument tuning standard and transmits the detection result to the report generation module;
S10, report generation: the report generation module generates a report from the rating result and the detection result and transmits the report to the client.
Voice activity judgment is performed by a Voice Activity Detection (VAD) algorithm together with instrument-sound detection (ASR): recording is triggered only when the instrument is actually being played, and ordinary ambient sounds do not trigger it. The end of recording is likewise detected via VAD and ASR; recording stops after 10 s of continued silence. The VAD decision combines a short-time zero-crossing-rate threshold with a frame-energy threshold; the thresholds are set relatively permissively to avoid filtering out useful information. ASR is implemented with an MFCC + DTW algorithm: the MFCC characteristic coefficients of the instrument are extracted in advance and burned into the flash access module of the intelligent hardware device before it leaves the factory. When VAD identifies a voice activity frame, 1 s of audio data is recorded and pre-processed, the MFCC coefficients of the current frame are computed and compared with the instrument's stored MFCC characteristic coefficients via the DTW algorithm; if the comparison passes, the sound is judged to be the instrument being played, and the recording procedure is started.
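By way of illustration only (not part of the original disclosure), the VAD decision described above can be sketched as follows. The frame length, both thresholds, and the AND combination of the two tests are assumptions; the patent names the two thresholds but not how they are combined:

```python
# Illustrative sketch of a VAD decision combining a short-time
# zero-crossing-rate threshold with a frame-energy threshold.
# FRAME_LEN, both thresholds, and the AND combination are hypothetical.

FRAME_LEN = 160          # e.g. 10 ms at 16 kHz (assumed)
ZCR_THRESHOLD = 0.1      # hypothetical zero-crossing-rate threshold
ENERGY_THRESHOLD = 0.01  # hypothetical mean-square energy threshold

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def frame_energy(frame):
    """Mean-square energy of the frame."""
    return sum(x * x for x in frame) / len(frame)

def is_voice_activity(frame):
    """A frame counts as activity when both thresholds are exceeded;
    loose thresholds keep useful frames from being filtered out."""
    return (zero_crossing_rate(frame) > ZCR_THRESHOLD
            and frame_energy(frame) > ENERGY_THRESHOLD)
```

The subsequent MFCC + DTW comparison against the burned-in instrument coefficients is omitted here; this sketch covers only the frame-level gate that precedes it.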
In the intelligent evaluation method for musical instruments according to the present invention, as a preferred mode, the step S7 further includes the following steps:
S71, the data processing module classifies the voice data and compresses it into MP3 format;
S72, performing one-dimensional convolution and down-sampling on the voice data with a one-dimensional Wave-U-Net convolutional neural network to obtain an intermediate result;
S73, performing up-sampling and deconvolution on the intermediate result, classifying the frequency spectrum of the audio characteristic signal at each convolution-sampling layer;
S74, after the convolution sampling is finished, outputting the separated pure instrument-performance musical tones and discarding the environmental background noise, generating the processed voice data;
S75, performing track identification on the processed voice data and generating an identification result;
S76, transmitting the processed voice data and the identification result to the rating module, and transmitting the processed voice data to the frequency drift detection module.
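Steps S72 to S74 describe a Wave-U-Net-style encoder-decoder pass over the raw waveform. The following toy sketch (not the patent's network) makes the data flow visible with a fixed smoothing filter standing in for learned convolutions; the depth, kernel size, and the omission of skip-connection concatenation are all simplifications:

```python
# Toy sketch of the Wave-U-Net shape in S72-S74: repeated 1-D convolution
# plus downsampling, then upsampling plus convolution. Real Wave-U-Net
# learns its filters and concatenates skip connections; here a fixed
# moving average is used and skips are collected but not merged back.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (correlation) with a fixed kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def downsample(signal, factor=2):
    """Keep every `factor`-th sample (decimation)."""
    return signal[::factor]

def upsample(signal, factor=2):
    """Linear interpolation between neighbouring samples."""
    out = []
    for a, b in zip(signal, signal[1:]):
        out.append(a)
        out.extend(a + (b - a) * (s + 1) / factor
                   for s in range(factor - 1))
    out.append(signal[-1])
    return out

def wave_unet_pass(signal, depth=2):
    """One encoder/decoder pass; intermediate results mirror S72/S73."""
    kernel = [1 / 3] * 3          # stand-in smoothing filter
    skips = []                    # skip outputs (unused in this toy version)
    x = signal
    for _ in range(depth):        # S72: convolution + downsampling
        x = conv1d(x, kernel)
        skips.append(x)
        x = downsample(x)
    for _ in range(depth):        # S73: upsampling + convolution
        x = upsample(x)
        x = conv1d(x, kernel)
    return x
```

In the patent's pipeline the decoder output corresponds to the separated instrument signal of S74, with the residual treated as background noise.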
In the intelligent evaluation method for musical instruments according to the present invention, as a preferred mode, the step S8 further includes the following steps:
S81, encoding the audio spectral feature sequence obtained by MFCC processing of the processed voice data with the Encoder of an RNN network to obtain a semantic vector X_t;
S82, using the semantic vector X_t to obtain the hidden-layer state of the RNN network's Decoder, H_t = RNN(X_t, H_(t-1));
S83, taking the hidden state H_t at time t as the hidden state H_(t+1) at the next time t+1, obtaining the output state Y_t = RNN(X_t, H_(t-1)), whose output is the spectral time series of the instrument-performance musical tone signal;
S84, classifying and connecting the spectral time series through a long short-term memory (LSTM) neural network and connectionist temporal classification (CTC) to obtain the instrument-performance musical tone symbolised sequence;
S85, comparing the musical tone symbolised sequence with the musical instrument rating standard to generate a rating result, and transmitting the rating result to the report generation module.
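As an illustrative sketch only, the recurrence in steps S81 to S83 can be written out with a minimal scalar tanh cell. The cell type, the weights and the state size are hypothetical; the text specifies only the recurrence H_t = RNN(X_t, H_(t-1)) and that each hidden state is carried forward to the next step:

```python
# Minimal sketch of S81-S83: a single scalar RNN cell whose hidden state
# H_t = RNN(X_t, H_(t-1)) is carried forward as the state for t+1, and
# whose per-step output Y_t is read from the same recurrence. Cell type
# (tanh) and weights are invented for illustration.
import math

W_X, W_H = 0.5, 0.8   # hypothetical input / recurrent weights

def rnn_cell(x_t, h_prev):
    """One step of the recurrence H_t = RNN(X_t, H_(t-1))."""
    return math.tanh(W_X * x_t + W_H * h_prev)

def run_decoder(xs, h0=0.0):
    """Unroll the decoder: each step's hidden state becomes the state for
    the next step (S83); collect the output sequence Y_1..Y_T."""
    h = h0
    ys = []
    for x_t in xs:
        h = rnn_cell(x_t, h)   # H_t, reused as the state for t+1
        ys.append(h)           # Y_t
    return ys
```

In the patent, the collected Y_t sequence is the spectral time series that feeds the LSTM + CTC stage of S84.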
In the intelligent evaluation method for musical instruments according to the present invention, as a preferred mode, the step S84 further includes the following steps:
S841, each Y_t in the spectral time series is input into the LSTM network as a time slice, followed by a softmax layer, which outputs a posterior probability matrix Y;
S842, an argmax is applied to each column of the posterior probability matrix Y to obtain the note class NET_w(x) output by that column, where w denotes the parameters of the LSTM;
S843, the note classes NET_w(x) are input into the CTC for the loss operation, aligning the note sequence to obtain the musical tone symbolised sequence.
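The decoding side of S841 to S843 (per-column argmax over the posterior matrix, then CTC-style alignment) can be illustrated with the sketch below. The note alphabet and the probability matrix used in the usage example are invented, and the CTC training loss itself is omitted; only greedy decoding with the standard collapse of repeats and blanks is shown:

```python
# Sketch of the decoding path in S841-S843: per-column argmax over a
# posterior probability matrix (columns = time slices, rows = note
# classes plus a CTC blank), then CTC-style collapsing of repeats and
# blanks to yield the symbolised note sequence. NOTES is hypothetical.

BLANK = "-"
NOTES = [BLANK, "C4", "D4", "E4"]  # invented class set, index 0 = blank

def greedy_ctc_decode(posterior_columns):
    """posterior_columns: per-time-slice probability vectors over NOTES.
    Returns the collapsed note sequence."""
    raw = [NOTES[max(range(len(col)), key=col.__getitem__)]  # argmax
           for col in posterior_columns]
    out, prev = [], None
    for sym in raw:               # collapse repeats, then drop blanks
        if sym != prev and sym != BLANK:
            out.append(sym)
        prev = sym
    return out
```

For example, columns whose argmaxes read C4, C4, blank, E4 collapse to the sequence C4, E4; a blank between two identical notes keeps them as two notes, which is the alignment behaviour CTC provides.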
In the intelligent evaluation method for musical instruments according to the present invention, as a preferred mode, the step S9 further includes the following steps:
S91, the frequency drift detection module continuously compares the musical tone symbolised sequence with the musical instrument tuning standard and statistically analyses the frequency drift ratio of each individual tone;
S92, when the frequency drift ratio of every tone is less than 10%, the detection result indicates that the instrument does not need tuning; when the frequency drift ratio of any one or more tones is greater than or equal to 10%, the detection result indicates that the instrument needs tuning;
S93, the frequency drift detection module transmits the detection result to the report generation module.
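A minimal sketch of the 10% rule in S91 and S92 follows. The reference pitches are standard equal-temperament values and the measured values in the usage example are invented; how the patent measures each tone's frequency is not restated here:

```python
# Sketch of S91-S92: compare each detected tone's frequency against the
# instrument's tuning standard and flag tuning when any tone drifts by
# 10% or more. Reference pitches are equal-temperament values.

DRIFT_LIMIT = 0.10  # 10% threshold from step S92

def drift_ratio(measured_hz, reference_hz):
    """Relative frequency deviation of one tone."""
    return abs(measured_hz - reference_hz) / reference_hz

def needs_tuning(measurements, references):
    """measurements/references: dicts keyed by note name.
    True when any tone's drift ratio reaches the limit."""
    return any(drift_ratio(measurements[n], references[n]) >= DRIFT_LIMIT
               for n in references)
```

For instance, an A4 measured at 442 Hz against a 440 Hz standard drifts by under 0.5% and needs no tuning, while 500 Hz drifts by about 13.6% and triggers the tuning recommendation.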
In the intelligent evaluation method for musical instruments according to the present invention, as a preferred mode, the system implementing the method comprises an intelligent hardware device, a cloud server and a client. The intelligent hardware device is mounted on the musical instrument and is used for collecting the instrument's data information and transmitting it to the cloud server; the intelligent hardware device comprises:
A microphone: electrically connected with the audio codec, used for collecting the analog signal and transmitting it to the audio codec;
An audio codec: electrically connected with the microphone and the ARM processing unit, used for receiving the analog signal transmitted by the microphone, converting it into a digital signal, filtering and amplifying the digital signal to generate voice data, transmitting the voice data to the ARM processing unit, and receiving control instructions transmitted by the ARM processing unit;
An ARM processing unit: electrically connected with the audio codec, the flash access module, the SD card high-capacity access module, the SRAM memory module, the temperature and humidity sensor and the communication module; used for receiving the voice data transmitted by the audio codec and the temperature and humidity data transmitted by the temperature and humidity sensor, transmitting the voice data and the temperature and humidity data to the SD card high-capacity access module and to the communication module, and transmitting control instructions to the audio codec, the flash access module, the SD card high-capacity access module, the SRAM memory module and the temperature and humidity sensor; the ARM processing unit packages the temperature and humidity data into JSON format;
A flash access module: electrically connected with the ARM processing unit, used for storing the firmware of the intelligent hardware device and for receiving control instructions transmitted by the ARM processing unit;
An SD card high-capacity access module: electrically connected with the ARM processing unit, used for storing the audio data transmitted by the ARM processing unit and for receiving control instructions transmitted by the ARM processing unit; the SD card runs a FATFS file system to facilitate file management, reading and writing;
An SRAM memory module: electrically connected with the ARM processing unit, used for caching the audio data and for receiving control instructions transmitted by the ARM processing unit;
A temperature and humidity sensor: electrically connected with the ARM processing unit, used for collecting the temperature and humidity data of the environment, transmitting the temperature and humidity data to the ARM processing unit, and receiving control instructions transmitted by the ARM processing unit;
A communication module: electrically connected with the ARM processing unit, used for receiving the voice data and the temperature and humidity data transmitted by the ARM processing unit and transmitting them to the cloud server;
A power supply module: electrically connected with the microphone, the audio codec, the ARM processing unit, the flash access module, the SD card high-capacity access module, the SRAM memory module and the temperature and humidity sensor, and used for supplying power to them;
A cloud server: used for receiving the voice data and the temperature and humidity data transmitted by the communication module, processing and storing them, generating a report from the data-processing result, and transmitting the report to the client; network transmission uses two application-layer protocols over TCP/IP, HTTPS and WEBSOCKET, partly customised to support breakpoint resume (resumable transfer);
A client: used for receiving the report transmitted by the cloud server and feeding the report back to the user.
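As an illustration only (not part of the disclosure), the breakpoint-resume behaviour mentioned for the HTTPS/WEBSOCKET transport can be modelled as follows. The chunk size, the in-memory stand-in for the server, and the frame cap used to simulate a dropped connection are all invented for this sketch; real transport framing is omitted:

```python
# Sketch of breakpoint resume: the sender records the confirmed byte
# offset and restarts from it after a dropped connection. CHUNK and the
# in-memory "server" are stand-ins; max_frames simulates a disconnect.

CHUNK = 4  # bytes per frame; real systems would use kilobytes

def resumable_send(data, server_buffer, start_offset, max_frames=None):
    """Send `data` from `start_offset`, appending to `server_buffer`.
    An optional frame cap simulates a dropped connection.
    Returns the confirmed offset to resume from."""
    offset, sent = start_offset, 0
    while offset < len(data):
        if max_frames is not None and sent >= max_frames:
            break                          # "connection lost" mid-transfer
        frame = data[offset:offset + CHUNK]
        server_buffer.extend(frame)        # acknowledged write
        offset += len(frame)
        sent += 1
    return offset
```

A transfer interrupted after one frame resumes from the returned offset rather than from zero, which is the property the customised protocol layer is said to provide.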
The invention discloses an intelligent evaluation method for musical instruments, wherein, as a preferred mode, the cloud server comprises:
A data receiving module: used for receiving the voice data and the temperature and humidity data transmitted by the communication module, transmitting the voice data to the classification compression module, and transmitting the temperature and humidity data to the rating module and the frequency drift detection module;
A classification compression module: used for receiving the voice data transmitted by the data receiving module, classifying and compressing it, and transmitting the compressed voice data to the data processing module;
A data processing module: used for receiving the compressed voice data transmitted by the classification compression module, performing noise reduction on it, performing track identification from the noise-reduced voice data and the track information stored in the database module, and transmitting the noise-reduced voice data and the identification result to the rating module and the frequency drift detection module;
A rating module: used for receiving the noise-reduced voice data and the identification result transmitted by the data processing module and the temperature and humidity data transmitted by the data receiving module, generating a rating result from the noise-reduced voice data, the identification result, the temperature and humidity data and the musical instrument rating standard stored in the database module, and transmitting the rating result to the report generation module;
A frequency drift detection module: used for receiving the noise-reduced voice data transmitted by the data processing module and the temperature and humidity data transmitted by the data receiving module, generating a detection result from the noise-reduced voice data, the temperature and humidity data and the musical instrument tuning standard stored in the database module, and transmitting the detection result to the report generation module;
A report generation module: used for receiving the rating result transmitted by the rating module and the detection result transmitted by the frequency drift detection module, generating a report from the rating result and the detection result, and transmitting the report to the database module and the client;
A database module: used for storing the track information, the musical instrument rating standard and the musical instrument tuning standard, and for receiving and storing the report transmitted by the report generation module.
According to the intelligent evaluation method for musical instruments, as a preferred mode, the microphone comprises a left channel microphone and a right channel microphone, both electrically connected with the audio codec.
In the intelligent evaluation method for musical instruments, as a preferred mode, the audio codec and the ARM processing unit communicate over an I2S bus. The ARM processor can enable the I2S DMA interrupt mode for data servicing.
In the intelligent evaluation method for musical instruments, as a preferred mode, the ARM processing unit receives the audio data transmitted by the audio codec via double-buffered ("ping-pong") DMA and a ring buffer; when the ring buffer is full, the SD card high-capacity access module is started and the real-time audio data is written into it.
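As a hedged illustration (not taken from the patent's firmware), the double-DMA plus ring-buffer scheme can be modelled as follows; the buffer sizes and the list-based stand-ins for the ring buffer and SD card are invented:

```python
# Toy model of the ping-pong DMA plus ring-buffer capture path: the codec
# alternately fills two DMA buffers; each completed buffer is copied into
# a ring buffer, and when the ring buffer fills, its contents are flushed
# to a stand-in for the SD card. Sizes are illustrative only.

DMA_SIZE = 4
RING_SIZE = 8

class PingPongCapture:
    def __init__(self):
        self.ring = []        # stand-in ring buffer
        self.sd_card = []     # stand-in SD card storage

    def dma_complete(self, buffer):
        """Called when one of the two DMA buffers is full."""
        self.ring.extend(buffer)
        if len(self.ring) >= RING_SIZE:   # ring full: flush to SD card
            self.sd_card.extend(self.ring)
            self.ring.clear()
```

The point of the ping-pong arrangement is that one DMA buffer fills while the other is being drained, so audio capture never stalls on the comparatively slow SD card write.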
As an optimal mode, the intelligent evaluation method for the musical instrument is characterized in that the temperature and humidity sensor is an SHT30 high-precision digital temperature and humidity sensor.
According to the intelligent evaluation method for the musical instrument, as a preferred mode, the power supply module is a lithium battery.
According to the intelligent evaluation method for musical instruments, as a preferred mode, the microphone is an electret microphone or a MEMS silicon microphone.
The method records the content of the instrument performance, the playing time, the temperature and humidity inside the instrument and other information at high resolution, transmits the data to the cloud server through the WIFI communication module, deploys the algorithms in the cloud server, identifies the track of the collected audio file, gives an AI score according to the instrument examination-level requirements, and finally feeds the score back to the user through the mobile-phone client, realising full intelligence in the use and management of musical instruments and combining AI technology with the instrument.
Drawings
FIG. 1 is a flow chart of the intelligent evaluation method for musical instruments;
FIG. 2 is a flow chart of voice data processing in the intelligent evaluation method for musical instruments;
FIG. 3 is a flow chart of voice data rating in the intelligent evaluation method for musical instruments;
FIG. 4 is a flow chart of musical tone symbolised sequence calculation in the intelligent evaluation method for musical instruments;
FIG. 5 is a flow chart of frequency drift detection in the intelligent evaluation method for musical instruments;
FIG. 6 is a composition diagram of the management system implementing the intelligent evaluation method for musical instruments;
FIG. 7 is a composition diagram of the intelligent hardware device of the management system implementing the intelligent evaluation method for musical instruments;
FIG. 8 is a schematic diagram of the installation of the intelligent hardware device of the management system implementing the intelligent evaluation method for musical instruments;
FIG. 9 is a composition diagram of the cloud server of the management system implementing the intelligent evaluation method for musical instruments.
Reference numerals:
1. musical instruments; 2. an intelligent hardware device; 3. a cloud server; 31. a data receiving module; 32. a classification compression module; 33. a data processing module; 34. a rating module; 35. a report generation module; 36. a database module; 37. a frequency drift detection module; 4. a client; 5. a power supply module; 6. a left channel microphone; 7. a right channel microphone; 8. an audio codec; 9. a flash access module; 10. a SD card high-capacity access module; 11. a sram memory module; 12. a temperature and humidity sensor; 13. an ARM processing module; 14. and a communication module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Example 1
As shown in fig. 1, an intelligent evaluation method for musical instruments includes the following steps:
S1, feature data burning: burning the MFCC characteristic coefficients of the musical instrument 1 into the flash access module 9;
S2, signal data acquisition: starting the intelligent management system; a microphone in the intelligent hardware device 2 acquires the analog signal of the musical instrument 1 in real time and transmits it to the audio codec 8 in real time;
S3, voice data generation: the audio codec 8 converts the analog signal into a digital signal in real time, filters and amplifies the digital signal to generate voice data, and transmits the voice data to the ARM processing unit 13;
S4, musical tone judgment: the ARM processing unit 13 judges whether the voice data is a musical tone; if yes, proceed to step S5, otherwise return to step S2;
S5, voice data recording: the ARM processing unit 13 sends a driving instruction to the SD card high-capacity access module 10, which starts recording the music;
S6, voice data transmission: after recording is completed, the ARM processing unit 13 transmits the voice data to the cloud server 3 through the communication module 14;
S7, voice data processing: the data processing module 33 in the cloud server 3 compresses the voice data, reduces noise and identifies the track, transmits the processed voice data and the identification result to the rating module 34, and transmits the processed voice data to the frequency drift detection module 37; as shown in fig. 2, step S7 further includes the following steps:
S71, the data processing module 33 classifies the voice data and compresses it into MP3 format;
S72, performing one-dimensional convolution and down-sampling on the voice data with a one-dimensional Wave-U-Net convolutional neural network to obtain an intermediate result;
S73, performing up-sampling and deconvolution on the intermediate result, classifying the frequency spectrum of the audio characteristic signal at each convolution-sampling layer;
S74, after the convolution sampling is finished, outputting the separated pure instrument-performance musical tones and discarding the environmental background noise, generating the processed voice data;
S75, performing track identification on the processed voice data and generating an identification result;
S76, transmitting the processed voice data and the identification result to the rating module 34, and transmitting the processed voice data to the frequency drift detection module 37;
s8, voice data rating: the rating module 34 rates the voice data according to the instrument rating standard and transmits the rating result to the report generating module 35; the rating result includes the scores of the following individual items: the accuracy, rhythm, strength, speed, integrity and musicality of the performance; as shown in fig. 3, step S8 further includes the following steps:
s81, using the Encoder of an RNN network to encode the audio spectrum feature sequence obtained by MFCC processing of the processed voice data, obtaining a semantic vector Xt;
s82, using the semantic vector Xt as the input of the Decoder of the RNN network, whose hidden-layer state is Ht = RNN(Xt, Ht-1);
s83, using the hidden state Ht at time t as the hidden state Ht+1 at time t+1, obtaining the output state Yt = RNN(Xt, Ht-1); the output states form the spectral time series of the instrument-performance tone signals;
s84, classifying and connecting the spectral time series through a long short-term memory (LSTM) neural network and connectionist temporal classification (CTC) to obtain a musical instrument performance musical tone symbolization sequence; as shown in fig. 4, step S84 further includes the steps of:
s841, inputting each Yt in the spectral time series into the LSTM network as a time slice, followed by a softmax layer, and outputting a posterior probability matrix Y;
s842, applying the argmax function to each column of the posterior probability matrix Y to obtain the note class NETw(x) output by each column, where w denotes the parameters of the LSTM;
s843, inputting the note classifications NETw(x) into the CTC for the loss operation to align the note sequence, obtaining the musical tone symbolization sequence;
s85, comparing the musical tone symbolized sequence with the musical instrument rating standard to generate a rating result and transmitting the rating result to the report generating module 35;
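Steps S81–S84 chain a recurrent network over MFCC frames, a softmax per time slice, a per-column argmax, and a CTC-style collapse of the best path. A minimal sketch with random, untrained weights follows; the dimensions and names such as `rnn_step` are illustrative assumptions, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One recurrent step: the hidden state at time t feeds time t+1 (S82-S83)
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

BLANK = 0  # CTC blank symbol

def greedy_ctc_collapse(best_path):
    # Collapse repeated labels and drop blanks to align the note sequence (S843)
    decoded, prev = [], None
    for label in best_path:
        if label != prev and label != BLANK:
            decoded.append(int(label))
        prev = label
    return decoded

# Illustrative sizes: 13 MFCC coefficients per frame, 8 hidden units, 6 note classes
feat_dim, hid_dim, n_classes, T = 13, 8, 6, 10
W_x = rng.normal(scale=0.1, size=(hid_dim, feat_dim))
W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
W_o = rng.normal(scale=0.1, size=(n_classes, hid_dim))
b = np.zeros(hid_dim)

frames = rng.normal(size=(T, feat_dim))  # stand-in for the MFCC feature sequence (S81)
h = np.zeros(hid_dim)
posteriors = []
for x_t in frames:
    h = rnn_step(x_t, h, W_x, W_h, b)
    posteriors.append(softmax(W_o @ h))  # softmax over note classes (S841)
Y = np.array(posteriors)                 # posterior probability matrix
best_path = Y.argmax(axis=1)             # argmax per time step (S842)
notes = greedy_ctc_collapse(best_path)
print(Y.shape, notes)
```

With trained weights, `notes` would be the symbolized tone sequence compared against the rating standard in step S85; here it is only a structural demonstration. Note that `greedy_ctc_collapse([0, 3, 3, 0, 5])` yields `[3, 5]`.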
s9, frequency drift detection: the frequency drift detection module 37 performs frequency drift detection on the voice data according to the musical instrument tuning standard and transmits the detection result to the report generation module 35; as shown in fig. 5, step S9 further includes the following steps:
s91, the frequency drift detection module 37 continuously compares the musical tone symbolization sequence with the musical instrument tuning standard and statistically analyzes the frequency drift proportion of each individual tone;
s92, when the frequency drift proportion of every tone is less than 10%, the detection result indicates that the musical instrument 1 does not need tuning; when the frequency drift proportion of any one or more tones is greater than or equal to 10%, the detection result indicates that the musical instrument 1 needs tuning;
s93, the frequency drift detection module 37 transmits the detection result to the report generation module 35;
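The tuning decision of steps S91–S93 reduces to a per-tone threshold test. A minimal sketch follows; the 10% threshold comes from the text, while the function name `needs_tuning` is ours.

```python
def needs_tuning(drift_ratios, threshold=0.10):
    """Return True if the instrument needs tuning (step S92).

    drift_ratios: frequency drift proportion of each individual tone,
    as fractions (0.10 == 10%). Any single tone at or above the
    threshold triggers a tuning recommendation.
    """
    return any(r >= threshold for r in drift_ratios)

print(needs_tuning([0.02, 0.05, 0.09]))  # → False: all tones drift < 10%
print(needs_tuning([0.02, 0.12, 0.05]))  # → True: one tone drifts >= 10%
```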
s10, report generation: the report generation module 35 generates a report based on the rating result and the detection result and transmits the report to the client 4.
As shown in fig. 6, a management system for implementing an intelligent evaluation method for musical instruments includes:
intelligent hardware device 2: as shown in fig. 7, the device is disposed on the musical instrument 1 (e.g., mounted on the underside of the bottom plate of the piano's upper cover, inside the head-end box cover of the guzheng (Chinese zither), inside the resonator of the dulcimer, etc., and fixed with double-sided tape) for collecting data information of the musical instrument 1 and transmitting the data information to the cloud server 3; the intelligent hardware device 2 is a cylinder, 68 mm in diameter and 24 mm in height; as shown in fig. 8, the intelligent hardware device 2 includes:
a microphone: electrically connected with the audio codec 8, used for collecting analog signals and transmitting them to the audio codec 8; the microphones comprise a left channel microphone 6 and a right channel microphone 7, both electrically connected with the audio codec 8; the microphone sampling rate is 44.1 kHz, the sample width is 16 bits, the number of channels is 2 (stereo left and right channels), and the gain is 0 dB;
the audio codec 8: electrically connected with the microphone and the ARM processing unit 13; used for receiving the analog signal transmitted by the microphone, converting it into a digital signal, filtering and amplifying the digital signal to generate voice data, transmitting the voice data to the ARM processing unit 13, and receiving control instructions transmitted by the ARM processing unit 13;
the ARM processing unit 13: electrically connected with the audio codec 8, the flash access module 9, the SD card large-capacity access module 10, the sram memory module 11 and the temperature and humidity sensor 12; used for receiving the voice data transmitted by the audio codec 8, receiving the temperature and humidity data transmitted by the temperature and humidity sensor 12, transmitting the voice data to the SD card large-capacity access module 10, transmitting the voice data and the temperature and humidity data to the communication module 14, and transmitting control instructions to the audio codec 8, the flash access module 9, the SD card large-capacity access module 10, the sram memory module 11 and the temperature and humidity sensor 12; the ARM processing unit 13 packages the temperature and humidity data into JSON format; the audio codec 8 communicates with the ARM processing unit 13 over an I2S bus; the ARM processing unit 13 can enable an I2S DMA interrupt mode for data response; the ARM processing unit 13 receives the audio data transmitted by the audio codec 8 through a double-DMA (ping-pong DMA) plus ring-buffer scheme, and when the ring buffer is full, the SD card large-capacity access module 10 is opened and the real-time audio data is written to it;
the flash access module 9: electrically connected with the ARM processing unit 13, used for storing the firmware of the intelligent hardware device and for receiving control instructions transmitted by the ARM processing unit 13; the firmware comprises the MFCC feature coefficients corresponding to the 88 keys of the piano, the 21 strings of the guzheng (Chinese zither), or the 12 tones of the dulcimer;
SD card large-capacity access module 10: electrically connected with the ARM processing unit 13, used for receiving and storing the audio data transmitted by the ARM processing unit 13 and for receiving control instructions transmitted by the ARM processing unit 13; the SD card large-capacity access module 10 runs a FATFS file system to facilitate file management and reading and writing;
the sram memory module 11: the system is electrically connected with the ARM processing unit 13, is used for caching audio data, and is used for receiving a control instruction transmitted by the ARM processing unit 13;
temperature and humidity sensor 12: the temperature and humidity data acquisition unit is electrically connected with the ARM processing unit 13, is used for acquiring temperature and humidity data of an environment, transmitting the temperature and humidity data to the ARM processing unit 13, and is used for receiving a control instruction transmitted by the ARM processing unit 13; the temperature and humidity sensor 12 is an SHT30 high-precision digital temperature and humidity sensor;
the communication module 14: the system is used for receiving the voice data and the temperature and humidity data transmitted by the ARM processing unit 13 and transmitting the voice data and the temperature and humidity data to the cloud server 3;
the power supply module 5: electrically connected with the microphone, the audio codec 8, the ARM processing unit 13, the flash access module 9, the SD card large-capacity access module 10, the sram memory module 11 and the temperature and humidity sensor 12, and used for providing power for the microphone, the audio codec 8, the ARM processing unit 13, the flash access module 9, the SD card large-capacity access module 10, the sram memory module 11 and the temperature and humidity sensor 12; the power supply module 5 is a lithium battery;
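The double-DMA ("ping-pong") capture with a ring buffer flushed to the SD card when full, described above for the ARM processing unit 13, can be modeled in simplified form. This is a pure-Python stand-in for what is really firmware-level DMA; the class and attribute names are illustrative.

```python
from collections import deque

class RingBuffer:
    """Toy model of the double-DMA + ring-buffer scheme: the codec fills one
    DMA half-buffer while the CPU drains the other into a ring buffer; when
    the ring buffer is full, its contents are flushed (here to a list,
    standing in for the SD card large-capacity access module)."""

    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.flushed = []  # stand-in for the SD card

    def push(self, block):
        self.buf.append(block)
        if len(self.buf) >= self.capacity:
            # "open the SD card module" and drain the ring buffer
            self.flushed.extend(self.buf)
            self.buf.clear()

ring = RingBuffer(capacity=4)
ping, pong = "A", "B"  # the two DMA half-buffers alternate
for i in range(8):
    ring.push((ping if i % 2 == 0 else pong, i))
print(len(ring.flushed))  # → 8: two full ring buffers flushed
```

The ping-pong alternation lets audio capture continue into one half-buffer while the other is being copied out, which is why no samples are lost during SD-card writes.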
the cloud server 3: used for receiving the voice data and the temperature and humidity data transmitted by the communication module 14, processing and storing them, and generating a report according to the data processing result and transmitting it to the client 4; network transmission uses two TCP/IP-based application-layer protocols, HTTPS and WebSocket, with parts of the protocol custom-defined to support breakpoint resumption (resumable transfer); as shown in fig. 9, the cloud server 3 includes:
the data receiving module 31: used for receiving the voice data and the temperature and humidity data transmitted by the communication module 14, transmitting the voice data to the classification compression module 32, and transmitting the temperature and humidity data to the rating module 34 and the frequency drift detection module 37;
the classification compression module 32: used for receiving the voice data transmitted by the data receiving module 31, classifying and compressing the voice data, and transmitting the compressed voice data to the data processing module 33;
the data processing module 33: used for receiving the compressed voice data transmitted by the classification compression module 32, performing noise reduction on the voice data, performing track identification according to the noise-reduced voice data and the track information stored in the database module 36, and transmitting the noise-reduced voice data and the identification result to the rating module 34 and the frequency drift detection module 37;
the rating module 34: used for receiving the noise-reduced voice data and the recognition result transmitted by the data processing module 33, receiving the temperature and humidity data transmitted by the data receiving module 31, generating a rating result according to the noise-reduced voice data, the recognition result, the temperature and humidity data and the instrument rating standard stored in the database module 36, and transmitting the rating result to the report generating module 35;
frequency drift detection module 37: used for receiving the noise-reduced voice data transmitted by the data processing module 33, receiving the temperature and humidity data transmitted by the data receiving module 31, generating a detection result according to the noise-reduced voice data, the temperature and humidity data and the instrument tuning standard stored in the database module 36, and transmitting the detection result to the report generating module 35;
the report generation module 35: for receiving the rating result transmitted by the rating module 34, for receiving the detection result transmitted by the frequency drift detection module 37, for generating a report according to the rating result and the detection result, and for transmitting the report to the database module 36 and the client 4;
database module 36: used for storing track information, the instrument rating standard and the instrument tuning standard, and for receiving and storing the report transmitted by the report generation module 35;
the client 4: for receiving the report transmitted by the cloud server 3, and for feeding back the report to the client.
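The breakpoint-resumable transfer mentioned for the HTTPS/WebSocket link above can be sketched as offset bookkeeping: the client re-sends only the byte ranges the server has not yet acknowledged. The chunk size and function name below are illustrative assumptions, not part of the custom protocol.

```python
def plan_resume(total_size, acked_size, chunk=64 * 1024):
    """Plan the remaining chunks of a breakpoint-resumable upload.

    The server reports how many bytes it has already received
    (acked_size); the client re-sends only the byte ranges after that
    point, so an interrupted recording upload resumes instead of
    restarting from zero.
    """
    ranges = []
    start = acked_size
    while start < total_size:
        end = min(start + chunk, total_size)
        ranges.append((start, end))
        start = end
    return ranges

# A 200 kB recording interrupted after the server acknowledged 130 kB:
print(plan_resume(200_000, 130_000))  # → [(130000, 195536), (195536, 200000)]
```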
The above description covers only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed herein, based on the technical solutions and inventive concept of the present invention, shall fall within the protection scope of the present invention.

Claims (10)

1. An intelligent evaluation method for musical instruments is characterized in that: the method comprises the following steps:
s1, burning the characteristic data: recording the MFCC characteristic coefficients of the musical instruments into a flash access module (9);
s2, signal data acquisition: starting an intelligent management system, and acquiring analog signals of the musical instrument (1) in real time by a microphone in intelligent hardware equipment (2) and transmitting the analog signals to an audio codec (8) in real time;
s3, voice data generation: the audio codec (8) converts the analog signals into digital signals in real time, filters and amplifies the digital signals to generate voice data, and transmits the voice data to an ARM processing unit (13);
s4, tone judgment: the ARM processing unit (13) judges whether the voice data is a musical tone; if so, step S5 is carried out, and if not, the method returns to step S2;
s5, voice data recording: the ARM processing unit (13) sends a driving instruction to an SD card large-capacity access module (10), and the SD card large-capacity access module (10) starts recording the musical tones;
s6, voice data transmission: after the tone recording is completed, the ARM processing unit (13) transmits the voice data to a cloud server (3) through a communication module (14);
s7, voice data processing: a data processing module (33) in the cloud server (3) compresses the voice data, performs noise reduction, and identifies the track; it transmits the processed voice data and the identification result to a rating module (34), and transmits the processed voice data to a frequency drift detection module (37);
s8, voice data rating: the rating module (34) rates the voice data according to the instrument rating standard and transmits the rating result to a report generating module (35);
s9, frequency drift detection: the frequency drift detection module (37) performs frequency drift detection on the voice data according to the musical instrument tuning standard and transmits the detection result to the report generating module (35);
s10, report generation: the report generation module (35) generates a report according to the rating result and the detection result and transmits the report to the client (4).
2. The intelligent evaluation method for musical instruments according to claim 1, characterized in that: step S7 further includes the steps of:
s71, the data processing module (33) classifies and compresses the voice data into MP3 format;
s72, performing one-dimensional convolution and down-sampling processing on the voice data based on a one-dimensional Wave-U-Net convolution neural network to obtain an intermediate result;
s73, performing up-sampling and deconvolution processing on the intermediate result, and classifying the frequency spectrum of the audio characteristic signal in each layer of convolution sampling;
s74, after the convolution sampling processing is finished, outputting separated pure musical instrument playing music tones and discarding the environmental background noise to generate the processed voice data;
s75, performing track identification according to the processed voice data and generating the identification result;
s76, the processed voice data and the recognition result are transmitted to the rating module (34), and the processed voice data are transmitted to the frequency drift detection module (37).
3. The intelligent evaluation method for musical instruments according to claim 1, characterized in that: step S8 further includes the steps of:
s81, using the Encoder of an RNN (recurrent neural network) to encode the audio spectrum feature sequence obtained by MFCC processing of the processed voice data, obtaining a semantic vector Xt;
s82, using the semantic vector Xt as the input of the Decoder of the RNN network, whose hidden-layer state is Ht = RNN(Xt, Ht-1);
s83, using the hidden state Ht at time t as the hidden state Ht+1 at time t+1, obtaining the output state Yt = RNN(Xt, Ht-1), the output states being a sequence of the spectral time series of instrument-performance tone signals;
s84, classifying and connecting the spectral time series through a long short-term memory (LSTM) neural network and connectionist temporal classification (CTC) to obtain a musical instrument performance musical tone symbolization sequence;
s85, comparing the musical tone symbolization sequence with the musical instrument rating standard to generate the rating result and transmitting the rating result to the report generating module (35).
4. The intelligent evaluation method for musical instruments according to claim 3, characterized in that: step S84 further includes the steps of:
s841, inputting each Yt in the spectral time series into the LSTM network as a time slice, followed by a softmax layer, and outputting a posterior probability matrix Y;
s842, applying the argmax function to each column of the posterior probability matrix Y to obtain the note class NETw(x) output by each column, where w denotes the parameters of the LSTM;
s843, inputting the note classifications NETw(x) into the CTC for the loss operation to align the note sequence, obtaining the musical tone symbolization sequence.
5. The intelligent evaluation method for musical instruments according to claim 3, characterized in that: the rating results include scores for the following individual items: the accuracy, rhythm, strength, speed, integrity and musicality of the performance.
6. The intelligent evaluation method for musical instruments according to claim 3, characterized in that: the step S9 further includes the following steps:
s91, the frequency drift detection module (37) continuously compares the musical tone symbolization sequence with the musical instrument tuning standard and statistically analyzes the frequency drift proportion of each individual tone;
s92, when the frequency drift proportion of every tone is less than 10%, the detection result indicates that the musical instrument (1) does not need tuning; when the frequency drift proportion of any one or more tones is greater than or equal to 10%, the detection result indicates that the musical instrument (1) needs tuning;
s93, the frequency drift detection module (37) transmits the detection result to the report generation module (35).
7. The intelligent evaluation method for musical instruments according to any one of claims 1 to 6, wherein: the system for realizing the intelligent evaluation method comprises the following steps:
intelligent hardware device (2): the system comprises a cloud server (3), a data processing unit and a data processing unit, wherein the cloud server is arranged on a musical instrument (1) and is used for collecting data information of the musical instrument (1) and transmitting the data information to the cloud server; the intelligent hardware device (2) comprises:
a microphone: is electrically connected with the audio codec (8) and is used for collecting analog signals and transmitting the analog signals to the audio codec (8);
audio codec (8): the system is electrically connected with a microphone and an ARM processing unit (13), is used for receiving the analog signal transmitted by the microphone, converting the analog signal into a digital signal, filtering and amplifying the digital signal and generating voice data, transmitting the voice data to the ARM processing unit (13), and receiving a control instruction transmitted by the ARM processing unit (13);
an ARM processing unit (13): the high-capacity SD card access module is electrically connected with the audio codec (8), the flash access module (9), the SD card high-capacity access module (10), the sram memory module (11) and the temperature and humidity sensor (12), and is used for receiving the voice data transmitted by the audio codec (8), receiving the temperature and humidity data transmitted by the temperature and humidity sensor (12), transmitting the voice data to the SD card high-capacity access module (10), transmitting the voice data and the temperature and humidity data to the communication module (14), and transmitting the control instruction to the audio codec (8), the flash access module (9), the SD card high-capacity access module (10), the sram memory module (11) and the temperature and humidity sensor (12);
flash access module (9): the system is electrically connected with the ARM processing unit (13), is used for accessing the firmware of the intelligent hardware device (2), and is used for receiving the control instruction transmitted by the ARM processing unit (13);
SD card high-capacity access module (10): electrically connected with the ARM processing unit (13), used for receiving and storing the audio data transmitted by the ARM processing unit (13) and for receiving the control instruction transmitted by the ARM processing unit (13);
a sram memory module (11): the system is electrically connected with the ARM processing unit (13), is used for caching the audio data, and is used for receiving the control instruction transmitted by the ARM processing unit (13);
temperature and humidity sensor (12): the temperature and humidity data acquisition unit is electrically connected with the ARM processing unit (13), is used for acquiring temperature and humidity data of an environment, transmitting the temperature and humidity data to the ARM processing unit (13), and is used for receiving the control instruction transmitted by the ARM processing unit (13);
communication module (14): the voice data and the temperature and humidity data are received and transmitted by the ARM processing unit (13), and the voice data and the temperature and humidity data are transmitted to a cloud server (3);
power module (5): electrically connected with the microphone, the audio codec (8), the ARM processing unit (13), the flash access module (9), the SD card high-capacity access module (10), the sram memory module (11) and the temperature and humidity sensor (12), and used for providing power for the microphone, the audio codec (8), the ARM processing unit (13), the flash access module (9), the SD card high-capacity access module (10), the sram memory module (11) and the temperature and humidity sensor (12);
cloud server (3): the voice data and the temperature and humidity data are received and transmitted by the communication module (14), the voice data and the temperature and humidity data are processed and stored, and a report is generated and transmitted to a client (4) according to a data processing result;
client (4): for receiving the report transmitted by the cloud server (3) for feeding back the report to a customer.
8. The intelligent evaluation method for musical instruments according to claim 7, characterized in that: the cloud server (3) comprises:
data receiving module (31): used for receiving the voice data and the temperature and humidity data transmitted by the communication module (14), transmitting the voice data to a classification compression module (32), and transmitting the temperature and humidity data to a rating module (34) and the frequency drift detection module (37);
classification compression module (32): used for receiving the voice data transmitted by the data receiving module (31), classifying and compressing the voice data, and transmitting the compressed voice data to a data processing module (33);
data processing module (33): used for receiving the compressed voice data transmitted by the classification compression module (32), performing noise reduction on the voice data, performing track recognition according to the noise-reduced voice data and the track information stored in a database module (36), and transmitting the noise-reduced voice data and the recognition result to the rating module (34) and the frequency drift detection module (37);
rating module (34): used for receiving the noise-reduced voice data and the recognition result transmitted by the data processing module (33), receiving the temperature and humidity data transmitted by the data receiving module (31), generating a rating result according to the noise-reduced voice data, the recognition result, the temperature and humidity data and the instrument rating standard stored in the database module (36), and transmitting the rating result to a report generation module (35);
frequency drift detection module (37): used for receiving the noise-reduced voice data transmitted by the data processing module (33), receiving the temperature and humidity data transmitted by the data receiving module (31), generating a detection result according to the noise-reduced voice data, the temperature and humidity data and the instrument tuning standard stored in the database module (36), and transmitting the detection result to the report generation module (35);
report generation module (35): for receiving the rating result transmitted by the rating module (34), for receiving the detection result transmitted by the frequency drift detection module (37), for generating the report according to the rating result and the detection result, for transmitting the report to the database module (36) and the client (4);
database module (36): for storing track information, instrument rating criteria and instrument tuning criteria, for receiving and storing the report transmitted by the report generation module (35).
9. The intelligent evaluation method for musical instruments according to claim 7, characterized in that: the microphones comprise a left channel microphone (6) and a right channel microphone (7), both electrically connected with the audio codec (8); the microphone is an electret microphone or a MEMS silicon microphone.
10. The intelligent evaluation method for musical instruments according to claim 7, characterized in that: the audio codec (8) and the ARM processing unit (13) communicate over an I2S bus; and the ARM processing unit (13) receives the audio data transmitted by the audio codec (8) in a double-DMA plus ring-buffer mode.
CN202010257405.9A 2020-04-03 2020-04-03 Intelligent evaluation method for musical instrument Pending CN111415682A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010257405.9A CN111415682A (en) 2020-04-03 2020-04-03 Intelligent evaluation method for musical instrument

Publications (1)

Publication Number Publication Date
CN111415682A true CN111415682A (en) 2020-07-14

Family

ID=71491764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010257405.9A Pending CN111415682A (en) 2020-04-03 2020-04-03 Intelligent evaluation method for musical instrument

Country Status (1)

Country Link
CN (1) CN111415682A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292708A (en) * 2020-04-03 2020-06-16 北京乐界乐科技有限公司 Intelligent management system for musical instrument

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000181471A (en) * 1996-08-06 2000-06-30 Yamaha Corp Karaoke sing-along grading apparatus
JP2009237035A (en) * 2008-03-26 2009-10-15 Naltec Inc Remote monitoring system for piano
CN105070298A (en) * 2015-07-20 2015-11-18 科大讯飞股份有限公司 Polyphonic musical instrument scoring method and device
CN107884006A (en) * 2017-10-25 2018-04-06 江苏觉创科技有限公司 Multifunctional intellectual piano monitoring system
CN108701452A (en) * 2016-02-02 2018-10-23 日本电信电话株式会社 Audio model learning method, audio recognition method, audio model learning device, speech recognition equipment, audio model learning program and speech recognition program
CN110047475A (en) * 2019-05-24 2019-07-23 郑州铁路职业技术学院 A kind of Computer Distance Education system and method
CN110364180A (en) * 2019-06-06 2019-10-22 北京容联易通信息技术有限公司 A kind of examination system and method based on audio-video processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CALVO-ZARAGOZA: "End-to-End Neural Optical Music Recognition of Monophonic Scores", APPLIED SCIENCES, 11 April 2018 (2018-04-11), pages 1 - 23 *


Similar Documents

Publication Publication Date Title
CN101023469B (en) Digital filtering method, digital filtering equipment
CN108597498A (en) Multi-microphone voice acquisition method and device
CN109275084A (en) Test method, device, system, equipment and the storage medium of microphone array
CN108847215B (en) Method and device for voice synthesis based on user timbre
WO2020155490A1 (en) Method and apparatus for managing music based on speech analysis, and computer device
CN107221319A (en) A kind of speech recognition test system and method
CN109326305B (en) Method and system for batch testing of speech recognition and text synthesis
US4343969A (en) Apparatus and method for articulatory speech recognition
CN1965218A (en) Performance prediction for an interactive speech recognition system
WO2023222089A1 (en) Item classification method and apparatus based on deep learning
CN103354445A (en) Adaptive environment music playing apparatus and method thereof
CN112992109A (en) Auxiliary singing system, auxiliary singing method and non-instantaneous computer readable recording medium
WO2023222090A1 (en) Information pushing method and apparatus based on deep learning
CN213691420U (en) Intelligent management system for musical instrument
CN108389569B (en) Bridge, string instrument and string vibration detection method
CN111415682A (en) Intelligent evaluation method for musical instrument
CN111477249A (en) Intelligent scoring method for musical instrument
WO2011122522A1 (en) Ambient expression selection system, ambient expression selection method, and program
CN110610722A (en) Short-time energy and Mel cepstrum coefficient combined novel low-complexity dangerous sound scene discrimination method based on vector quantization
CN111292708A (en) Intelligent management system for musical instrument
CN110739006B (en) Audio processing method and device, storage medium and electronic equipment
CN105632523B (en) Adjust the method and apparatus and terminal of the volume output valve of audio data
CN111415688A (en) Intelligent recording method for musical instrument
CN114664303A (en) Continuous voice instruction rapid recognition control system
CN113409809B (en) Voice noise reduction method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240108

Address after: Floor 1, Building G, Yangzhou Software Park, No. 201 Wenchang East Road, Ecological Science and Technology New City, Yangzhou City, Jiangsu Province, 225000

Applicant after: Lehe Data Information Technology Jiangsu Co.,Ltd.

Address before: 100085 room 0714 / 0716, 7 / F, No.26, shangdixinxi Road, Haidian District, Beijing

Applicant before: BEIJING LEJIELE TECHNOLOGY CO.,LTD.