CN109003614A - Voice transmission method, voice transmission system and terminal - Google Patents

Voice transmission method, voice transmission system and terminal

Info

Publication number
CN109003614A
Authority
CN
China
Prior art keywords
signal
main feature
voice coding
voice
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810857446.4A
Other languages
Chinese (zh)
Inventor
刘小东
孟凡靖
李明静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aiyouwei Software Development Co Ltd
Original Assignee
Shanghai Aiyouwei Software Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aiyouwei Software Development Co Ltd
Priority to CN201810857446.4A
Publication of CN109003614A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 — Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 — Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the analysis technique
    • G10L25/30 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the analysis technique using neural networks


Abstract

The present application relates to the field of intelligent terminal technology, and in particular to a voice transmission method, a voice transmission system and a terminal. The technical solution of the application combines compressed sensing with a recurrent neural network (RNN) in the voice transmission process: at the transmitting end, the acquired original speech signal is pre-processed into a time series signal S; the main feature signal in S is extracted by compressed sensing; the extracted main feature signal is then encoded by the RNN to form a voice coding sequence H. At the receiving end, the voice coding sequence H is received and decoded by the RNN to obtain the main feature signal Z, which is then reconstructed by a compressed-sensing reconstruction technique to recover the original speech signal, completing the entire voice transmission process. Combining compressed sensing with a recurrent neural network for voice transmission substantially improves voice transmission efficiency.

Description

Voice transmission method, voice transmission system and terminal
Technical field
The present application relates to the field of intelligent terminal technology, and in particular to a voice transmission method, a voice transmission system and a terminal.
Background technique
With the development of the digital information age, compression and reconstruction technology has attracted increasing attention; speech compression coding and transmission in particular have become a research hot spot. Existing speech compression and reconstruction techniques mostly rely on the Shannon sampling theorem, which requires the sampling frequency of a speech signal to be at least twice its highest frequency for distortion-free reconstruction of the original signal, and a large number of sampling points are needed to restore the original signal. This inflates the sample size and lowers transmission efficiency; if the original speech carries a large amount of information, even more samples must be acquired, while most of the transform-matrix basis coefficients are discarded, wasting data and hardware resources. A technical solution is therefore needed that improves voice transmission efficiency while preserving the integrity of the voice data during transmission.
Summary of the invention
The purpose of the present application is to provide a voice transmission method, a voice transmission system and a terminal that use compressed sensing and a recurrent neural network (RNN) to compress, encode, decode and restore speech, thereby realizing voice transmission, effectively improving voice transmission efficiency, and preserving the integrity of the voice data during transmission.
According to a first aspect of some embodiments of the present application, an embodiment of the application provides a voice transmission method applied to a transmitting end, the method comprising:
obtaining an original speech signal;
pre-processing the original speech signal to form a time series signal S;
wherein the time series signal S is a one-dimensional signal varying over time N, and is a sparse signal;
the time series signal S comprises multiple signal points, S = {s1, s2, ..., sn};
the time N comprises multiple time points, N = {1, 2, ..., n};
wherein each time point i in time N corresponds to a signal point s(i) in the time series signal S, i ∈ [1, n];
extracting the main feature signal Z from the time series signal S;
inputting the main feature signal Z into a recurrent neural network (RNN) for encoding to obtain a voice coding sequence H;
transmitting the voice coding sequence H to a receiving end.
Optionally, extracting the main feature signal Z from the time series signal S comprises:
performing compressed-sensing processing on each signal point s(i) in the time series signal S based on a first predetermined formula to obtain the main feature signal Z; the main feature signal Z comprises multiple characteristic signal points z(i);
the first predetermined formula is: S_{n×1} = T·P, i.e. S = Σᵢ p(i)·t(i), i = 1, ..., n;
wherein the time series signal S is a column vector S_{n×1} in the space Rⁿ;
P is the set of compression coefficients, P = {p(1), p(2), ..., p(n)};
T is the orthonormal basis set that linearly represents the column vector S_{n×1}, T = {t(1), t(2), ..., t(n)};
p(i) is the compression coefficient corresponding to the signal point s(i);
t(i) is the orthonormal basis vector corresponding to the signal point s(i).
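As an illustration of this extraction step, the sketch below represents the time series signal S in an orthonormal basis T (so that S = TP) and keeps only the largest-magnitude coefficients as the main feature signal Z. The top-k selection rule, the identity basis, and the use of NumPy are illustrative assumptions; the patent only states that each signal point s(i) undergoes compressed-sensing processing.

```python
import numpy as np

def extract_main_features(s, T, k):
    # Coefficients p(i) of S in the orthonormal basis T: since S = T @ P
    # and T is orthonormal, P = T.T @ S.
    P = T.T @ s
    # Keep the k largest-magnitude coefficients (assumed selection rule)
    # as the sparse "main feature signal" Z; zero out the rest.
    keep = np.argsort(np.abs(P))[-k:]
    Z = np.zeros_like(P)
    Z[keep] = P[keep]
    return Z

# Toy usage: a signal that is already sparse in the identity basis.
T = np.eye(8)                                  # orthonormal basis (identity here)
s = np.array([0.0, 5.0, 0.0, 0.0, -3.0, 0.0, 0.0, 0.0])
Z = extract_main_features(s, T, k=2)           # keeps the 5.0 and -3.0 entries
```

With a real speech frame, T would be a basis in which speech is approximately sparse (e.g. a DCT basis) rather than the identity.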
Optionally, inputting the main feature signal Z into the recurrent neural network (RNN) for encoding to obtain the voice coding sequence H comprises:
inputting the obtained main feature signal Z into a pre-built recurrent neural network (RNN) model;
wherein the RNN model comprises at least an input layer, a hidden layer and an output layer;
transmitting the multiple characteristic signal points z(i) in the main feature signal Z to the hidden layer through the input layer;
encoding each characteristic signal point z(i) in the main feature signal Z through the hidden layer to obtain a voice coding sequence H comprising multiple voice codes h(i);
each voice code h(i) corresponds to a characteristic signal point z(i).
Optionally, encoding each characteristic signal point z(i) in the main feature signal Z through the hidden layer to obtain the voice coding sequence H comprising multiple voice codes h(i) comprises:
encoding each input characteristic signal point z(i) in the hidden layer according to a second predetermined formula to compute the voice code h(i) corresponding to that characteristic signal point z(i);
forming the voice coding sequence H from the multiple voice codes h(i);
the second predetermined formula is: h(i) = σ(U·z(i) + W·h(i−1) + b);
wherein σ is the activation function of the RNN model, and b is the first linear bias coefficient;
U is the first linear relationship coefficient of the RNN model, and W is the second linear relationship coefficient of the RNN model.
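A minimal sketch of this hidden-layer recurrence is given below. The tanh activation, the zero initial hidden state h(0) = 0, and the random weight values are all assumptions for illustration; the patent specifies neither σ nor the initialization, and in the described system U, W and b would come from the pre-built (trained) RNN model.

```python
import numpy as np

def rnn_encode(Z, U, W, b, sigma=np.tanh):
    # Second predetermined formula: h(i) = sigma(U z(i) + W h(i-1) + b),
    # applied left to right over the characteristic signal points z(i).
    H = []
    h_prev = np.zeros_like(b)              # assumed initial hidden state h(0) = 0
    for z in Z:
        h = sigma(U @ np.atleast_1d(z) + W @ h_prev + b)
        H.append(h)
        h_prev = h
    return np.array(H)                     # voice coding sequence H, one h(i) per z(i)

# Toy usage: encode a 4-point feature signal into a 3-dimensional code per step.
rng = np.random.default_rng(0)
U = rng.standard_normal((3, 1))            # first linear relationship coefficient
W = rng.standard_normal((3, 3))            # second linear relationship coefficient
b = np.zeros(3)                            # first linear bias coefficient
Z = np.array([0.5, -1.0, 0.0, 2.0])
H = rnn_encode(Z, U, W, b)
```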
According to another aspect of the present application, an embodiment of the application further provides a voice transmission method applied to a receiving end, the method comprising:
receiving the voice coding sequence H transmitted by the transmitting end;
decoding the voice coding sequence H based on the recurrent neural network (RNN) model to obtain the main feature signal Z;
reconstructing the decoded main feature signal Z to obtain the original speech signal.
Optionally, receiving the voice coding sequence H composed of voice codes h(i) transmitted by the transmitting end comprises:
receiving the voice coding sequence H comprising multiple voice codes h(i) through the output layer of the recurrent neural network (RNN) model.
Decoding the voice coding sequence H based on the RNN model to obtain the main feature signal Z comprises:
decoding each voice code h(i) in the output layer based on a third predetermined formula to obtain the corresponding characteristic signal point z(i), thereby forming the main feature signal Z;
wherein the third predetermined formula is: z(i) = V·h(i) + c;
and wherein V is the third linear relationship coefficient of the RNN model, and c is the second linear bias coefficient.
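The output-layer decoding step is a per-code affine map and can be sketched directly. The concrete values of V and c below are illustrative only; in the described system they would come from the pre-built RNN model.

```python
import numpy as np

def rnn_decode(H, V, c):
    # Third predetermined formula: z(i) = V h(i) + c, applied independently
    # to every voice code h(i) in the received voice coding sequence H.
    return np.array([V @ h + c for h in H])

# Toy usage with assumed shapes: 3-dimensional codes decoded to scalar points.
V = np.array([[0.2, -0.1, 0.4]])           # third linear relationship coefficient
c = np.array([0.05])                       # second linear bias coefficient
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
Z = rnn_decode(H, V, c)                    # one decoded feature point per code
```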
Optionally, reconstructing the decoded main feature signal Z to obtain the original speech signal comprises:
performing a coefficient transformation on the main feature signal Z using an observation matrix G_{m×n} to obtain an observation result Y;
wherein Y = GP;
solving for the compression coefficients P via the l0-norm optimization problem: min ||P||₀ s.t. Y = G·T⁻¹·S;
solving for an approximation of the time series signal S;
wherein S is the time series signal, T is the orthonormal basis set, and G is the observation matrix;
restoring the original speech signal from the approximation of the time series signal S.
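The l0 problem min ||P||₀ s.t. Y = GP has no closed-form solution; one common greedy solver is orthogonal matching pursuit (OMP), sketched below. The patent does not name a particular reconstruction algorithm, so OMP, the hand-built observation matrix G, and the identity basis T are all illustrative assumptions.

```python
import numpy as np

def omp_reconstruct(Y, G, T, k):
    # Greedy solve of  min ||P||_0  s.t.  Y = G P  (k atom selections),
    # then return the approximate time series signal S = T P.
    n = G.shape[1]
    residual = Y.astype(float).copy()
    support = []
    for _ in range(k):
        corr = np.abs(G.T @ residual)      # correlate atoms with the residual
        if support:
            corr[support] = 0.0            # do not reselect chosen atoms
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(G[:, support], Y, rcond=None)
        residual = Y - G[:, support] @ coef
    P = np.zeros(n)
    P[support] = coef
    return T @ P

# Toy usage: Y was produced from a 2-sparse coefficient vector.
G = np.array([[1.0, 0.0, 0.0, 0.5],
              [0.0, 1.0, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.5]])       # illustrative observation matrix G (3x4)
T = np.eye(4)                              # orthonormal basis (identity here)
Y = np.array([1.0, 0.0, 2.0])              # = G @ [1, 0, 2, 0]
S_approx = omp_reconstruct(Y, G, T, k=2)   # recovers [1, 0, 2, 0]
```

In practice convex l1 relaxation (basis pursuit) is another standard substitute for the l0 objective.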
According to another aspect of the present application, an embodiment of the application further provides a voice transmission system comprising a transmitting end and a receiving end;
wherein the transmitting end is configured to perform at least the following operations:
obtaining an original speech signal;
pre-processing the original speech signal to form a time series signal S;
wherein the time series signal S is a one-dimensional signal varying over time N;
the time series signal S comprises multiple signal points, S = {s1, s2, ..., sn};
the time N comprises multiple time points, N = {1, 2, ..., n};
wherein each time point i in time N corresponds to a signal point s(i) in the time series signal S, i ∈ [1, n];
extracting the main feature signal Z from the time series signal S;
encoding the main feature signal Z through the recurrent neural network (RNN) and transmitting it to the receiving end.
The receiving end is configured to perform at least the following operations:
receiving the voice coding sequence H transmitted by the transmitting end;
decoding the voice coding sequence H based on the recurrent neural network (RNN) model to obtain the main feature signal Z;
reconstructing the decoded main feature signal Z to obtain the original speech signal.
According to another aspect of the present application, an embodiment of the application further provides a terminal for sending information, used to transmit voice to an information receiving terminal, the information sending terminal comprising:
a memory configured to store data and instructions; and
a processor in communication with the memory;
wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
obtaining an original speech signal;
pre-processing the original speech signal to form a time series signal S;
wherein the time series signal S is a one-dimensional signal varying over time N;
the time series signal S comprises multiple signal points, S = {s1, s2, ..., sn};
the time N comprises multiple time points, N = {1, 2, ..., n};
wherein each time point i in time N corresponds to a signal point s(i) in the time series signal S, i ∈ [1, n];
extracting the main feature signal Z from the time series signal S;
inputting the main feature signal Z into the recurrent neural network (RNN) for encoding to obtain a voice coding sequence H;
transmitting the voice coding sequence H to the receiving end.
According to another aspect of the present application, an embodiment of the application further provides a terminal for receiving information, used to receive the voice transmitted by an information sending terminal, the information receiving terminal comprising:
a memory configured to store data and instructions; and
a processor in communication with the memory;
wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
receiving the voice coding sequence H transmitted by the information sending terminal;
decoding the voice coding sequence H based on the recurrent neural network (RNN) model to obtain the main feature signal Z;
reconstructing the decoded main feature signal Z to obtain the original speech signal.
The above technical solution of the present application combines compressed sensing and recurrent neural network (RNN) technology in the voice transmission process: at the transmitting end, the acquired original speech signal is pre-processed into a time series signal S, the main feature signal in S is extracted by compressed sensing, and the extracted main feature signal is then encoded by the RNN to form a voice coding sequence H;
at the receiving end, the voice coding sequence H is received and decoded by the RNN to obtain the main feature signal Z;
the decoded main feature signal Z is then reconstructed by a compressed-sensing reconstruction technique to obtain the original speech signal, thereby completing the entire voice transmission process.
Compressed sensing, also known as compressive sampling, fully exploits the sparsity (or compressibility) of a signal: a sampling matrix incoherent with the transform basis projects the high-dimensional transformed signal onto a low-dimensional space, and the original signal is then reconstructed with high probability from this small number of projections by solving an optimization problem. This breaks the constraint of the traditional Nyquist sampling theorem and realizes compression during the sensing of an unknown signal; under certain conditions, only a small amount of data needs to be sampled, and a good reconstruction algorithm can accurately recover the original signal, guaranteeing distortion-free reconstruction.
Compressed sensing is a high-dimensional information compression technique that mainly solves the problem of compressing information and making the raw information lightweight. Considering that the transmitting end and receiving end form an end-to-end structural model during transmission, and that the recurrent neural network (RNN) is precisely the technique for seq2seq (sequence-to-sequence) scenarios — what an RNN handles best is time-series data — and since a speech signal varies over time and can be converted into a time series signal, the RNN realizes the sequence-data path from input to output, i.e. from transmitting end to receiving end. On the one hand this effectively guarantees the integrity of the voice data; on the other hand a deep-learning encoding/decoding model can be formed from the input sequence data, which benefits intelligent transmission and improves the transmission efficiency of encoding and decoding.
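As an illustration only, the end-to-end flow described above (sparsify, RNN-encode, transmit, RNN-decode, reconstruct) can be sketched with random, untrained weights. Because the weights are untrained, the decoded values will not match the originals — in the application the RNN model is pre-built (trained) so that decoding inverts encoding — so this sketch traces only the shapes and data flow.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 8, 3                        # signal length and hidden size (illustrative)

# Transmitting end: time series signal S -> sparse main feature signal Z.
S = rng.standard_normal(n)
T = np.eye(n)                      # orthonormal basis (identity for simplicity)
P = T.T @ S
keep = np.argsort(np.abs(P))[-3:]  # keep 3 largest coefficients (assumed rule)
Z = np.zeros(n)
Z[keep] = P[keep]

# Encode with an (untrained, random) RNN: h(i) = tanh(U z(i) + W h(i-1) + b).
U, W, b = rng.standard_normal((d, 1)), rng.standard_normal((d, d)), np.zeros(d)
H, h = [], np.zeros(d)
for z in Z:
    h = np.tanh(U @ [z] + W @ h + b)
    H.append(h)
H = np.array(H)                    # voice coding sequence sent to the receiver

# Receiving end: decode each code with z(i) = V h(i) + c, then synthesize.
V, c = rng.standard_normal((1, d)), np.zeros(1)
Z_rx = np.array([V @ hi + c for hi in H]).ravel()
S_rx = T @ Z_rx                    # reconstruction step (basis synthesis)
```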
Brief description of the drawings
For a better understanding and illustration of some embodiments of the present application, the embodiments are described below with reference to the accompanying drawings, in which the same reference numerals indicate corresponding parts.
Fig. 1 is a schematic flow chart of the voice transmission method applied to the transmitting end provided by an embodiment of the present application;
Fig. 2 is a schematic flow chart of the voice transmission method applied to the receiving end provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of the recurrent neural network (RNN) model provided by some embodiments of the present application.
Detailed description of the embodiments
High-dimensional voice data is transmitted relatively slowly, and its computational time and space complexity increase sharply with the data dimension, lowering efficiency; compressed sensing provides a solution for efficient voice transmission. The purpose of the present application is to propose a compressed-sensing voice RNN encoding/decoding transmission method: before transmission, the raw voice data is compressed to extract the main feature information of the voice data; during transmission, the feature information is encoded and decoded through the RNN network to realize fast transmission of the voice information.
To make the purposes, technical solutions and advantages of the present application clearer, the application is further described below in combination with specific embodiments and with reference to the accompanying drawings. It should be understood that these descriptions are merely illustrative and are not intended to limit the scope of the application. In addition, descriptions of well-known structures and technologies are omitted below to avoid unnecessarily obscuring the concepts of the application.
The terms and phrases used in the following description and claims are not limited to their literal meaning, but are used merely to enable a clear and consistent understanding of the application. Therefore, those skilled in the art will understand that the descriptions of the various embodiments of the application are provided for illustration only, and not to limit the application as defined by the appended claims and their equivalents.
The technical solutions in the embodiments of the present application are described below clearly and completely in combination with the accompanying drawings of some embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the application.
It should be noted that the terms used in the embodiments of the present application are merely for describing specific embodiments and are not intended to limit the application. The singular forms "a", "an", "the" and "said" used in the embodiments and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. Expressions such as "first" and "second" are used to modify the respective elements without regard to order or importance; they are used only to distinguish one element from another, without limiting the elements themselves. In addition, the technical features involved in the different embodiments of the application described below may be combined with each other as long as they do not conflict.
For clarity of description, the same step may be given different reference numerals in different drawings.
A terminal according to some embodiments of the present application may be an electronic device, which may include one or a combination of a smart phone, a personal computer (PC, such as a tablet computer, desktop computer, notebook, netbook, or palmtop PDA), a mobile phone, an e-book reader, a portable media player (PMP), an audio/video player (MP3/MP4), a video camera, a virtual reality (VR) device, a wearable device, and the like. According to some embodiments of the application, the wearable device may include an accessory type (such as a watch, ring, bracelet, glasses or head-mounted device (HMD)), an integrated type (such as electronic clothing), a decorated type (such as a skin pad, a tattoo or a built-in electronic device), or a combination of several of these. In some embodiments of the application, the electronic device may be flexible, is not limited to the above devices, or may be a combination of one or more of the above devices. In this application, the term "user" may denote a person who uses the electronic device or a device that uses the electronic device (such as an artificial-intelligence electronic device).
The embodiments are described in detail below in the order of Fig. 1 to Fig. 3 with reference to the accompanying drawings.
Please refer to Fig. 1, which is a schematic flow chart provided according to some embodiments of the present application.
As shown in Fig. 1, according to a first aspect of some embodiments of the present application, an embodiment of the application provides a voice transmission method applied to a transmitting end, the method comprising:
Step S101: obtaining an original speech signal; it should be noted that the original speech signal may be recorded by a hardware device such as a microphone;
Step S102: pre-processing the original speech signal to form a time series signal S;
wherein the time series signal S is a one-dimensional signal varying over time N;
the time series signal S comprises multiple signal points, S = {s1, s2, ..., sn};
the time N comprises multiple time points, N = {1, 2, ..., n};
wherein each time point i in time N corresponds to a signal point s(i) in the time series signal S, i ∈ [1, n];
Step S103: extracting the main feature signal Z from the time series signal S;
Step S104: inputting the main feature signal Z into the recurrent neural network (RNN) for encoding to obtain a voice coding sequence H;
Step S105: transmitting the voice coding sequence H to the receiving end.
Optionally, extracting the main feature signal Z from the time series signal S comprises:
performing compressed-sensing processing on each signal point s(i) in the time series signal S based on the first predetermined formula to obtain the main feature signal Z; the main feature signal Z comprises multiple characteristic signal points z(i);
the first predetermined formula is: S_{n×1} = T·P, i.e. S = Σᵢ p(i)·t(i), i = 1, ..., n;
wherein the time series signal S is a column vector S_{n×1} in the space Rⁿ;
P is the set of compression coefficients, P = {p(1), p(2), ..., p(n)};
T is the orthonormal basis set that linearly represents the column vector S_{n×1}, T = {t(1), t(2), ..., t(n)};
p(i) is the compression coefficient corresponding to the signal point s(i);
t(i) is the orthonormal basis vector corresponding to the signal point s(i).
Optionally, inputting the main feature signal Z into the recurrent neural network (RNN) for encoding to obtain the voice coding sequence H comprises:
inputting the obtained main feature signal Z into the pre-built recurrent neural network (RNN) model;
wherein, as shown in Fig. 3, Fig. 3 is a schematic diagram of the recurrent neural network (RNN) model provided by some embodiments of the present application;
the RNN model comprises an input layer, a hidden layer and an output layer;
wherein, as an optional embodiment, the input layer interfaces with the input end and the output layer interfaces with the output end, so that the RNN carries the voice from the input end to the output end. In this application, the input of the input layer is the main feature signal Z comprising multiple characteristic signal points z(i), wherein each characteristic signal point z(i) corresponds to one input unit of the input layer (input(i−1), input(i), input(i+1) as illustrated in the drawing); if the main feature signal Z has i characteristic signal points, the input layer likewise contains i input units, the corresponding hidden layer contains multiple hidden units, and the corresponding output layer likewise has multiple units. The basic construction of a recurrent neural network RNN is prior art; this application mainly uses an existing RNN to realize the coded transmission of voice.
Transmitting the multiple characteristic signal points z(i) in the main feature signal Z to the hidden layer through the input layer;
encoding each characteristic signal point z(i) in the main feature signal Z through the hidden layer to obtain the voice coding sequence H comprising multiple voice codes h(i);
each voice code h(i) corresponds to a characteristic signal point z(i).
Optionally, encoding each characteristic signal point z(i) in the main feature signal Z through the hidden layer to obtain the voice coding sequence H comprising multiple voice codes h(i) comprises:
encoding each input characteristic signal point z(i) in the hidden layer according to the second predetermined formula to compute the voice code h(i) corresponding to that characteristic signal point z(i);
forming the voice coding sequence H from the multiple voice codes h(i);
the second predetermined formula is: h(i) = σ(U·z(i) + W·h(i−1) + b);
wherein σ is the activation function of the RNN model, and b is the first linear bias coefficient;
U is the first linear relationship coefficient of the RNN model, and W is the second linear relationship coefficient of the RNN model.
Please refer to Fig. 2, which is a schematic flow chart of the voice transmission method applied to the receiving end provided by an embodiment of the present application.
As shown in Fig. 2, according to another aspect of the present application, an embodiment of the application further provides a voice transmission method applied to a receiving end, the method comprising:
Step S201: receiving the voice coding sequence H transmitted by the transmitting end;
Step S202: decoding the voice coding sequence H based on the recurrent neural network (RNN) model to obtain the main feature signal Z;
Step S203: reconstructing the decoded main feature signal Z to obtain the original speech signal.
Optionally, receiving the voice coding sequence H composed of voice codes h(i) transmitted by the transmitting end comprises:
receiving the voice coding sequence H comprising multiple voice codes h(i) through the output layer of the recurrent neural network (RNN) model.
Decoding the voice coding sequence H based on the RNN model to obtain the main feature signal Z comprises:
decoding each voice code h(i) in the output layer based on the third predetermined formula to obtain the corresponding characteristic signal point z(i), thereby forming the main feature signal Z;
wherein the third predetermined formula is: z(i) = V·h(i) + c;
and wherein V is the third linear relationship coefficient of the RNN model, and c is the second linear bias coefficient.
Optionally, reconstructing the decoded main feature signal Z to obtain the original speech signal comprises:
performing a coefficient transformation on the main feature signal Z using the observation matrix G_{m×n} to obtain an observation result Y;
wherein Y = GP;
solving for the compression coefficients P via the l0-norm optimization problem: min ||P||₀ s.t. Y = G·T⁻¹·S;
solving for an approximation of the time series signal S;
wherein S is the time series signal, T is the orthonormal basis set, and G is the observation matrix;
restoring the original speech signal from the approximation of the time series signal S.
According to another aspect of the present application, an embodiment of the application further provides a voice transmission system comprising a transmitting end and a receiving end;
wherein the transmitting end is configured to perform at least the following operations:
obtaining an original speech signal;
pre-processing the original speech signal to form a time series signal S;
wherein the time series signal S is a one-dimensional signal varying over time N;
the time series signal S comprises multiple signal points, S = {s1, s2, ..., sn};
the time N comprises multiple time points, N = {1, 2, ..., n};
wherein each time point i in time N corresponds to a signal point s(i) in the time series signal S, i ∈ [1, n];
extracting the main feature signal Z from the time series signal S;
encoding the main feature signal Z through the recurrent neural network (RNN) and transmitting it to the receiving end.
The receiving end is configured to perform at least the following operations:
receiving the voice coding sequence H transmitted by the transmitting end;
decoding the voice coding sequence H based on the recurrent neural network (RNN) model to obtain the main feature signal Z;
reconstructing the decoded main feature signal Z to obtain the original speech signal.
As an optional embodiment,
when extracting the main feature signal Z from the time-series signal S, the transmitting end is at least configured to perform the following operations:
performing compressed-sensing processing on each signal point s(i) in the time-series signal S based on a first predetermined formula to obtain the main feature signal Z; the main feature signal Z includes multiple feature signal points z(i);
the first predetermined formula being: S_{n×1} = TP, i.e., S = Σ_{i=1}^{n} p(i)·t(i);
wherein the time-series signal S is a column vector S_{n×1} in the space R^n;
P is the set of compression coefficients, P = {p(1), p(2), ..., p(n)};
T is the orthonormal basis set that linearly expresses the column vector S_{n×1}, T = {t(1), t(2), ..., t(n)};
p(i) is the compression coefficient corresponding to signal point s(i);
t(i) is the orthonormal basis corresponding to signal point s(i).
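The relation S = TP above can be sketched numerically: for an orthonormal basis T, the compression coefficients follow directly as P = TᵀS. This is a minimal NumPy illustration under assumed values (the basis, signal size, and sparsity pattern are made up for the example), not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Orthonormal basis set T (columns t(1)..t(n)), built here via QR decomposition.
T, _ = np.linalg.qr(rng.standard_normal((n, n)))

# A time-series signal S_{n x 1} that is sparse in T: two nonzero coefficients.
p_true = np.zeros(n)
p_true[[1, 5]] = [3.0, -2.0]
S = T @ p_true                      # S = T P, i.e. S = sum_i p(i) t(i)

# Because T is orthonormal (T^T T = I), the compression coefficients are
P = T.T @ S
assert np.allclose(P, p_true)       # exact recovery of the sparse coefficients
```

Sparsity of P in the basis T is what the later compressed-sensing reconstruction step relies on.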
When inputting the main feature signal Z into the recurrent neural network (RNN) for encoding to obtain the voice coding sequence H, the transmitting end is at least configured to perform the following operations:
inputting the obtained main feature signal Z into a pre-built recurrent neural network (RNN) model;
wherein the RNN model includes at least an input layer, a hidden layer, and an output layer;
the multiple feature signal points z(i) in the main feature signal Z are transmitted to the hidden layer through the input layer;
each feature signal point z(i) in the main feature signal Z is encoded by the hidden layer to obtain the voice coding sequence H, which includes multiple voice codes h(i);
the voice code h(i) corresponds to the feature signal point z(i).
When encoding each feature signal point z(i) in the main feature signal Z by the hidden layer to obtain the voice coding sequence H including multiple voice codes h(i), the transmitting end is at least configured to perform the following operations:
encoding the input feature signal point z(i) in the hidden layer according to a second predetermined formula to calculate the voice code h(i) corresponding to the feature signal point z(i);
forming the voice coding sequence H based on the multiple voice codes h(i);
the second predetermined formula being: h(i) = σ(U·z(i) + W·h(i−1) + b);
wherein σ is the activation function of the RNN model, and b is a first linear bias coefficient;
U is a first linear relationship coefficient of the RNN model, and W is a second linear relationship coefficient of the RNN model.
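The hidden-layer recursion h(i) = σ(U·z(i) + W·h(i−1) + b) can be sketched as follows. This is only an illustration under assumptions: the patent does not specify the activation σ, the dimensions, or how U, W, b are trained, so a sigmoid and made-up sizes are used here:

```python
import numpy as np

def rnn_encode(Z, U, W, b):
    """Encode feature points z(1..n) into voice codes h(1..n) via
    h(i) = sigma(U z(i) + W h(i-1) + b), with h(0) = 0."""
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))   # assumed activation function
    h = np.zeros(b.shape[0])                      # initial hidden state h(0)
    H = []
    for z_i in Z:
        h = sigma(U @ z_i + W @ h + b)            # second predetermined formula
        H.append(h)
    return H

# Example with feature dimension 3 and code dimension 4 (made-up sizes).
rng = np.random.default_rng(1)
U = rng.standard_normal((4, 3))
W = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
Z = [rng.standard_normal(3) for _ in range(5)]
H = rnn_encode(Z, U, W, b)          # one voice code h(i) per feature point z(i)
```

Each code h(i) depends on the current feature point z(i) and the previous code h(i−1), which is how the recurrence carries temporal context through the sequence.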
As an alternative embodiment,
when receiving the voice coding sequence H, composed of voice codes h(i), transmitted by the transmitting end, the receiving end is at least configured to perform the following operation:
receiving the voice coding sequence H, which includes multiple voice codes h(i), through the output layer of the recurrent neural network (RNN) model.
The method of decoding the voice coding sequence H based on the RNN model to obtain the main feature signal Z includes:
decoding each voice code h(i) in the output layer based on a third predetermined formula to obtain the corresponding feature signal point z(i), thereby forming the main feature signal Z;
wherein the third predetermined formula is: z(i) = V·h(i) + c;
wherein V is a third linear relationship coefficient of the RNN model, and c is a second linear bias coefficient.
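The third predetermined formula z(i) = V·h(i) + c is a per-code affine map; a minimal sketch with assumed dimensions (the patent does not fix them, nor how V and c are learned):

```python
import numpy as np

def rnn_decode(H, V, c):
    """Decode each voice code h(i) back to a feature point: z(i) = V h(i) + c."""
    return [V @ h_i + c for h_i in H]

# Made-up example: codes of dimension 4 decoded to feature points of dimension 3.
rng = np.random.default_rng(2)
V = rng.standard_normal((3, 4))
c = rng.standard_normal(3)
H = [rng.standard_normal(4) for _ in range(5)]
Z = rnn_decode(H, V, c)             # one feature point z(i) per voice code h(i)
```

Unlike the encoding step, decoding here has no recurrence: each z(i) depends only on its own code h(i).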
When reconstructing the decoded main feature signal Z to obtain the original speech signal, the receiving end is at least configured to perform the following operations:
performing a coefficient transformation on the main feature signal Z using the observation matrix G_{m×n} to obtain an observation result Y;
wherein Y = GP;
solving the compression coefficients P via the optimization problem under the ℓ0 norm: min ||P||_0 s.t. Y = GT^{-1}S;
solving an approximation of the time-series signal S;
wherein S is the time-series signal, T is the orthonormal basis set, and G is the observation matrix;
restoring the original speech signal from the approximation of the time-series signal S.
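The ℓ0 problem min ||P||_0 s.t. Y = GP is NP-hard in general and is approximated in practice by greedy or ℓ1 methods. The sketch below uses orthogonal matching pursuit (OMP), one standard choice, under an assumed known sparsity level; the patent does not name a specific solver, and the matrix sizes here are made up:

```python
import numpy as np

def omp(G, y, k):
    """Orthogonal matching pursuit: greedily approximate
    min ||p||_0  s.t.  G p = y, assuming at most k nonzeros in p."""
    n = G.shape[1]
    support, residual = [], y.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(G.T @ residual)))  # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(G[:, support], y, rcond=None)
        residual = y - G[:, support] @ coef         # re-fit on the selected support
    p = np.zeros(n)
    p[support] = coef
    return p

# Made-up example: m = 12 observations of n = 16 coefficients with 1 nonzero.
rng = np.random.default_rng(3)
m, n = 12, 16
G = rng.standard_normal((m, n))
G /= np.linalg.norm(G, axis=0)       # observation matrix G_{m x n}, unit-norm columns
p_true = np.zeros(n)
p_true[3] = 2.0
y = G @ p_true                        # Y = G P
p_hat = omp(G, y, 1)                  # recovers the sparse coefficients
```

Once P is recovered, the approximation of S follows as S ≈ TP, from which the original speech signal is restored.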
According to another aspect of the present application, embodiments of the present application further provide a terminal for sending information, configured to transmit voice to an information-receiving terminal, the information-sending terminal including:
a memory configured to store data and instructions;
a processor in communication with the memory;
wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
obtaining an original speech signal;
preprocessing the original speech signal to form a time-series signal S;
wherein the time-series signal S is a one-dimensional signal varying with time N;
the time-series signal S includes multiple signal points, S = {s(1), s(2), ..., s(n)};
the time N includes multiple time points, N = {1, 2, ..., n};
wherein a time point i in the time N corresponds to a signal point s(i) in the time-series signal S; i ∈ [1, n];
extracting a main feature signal Z from the time-series signal S;
inputting the main feature signal Z into a recurrent neural network (RNN) for encoding to obtain a voice coding sequence H;
transmitting the voice coding sequence H to a receiving end.
As an optional embodiment,
when extracting the main feature signal Z from the time-series signal S, the processor is configured to perform the following operations:
performing compressed-sensing processing on each signal point s(i) in the time-series signal S based on the first predetermined formula to obtain the main feature signal Z; the main feature signal Z includes multiple feature signal points z(i);
the first predetermined formula being: S_{n×1} = TP, i.e., S = Σ_{i=1}^{n} p(i)·t(i);
wherein the time-series signal S is a column vector S_{n×1} in the space R^n;
P is the set of compression coefficients, P = {p(1), p(2), ..., p(n)};
T is the orthonormal basis set that linearly expresses the column vector S_{n×1}, T = {t(1), t(2), ..., t(n)};
p(i) is the compression coefficient corresponding to signal point s(i);
t(i) is the orthonormal basis corresponding to signal point s(i).
When inputting the main feature signal Z into the recurrent neural network (RNN) for encoding to obtain the voice coding sequence H, the processor is configured to perform the following operations:
inputting the obtained main feature signal Z into a pre-built recurrent neural network (RNN) model;
wherein the RNN model includes at least an input layer, a hidden layer, and an output layer;
the multiple feature signal points z(i) in the main feature signal Z are transmitted to the hidden layer through the input layer;
each feature signal point z(i) in the main feature signal Z is encoded by the hidden layer to obtain the voice coding sequence H, which includes multiple voice codes h(i);
the voice code h(i) corresponds to the feature signal point z(i).
When encoding each feature signal point z(i) in the main feature signal Z by the hidden layer to obtain the voice coding sequence H including multiple voice codes h(i), the processor is configured to perform the following operations:
encoding the input feature signal point z(i) in the hidden layer according to the second predetermined formula to calculate the voice code h(i) corresponding to the feature signal point z(i);
forming the voice coding sequence H based on the multiple voice codes h(i);
the second predetermined formula being: h(i) = σ(U·z(i) + W·h(i−1) + b);
wherein σ is the activation function of the RNN model, and b is a first linear bias coefficient;
U is a first linear relationship coefficient of the RNN model, and W is a second linear relationship coefficient of the RNN model.
According to another aspect of the present application, embodiments of the present application further provide a terminal for receiving information, configured to receive voice transmitted by an information-sending terminal, the information-receiving terminal including:
a memory configured to store data and instructions;
a processor in communication with the memory;
wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
receiving the voice coding sequence H transmitted by the information-sending terminal;
decoding the voice coding sequence H based on the recurrent neural network (RNN) model to obtain the main feature signal Z;
reconstructing the decoded main feature signal Z to obtain the original speech signal.
As an alternative embodiment,
when receiving the voice coding sequence H, composed of voice codes h(i), transmitted by the transmitting end, the processor is configured to perform the following operation:
receiving the voice coding sequence H, which includes multiple voice codes h(i), through the output layer of the recurrent neural network (RNN) model.
When decoding the voice coding sequence H based on the RNN model to obtain the main feature signal Z, the processor is configured to perform the following operation: decoding each voice code h(i) in the output layer based on the third predetermined formula to obtain the corresponding feature signal point z(i), thereby forming the main feature signal Z;
wherein the third predetermined formula is: z(i) = V·h(i) + c;
wherein V is a third linear relationship coefficient of the RNN model, and c is a second linear bias coefficient.
When reconstructing the decoded main feature signal Z to obtain the original speech signal, the processor is configured to perform the following operations:
performing a coefficient transformation on the main feature signal Z using the observation matrix G_{m×n} to obtain an observation result Y;
wherein Y = GP;
solving the compression coefficients P via the optimization problem under the ℓ0 norm: min ||P||_0 s.t. Y = GT^{-1}S;
solving an approximation of the time-series signal S;
wherein S is the time-series signal, T is the orthonormal basis set, and G is the observation matrix;
restoring the original speech signal from the approximation of the time-series signal S.
The above technical solution of the present application combines compressed-sensing technology and recurrent neural network (RNN) technology in the voice transmission process: the transmitting end preprocesses the acquired original speech signal into a time-series signal S, extracts the main feature signal from it by compressed sensing, and then encodes the extracted main feature signal through the RNN to form a voice coding sequence H;
the receiving end receives the voice coding sequence H, decodes it through the RNN to obtain the main feature signal Z,
and then reconstructs the decoded main feature signal Z based on compressed-sensing reconstruction technology to obtain the original speech signal, thereby completing the entire voice transmission process.
Compressed sensing, also known as compressive sampling, makes full use of the sparsity (or compressibility) of a signal: using a sampling matrix incoherent with the transform basis, the high-dimensional transformed signal is projected onto a low-dimensional space, and the original signal is then reconstructed with high probability from these few projections by solving an optimization problem. This breaks the constraint of the traditional Nyquist sampling theorem and realizes compression of an unknown signal during sensing itself; under certain conditions, only a small amount of data needs to be sampled, and a good reconstruction algorithm can accurately recover the original signal, guaranteeing distortion-free reconstruction.
In addition, time-series data is precisely what a recurrent neural network (RNN) is best at processing, and a speech signal likewise varies over time and can be converted into a time-series signal. Using an RNN to encode and decode the transmitted data effectively guarantees the integrity of the voice data, and from input to output, i.e., from the transmitting end to the receiving end, fast voice transmission can be achieved, improving voice transmission efficiency.
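As a sanity check on the encode/decode pair described above, consider the linear special case σ = identity and W = 0: then h(i) = U·z(i) + b, and choosing V = U⁻¹ and c = −U⁻¹b in the decoding formula z(i) = V·h(i) + c inverts the encoding exactly. In the actual scheme V and c would be learned and σ is nonlinear; this is only an illustrative consistency check with made-up values:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4                                   # feature/code dimension (assumed equal)

U = rng.standard_normal((d, d))         # encoder weights (invertible with prob. 1)
b = rng.standard_normal(d)
V = np.linalg.inv(U)                    # decoder chosen to invert the encoder
c = -V @ b

z = rng.standard_normal(d)              # a feature signal point z(i)
h = U @ z + b                           # encode: sigma = identity, W = 0
z_back = V @ h + c                      # decode: z(i) = V h(i) + c
assert np.allclose(z_back, z)           # lossless round trip in this special case
```

This shows why the affine decode layer is a plausible inverse of the encode layer: in the linear regime the two maps compose to the identity, and training nudges the nonlinear pair toward the same round-trip behavior.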
It should be noted that the above embodiments are intended merely as examples; the present application is not limited to such examples and may be varied in many ways.
It should be noted that, in this specification, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Finally, it should be noted that the series of processing described above includes not only processing executed in chronological order in the sequence described here, but also processing executed in parallel or individually rather than in chronological order.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above disclosure is only some preferred embodiments of the present application and cannot be used to limit the scope of the claims of the present application. Those skilled in the art can understand all or part of the processes for implementing the above embodiments; equivalent variations made according to the claims of the present application still fall within the scope covered by the invention.

Claims (10)

1. A voice transmission method, applied to a transmitting end, the method comprising:
obtaining an original speech signal;
preprocessing the original speech signal to form a time-series signal S;
wherein the time-series signal S is a one-dimensional signal varying with time N;
the time-series signal S includes multiple signal points, S = {s(1), s(2), ..., s(n)};
the time N includes multiple time points, N = {1, 2, ..., n};
wherein a time point i in the time N corresponds to a signal point s(i) in the time-series signal S; i ∈ [1, n];
extracting a main feature signal Z from the time-series signal S;
inputting the main feature signal Z into a recurrent neural network (RNN) for encoding to obtain a voice coding sequence H;
transmitting the voice coding sequence H to a receiving end.
2. The method according to claim 1, wherein the method of extracting the main feature signal Z from the time-series signal S comprises:
performing compressed-sensing processing on each signal point s(i) in the time-series signal S based on a first predetermined formula to obtain the main feature signal Z; the main feature signal Z includes multiple feature signal points z(i);
the first predetermined formula being: S_{n×1} = TP, i.e., S = Σ_{i=1}^{n} p(i)·t(i);
wherein the time-series signal S is a column vector S_{n×1} in the space R^n;
P is the set of compression coefficients, P = {p(1), p(2), ..., p(n)};
T is the orthonormal basis set that linearly expresses the column vector S_{n×1}, T = {t(1), t(2), ..., t(n)};
p(i) is the compression coefficient corresponding to signal point s(i);
t(i) is the orthonormal basis corresponding to signal point s(i).
3. The method according to claim 2, wherein the method of inputting the main feature signal Z into the recurrent neural network (RNN) for encoding to obtain the voice coding sequence H comprises:
inputting the obtained main feature signal Z into a pre-built recurrent neural network (RNN) model;
wherein the RNN model includes at least an input layer, a hidden layer, and an output layer;
the multiple feature signal points z(i) in the main feature signal Z are transmitted to the hidden layer through the input layer;
each feature signal point z(i) in the main feature signal Z is encoded by the hidden layer to obtain the voice coding sequence H, which includes multiple voice codes h(i);
the voice code h(i) corresponds to the feature signal point z(i).
4. The method according to claim 3, wherein the method of encoding each feature signal point z(i) in the main feature signal Z by the hidden layer to obtain the voice coding sequence H including multiple voice codes h(i) comprises:
encoding the input feature signal point z(i) in the hidden layer according to a second predetermined formula to calculate the voice code h(i) corresponding to the feature signal point z(i);
forming the voice coding sequence H based on the multiple voice codes h(i);
the second predetermined formula being: h(i) = σ(U·z(i) + W·h(i−1) + b);
wherein σ is the activation function of the RNN model, and b is a first linear bias coefficient;
U is a first linear relationship coefficient of the RNN model, and W is a second linear relationship coefficient of the RNN model.
5. A voice transmission method, applied to a receiving end, the method comprising:
receiving a voice coding sequence H transmitted by a transmitting end;
decoding the voice coding sequence H based on a recurrent neural network (RNN) model to obtain a main feature signal Z;
reconstructing the decoded main feature signal Z to obtain an original speech signal.
6. The method according to claim 5, wherein the method of receiving the voice coding sequence H, composed of voice codes h(i), transmitted by the transmitting end comprises:
receiving the voice coding sequence H, which includes multiple voice codes h(i), through the output layer of the recurrent neural network (RNN) model;
and the method of decoding the voice coding sequence H based on the RNN model to obtain the main feature signal Z comprises:
decoding each voice code h(i) in the output layer based on a third predetermined formula to obtain the corresponding feature signal point z(i), thereby forming the main feature signal Z;
wherein the third predetermined formula is: z(i) = V·h(i) + c;
wherein V is a third linear relationship coefficient of the RNN model, and c is a second linear bias coefficient.
7. The method according to claim 6, wherein the method of reconstructing the decoded main feature signal Z to obtain the original speech signal comprises:
performing a coefficient transformation on the main feature signal Z using the observation matrix G_{m×n} to obtain an observation result Y;
wherein Y = GP;
solving the compression coefficients P via the optimization problem under the ℓ0 norm: min ||P||_0 s.t. Y = GT^{-1}S;
solving an approximation of the time-series signal S;
wherein S is the time-series signal, T is the orthonormal basis set, and G is the observation matrix;
restoring the original speech signal from the approximation of the time-series signal S.
8. A voice transmission system, comprising a transmitting end and a receiving end;
wherein the transmitting end is at least configured to perform the following operations:
obtaining an original speech signal;
preprocessing the original speech signal to form a time-series signal S;
wherein the time-series signal S is a one-dimensional signal varying with time N;
the time-series signal S includes multiple signal points, S = {s(1), s(2), ..., s(n)};
the time N includes multiple time points, N = {1, 2, ..., n};
wherein a time point i in the time N corresponds to a signal point s(i) in the time-series signal S; i ∈ [1, n];
extracting a main feature signal Z from the time-series signal S;
inputting the main feature signal Z into a recurrent neural network (RNN) for encoding to obtain a voice coding sequence H;
transmitting the voice coding sequence H to the receiving end;
and the receiving end is at least configured to perform the following operations:
receiving the voice coding sequence H transmitted by the transmitting end;
decoding the voice coding sequence H based on the recurrent neural network (RNN) model to obtain the main feature signal Z;
reconstructing the decoded main feature signal Z to obtain the original speech signal.
9. A terminal for sending information, configured to transmit voice to an information-receiving terminal, wherein the information-sending terminal comprises:
a memory configured to store data and instructions;
a processor in communication with the memory;
wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
obtaining an original speech signal;
preprocessing the original speech signal to form a time-series signal S;
wherein the time-series signal S is a one-dimensional signal varying with time N;
the time-series signal S includes multiple signal points, S = {s(1), s(2), ..., s(n)};
the time N includes multiple time points, N = {1, 2, ..., n};
wherein a time point i in the time N corresponds to a signal point s(i) in the time-series signal S; i ∈ [1, n];
extracting a main feature signal Z from the time-series signal S;
inputting the main feature signal Z into a recurrent neural network (RNN) for encoding to obtain a voice coding sequence H;
transmitting the voice coding sequence H to a receiving end.
10. A terminal for receiving information, configured to receive voice transmitted by an information-sending terminal, wherein the information-receiving terminal comprises:
a memory configured to store data and instructions;
a processor in communication with the memory;
wherein, when executing the instructions in the memory, the processor is configured to perform the following operations:
receiving the voice coding sequence H transmitted by the information-sending terminal;
decoding the voice coding sequence H based on a recurrent neural network (RNN) model to obtain a main feature signal Z;
reconstructing the decoded main feature signal Z to obtain an original speech signal.
CN201810857446.4A 2018-07-31 2018-07-31 A kind of voice transmission method, voice-transmission system and terminal Pending CN109003614A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810857446.4A CN109003614A (en) 2018-07-31 2018-07-31 A kind of voice transmission method, voice-transmission system and terminal


Publications (1)

Publication Number Publication Date
CN109003614A true CN109003614A (en) 2018-12-14

Family

ID=64598549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810857446.4A Pending CN109003614A (en) 2018-07-31 2018-07-31 A kind of voice transmission method, voice-transmission system and terminal

Country Status (1)

Country Link
CN (1) CN109003614A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469065A (en) * 2015-12-07 2016-04-06 中国科学院自动化研究所 Recurrent neural network-based discrete emotion recognition method
CN105898302A (en) * 2016-04-28 2016-08-24 上海斐讯数据通信技术有限公司 Image transmission method and system based on compressed sensing
CN106782518A (en) * 2016-11-25 2017-05-31 深圳市唯特视科技有限公司 A kind of audio recognition method based on layered circulation neutral net language model
CN106911930A (en) * 2017-03-03 2017-06-30 深圳市唯特视科技有限公司 It is a kind of that the method for perceiving video reconstruction is compressed based on recursive convolution neutral net
CN107317583A (en) * 2017-05-18 2017-11-03 湖北工业大学 Variable step size distributed compression based on Recognition with Recurrent Neural Network perceives method for reconstructing
US20180000385A1 (en) * 2016-06-17 2018-01-04 Blue Willow Systems Inc. Method for detecting and responding to falls by residents within a facility
CN107566978A (en) * 2017-08-22 2018-01-09 上海爱优威软件开发有限公司 A kind of tracking terminal method and system based on intelligent Neural Network


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112951215A (en) * 2021-04-27 2021-06-11 平安科技(深圳)有限公司 Intelligent voice customer service answering method and device and computer equipment
CN112951215B (en) * 2021-04-27 2024-05-07 平安科技(深圳)有限公司 Voice intelligent customer service answering method and device and computer equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181214