CN104751851B - Frame-loss error concealment method and system based on forward-backward joint estimation - Google Patents

Frame-loss error concealment method and system based on forward-backward joint estimation

Info

Publication number
CN104751851B
CN104751851B (application CN201310747005.6A)
Authority
CN
China
Prior art keywords
frame loss
speech data
current signal
vowel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310747005.6A
Other languages
Chinese (zh)
Other versions
CN104751851A (en)
Inventor
许云峰
王彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chenxin Technology Co ltd
Qingdao Weixuan Technology Co ltd
Original Assignee
Leadcore Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leadcore Technology Co Ltd
Priority to CN201310747005.6A
Publication of CN104751851A
Application granted
Publication of CN104751851B
Active legal status
Anticipated expiration


Abstract

The present invention provides a frame-loss error concealment method and system based on forward-backward joint estimation, comprising: buffering the two most recently played frames of speech data and the six frames of speech data about to be played; when frame loss is detected, estimating the lost speech data backward from the speech data before the loss and forward from the speech data after the loss; and cross-fading the backward and forward estimates to generate the reconstructed speech signal for the lost frames. Exploiting the short-term stationarity of speech, the invention jointly estimates the lost frames from the speech information on both sides of the loss, so the lost speech frames can be estimated more accurately and the compensation for the loss is more precise.

Description

Frame-loss error concealment method and system based on forward-backward joint estimation
Technical field
The present invention relates to the field of communication technology, and more particularly to a frame-loss error concealment method and system based on forward-backward joint estimation.
Background technology
Packet Loss Concealment (PLC) algorithms, also called Frame Erasure Concealment (FEC) algorithms, address the loss of speech data caused by network-quality problems in voice communication. When blocks of the voice stream disappear, the speech is perceived as stopping abruptly and becomes discontinuous. If the lost speech data can be estimated and used to compensate for the missing data, the quality loss caused by the dropout can be eliminated. This is the basic function of PLC: estimate the lost speech signal and use it to fill the missing part of the speech stream.
Packet loss concealment is used in systems where audio is encoded and packetized, transmitted over a network, and decoded at the receiver; if packet loss occurs, concealment is performed in this decoding process. Many standardized Code-Excited Linear Prediction (CELP) codecs, such as G.729, AMR, and EFR, have a built-in packet loss concealment function. Some non-CELP codecs, however, such as G.711 and G.722, define no concealment algorithm, so these codecs need an external packet loss concealment algorithm.
Because speech is short-term stationary, the lost speech signal can be estimated from the neighboring signal. In the CELP view, speech has two parts: the vocal tract and the excitation source. The vocal tract is modeled mainly by an all-pole model whose parameters are extracted by linear prediction. The excitation signal has two kinds: a periodic excitation source and random excitation (white noise); together they constitute the source excitation e(n). The excitation signal passes through a synthesis filter to produce the final speech signal. With this basic production model and the short-term stationarity of speech, the lost portion of the speech signal can be predicted.
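This source-filter decomposition can be illustrated with a toy all-pole synthesis in pure Python; the single-pole filter and the 40-sample pitch period below are arbitrary illustrative choices, not values from the patent:

```python
import random

def synthesize(excitation, a):
    """All-pole synthesis, i.e. filtering by 1/A(z):
    y(n) = e(n) - sum_k a[k-1] * y(n-k)."""
    y = []
    for n, e in enumerate(excitation):
        acc = e
        for k, ak in enumerate(a, start=1):
            if n - k >= 0:
                acc -= ak * y[n - k]
        y.append(acc)
    return y

# Periodic excitation (voiced, pitch period 40 samples) vs. white noise (unvoiced).
voiced = [1.0 if n % 40 == 0 else 0.0 for n in range(160)]
unvoiced = [random.uniform(-1, 1) for _ in range(160)]

a = [-0.9]  # A(z) = 1 - 0.9 z^-1: a crude single-pole "vocal tract"
speech_like = synthesize(voiced, a)
noise_like = synthesize(unvoiced, a)
```

Running both excitations through the same filter shows how the vocal-tract model shapes either source into a speech-like envelope.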
In the estimation above, two parts are usually estimated: first, the linear prediction coefficients; second, the excitation signal. Both can be estimated from the neighboring signal.
Most existing PLC algorithms use extrapolation; Fig. 1 shows the flow of a typical extrapolation-based error concealment algorithm. The whole process is divided into two parts: an analysis part and a synthesis part.
The analysis part uses the signal before the lost frames to estimate the pitch period (T0), the linear prediction coefficients (A(z)), the residual signal (e(n)), and the signal class (Class), which the synthesis part then uses to estimate the lost speech, where A(z) = 1 + a1·z^(−1) + a2·z^(−2) + … + a8·z^(−8).
The synthesis part estimates the lost information.

It first determines whether the current signal is a vowel; if so, it applies pitch-period continuation to the residual signal e(n); if not, it modifies e(n) and then performs periodic extension.

The modified residual signal e'(n) is then fed to the synthesis filter 1/A(z) to obtain y(n).

Finally, a fading gain g_mute is computed from the class of the current signal and the length of the loss, and the filter output is attenuated, y'(n) = y(n)·g_mute(n), yielding the final estimated signal y'(n) for the lost frames.
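The extrapolation pipeline just described — extend the residual by pitch-period repetition, synthesize through 1/A(z), then apply the fading gain — might be sketched as follows; the coefficients, the linear fade, and the zero-initialized filter history are illustrative simplifications, not the patent's exact values:

```python
def conceal_by_extrapolation(residual, a, T0, n_lost, g_mute):
    """Extend the residual periodically, synthesize via 1/A(z), apply a fading gain."""
    # Pitch-period continuation of the residual: e(n) = e(n - T0)
    e = list(residual)
    for _ in range(n_lost):
        e.append(e[-T0])
    # Synthesis filter 1/A(z) over the extended part
    # (history simplified to zeros; a real decoder keeps the true filter state)
    y = []
    hist = [0.0] * len(a)                    # past outputs, most recent first
    for x in e[len(residual):]:
        out = x - sum(ak * yk for ak, yk in zip(a, hist))
        hist = [out] + hist[:-1]
        y.append(out)
    # Fading gain: y'(n) = y(n) * g_mute(n)
    return [yi * g_mute(n) for n, yi in enumerate(y)]

# Linear fade to zero over the concealed stretch (illustrative)
fade = lambda n: max(0.0, 1.0 - n / 160.0)
out = conceal_by_extrapolation([0.1] * 80, [-0.5], T0=40, n_lost=160, g_mute=fade)
```

The fade is exactly the safeguard discussed next: it keeps a long, increasingly unreliable extrapolation from being played at full level.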
However, speech is only short-term stationary, typically over 10 ms to 30 ms. The ability of extrapolation to predict lost speech is therefore very limited, and it can hardly capture long-term trends across the lost frames, such as pitch changes or amplitude changes. The number of frames extrapolation can predict is thus very small; once more frames have to be predicted, the estimates become inaccurate and degrade speech quality.
For this reason, most current algorithms apply a fading gain to attenuate the predicted speech, to prevent inaccurate prediction of too many frames from actually worsening speech quality. The capability of existing packet loss concealment is therefore very limited: it can at best estimate 2-3 lost frames (20 ms per frame); beyond that, prediction errors produce a metallic sound. The metallic sound arises when too many frames are lost: in the concealed signal, the same short pitch period of the estimate repeats for too long, which the human ear perceives as a cold metallic tone.
Summary of the invention
The object of the present invention is to provide a frame-loss error concealment method and system based on forward-backward joint estimation, to solve the problem that existing techniques can predict only a very limited amount of lost speech, and that the estimates become inaccurate and degrade speech quality once more frames have to be predicted.

To solve the above technical problem, the present invention provides a frame-loss error concealment method based on forward-backward joint estimation, comprising:

buffering the two frames of speech data that have been played and the six frames of speech data about to be played;

when frame loss is detected, estimating the lost speech data backward from the speech data before the loss, and estimating the lost speech data forward from the speech data after the loss;

cross-fading the backward estimate and the forward estimate to generate the reconstructed speech signal for the lost frames.

Preferably, in the frame-loss error concealment method based on forward-backward joint estimation, the step of estimating the lost speech data backward from the speech data before the loss comprises:

obtaining filter parameters with a linear prediction analysis filter; estimating the pitch period of the current signal; and classifying the current signal, the classes being vowel, unvoiced, and transition;

obtaining the current analysis filter from the filter parameters produced by the linear prediction analysis filter; if the current signal is a vowel, estimating the residual signal of the current signal by pitch-period repetition, and if it is not a vowel, modifying the residual signal of the current signal to obtain the residual; and attenuating with a piecewise-linear fading scheme chosen according to the class of the lost frames;

estimating the lost speech data from the current analysis filter and the residual signal of the current signal; if the speech signal after the loss is not a vowel, applying attenuation, and if it is a vowel, keeping the original value without attenuation; and finally obtaining the estimated signal for the lost frames.

Preferably, in the frame-loss error concealment method based on forward-backward joint estimation, the linear prediction analysis filter is an 8th-order linear prediction analysis filter.

Preferably, in the frame-loss error concealment method based on forward-backward joint estimation, the step of estimating the lost speech data forward from the speech data after the loss comprises:

classifying the lost frames, the classes of the current signal being vowel, unvoiced, and transition; and estimating the pitch period of the current signal;

obtaining a single-period estimated signal, and obtaining the estimated lost-frame information by forward periodic extension;

determining whether the speech data after the loss is a vowel; if so, opening the output control switch, and if not, closing the output control switch.

Preferably, in the frame-loss error concealment method based on forward-backward joint estimation, in the step of obtaining a single-period estimated signal and obtaining the estimated lost-frame information by forward periodic extension, estimation is performed only when the speech data after the loss is a vowel.
Correspondingly, the present invention also provides a frame-loss error concealment system based on forward-backward joint estimation, comprising:

a buffer part for buffering the two frames of speech data that have been played and the six frames of speech data about to be played;

a backward prediction part for estimating the lost speech data backward from the speech data before the loss;

a forward prediction part for estimating the lost speech data forward from the speech data after the loss;

a cross-fading part for cross-fading the outputs of the backward prediction part and the forward prediction part to generate the reconstructed speech signal for the lost frames.

Preferably, in the frame-loss error concealment system based on forward-backward joint estimation, the backward prediction part comprises:

a linear prediction analysis unit for obtaining filter parameters with a linear prediction analysis filter;

a long-term prediction unit for estimating the pitch period of the current signal;

a signal classifier for classifying the current signal, the classes being vowel, unvoiced, and transition;

an analysis filter for obtaining the current analysis filter from the filter parameters produced by the linear prediction analysis unit;

a pitch-period repetition and residual modification unit which, if the current signal is a vowel, estimates the residual signal of the current signal by pitch-period repetition, and if the current signal is not a vowel, modifies the residual signal of the current signal to obtain the residual;

a decay factor computing unit for attenuating with a piecewise-linear fading scheme chosen according to the class of the lost frames;

a linear prediction synthesis unit for estimating the lost speech data from the current analysis filter and the residual signal of the current signal;

a binary decision unit which applies attenuation if the speech signal after the loss is not a vowel, and keeps the original value without attenuation if it is a vowel;

an attenuation unit for generating the estimated signal of the lost frames.

Preferably, in the frame-loss error concealment system based on forward-backward joint estimation, in the linear prediction analysis unit, the linear prediction analysis filter is an 8th-order linear prediction analysis filter.

Preferably, in the frame-loss error concealment system based on forward-backward joint estimation, the forward prediction part comprises:

a signal classifier for classifying the lost frames, the classes of the current signal being vowel, unvoiced, and transition;

a long-term prediction unit for estimating the pitch period of the current signal;

a pitch repetition estimation unit for obtaining a single-period estimated signal and obtaining the estimated lost-frame information by forward periodic extension;

a vowel decision unit for judging whether the speech data after the loss is a vowel;

an output control switch, opened if the speech data after the loss is a vowel and closed if it is not.

Preferably, in the frame-loss error concealment system based on forward-backward joint estimation, in the pitch repetition estimation unit, estimation is performed only when the speech data after the loss is a vowel.

The frame-loss error concealment method and system based on forward-backward joint estimation provided by the present invention have the following beneficial effect: exploiting the short-term stationarity of speech, the invention jointly estimates the lost frames from the speech information on both sides of the loss, so the lost speech frames can be estimated more accurately and the compensation for the loss is more precise.
Brief description of the drawings
Fig. 1 shows the flow of an existing extrapolation-based error concealment algorithm;

Fig. 2 is a schematic diagram of the frame-loss error concealment method and system based on forward-backward joint estimation according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of the attenuation curves of the frame-loss error concealment method and system based on forward-backward joint estimation according to an embodiment of the present invention;

Figs. 4-5 compare the present invention with other embodiments.
Detailed description of the embodiments
The frame-loss error concealment method and system based on forward-backward joint estimation proposed by the present invention are described in further detail below with reference to the drawings and specific embodiments. The advantages and features of the invention will become clearer from the following description and claims. It should be noted that the drawings are in a greatly simplified form and not to precise scale, and serve only to aid in illustrating the embodiments of the invention conveniently and clearly.
Referring to Fig. 2, a schematic diagram of the frame-loss error concealment method and system based on forward-backward joint estimation of an embodiment of the present invention: the system comprises four parts, namely a buffer part, a backward prediction part, a forward prediction part, and a cross-fading part.
The buffer part buffers the 2 frames of speech data that have been played and the 6 frames of speech data about to be played. When the next frame to be played is found to be a bad frame, the error concealment mechanism is started immediately. The mechanism has two parts: the first, the backward prediction part, estimates the lost speech backward, using the speech data before the loss and the short-term stationarity of speech; the second, the forward prediction part, estimates the lost speech forward, using the speech data after the loss ends. Once both reduction estimates are complete, the cross-fading part cross-fades the two estimated speech signals to generate the reconstructed speech for the missing frames.
The frame-loss error concealment method based on forward-backward joint estimation specifically comprises the following steps:

Step 1: buffer the two frames of speech data that have been played and the six frames of speech data about to be played. The buffer is 8 frames long (20 ms per frame): the first two frames are the frames already played, and the last six frames are the speech data about to be played. When the next frame to be played is found to be lost (a bad or null frame), the backward prediction part and the forward prediction part are started immediately.
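The 8-frame buffer arrangement (2 played + 6 pending) and the loss check that triggers concealment might look like the following sketch; the class name, the 16 kHz sample rate, and the zero-fill placeholder for the concealed frame are assumptions for illustration:

```python
from collections import deque

FRAMES_PLAYED = 2   # history kept after playback, for backward prediction
FRAMES_AHEAD = 6    # look-ahead buffered before playback, for forward prediction

class PlcBuffer:
    def __init__(self):
        self.played = deque(maxlen=FRAMES_PLAYED)  # old frames fall off automatically
        self.pending = deque()

    def push(self, frame):
        # frame is None for a bad/null (lost) frame
        self.pending.append(frame)

    def next_frame(self):
        frame = self.pending.popleft()
        if frame is None:
            # frame loss detected: backward + forward estimation would run here
            frame = self.conceal()
        self.played.append(frame)
        return frame

    def conceal(self):
        # placeholder: the real system cross-fades backward/forward estimates
        return [0.0] * 320          # 20 ms at 16 kHz

buf = PlcBuffer()
for f in [[1.0] * 320] * 3 + [None] + [[1.0] * 320] * 4:
    buf.push(f)
first = buf.next_frame()
```

Playing through the buffer, the fourth frame (pushed as `None`) comes back as the concealed placeholder while good frames pass through unchanged.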
Step 2: estimate the lost speech data backward from the speech data before the loss. The backward prediction part comprises a linear prediction analysis unit, a long-term prediction unit, a signal classifier, an analysis filter, a pitch-period repetition and residual modification unit, a decay factor computing unit, a linear prediction synthesis unit, a binary decision unit, and an attenuation unit.
First, the linear prediction analysis filter A(z) obtains the filter parameters, using an 8th-order linear prediction analysis filter defined as A(z) = 1 + a1·z^(−1) + a2·z^(−2) + … + a8·z^(−8).

As in a typical CELP speech coder, the linear prediction analysis has two parts: windowing with autocorrelation computation, and the Levinson-Durbin algorithm. The autocorrelation computation includes a 60 Hz bandwidth expansion and a 40 dB white-noise correction, and the linear prediction analysis window is an asymmetric Hamming window.
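The autocorrelation-plus-Levinson-Durbin computation can be sketched in pure Python as below; the analysis window, bandwidth expansion, and white-noise correction mentioned above are omitted, and the toy input signal is only for illustration:

```python
def autocorr(x, order):
    """Autocorrelation r[0..order] of x."""
    return [sum(x[n] * x[n - k] for n in range(k, len(x)))
            for k in range(order + 1)]

def levinson_durbin(r, order):
    """Solve for the coefficients a[1..order] of A(z) = 1 + a1 z^-1 + ... + a_p z^-p
    from the autocorrelation r; returns (coefficients, prediction error)."""
    a = [0.0] * (order + 1)          # a[0] is the implicit leading 1
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err               # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        a = new_a
        err *= 1 - k * k
    return a[1:], err

x = [0.9 ** n for n in range(64)]    # toy signal, predictable by one tap (pole 0.9)
r = autocorr(x, 8)
coeffs, pred_err = levinson_durbin(r, 8)
```

For this single-pole toy signal the recursion recovers a1 ≈ −0.9 with the remaining coefficients near zero, confirming the 8th-order analysis collapses to the true model order.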
The long-term prediction unit then estimates the pitch period of the current signal.

The buffered head before the loss, x(n), n = −288, …, −1 (288 samples, i.e. twice the maximum pitch period), is passed through a low-pass filter and downsampled by a factor of 4, yielding a downsampled signal t(n), n = −72, …, −1, with a bandwidth of 2 kHz.
A first pitch-period estimate T_d is computed from t(n) by normalized cross-correlation.

Starting from this first estimate, the maximum correlation is searched again near T = 4·T_d at the original sampling rate to obtain a more accurate pitch period T0.
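This two-stage pitch search — a coarse normalized cross-correlation on the 4× downsampled signal, then refinement near 4·T_d at the original rate — might be sketched as follows; the decimation is simplified to block averaging and the lag ranges are illustrative, not the patent's exact values:

```python
import math

def norm_xcorr(x, lag):
    """Normalized cross-correlation of x against itself delayed by `lag` samples."""
    n = len(x) - lag
    num = sum(x[i] * x[i - lag] for i in range(lag, len(x)))
    den = math.sqrt(sum(v * v for v in x[lag:]) * sum(v * v for v in x[:n]))
    return num / den if den else 0.0

def estimate_pitch(x, lo=40, hi=100, decim=4):
    # Stage 1: coarse search on a 4x "downsampled" signal (block averaging
    # stands in for the low-pass filter + decimation described in the text)
    t = [sum(x[i:i + decim]) / decim
         for i in range(0, len(x) - decim + 1, decim)]
    coarse = range(max(2, lo // decim), hi // decim + 1)
    Td = max(coarse, key=lambda lag: norm_xcorr(t, lag))
    # Stage 2: refine around 4*Td at the original sampling rate
    fine = range(max(lo, decim * Td - decim), min(hi, decim * Td + decim) + 1)
    return max(fine, key=lambda lag: norm_xcorr(x, lag))

x = [math.sin(2 * math.pi * n / 57) for n in range(288)]  # true pitch: 57 samples
T0 = estimate_pitch(x)
```

On the 288-sample sinusoid the coarse stage lands near 57/4 and the refinement recovers the exact 57-sample period.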
Next, the signal classifier classifies the current signal, here the buffered head before the loss; the classes are vowel, unvoiced, and transition.
Next, the analysis filter obtains the current analysis filter from the filter parameters produced by the linear prediction analysis. The analysis filter is defined as A(z) = 1 + a1·z^(−1) + a2·z^(−2) + … + a8·z^(−8); filtering the past signal x(n), n = −289, …, −1, with it yields the residual signal e(n), n = −289, …, −1.
Next, the pitch-period repetition and residual modification unit generates the residual signal of the lost speech. If the current signal is a vowel, the residual of the current signal is estimated by pitch-period repetition: e(n) = e(n − T0). If the current signal is not a vowel, its amplitude is first clipped, and the residual is then estimated as e(n) = e(n − T0 + (−1)^n).
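The two residual-extension rules — plain pitch repetition for vowels, and the ±1-sample lag jitter after amplitude clipping for non-vowels — can be sketched as below; the clipping threshold and the toy residual are illustrative assumptions:

```python
def extend_residual(e, T0, n_new, is_vowel, clip=0.5):
    """Extend the residual history e by n_new samples using the class-dependent rule."""
    e = list(e)
    if not is_vowel:
        # non-vowel: limit the amplitude of the residual history first
        e = [max(-clip, min(clip, v)) for v in e]
    for n in range(n_new):
        if is_vowel:
            e.append(e[-T0])                 # e(n) = e(n - T0)
        else:
            e.append(e[-T0 + (-1) ** n])     # lag jittered by +/-1 sample
    return e[-n_new:]

hist = [0.1 * ((n % 10) - 5) for n in range(80)]  # toy residual with period 10
vowel_ext = extend_residual(hist, 10, 20, is_vowel=True)
```

For a vowel, the extension reproduces the residual's period exactly; the jittered rule deliberately breaks that exact periodicity so unvoiced segments do not acquire an artificial pitch.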
Next, the decay factor computing unit attenuates with a piecewise-linear fading scheme chosen according to the class of the lost frames: to keep an over-long estimate from becoming inaccurate, piecewise-linear attenuation is applied, with a different attenuation curve for each speech class. As shown in Fig. 3, the fading-gain curve for the transition state is the dashed line, and the attenuation curve for the other state classes is the solid line.
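A piecewise-linear fading gain with class-dependent breakpoints, in the spirit of Fig. 3, might be sketched as follows; the breakpoint values are invented for illustration, with the transition class fading faster than the others:

```python
def fading_gain(n, cls, frame_len=320):
    """Piecewise-linear gain versus sample index n into the concealed stretch."""
    if cls == "transition":
        knots = [(0, 1.0), (frame_len, 0.4), (2 * frame_len, 0.0)]      # fast fade
    else:
        knots = [(0, 1.0), (2 * frame_len, 0.8), (4 * frame_len, 0.0)]  # slow fade
    for (x0, g0), (x1, g1) in zip(knots, knots[1:]):
        if n <= x1:
            # linear interpolation within the current segment
            return g0 + (g1 - g0) * (n - x0) / (x1 - x0)
    return 0.0   # past the last knot: fully muted

g_vowel = [fading_gain(n, "vowel") for n in range(0, 1281, 160)]
g_trans = [fading_gain(n, "transition") for n in range(0, 1281, 160)]
```

Sampling both curves shows the transition class reaching silence roughly twice as fast, which matches the dashed-versus-solid distinction described for Fig. 3.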
Next, the linear prediction synthesis unit estimates the lost speech data by passing the residual signal through the current synthesis filter 1/A(z), yielding y1(n).

Next, the binary decision unit judges whether the speech signal after the loss is a vowel: if it is a vowel, the gain keeps its original value without attenuation, i.e. g_mute(n) = 1; if it is not a vowel, attenuation is applied.

Finally, the attenuation unit generates the estimated signal of the lost frames: y'1(n) = g_mute(n) × y1(n).
Step 3: estimate the lost speech data forward from the speech data after the loss. The forward prediction part comprises a signal classifier, a long-term prediction unit, a pitch repetition estimation unit, a vowel decision unit, and an output control switch.
First, the signal classifier classifies the current signal, here the buffered tail after the loss; the classes are vowel, unvoiced, and transition.

Next, the long-term prediction unit estimates the pitch period of the current signal, in the same way as in the backward prediction part except for the position used when refining the pitch period.

The buffered tail after the loss, z(n), n = 0, 1, …, 287 (288 samples, i.e. twice the maximum pitch period), is passed through a low-pass filter and downsampled by a factor of 4, yielding a downsampled signal t(n), n = 0, …, 71, with a bandwidth of 2 kHz.
A first pitch-period estimate T_d is computed from t(n) by normalized cross-correlation.

Starting from this first estimate, the maximum correlation is searched again near T = 4·T_d to obtain a more accurate pitch period.
Next, for the best result, the pitch repetition estimation unit performs forward estimation of the lost signal only when the speech after the loss is a vowel. First a single-period estimated signal is obtained: y2(n) = z(T0 − L + n), n = L − T0, …, L − 1, where L is the length of the lost segment; then the estimated lost-frame information is obtained by forward periodic extension toward the start of the gap: y2(n) = y2(n + T0), n = L − T0 − 1, …, 0.
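This forward estimate — seed the tail of the gap with one pitch period taken from the start of the post-loss buffer z(n), then extend it periodically toward the start of the gap — can be sketched as follows, with the indexing reconstructed from the description above:

```python
def forward_estimate(z, T0, L):
    """Fill a lost stretch of length L from the first pitch period after the gap."""
    y2 = [0.0] * L
    # single-period seed: the last T0 samples of the gap copy z[0 .. T0-1],
    # i.e. y2(n) = z(T0 - L + n) for n = L - T0, ..., L - 1
    for n in range(L - T0, L):
        y2[n] = z[T0 - L + n]
    # periodic extension toward the start of the gap: y2(n) = y2(n + T0)
    for n in range(L - T0 - 1, -1, -1):
        y2[n] = y2[n + T0]
    return y2

z = [float(n % 8) for n in range(64)]   # toy post-loss signal with period 8
est = forward_estimate(z, T0=8, L=40)
```

Because the seed is one full pitch period, the backward copy makes the filled gap periodic and phase-aligned with the signal that follows it.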
Next, the vowel decision unit judges whether the speech data after the loss is a vowel.

Finally, the output control switch is opened if it is a vowel and closed if it is not.
Step 4: cross-fade the backward estimate and the forward estimate to generate the reconstructed speech signal for the lost frames. The cross-fade between the two estimates uses a triangular window over the overlap region, n = 0, …, N − 1, where N is the length of the forward-backward overlap, typically 80 samples.
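The final merge by triangular-window cross-fade over an overlap of N samples might be sketched as follows; here the overlap is assumed to sit at the tail of the gap, with the backward estimate dominating early and the forward estimate late:

```python
def cross_fade(backward, forward, N=80):
    """Merge two equal-length gap estimates with a linear (triangular) ramp."""
    assert len(backward) == len(forward)
    L = len(backward)
    out = list(backward)                # before the overlap: backward estimate only
    start = L - N                       # overlap region of length N at the tail
    for n in range(N):
        w = (n + 1) / N                 # ramps from 1/N up to 1
        out[start + n] = (1 - w) * backward[start + n] + w * forward[start + n]
    return out

merged = cross_fade([1.0] * 160, [0.0] * 160, N=80)
```

With constant inputs the merged signal stays at the backward value until the overlap, then ramps linearly to the forward value, which is exactly the triangular-window behavior described above.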
On this basis, the present invention exploits the short-term stationarity of speech and jointly estimates the lost frames from the speech information on both sides of the loss; the lost speech frames can thus be estimated more accurately, and the compensation for the lost frames is more precise.
Figs. 4 and 5 compare the present invention, without introducing linear attenuation, against error concealment algorithms using other methods. With the present invention, the duration of the same repeated sound is shorter, so the metallic sensation perceived by the human ear is noticeably weakened and the performance of the error concealment algorithm is improved.
The foregoing is only a description of preferred embodiments of the present invention and does not limit the scope of the invention in any way. Any changes or modifications made by those of ordinary skill in the art in light of the above disclosure fall within the protection scope of the claims.

Claims (8)

  1. It is 1. a kind of based on the front and rear frame losing error concealment method to Combined estimator, it is characterised in that including:
    Cache the two frame voice data played and the six frame voice data that will be played;
    When detecting generation frame losing, estimated to reduce frame losing voice data backward according to the voice data before frame losing;According to frame losing Voice data afterwards is estimated forward to reduce frame losing voice data;
    According to the voice data of estimation reduction frame losing backward and the voice data of estimation reduction frame losing carries out cross-fading life forward Voice into loss estimates release signal;
    The voice data according to before frame losing is estimated to include the step of reducing frame losing voice data backward:
    Linear prediction analysis filter obtains filter parameter;Estimate the pitch period of current demand signal;Current demand signal is divided Class, the species of current demand signal include vowel state, voiceless sound state and conversion state;
    Current analysis filter is obtained according to the filter parameter that the linear prediction analysis filter obtains;If current demand signal It is vowel state, then repeats to estimate the residual signals of current demand signal by the pitch period, if current demand signal is non-vowel state The residual signals of current demand signal are drawn after modifying to the residual signals of current demand signal;Divided according to different classes of use of frame losing The method of section linear attenuation decays;
    The voice data part of frame losing is estimated according to the residual signals of the current analysis filter and current demand signal;When described Voice after frame losing is not vowel, then decays, and the voice after the frame losing is vowel, then without decay Keep initial value, the final estimation signal for obtaining frame losing.
  2. 2. as claimed in claim 1 based on the front and rear frame losing error concealment method to Combined estimator, it is characterised in that the line Property forecast analysis wave filter is 8 rank linear prediction analysis filters.
  3. 3. as claimed in claim 1 based on the front and rear frame losing error concealment method to Combined estimator, it is characterised in that described Estimate forward to include the step of reducing frame losing voice data according to the voice data after frame losing:
    Classify to frame losing, the species of current demand signal includes vowel state, voiceless sound state and conversion state;Estimate the fundamental tone of current demand signal Cycle;
    A monocyclic estimation signal is obtained, the frame losing information of estimation is obtained by periodic extension forward;
    Judge whether the voice data after frame losing is vowel state, if vowel state, then open output control switch, if not Vowel state, then close output control switch.
  4. 4. as claimed in claim 3 based on the front and rear frame losing error concealment method to Combined estimator, it is characterised in that obtaining In the step of one monocyclic estimation signal, the frame losing information estimated by the acquisition of periodic extension forward, only frame losing is terminated Voice data afterwards is that the voice data of vowel state is estimated.
  5. It is 5. a kind of based on the front and rear frame losing error concealment system to Combined estimator, it is characterised in that including:
    Buffer part, for caching the two frame voice data played and the six frame voice data that will be played;
    Back forecast part, for being estimated to reduce frame losing voice data backward according to the voice data before frame losing;
    Forward prediction part, for being estimated forward to reduce frame losing voice data according to the voice data after frame losing;
    Cross-fading part, loses for carrying out cross-fading generation according to the back forecast part and the forward prediction part The voice estimation release signal of mistake;
    The back forecast part includes:
    Linear prediction analysis unit, filter parameter is obtained for linear prediction analysis filter;
    Long-term prediction unit, for estimating the pitch period of current demand signal;
    Signal classifier, for classifying to current demand signal, the species of current demand signal includes vowel state, voiceless sound state and conversion State;
    Analysis filter, the filter parameter for being obtained according to the linear prediction analysis filter obtain current analysis filter Ripple device;
    Pitch period repeats and residual signals modification unit, if being vowel state for current demand signal, passes through the pitch period The residual signals of estimation current demand signal are repeated, are modified if current demand signal is non-vowel state to the residual signals of current demand signal The residual signals of current demand signal are drawn afterwards;
    Decay factor computing unit, is decayed for the different classes of method to be decayed using piecewise linearity according to frame losing;
    Linear prediction integrates, for estimating the language of frame losing according to the residual signals of the current analysis filter and current demand signal Sound data portion;
    Binary decision unit, is not vowel for the voice after the frame losing, then decays, when the frame losing terminates Voice afterwards is vowel, then keeps initial value without decay;
    Attenuation units, for generating the estimation signal of frame losing.
  6. 6. as claimed in claim 5 based on the front and rear frame losing error concealment system to Combined estimator, it is characterised in that linear In forecast analysis unit, the linear prediction analysis filter is 8 rank linear prediction analysis filters.
  7. The frame-loss error concealment system based on backward-forward joint estimation as claimed in claim 5, characterized in that the forward prediction part includes:
    A signal classifier, for classifying the lost frame; the classes of the current signal include the voiced state, the unvoiced state, and the transition state;
    A long-term prediction unit, for estimating the pitch period of the current signal;
    A pitch repetition estimation unit, for obtaining a single-period estimated signal and estimating the lost-frame information by periodic extension;
    A voiced-state decision unit, for judging whether the voice data after the lost frame is in the voiced state;
    An output control switch, which is turned on if the voiced state is detected and turned off if not.
  8. The frame-loss error concealment system based on backward-forward joint estimation as claimed in claim 7, characterized in that, in the pitch repetition estimation unit, only voice data in the voiced state after the lost frame is used for estimation.
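The pitch repetition of claims 7 and 8 amounts to periodic extension: one pitch period taken from correctly received speech is tiled to cover the missing frame. The sketch below is a hedged illustration under assumed names and sizes, not the patented implementation.

```python
import numpy as np

def pitch_repeat(received, pitch_period, frame_len):
    """Fill a lost frame by repeating the last pitch period of `received`.

    This only makes sense for voiced (quasi-periodic) speech, which is
    why claim 8 restricts the estimation to voiced-state voice data.
    """
    cycle = received[-pitch_period:]      # one pitch period of samples
    reps = -(-frame_len // pitch_period)  # ceiling division: enough copies
    return np.tile(cycle, reps)[:frame_len]

# Toy example: a period-5 signal extended over a 12-sample lost frame.
history = np.tile(np.arange(5), 4)   # 20 samples of periodic "speech"
estimate = pitch_repeat(history, pitch_period=5, frame_len=12)
```

For unvoiced or transition speech the repetition would introduce a false periodicity, which is why the output control switch of claim 7 gates this path on the voiced-state decision.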
CN201310747005.6A 2013-12-30 2013-12-30 A frame-loss error concealment method and system based on backward-forward joint estimation Active CN104751851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310747005.6A CN104751851B (en) 2013-12-30 2013-12-30 A frame-loss error concealment method and system based on backward-forward joint estimation

Publications (2)

Publication Number Publication Date
CN104751851A CN104751851A (en) 2015-07-01
CN104751851B true CN104751851B (en) 2018-04-27

Family

ID=53591410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310747005.6A Active CN104751851B (en) 2013-12-30 2013-12-30 A frame-loss error concealment method and system based on backward-forward joint estimation

Country Status (1)

Country Link
CN (1) CN104751851B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110278436A (en) * 2019-06-28 2019-09-24 瓴盛科技有限公司 Image frame error concealment method and device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105788601B (en) * 2014-12-25 2019-08-30 联芯科技有限公司 Jitter concealment method and device for VoLTE
CN106788876B (en) * 2015-11-19 2020-01-21 电信科学技术研究院 Method and system for compensating voice packet loss
CN110366029B (en) * 2019-07-04 2021-08-24 中国科学院深圳先进技术研究院 Method and system for inserting image frame between videos and electronic equipment
CN111371534B (en) * 2020-06-01 2020-09-18 腾讯科技(深圳)有限公司 Data retransmission method and device, electronic equipment and storage medium
CN113035205B (en) * 2020-12-28 2022-06-07 阿里巴巴(中国)有限公司 Audio packet loss compensation processing method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101120400A (en) * 2005-01-31 2008-02-06 索诺瑞特公司 Method for generating concealment frames in communication system
CN101207468A (en) * 2006-12-19 2008-06-25 华为技术有限公司 Method, system and apparatus for lost frame concealment
CN101325631A (en) * 2007-06-14 2008-12-17 华为技术有限公司 Method and apparatus for implementing packet loss concealment
CN101471073A (en) * 2007-12-27 2009-07-01 华为技术有限公司 Packet loss compensation method, apparatus and system based on frequency domain
CN101833954A (en) * 2007-06-14 2010-09-15 华为终端有限公司 Method and device for realizing packet loss concealment
CN102833037A (en) * 2012-07-18 2012-12-19 华为技术有限公司 Speech data packet loss compensation method and device
CN103065636A (en) * 2011-10-24 2013-04-24 中兴通讯股份有限公司 Audio signal frame loss compensation method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101008508B1 (en) * 2006-08-15 2011-01-17 Broadcom Corporation Re-phasing of decoder states after packet loss

Also Published As

Publication number Publication date
CN104751851A (en) 2015-07-01

Similar Documents

Publication Publication Date Title
CN104751851B (en) A frame-loss error concealment method and system based on backward-forward joint estimation
KR100964402B1 (en) Method and Apparatus for determining encoding mode of audio signal, and method and appartus for encoding/decoding audio signal using it
KR102237718B1 (en) Device and method for reducing quantization noise in a time-domain decoder
JP4658596B2 (en) Method and apparatus for efficient frame loss concealment in speech codec based on linear prediction
US7472059B2 (en) Method and apparatus for robust speech classification
KR100883656B1 (en) Method and apparatus for discriminating audio signal, and method and apparatus for encoding/decoding audio signal using it
KR100367267B1 (en) Multimode speech encoder and decoder
JP6671439B2 (en) Method and apparatus for voice activity detection
EP1747554B1 (en) Audio encoding with different coding frame lengths
EP2535893B1 (en) Device and method for lost frame concealment
ES2380962T3 (en) Procedure and apparatus for coding low transmission rate of high performance deaf speech bits
US10706858B2 (en) Error concealment unit, audio decoder, and related method and computer program fading out a concealed audio frame out according to different damping factors for different frequency bands
KR20080103113A (en) Signal encoding
WO2008067719A1 (en) Sound activity detecting method and sound activity detecting device
KR20020052191A (en) Variable bit-rate celp coding of speech with phonetic classification
US10937432B2 (en) Error concealment unit, audio decoder, and related method and computer program using characteristics of a decoded representation of a properly decoded audio frame
US9293143B2 (en) Bandwidth extension mode selection
KR101794149B1 (en) Noise filling without side information for celp-like coders
CN107818789B (en) Decoding method and decoding device
US6564182B1 (en) Look-ahead pitch determination
EP2608200B1 (en) Estimation of speech energy based on code excited linear prediction (CELP) parameters extracted from a partially-decoded CELP-encoded bit stream
RU2707144C2 (en) Audio encoder and audio signal encoding method
Faúndez-Zanuy Adaptive hybrid speech coding with a MLP/LPC structure
US20220180884A1 (en) Methods and devices for detecting an attack in a sound signal to be coded and for coding the detected attack
Thimmaraja et al. Enhancements in encoded noisy speech data by background noise reduction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20150701

Assignee: Shanghai Li Ke Semiconductor Technology Co.,Ltd.

Assignor: LEADCORE TECHNOLOGY Co.,Ltd.

Contract record no.: 2018990000159

Denomination of invention: Frame-loss error concealment method and system based on backward-forward joint estimation

Granted publication date: 20180427

License type: Common License

Record date: 20180615

TR01 Transfer of patent right

Effective date of registration: 20180824

Address after: Room A406, 4th Floor, Building 3, No. 1258, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai 201206

Patentee after: Chen core technology Co.,Ltd.

Address before: 4th Floor, Building 41, No. 333, Qinjiang Road, Xuhui District, Shanghai 201206

Patentee before: LEADCORE TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20221121

Address after: 266500 No. 687, Binhai street, Huangdao District, Qingdao, Shandong

Patentee after: Chenxin Technology Co.,Ltd.

Patentee after: Qingdao Weixuan Technology Co.,Ltd.

Address before: Room A406, 4th Floor, Building 3, No. 1258, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai 201206

Patentee before: Chen core technology Co.,Ltd.

CP03 Change of name, title or address

Address after: Room 102, Building 16, No. 1699, Zhujiang Road, Huangdao District, Qingdao, Shandong 266499

Patentee after: Chenxin Technology Co.,Ltd.

Patentee after: Qingdao Weixuan Technology Co.,Ltd.

Address before: 266500 No. 687, Binhai street, Huangdao District, Qingdao, Shandong

Patentee before: Chenxin Technology Co.,Ltd.

Patentee before: Qingdao Weixuan Technology Co.,Ltd.