WO2021181746A1 - Sound signal downmixing method, sound signal coding method, sound signal downmixing device, sound signal coding device, program, and recording medium - Google Patents

Sound signal downmixing method, sound signal coding method, sound signal downmixing device, sound signal coding device, program, and recording medium

Info

Publication number
WO2021181746A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound signal
signal
channel
downmix
input sound
Prior art date
Application number
PCT/JP2020/041216
Other languages
French (fr)
Japanese (ja)
Inventor
亮介 杉浦
守谷 健弘
優 鎌本
Original Assignee
日本電信電話株式会社
Priority date
Filing date
Publication date
Priority claimed from PCT/JP2020/010081 external-priority patent/WO2021181473A1/en
Priority claimed from PCT/JP2020/010080 external-priority patent/WO2021181472A1/en
Application filed by 日本電信電話株式会社
Priority to JP2022505754A priority Critical patent/JP7396459B2/en
Priority to EP20924291.6A priority patent/EP4120250A4/en
Priority to US17/909,666 priority patent/US20230319498A1/en
Priority to CN202080098232.9A priority patent/CN115280411A/en
Priority to JP2022505842A priority patent/JP7380833B2/en
Priority to JP2022505843A priority patent/JP7380834B2/en
Priority to PCT/JP2021/004642 priority patent/WO2021181977A1/en
Priority to JP2022505844A priority patent/JP7380835B2/en
Priority to PCT/JP2021/004639 priority patent/WO2021181974A1/en
Priority to PCT/JP2021/004641 priority patent/WO2021181976A1/en
Priority to US17/909,677 priority patent/US20230106832A1/en
Priority to US17/908,965 priority patent/US20230106764A1/en
Priority to US17/909,698 priority patent/US20230107976A1/en
Priority to JP2022505845A priority patent/JP7380836B2/en
Priority to PCT/JP2021/004640 priority patent/WO2021181975A1/en
Priority to US17/909,690 priority patent/US20230108927A1/en
Publication of WO2021181746A1 publication Critical patent/WO2021181746A1/en
Priority to JP2023203361A priority patent/JP2024023484A/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Definitions

  • The present invention relates to a technique for obtaining a monaural sound signal from a two-channel sound signal in order to encode a sound signal in monaural, to encode a sound signal using monaural coding and stereo coding in combination, to process a sound signal in monaural, or to perform other signal processing that uses a monaural sound signal obtained from a stereo sound signal.
  • Patent Document 1 discloses a technique for obtaining a monaural sound signal from a two-channel sound signal and for embedded coding/decoding of the two-channel sound signal and the monaural sound signal.
  • In Patent Document 1, a monaural signal is obtained by averaging the input left-channel sound signal and the input right-channel sound signal for each corresponding sample, and the monaural signal is encoded (monaural coding) to obtain a monaural code. The monaural code is decoded (monaural decoding) to obtain a monaural locally decoded signal, and for each of the left channel and the right channel, the difference between the input sound signal and a prediction signal obtained from the monaural locally decoded signal (the prediction residual signal) is encoded.
  • As the prediction signal, a signal obtained by giving a delay and an amplitude ratio to the monaural locally decoded signal is used; either the delay and amplitude ratio that minimize the error between the input sound signal and the prediction signal are selected from the input sound signal, or a delay difference and amplitude ratio that maximize the cross-correlation between the input sound signal and the monaural locally decoded signal are used.
  • In Patent Document 1, the coding efficiency of each channel can thus be improved by optimizing the delay and the amplitude ratio given to the monaural locally decoded signal when the prediction signal is obtained.
  • However, in Patent Document 1, the monaural locally decoded signal is obtained simply by encoding and decoding the monaural signal obtained by averaging the left-channel sound signal and the right-channel sound signal. That is, the technique of Patent Document 1 has the problem that it is not devised to obtain a monaural signal that is useful for signal processing such as coding from a two-channel sound signal.
  • An object of the present invention is to provide a technique for obtaining a monaural signal useful for signal processing such as coding processing from a two-channel sound signal.
  • One aspect of the present invention is a sound signal downmix method for obtaining a downmix signal, which is a signal obtained by mixing a left-channel input sound signal and a right-channel input sound signal. The method includes a left-right relationship information acquisition step for obtaining a left-right correlation coefficient, which is the correlation coefficient between the left-channel input sound signal and the right-channel input sound signal, and preceding channel information, which is information indicating which of the left channel and the right channel is leading; and a downmix step for obtaining the downmix signal by weighted averaging of the left-channel input sound signal and the right-channel input sound signal such that the larger the left-right correlation coefficient is, the more the input sound signal of the preceding channel is included in the downmix signal.
  • One aspect of the present invention is the above-mentioned sound signal downmix method in which, with the sample number denoted t, the left-channel input sound signal x_L(t), the right-channel input sound signal x_R(t), and the downmix signal x_M(t), the downmix step obtains x_M(t) for each sample number t as a weighted average whose weights are determined by the left-right correlation coefficient, giving the larger weight to the left-channel input sound signal when the preceding channel information indicates that the left channel is leading. A concrete sketch is given below.
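As a concrete illustration of the downmix step, the following is a minimal Python sketch. The (1 ± corr)/2 weighting is an assumption chosen only to satisfy the stated property that the preceding channel is weighted more heavily as the left-right correlation coefficient grows; it is not necessarily the patent's exact formula, and the function name is illustrative.

```python
import numpy as np

def downmix(x_L: np.ndarray, x_R: np.ndarray, corr: float, left_is_leading: bool) -> np.ndarray:
    """Weighted-average downmix: the preceding channel is weighted more heavily
    as the left-right correlation coefficient corr (assumed to lie in [0, 1]) grows.
    The (1 +/- corr)/2 weighting is an illustrative assumption, not the patent's formula."""
    w_lead = (1.0 + corr) / 2.0    # weight given to the preceding channel
    w_other = (1.0 - corr) / 2.0   # weight given to the other channel
    if left_is_leading:
        return w_lead * x_L + w_other * x_R
    return w_other * x_L + w_lead * x_R
```

With corr = 0 this reduces to the plain average of the two channels, and with corr = 1 the downmix equals the preceding channel, which matches the behavior described above.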
  • One aspect of the present invention is a sound signal coding method that includes the above-mentioned sound signal downmix method as a sound signal downmix step, a monaural coding step for encoding the downmix signal obtained by the downmix step to obtain a monaural code, and a stereo coding step for encoding the left-channel input sound signal and the right-channel input sound signal to obtain a stereo code.
  • According to the present invention, a monaural signal useful for signal processing such as coding can be obtained from a two-channel sound signal.
  • In the following, the coding device is also called a sound signal coding device, the coding method a sound signal coding method, the decoding device a sound signal decoding device, and the decoding method a sound signal decoding method.
  • The coding device 100 of the first reference embodiment includes a downmix unit 110, a left channel subtraction gain estimation unit 120, a left channel signal subtraction unit 130, a right channel subtraction gain estimation unit 140, a right channel signal subtraction unit 150, a monaural coding unit 160, and a stereo coding unit 170.
  • The coding device 100 encodes the input two-channel stereo time-domain sound signal in units of frames having a predetermined time length of, for example, 20 ms, and obtains and outputs a monaural code CM, a left channel subtraction gain code Cα, a right channel subtraction gain code Cβ, and a stereo code CS, which will be described later.
  • The two-channel stereo time-domain sound signal input to the coding device is, for example, a digital sound signal obtained by picking up sound such as voice or music with each of two microphones and performing AD conversion, and consists of a left-channel input sound signal and a right-channel input sound signal.
  • The codes output by the coding device, that is, the monaural code CM, the left channel subtraction gain code Cα, the right channel subtraction gain code Cβ, and the stereo code CS, are input to the decoding device.
  • the coding device 100 performs the processes of steps S110 to S170 illustrated in FIG. 2 for each frame.
  • the input sound signal of the left channel input to the coding device 100 and the input sound signal of the right channel input to the coding device 100 are input to the downmix unit 110.
  • From the left-channel input sound signal and the right-channel input sound signal, the downmix unit 110 obtains and outputs a downmix signal, which is a signal in which the left-channel input sound signal and the right-channel input sound signal are mixed (step S110).
  • To the coding device 100, the left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) and the right-channel input sound signals x_R(1), x_R(2), ..., x_R(T) are input in frames of T samples.
  • T is a positive integer, for example, if the frame length is 20 ms and the sampling frequency is 32 kHz, T is 640.
  • To the left channel subtraction gain estimation unit 120, the left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) input to the coding device 100 and the downmix signals x_M(1), x_M(2), ..., x_M(T) output by the downmix unit 110 are input.
  • From the input left-channel input sound signal and the downmix signal, the left channel subtraction gain estimation unit 120 obtains and outputs the left channel subtraction gain α and the left channel subtraction gain code Cα, which is a code representing the left channel subtraction gain α (step S120).
  • The left channel subtraction gain estimation unit 120 obtains the left channel subtraction gain α and the left channel subtraction gain code Cα by a well-known method, such as the method of obtaining the amplitude ratio g and the method of encoding the amplitude ratio g exemplified in Patent Document 1, or by a newly proposed method based on the principle of minimizing the quantization error. The principle of minimizing the quantization error and the method based on this principle will be described later.
  • To the left channel signal subtraction unit 130, the left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) input to the coding device 100, the downmix signals x_M(1), x_M(2), ..., x_M(T) output by the downmix unit 110, and the left channel subtraction gain α output by the left channel subtraction gain estimation unit 120 are input.
  • For each corresponding sample t, the left channel signal subtraction unit 130 subtracts the value α·x_M(t), obtained by multiplying the sample value x_M(t) of the downmix signal by the left channel subtraction gain α, from the sample value x_L(t) of the left-channel input sound signal, thereby obtaining and outputting the left channel difference signals y_L(1), y_L(2), ..., y_L(T), that is, y_L(t) = x_L(t) - α·x_M(t) (step S130).
  • Note that in the left channel signal subtraction unit 130 it is preferable to use the unquantized downmix signal x_M(t) obtained by the downmix unit 110 rather than the quantized downmix signal, which is the monaural-coding locally decoded signal.
  • However, when the left channel subtraction gain estimation unit 120 obtains the left channel subtraction gain α by a well-known method exemplified in Patent Document 1 rather than by a method based on the principle of minimizing the quantization error, a means for obtaining a locally decoded signal corresponding to the monaural code CM may be provided after the monaural coding unit 160 of the coding device 100 or within the monaural coding unit 160, and the left channel signal subtraction unit 130 may obtain the left channel difference signal using the quantized downmix signals ^x_M(1), ^x_M(2), ..., ^x_M(T), which are that locally decoded signal, instead of the downmix signals x_M(1), x_M(2), ..., x_M(T).
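For steps S130 and S150, the per-sample subtraction can be sketched as follows; the function name and the use of NumPy arrays are illustrative only, and the sketch uses the unquantized downmix signal as the text recommends.

```python
import numpy as np

def channel_difference_signals(x_L, x_R, x_M, alpha, beta):
    """Steps S130 and S150: subtract the gain-scaled (unquantized) downmix signal
    from each channel, sample by sample, over one frame of T samples."""
    y_L = np.asarray(x_L) - alpha * np.asarray(x_M)  # y_L(t) = x_L(t) - alpha * x_M(t)
    y_R = np.asarray(x_R) - beta * np.asarray(x_M)   # y_R(t) = x_R(t) - beta * x_M(t)
    return y_L, y_R
```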
  • To the right channel subtraction gain estimation unit 140, the right-channel input sound signals x_R(1), x_R(2), ..., x_R(T) input to the coding device 100 and the downmix signals x_M(1), x_M(2), ..., x_M(T) output by the downmix unit 110 are input.
  • From the input right-channel input sound signal and the downmix signal, the right channel subtraction gain estimation unit 140 obtains and outputs the right channel subtraction gain β and the right channel subtraction gain code Cβ, which is a code representing the right channel subtraction gain β (step S140).
  • The right channel subtraction gain estimation unit 140 obtains the right channel subtraction gain β and the right channel subtraction gain code Cβ by a well-known method, such as the method of obtaining the amplitude ratio g and the method of encoding the amplitude ratio g exemplified in Patent Document 1, or by a newly proposed method based on the principle of minimizing the quantization error. The principle of minimizing the quantization error and the method based on this principle will be described later.
  • To the right channel signal subtraction unit 150, the right-channel input sound signals x_R(1), x_R(2), ..., x_R(T) input to the coding device 100, the downmix signals x_M(1), x_M(2), ..., x_M(T) output by the downmix unit 110, and the right channel subtraction gain β output by the right channel subtraction gain estimation unit 140 are input.
  • For each corresponding sample t, the right channel signal subtraction unit 150 subtracts the value β·x_M(t), obtained by multiplying the sample value x_M(t) of the downmix signal by the right channel subtraction gain β, from the sample value x_R(t) of the right-channel input sound signal, thereby obtaining and outputting the right channel difference signals y_R(1), y_R(2), ..., y_R(T), that is, y_R(t) = x_R(t) - β·x_M(t) (step S150). As with the left channel, it is preferable to use the unquantized downmix signal x_M(t) obtained by the downmix unit 110 rather than the quantized downmix signal, which is the locally decoded signal.
  • However, when the right channel subtraction gain estimation unit 140 obtains the right channel subtraction gain β by a well-known method exemplified in Patent Document 1 rather than by a method based on the principle of minimizing the quantization error, a means for obtaining a locally decoded signal corresponding to the monaural code CM may be provided after the monaural coding unit 160 of the coding device 100 or within the monaural coding unit 160, and the right channel signal subtraction unit 150 may obtain the right channel difference signal using the quantized downmix signals ^x_M(1), ^x_M(2), ..., ^x_M(T), which are that locally decoded signal.
  • The downmix signals x_M(1), x_M(2), ..., x_M(T) output by the downmix unit 110 are input to the monaural coding unit 160.
  • The monaural coding unit 160 encodes the input downmix signal with b_M bits by a predetermined coding method to obtain and output a monaural code CM (step S160). That is, the b_M-bit monaural code CM is obtained from the input T-sample downmix signals x_M(1), x_M(2), ..., x_M(T) and output.
  • Any coding method may be used, for example, a coding method such as the 3GPP EVS standard may be used.
  • To the stereo coding unit 170, the left channel difference signals y_L(1), y_L(2), ..., y_L(T) output by the left channel signal subtraction unit 130 and the right channel difference signals y_R(1), y_R(2), ..., y_R(T) output by the right channel signal subtraction unit 150 are input.
  • The stereo coding unit 170 encodes the input left channel difference signal and right channel difference signal with b_S bits in total by a predetermined coding method to obtain and output a stereo code CS (step S170).
  • Any coding method may be used; for example, a stereo coding method corresponding to the stereo decoding method of the MPEG-4 AAC standard may be used, or a coding method that encodes the input left channel difference signal and the input right channel difference signal independently may be used, in which case the stereo code CS is obtained by combining all the codes obtained by the coding.
  • When the left channel difference signal and the right channel difference signal are encoded independently, the stereo coding unit 170 encodes the left channel difference signal with b_L bits and the right channel difference signal with b_R bits. That is, the stereo coding unit 170 obtains a b_L-bit left channel difference code CL from the input T-sample left channel difference signals y_L(1), y_L(2), ..., y_L(T), obtains a b_R-bit right channel difference code CR from the input T-sample right channel difference signals y_R(1), y_R(2), ..., y_R(T), and outputs the combination of the left channel difference code CL and the right channel difference code CR as the stereo code CS. The sum of b_L bits and b_R bits is b_S bits.
  • When the input left channel difference signal and right channel difference signal are encoded together by a single coding method, the stereo coding unit 170 encodes the left channel difference signal and the right channel difference signal with b_S bits in total. That is, the stereo coding unit 170 obtains the b_S-bit stereo code CS from the input T-sample left channel difference signals y_L(1), y_L(2), ..., y_L(T) and the input T-sample right channel difference signals y_R(1), y_R(2), ..., y_R(T) and outputs it.
  • The decoding device 200 of the first reference embodiment includes a monaural decoding unit 210, a stereo decoding unit 220, a left channel subtraction gain decoding unit 230, a left channel signal addition unit 240, a right channel subtraction gain decoding unit 250, and a right channel signal addition unit 260.
  • The decoding device 200 decodes the input monaural code CM, left channel subtraction gain code Cα, right channel subtraction gain code Cβ, and stereo code CS in units of frames having the same time length as in the corresponding coding device 100, and obtains and outputs the frame-by-frame two-channel stereo time-domain decoded sound signals (the left channel decoded sound signal and the right channel decoded sound signal, which will be described later).
  • the decoding device 200 may also output a decoded sound signal (monaural decoded sound signal described later) in the monaural time domain.
  • the decoded sound signal output by the decoding device 200 is, for example, DA-converted and reproduced by a speaker so that it can be heard.
  • the decoding device 200 performs the processes of steps S210 to S260 illustrated in FIG. 4 for each frame.
  • the monaural code CM input to the decoding device 200 is input to the monaural decoding unit 210.
  • The monaural decoding unit 210 decodes the input monaural code CM by a predetermined decoding method to obtain and output the monaural decoded sound signals ^x_M(1), ^x_M(2), ..., ^x_M(T) (step S210).
  • As the predetermined decoding method, a decoding method corresponding to the coding method used in the monaural coding unit 160 of the corresponding coding device 100 is used.
  • The number of bits of the monaural code CM is b_M.
  • the stereo code CS input to the decoding device 200 is input to the stereo decoding unit 220.
  • The stereo decoding unit 220 decodes the input stereo code CS by a predetermined decoding method to obtain and output the left channel decoded difference signals ^y_L(1), ^y_L(2), ..., ^y_L(T) and the right channel decoded difference signals ^y_R(1), ^y_R(2), ..., ^y_R(T) (step S220).
  • As the predetermined decoding method, a decoding method corresponding to the coding method used in the stereo coding unit 170 of the corresponding coding device 100 is used.
  • The total number of bits of the stereo code CS is b_S.
  • the left channel subtraction gain code C ⁇ input to the decoding device 200 is input to the left channel subtraction gain decoding unit 230.
  • the left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code C ⁇ to obtain the left channel subtraction gain ⁇ and outputs it (step S230).
  • The left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code Cα by a decoding method corresponding to the method used in the left channel subtraction gain estimation unit 120 of the corresponding coding device 100 to obtain the left channel subtraction gain α. The specific method by which the left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code Cα to obtain the left channel subtraction gain α will be described later.
  • To the left channel signal addition unit 240, the monaural decoded sound signals ^x_M(1), ^x_M(2), ..., ^x_M(T) output by the monaural decoding unit 210, the left channel decoded difference signals ^y_L(1), ^y_L(2), ..., ^y_L(T) output by the stereo decoding unit 220, and the left channel subtraction gain α output by the left channel subtraction gain decoding unit 230 are input.
  • For each corresponding sample t, the left channel signal addition unit 240 adds the value α·^x_M(t), obtained by multiplying the sample value ^x_M(t) of the monaural decoded sound signal by the left channel subtraction gain α, to the sample value ^y_L(t) of the left channel decoded difference signal, thereby obtaining and outputting the left channel decoded sound signal (step S240).
  • the right channel subtraction gain code C ⁇ input to the decoding device 200 is input to the right channel subtraction gain decoding unit 250.
  • the right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code C ⁇ to obtain the right channel subtraction gain ⁇ and outputs it (step S250).
  • The right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code Cβ by a decoding method corresponding to the method used in the right channel subtraction gain estimation unit 140 of the corresponding coding device 100 to obtain the right channel subtraction gain β. The specific method by which the right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code Cβ to obtain the right channel subtraction gain β will be described later.
  • To the right channel signal addition unit 260, the monaural decoded sound signals ^x_M(1), ^x_M(2), ..., ^x_M(T) output by the monaural decoding unit 210, the right channel decoded difference signals ^y_R(1), ^y_R(2), ..., ^y_R(T) output by the stereo decoding unit 220, and the right channel subtraction gain β output by the right channel subtraction gain decoding unit 250 are input.
  • For each corresponding sample t, the right channel signal addition unit 260 adds the value β·^x_M(t), obtained by multiplying the sample value ^x_M(t) of the monaural decoded sound signal by the right channel subtraction gain β, to the sample value ^y_R(t) of the right channel decoded difference signal, thereby obtaining and outputting the right channel decoded sound signal (step S260). A sketch of this reconstruction is given below.
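The decoder-side reconstruction of steps S240 and S260 can be sketched as follows; the function name and array types are illustrative only.

```python
import numpy as np

def decode_channel_signals(y_L_hat, y_R_hat, x_M_hat, alpha, beta):
    """Steps S240 and S260: add the gain-scaled monaural decoded sound signal
    back to the decoded difference signals to reconstruct each channel."""
    z_L = np.asarray(y_L_hat) + alpha * np.asarray(x_M_hat)  # left channel decoded sound signal
    z_R = np.asarray(y_R_hat) + beta * np.asarray(x_M_hat)   # right channel decoded sound signal
    return z_L, z_R
```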
  • Note that the number of bits b_L used for coding the left channel difference signal and the number of bits b_R used for coding the right channel difference signal may not be explicitly determined; in the following, however, the description assumes that the number of bits used for coding the left channel difference signal is b_L and the number of bits used for coding the right channel difference signal is b_R. Further, although the left channel is mainly described below, the same applies to the right channel.
  • The coding device 100 described above encodes with b_L bits the left channel difference signals y_L(1), y_L(2), ..., y_L(T), which consist of the values obtained by subtracting, from each sample value of the left-channel input sound signals x_L(1), x_L(2), ..., x_L(T), the value obtained by multiplying the corresponding sample value of the downmix signals x_M(1), x_M(2), ..., x_M(T) by the left channel subtraction gain α, and encodes the downmix signals x_M(1), x_M(2), ..., x_M(T) with b_M bits.
  • The decoding device 200 described above decodes the left channel decoded difference signals ^y_L(1), ^y_L(2), ..., ^y_L(T) (hereinafter also referred to as the "quantized left channel difference signal") from the b_L-bit code, decodes the monaural decoded sound signals ^x_M(1), ^x_M(2), ..., ^x_M(T) (hereinafter also referred to as the "quantized downmix signal") from the b_M-bit code, and then obtains the left channel decoded sound signal, which is the decoded sound signal of the left channel, by adding the value obtained by multiplying each sample value of the quantized downmix signal ^x_M(1), ^x_M(2), ..., ^x_M(T) by the left channel subtraction gain α to the corresponding sample value of the quantized left channel difference signal ^y_L(1), ^y_L(2), ..., ^y_L(T).
  • In general, the energy of the quantization error of a decoded signal obtained by encoding and decoding an input signal (hereinafter, for convenience, the "quantization error caused by coding") is approximately proportional to the energy of the input signal.
  • Therefore, the average energy per sample of the quantization error caused by coding the left channel difference signal can be estimated, using a positive number σ_L², as in the following equation (1-0-1), and the average energy per sample of the quantization error caused by coding the downmix signal can be estimated, using a positive number σ_M², as in the following equation (1-0-2).
  • First, consider the case in which each sample value of the left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) and of the downmix signals x_M(1), x_M(2), ..., x_M(T) is so close that the two can be regarded as the same series. This corresponds to the case in which the left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) and the right-channel input sound signals x_R(1), x_R(2), ..., x_R(T) are obtained by picking up, with two microphones, sound emitted by a sound source located at the same distance from the two microphones in an environment with little background noise and reverberation.
  • In this case, each sample value of the left channel difference signals y_L(1), y_L(2), ..., y_L(T) is equivalent to (1-α) times the corresponding sample value of the downmix signals x_M(1), x_M(2), ..., x_M(T). The energy of the left channel difference signal is therefore (1-α)² times the energy of the downmix signal, so the above σ_L² can be replaced with (1-α)²·σ_M², and the average energy per sample of the quantization error caused by coding the left channel difference signal can be estimated by the following equation (1-1).
  • On the other hand, the average energy per sample of the quantization error of the signal added to the quantized left channel difference signal in the decoding device, that is, of the series of values obtained by multiplying each sample value of the decoded quantized downmix signal by the left channel subtraction gain α, can be estimated by the following equation (1-2).
  • Therefore, the left channel subtraction gain α that minimizes the energy of the quantization error of the left channel decoded sound signal is obtained by the following equation (1-3).
  • That is, the left channel subtraction gain estimation unit 120 may obtain the left channel subtraction gain α by equation (1-3).
  • The left channel subtraction gain α obtained by equation (1-3) is a value greater than 0 and less than 1; it is 0.5 when b_L and b_M, the numbers of bits used for the two encodings, are equal, it is closer to 0 than 0.5 as the number of bits b_L for coding the left channel difference signal becomes larger than the number of bits b_M for coding the downmix signal, and it is closer to 1 than 0.5 as the number of bits b_M for coding the downmix signal becomes larger than the number of bits b_L for coding the left channel difference signal.
  • Similarly, the right channel subtraction gain estimation unit 140 may obtain the right channel subtraction gain β by the following equation (1-3-2).
  • The right channel subtraction gain β obtained by equation (1-3-2) is a value greater than 0 and less than 1; it is 0.5 when b_R and b_M, the numbers of bits used for the two encodings, are equal, it is closer to 0 than 0.5 as the number of bits b_R for coding the right channel difference signal becomes larger than the number of bits b_M for coding the downmix signal, and it is closer to 1 than 0.5 as the number of bits b_M for coding the downmix signal becomes larger than the number of bits b_R for coding the right channel difference signal.
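Since equations (1-0-1) through (1-3-2) are not reproduced in this text, the following is a hedged LaTeX reconstruction consistent with the stated properties (positive constants σ_L² and σ_M², and a gain strictly between 0 and 1 that equals 0.5 when the two bit counts are equal), under the assumption that coding T samples with b bits leaves a per-sample error energy proportional to 2^(-2b/T). It is a plausible sketch, not a verbatim copy of the patent's equations.

```latex
% Hedged reconstruction of (1-0-1)--(1-3); symbols follow the surrounding text.
% (1-0-1), (1-0-2): per-sample quantization error energies of the two coders
\sigma_L^2 \, 2^{-2 b_L / T}, \qquad \sigma_M^2 \, 2^{-2 b_M / T}
% (1-1), (1-2): same-series case, where \sigma_L^2 = (1-\alpha)^2 \sigma_M^2
(1-\alpha)^2 \sigma_M^2 \, 2^{-2 b_L / T}, \qquad \alpha^2 \sigma_M^2 \, 2^{-2 b_M / T}
% (1-3): the \alpha minimizing the sum of (1-1) and (1-2); (1-3-2) replaces b_L with b_R
\alpha = \frac{2^{-2 b_L / T}}{2^{-2 b_L / T} + 2^{-2 b_M / T}}
```

Setting the derivative of the sum of (1-1) and (1-2) with respect to α to zero gives the quotient form above, which indeed equals 0.5 when b_L = b_M and moves toward 0 as b_L grows relative to b_M.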
  • Next, consider the general case. The normalized inner product value r_L of the downmix signals x_M(1), x_M(2), ..., x_M(T) with respect to the left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) is expressed by the following equation (1-4).
  • Using the normalized inner product value r_L obtained by equation (1-4), each sample value of the left channel difference signal is equivalent to the sum (r_L-α)·x_M(t) + x_L'(t) of the value (r_L-α)·x_M(t), obtained by multiplying each sample value of the downmix signals x_M(1), x_M(2), ..., x_M(T) by (r_L-α), and the corresponding sample value x_L'(t) of an orthogonal signal. Since the orthogonal signals x_L'(1), x_L'(2), ..., x_L'(T) are orthogonal to the downmix signals x_M(1), x_M(2), ..., x_M(T), that is, their inner product is 0, the energy of the left channel difference signal is expressed as (r_L-α)² times the energy of the downmix signal plus the energy of the orthogonal signal. Therefore, the average energy per sample of the quantization error generated by coding the left channel difference signal with b_L bits can be estimated by the following equation (1-5) using a positive number σ².
  • In this case, the left channel subtraction gain α that minimizes the energy of the quantization error of the left channel decoded sound signal is obtained by the following equation (1-6).
  • That is, the left channel subtraction gain estimation unit 120 may obtain the left channel subtraction gain α by equation (1-6). In other words, considering the principle of minimizing the energy of this quantization error, the left channel subtraction gain α should be the value obtained by multiplying the normalized inner product value r_L by a correction coefficient determined by the numbers of bits b_L and b_M used for coding.
  • This correction coefficient is a value greater than 0 and less than 1; it is 0.5 when the number of bits b_L for coding the left channel difference signal and the number of bits b_M for coding the downmix signal are the same, it is closer to 0 than 0.5 as the number of bits b_L for coding the left channel difference signal becomes larger than the number of bits b_M for coding the downmix signal, and it is closer to 1 than 0.5 as the number of bits b_M for coding the downmix signal becomes larger than the number of bits b_L for coding the left channel difference signal.
  • Similarly, the right channel subtraction gain estimation unit 140 may calculate the right channel subtraction gain β by the following equation (1-6-2), where r_R is the normalized inner product value of the downmix signals x_M(1), x_M(2), ..., x_M(T) with respect to the right-channel input sound signals x_R(1), x_R(2), ..., x_R(T), expressed by the following equation (1-4-2). That is, considering the principle of minimizing the energy of this quantization error, the right channel subtraction gain β should be the value obtained by multiplying the normalized inner product value r_R by a correction coefficient determined by the numbers of bits b_R and b_M used for coding. This correction coefficient is a value greater than 0 and less than 1; it is closer to 0 than 0.5 as the number of bits b_R for coding the right channel difference signal becomes larger than the number of bits b_M for coding the downmix signal, and it is closer to 1 than 0.5 as the number of bits b_R for coding the right channel difference signal becomes smaller than the number of bits b_M for coding the downmix signal.
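To make the general case concrete, the sketch below computes the normalized inner product and scales it by a correction coefficient. The closed form used for the correction coefficient mirrors the hedged reconstruction above and is an assumption consistent with the stated properties of equations (1-6)/(1-7), not necessarily the patent's exact expression.

```python
import numpy as np

def correction_coefficient(b_sig: int, b_M: int, T: int) -> float:
    """Assumed form: a value in (0, 1) that equals 0.5 when b_sig == b_M and
    decreases as b_sig grows relative to b_M (the properties stated in the text)."""
    a = 2.0 ** (-2.0 * b_sig / T)
    b = 2.0 ** (-2.0 * b_M / T)
    return a / (a + b)

def subtraction_gain(x_ch: np.ndarray, x_M: np.ndarray, b_ch: int, b_M: int) -> float:
    """Normalized inner product r = <x_ch, x_M> / <x_M, x_M> (playing the role of (1-4)),
    scaled by the correction coefficient (playing the role of (1-6))."""
    r = float(np.dot(x_ch, x_M) / np.dot(x_M, x_M))
    return correction_coefficient(b_ch, b_M, len(x_M)) * r
```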
  • Example 1 is based on the principle of minimizing the quantization error energy of the left channel decoded sound signal, including the case in which the left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) and the downmix signals x_M(1), x_M(2), ..., x_M(T) cannot be regarded as the same series, and on the principle of minimizing the quantization error energy of the right channel decoded sound signal, including the case in which the right-channel input sound signals x_R(1), x_R(2), ..., x_R(T) and the downmix signals x_M(1), x_M(2), ..., x_M(T) cannot be regarded as the same series.
  • In Example 1, the left channel subtraction gain estimation unit 120 performs the following steps S120-11 to S120-14 shown in the figure.
  • The left channel subtraction gain estimation unit 120 first obtains, from the input left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) and the downmix signals x_M(1), x_M(2), ..., x_M(T), the normalized inner product value r_L of the downmix signal with respect to the left-channel input sound signal by equation (1-4) (step S120-11). Further, the left channel subtraction gain estimation unit 120 obtains the left channel correction coefficient c_L by the following equation (1-7), using the number of bits b_L used for coding the left channel difference signals y_L(1), y_L(2), ..., y_L(T) in the stereo coding unit 170 (step S120-12).
  • The left channel subtraction gain estimation unit 120 then obtains the value obtained by multiplying the normalized inner product value r_L obtained in step S120-11 by the left channel correction coefficient c_L obtained in step S120-12 (step S120-13).
  • The left channel subtraction gain estimation unit 120 then obtains, among the stored left channel subtraction gain candidates αcand(1), ..., αcand(A), the candidate closest to the multiplied value c_L·r_L obtained in step S120-13 (the quantized value of the multiplied value c_L·r_L) as the left channel subtraction gain α, and obtains, among the stored codes Cαcand(1), ..., Cαcand(A), the code corresponding to the left channel subtraction gain α as the left channel subtraction gain code Cα (step S120-14).
  • When the number of bits b_L used for coding the left channel difference signals y_L(1), y_L(2), ..., y_L(T) in the stereo coding unit 170 is not explicitly determined, half of the number of bits b_S of the stereo code CS output by the stereo coding unit 170 (that is, b_S/2) may be used as the number of bits b_L.
  • The left channel correction coefficient c_L need not be the value obtained by equation (1-7) itself; it may be any value greater than 0 and less than 1 that is 0.5 when the number of bits b_L used for coding the left channel difference signals y_L(1), y_L(2), ..., y_L(T) and the number of bits b_M used for coding the downmix signals x_M(1), x_M(2), ..., x_M(T) are the same, closer to 0 than 0.5 as b_L becomes larger than b_M, and closer to 1 than 0.5 as b_L becomes smaller than b_M.
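A sketch of the candidate-based quantization in steps S120-13 and S120-14 follows. The candidate and code tables are illustrative placeholders; the actual tables αcand(1), ..., αcand(A) and Cαcand(1), ..., Cαcand(A) are design choices not specified in this text.

```python
import numpy as np

# Illustrative tables: A = 16 candidates paired with 4-bit codes.
ALPHA_CAND = np.linspace(0.05, 0.95, 16)
ALPHA_CODE = [format(i, "04b") for i in range(16)]

def quantize_subtraction_gain(c: float, r: float):
    """Step S120-14: pick the stored candidate closest to c * r as the
    subtraction gain and return it together with the paired code."""
    target = c * r                                    # step S120-13
    idx = int(np.argmin(np.abs(ALPHA_CAND - target)))
    return float(ALPHA_CAND[idx]), ALPHA_CODE[idx]
```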
  • Similarly, the right channel subtraction gain estimation unit 140 performs the following steps S140-11 to S140-14 shown in the figure.
  • The right channel subtraction gain estimation unit 140 first obtains, from the input right-channel input sound signals x_R(1), x_R(2), ..., x_R(T) and the downmix signals x_M(1), x_M(2), ..., x_M(T), the normalized inner product value r_R of the downmix signal with respect to the right-channel input sound signal by equation (1-4-2) (step S140-11). Further, the right channel subtraction gain estimation unit 140 obtains the right channel correction coefficient c_R by the following equation (1-7-2), using the number of bits b_R used for coding the right channel difference signals y_R(1), y_R(2), ..., y_R(T) in the stereo coding unit 170 (step S140-12).
  • The right channel subtraction gain estimation unit 140 then obtains the value obtained by multiplying the normalized inner product value r_R obtained in step S140-11 by the right channel correction coefficient c_R obtained in step S140-12 (step S140-13).
  • The right channel subtraction gain estimation unit 140 then obtains, among the stored right channel subtraction gain candidates βcand(1), ..., βcand(B), the candidate closest to the multiplied value c_R·r_R obtained in step S140-13 (the quantized value of the multiplied value c_R·r_R) as the right channel subtraction gain β, and obtains, among the stored codes Cβcand(1), ..., Cβcand(B), the code corresponding to the right channel subtraction gain β as the right channel subtraction gain code Cβ (step S140-14).
  • When the number of bits b_R used for coding the right channel difference signals y_R(1), y_R(2), ..., y_R(T) in the stereo coding unit 170 is not explicitly determined, half of the number of bits b_S of the stereo code CS output by the stereo coding unit 170 (that is, b_S/2) may be used as the number of bits b_R.
  • The right channel correction coefficient c_R need not be the value obtained by equation (1-7-2) itself; it may be any value greater than 0 and less than 1 that is 0.5 when the number of bits b_R used for coding the right channel difference signals y_R(1), y_R(2), ..., y_R(T) and the number of bits b_M used for coding the downmix signals x_M(1), x_M(2), ..., x_M(T) are the same, closer to 0 than 0.5 as b_R becomes larger than b_M, and closer to 1 than 0.5 as b_R becomes smaller than b_M. The same applies in each of the examples described later.
  • The left channel subtraction gain decoding unit 230 stores the same left channel subtraction gain candidates αcand(a) and the codes Cαcand(a) corresponding to the candidates as those stored in the left channel subtraction gain estimation unit 120 of the corresponding coding device 100. The left channel subtraction gain decoding unit 230 obtains, among the stored codes Cαcand(1), ..., Cαcand(A), the left channel subtraction gain candidate corresponding to the input left channel subtraction gain code Cα as the left channel subtraction gain α (step S230-11).
  • The right channel subtraction gain decoding unit 250 stores the same right channel subtraction gain candidates βcand(b) and the codes Cβcand(b) corresponding to the candidates as those stored in the right channel subtraction gain estimation unit 140 of the corresponding coding device 100. The right channel subtraction gain decoding unit 250 obtains, among the stored codes Cβcand(1), ..., Cβcand(B), the right channel subtraction gain candidate corresponding to the input right channel subtraction gain code Cβ as the right channel subtraction gain β (step S250-11).
  • Note that the same subtraction gain candidates and codes may be used for the left channel and the right channel; that is, the above-mentioned A and B may be the same value, and the set of the subtraction gain candidates αcand(a) and the corresponding codes Cαcand(a) stored in the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 and the set of the subtraction gain candidates βcand(b) and the corresponding codes Cβcand(b) stored in the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may be the same.
  • Since the number of bits b_L used for coding the left channel difference signal in the coding device 100 equals the number of bits used for decoding the left channel difference signal in the decoding device 200, and the number of bits b_M used for coding the downmix signal in the coding device 100 equals the number of bits used for decoding the downmix signal in the decoding device 200, the correction coefficient c_L can be calculated to the same value in both the coding device 100 and the decoding device 200. Therefore, the normalized inner product value r_L may instead be made the object of coding and decoding, and both the coding device 100 and the decoding device 200 may obtain the left channel subtraction gain α by multiplying the quantized value ^r_L of the normalized inner product value by the correction coefficient c_L. The same applies to the right channel. This form will be described as the modification of Example 1.
  • In the modification of Example 1, the left channel subtraction gain estimation unit 120 first obtains, in the same manner as in step S120-11 of the left channel subtraction gain estimation unit 120 of Example 1, the normalized inner product value r_L of the downmix signal with respect to the left-channel input sound signal from the input left-channel input sound signals x_L(1), x_L(2), ..., x_L(T) and the downmix signals x_M(1), x_M(2), ..., x_M(T) by equation (1-4) (step S120-11).
  • The left channel subtraction gain estimation unit 120 then obtains, among the stored left channel normalized inner product value candidates r_Lcand(1), ..., r_Lcand(A), the candidate ^r_L closest to the normalized inner product value r_L obtained in step S120-11 (the quantized value of the normalized inner product value r_L), and obtains, among the stored codes Cαcand(1), ..., Cαcand(A), the code corresponding to the closest candidate ^r_L as the left channel subtraction gain code Cα (step S120-15).
  • Further, in the same manner as in step S120-12 of the left channel subtraction gain estimation unit 120 of Example 1, the left channel subtraction gain estimation unit 120 obtains the left channel correction coefficient c_L by equation (1-7), using the number of bits b_L used for coding the left channel difference signals y_L(1), y_L(2), ..., y_L(T) in the stereo coding unit 170 (step S120-12).
  • The left channel subtraction gain estimation unit 120 then obtains the value obtained by multiplying the quantized value ^r_L of the normalized inner product value obtained in step S120-15 by the left channel correction coefficient c_L obtained in step S120-12 as the left channel subtraction gain α (step S120-16).
  • Similarly, the right channel subtraction gain estimation unit 140 first obtains, in the same manner as in step S140-11 of the right channel subtraction gain estimation unit 140 of Example 1, the normalized inner product value r_R of the downmix signal with respect to the right-channel input sound signal from the input right-channel input sound signals x_R(1), x_R(2), ..., x_R(T) and the downmix signals x_M(1), x_M(2), ..., x_M(T) by equation (1-4-2) (step S140-11).
  • The right channel subtraction gain estimation unit 140 then obtains, among the stored right channel normalized inner product value candidates r_Rcand(1), ..., r_Rcand(B), the candidate ^r_R closest to the normalized inner product value r_R obtained in step S140-11 (the quantized value of the normalized inner product value r_R), and obtains, among the stored codes Cβcand(1), ..., Cβcand(B), the code corresponding to the closest candidate ^r_R as the right channel subtraction gain code Cβ (step S140-15).
  • Further, in the same manner as in step S140-12 of the right channel subtraction gain estimation unit 140 of Example 1, the right channel subtraction gain estimation unit 140 obtains the right channel correction coefficient c_R by equation (1-7-2), using the number of bits b_R used for coding the right channel difference signals y_R(1), y_R(2), ..., y_R(T) in the stereo coding unit 170 (step S140-12).
  • The right channel subtraction gain estimation unit 140 then obtains the value obtained by multiplying the quantized value ^r_R of the normalized inner product value obtained in step S140-15 by the right channel correction coefficient c_R obtained in step S140-12 as the right channel subtraction gain β (step S140-16).
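In the modification of Example 1 the order is thus reversed: the normalized inner product value itself is quantized against a stored candidate table and its code is transmitted, and the subtraction gain is the quantized value multiplied by the correction coefficient. A sketch with illustrative tables (the actual candidates r_Lcand(1), ..., r_Lcand(A) are design choices not given here):

```python
import numpy as np

# Illustrative tables shared (by design) with the decoder.
R_CAND = np.linspace(0.0, 1.0, 16)
R_CODE = [format(i, "04b") for i in range(16)]

def encode_gain_modification(r: float, c: float):
    """Steps S120-15 / S120-16 (left channel; the right channel is analogous):
    quantize r, emit the paired code, and use c * r_hat as the gain."""
    idx = int(np.argmin(np.abs(R_CAND - r)))
    r_hat = float(R_CAND[idx])
    return c * r_hat, R_CODE[idx]
```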
  • In the modification of Example 1, the left channel subtraction gain decoding unit 230 stores the same left channel normalized inner product value candidates r_Lcand(a) and the codes Cαcand(a) corresponding to the candidates as those stored in the left channel subtraction gain estimation unit 120 of the corresponding coding device 100. The left channel subtraction gain decoding unit 230 performs steps S230-12 to S230-14 described below and shown in FIG. 7. The left channel subtraction gain decoding unit 230 first obtains, among the stored codes Cαcand(1), ..., Cαcand(A), the left channel normalized inner product value candidate corresponding to the input left channel subtraction gain code Cα as the decoded value ^r_L of the left channel normalized inner product value (step S230-12).
  • The left channel subtraction gain decoding unit 230 also obtains the left channel correction coefficient c_L by equation (1-7), using the number of bits b_L used by the stereo decoding unit 220 to decode the left channel decoded difference signals ^y_L(1), ^y_L(2), ..., ^y_L(T), the number of bits b_M used by the monaural decoding unit 210 to decode the monaural decoded sound signals ^x_M(1), ^x_M(2), ..., ^x_M(T), and the number of samples T per frame (step S230-13).
  • The left channel subtraction gain decoding unit 230 then obtains the value obtained by multiplying the decoded value ^r_L of the normalized inner product value obtained in step S230-12 by the left channel correction coefficient c_L obtained in step S230-13 as the left channel subtraction gain α (step S230-14).
  • The number of bits b_L used by the stereo decoding unit 220 to decode the left channel decoded difference signals ^y_L(1), ^y_L(2), ..., ^y_L(T) is the number of bits of the left channel difference code CL, and the number of bits b_M used by the monaural decoding unit 210 to decode the monaural decoded sound signals ^x_M(1), ^x_M(2), ..., ^x_M(T) is the number of bits of the monaural code CM.
  • As on the coding side, the left channel correction coefficient c_L need not be the value obtained by equation (1-7) itself; it may be any value greater than 0 and less than 1 that is 0.5 when the number of bits b_L used for decoding the left channel decoded difference signals ^y_L(1), ^y_L(2), ..., ^y_L(T) and the number of bits b_M used for decoding the monaural decoded sound signal are the same, closer to 0 than 0.5 as b_L becomes larger than b_M, and closer to 1 than 0.5 as b_L becomes smaller than b_M.
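On the decoding side of the modification, the received code is looked up in the same candidate table and the correction coefficient is recomputed from b_L, b_M, and T, which are known to the decoder. A sketch, reusing the assumed closed form of the correction coefficient from above (the tables and the closed form are illustrative assumptions):

```python
import numpy as np

R_CAND = np.linspace(0.0, 1.0, 16)               # must match the encoder's table
R_CODE = [format(i, "04b") for i in range(16)]   # illustrative 4-bit codes

def decode_gain_modification(code: str, b_L: int, b_M: int, T: int) -> float:
    """Steps S230-12 to S230-14: look up r_hat from the code, recompute c_L,
    and return alpha = c_L * r_hat."""
    r_hat = float(R_CAND[R_CODE.index(code)])                 # step S230-12
    a, b = 2.0 ** (-2.0 * b_L / T), 2.0 ** (-2.0 * b_M / T)
    c_L = a / (a + b)                                         # step S230-13 (assumed form)
    return c_L * r_hat                                        # step S230-14
```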
  • Similarly, the right channel subtraction gain decoding unit 250 stores the same right channel normalized inner product value candidates r_Rcand(b) and the codes Cβcand(b) corresponding to the candidates as those stored in the right channel subtraction gain estimation unit 140 of the corresponding coding device 100. The right channel subtraction gain decoding unit 250 performs steps S250-12 to S250-14 described below and shown in FIG. 7. The right channel subtraction gain decoding unit 250 first obtains, among the stored codes Cβcand(1), ..., Cβcand(B), the right channel normalized inner product value candidate corresponding to the input right channel subtraction gain code Cβ as the decoded value ^r_R of the right channel normalized inner product value (step S250-12).
  • The right channel subtraction gain decoding unit 250 also obtains the right channel correction coefficient c_R by equation (1-7-2), using the number of bits b_R used by the stereo decoding unit 220 to decode the right channel decoded difference signals ^y_R(1), ^y_R(2), ..., ^y_R(T), the number of bits b_M used by the monaural decoding unit 210 to decode the monaural decoded sound signals ^x_M(1), ^x_M(2), ..., ^x_M(T), and the number of samples T per frame (step S250-13).
  • The right channel subtraction gain decoding unit 250 then obtains the value obtained by multiplying the decoded value ^r_R of the normalized inner product value obtained in step S250-12 by the right channel correction coefficient c_R obtained in step S250-13 as the right channel subtraction gain β (step S250-14).
  • The number of bits b_R used by the stereo decoding unit 220 to decode the right channel decoded difference signals ^y_R(1), ^y_R(2), ..., ^y_R(T) is the number of bits of the right channel difference code CR, though b_R may not be explicitly determined, and the number of bits b_M used by the monaural decoding unit 210 to decode the monaural decoded sound signals ^x_M(1), ^x_M(2), ..., ^x_M(T) is the number of bits of the monaural code CM.
  • Similarly, the right channel correction coefficient c_R need not be the value obtained by equation (1-7-2) itself; it may be any value greater than 0 and less than 1 having the properties described above.
  • Note that the same normalized inner product value candidates and codes may be used for the left channel and the right channel; that is, the above-mentioned A and B may be the same value, and the set of the left channel normalized inner product value candidates r_Lcand(a) and the corresponding codes Cαcand(a) stored in the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 and the set of the right channel normalized inner product value candidates r_Rcand(b) and the corresponding codes Cβcand(b) stored in the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may be the same.
  • In the modification of Example 1, the code Cα is referred to as the left channel subtraction gain code because it substantially corresponds to the left channel subtraction gain α and in order to match the wording in the description of the coding device 100 and the decoding device 200; however, since it represents a normalized inner product value, it may instead be called a left channel inner product code or the like. The same applies to the code Cβ, which may be called a right channel inner product code or the like.
  • An example in which a value that also takes the input of past frames into account is used as the normalized inner product value will be described as Example 2.
  • In Example 2, the optimization within a frame, that is, strict minimization of the quantization error energy of the left channel decoded sound signal and of the right channel decoded sound signal, is not guaranteed; however, abrupt frame-to-frame fluctuation of the left channel subtraction gain α and of the right channel subtraction gain β is reduced, and the noise generated in the decoded sound signal by such fluctuation is reduced. That is, Example 2 takes into consideration not only the reduction of the quantization error energy of the decoded sound signal but also its auditory quality.
  • In Example 2, the coding side, that is, the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140, differs from Example 1, while the decoding side, that is, the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250, is the same as in Example 1. Example 2 will be described below mainly in terms of its differences from Example 1.
  • In Example 2, the left channel subtraction gain estimation unit 120 performs the following steps S120-111 to S120-113 and the steps S120-12 to S120-14 described in Example 1.
  • The left channel subtraction gain estimation unit 120 first obtains the inner product value E_L(0) used in the current frame by the following equation (1-8), using the input left-channel input sound signals x_L(1), x_L(2), ..., x_L(T), the input downmix signals x_M(1), x_M(2), ..., x_M(T), and the inner product value E_L(-1) used in the previous frame (step S120-111).
  • ⁇ L is a predetermined value larger than 0 and less than 1, and is stored in advance in the left channel subtraction gain estimation unit 120.
  • The left channel subtraction gain estimation unit 120 stores the obtained inner product value E_L(0) in the left channel subtraction gain estimation unit 120 so that it can be used as the "inner product value E_L(-1) used in the previous frame" in the next frame.
  • The left channel subtraction gain estimation unit 120 also obtains the energy E_M(0) of the downmix signal used in the current frame by the following equation (1-9), using the input downmix signals x_M(1), x_M(2), ..., x_M(T) and the downmix signal energy E_M(-1) used in the previous frame (step S120-112).
  • ⁇ M is a value larger than 0 and less than 1 and is stored in advance in the left channel subtraction gain estimation unit 120.
  • The left channel subtraction gain estimation unit 120 stores the obtained downmix signal energy E_M(0) in the left channel subtraction gain estimation unit 120 so that it can be used as the "downmix signal energy E_M(-1) used in the previous frame" in the next frame.
  • The left channel subtraction gain estimation unit 120 then obtains the normalized inner product value r_L by the following equation (1-10), using the inner product value E_L(0) for the current frame obtained in step S120-111 and the downmix signal energy E_M(0) for the current frame obtained in step S120-112 (step S120-113).
  • The left channel subtraction gain estimation unit 120 also performs step S120-12, then performs step S120-13 using the normalized inner product value r_L obtained in step S120-113 above instead of the normalized inner product value r_L obtained in step S120-11, and further performs step S120-14.
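A sketch of the frame-recursive statistics of Example 2 follows. The exact recursions of equations (1-8) to (1-10) are not reproduced in this text; the exponential-forgetting form below is an assumption consistent with the description that the previous frame's inner product value E_L(-1) and downmix energy E_M(-1) are carried over with factors ε_L and ε_M between 0 and 1.

```python
import numpy as np

class SmoothedNormalizedInnerProduct:
    """Keeps E_L(-1) and E_M(-1) across frames so that r_L reflects past input."""

    def __init__(self, eps_L: float = 0.75, eps_M: float = 0.75):
        self.eps_L, self.eps_M = eps_L, eps_M  # forgetting factors in (0, 1), assumed values
        self.E_L, self.E_M = 0.0, 0.0          # values "used in the previous frame"

    def update(self, x_L: np.ndarray, x_M: np.ndarray) -> float:
        self.E_L = self.eps_L * self.E_L + float(np.dot(x_L, x_M))  # role of (1-8), assumed recursion
        self.E_M = self.eps_M * self.E_M + float(np.dot(x_M, x_M))  # role of (1-9), assumed recursion
        return self.E_L / self.E_M if self.E_M > 0.0 else 0.0       # role of (1-10)
```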
  • Similarly, the right channel subtraction gain estimation unit 140 performs the following steps S140-111 to S140-113 and the steps S140-12 to S140-14 described in Example 1.
  • The right channel subtraction gain estimation unit 140 first obtains the inner product value E_R(0) used in the current frame by the following equation (1-8-2), using the input right-channel input sound signals x_R(1), x_R(2), ..., x_R(T), the input downmix signals x_M(1), x_M(2), ..., x_M(T), and the inner product value E_R(-1) used in the previous frame (step S140-111).
  • ⁇ R is a predetermined value larger than 0 and less than 1, and is stored in advance in the right channel subtraction gain estimation unit 140.
  • The right channel subtraction gain estimation unit 140 stores the obtained inner product value E_R(0) in the right channel subtraction gain estimation unit 140 so that it can be used as the "inner product value E_R(-1) used in the previous frame" in the next frame.
  • The right channel subtraction gain estimation unit 140 also obtains the energy E_M(0) of the downmix signal used in the current frame by equation (1-9), using the input downmix signals x_M(1), x_M(2), ..., x_M(T) and the downmix signal energy E_M(-1) used in the previous frame (step S140-112). The right channel subtraction gain estimation unit 140 stores the obtained downmix signal energy E_M(0) in the right channel subtraction gain estimation unit 140 so that it can be used as the "downmix signal energy E_M(-1) used in the previous frame" in the next frame.
  • Since step S120-112 performed by the left channel subtraction gain estimation unit 120 and step S140-112 performed by the right channel subtraction gain estimation unit 140 are the same processing, only one of them may be performed.
  • The right channel subtraction gain estimation unit 140 then obtains the normalized inner product value r_R by the following equation (1-10-2), using the inner product value E_R(0) for the current frame obtained in step S140-111 and the downmix signal energy E_M(0) for the current frame obtained in step S140-112 (step S140-113).
  • The right channel subtraction gain estimation unit 140 also performs step S140-12, then performs step S140-13 using the normalized inner product value r_R obtained in step S140-113 above instead of the normalized inner product value r_R obtained in step S140-11, and further performs step S140-14.
  • Example 2 can be modified in the same manner as Example 1 was modified into the modification of Example 1. This form will be described as the modification of Example 2.
  • In the modification of Example 2, the coding side, that is, the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140, differs from the modification of Example 1, while the decoding side, that is, the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250, is the same as in the modification of Example 1. Since the differences between the modification of Example 2 and the modification of Example 1 are the same as the differences of Example 2 from Example 1, the modification of Example 2 will be described below with reference to the modification of Example 1 and Example 2 as appropriate.
  • The left channel subtraction gain estimation unit 120 stores the left channel normalized inner product value candidates r_Lcand(a) and the codes Cαcand(a) corresponding to the candidates. The left channel subtraction gain estimation unit 120 performs steps S120-111 to S120-113, which are the same as in Example 2, and steps S120-12, S120-15, and S120-16, which are the same as in the modification of Example 1. Specifically, it operates as follows.
  • The left channel subtraction gain estimation unit 120 first obtains the inner product value E_L(0) used in the current frame by equation (1-8), using the input left-channel input sound signals x_L(1), x_L(2), ..., x_L(T), the input downmix signals x_M(1), x_M(2), ..., x_M(T), and the inner product value E_L(-1) used in the previous frame (step S120-111).
  • The left channel subtraction gain estimation unit 120 also obtains the energy E_M(0) of the downmix signal used in the current frame by equation (1-9), using the input downmix signals x_M(1), x_M(2), ..., x_M(T) and the downmix signal energy E_M(-1) used in the previous frame (step S120-112). The left channel subtraction gain estimation unit 120 then obtains the normalized inner product value r_L by equation (1-10), using the inner product value E_L(0) for the current frame obtained in step S120-111 and the downmix signal energy E_M(0) for the current frame obtained in step S120-112 (step S120-113). The left channel subtraction gain estimation unit 120 then obtains, among the stored left channel normalized inner product value candidates r_Lcand(1), ..., r_Lcand(A), the candidate ^r_L closest to the normalized inner product value r_L obtained in step S120-113 (the quantized value of the normalized inner product value r_L), and obtains, among the stored codes Cαcand(1), ..., Cαcand(A), the code corresponding to the closest candidate ^r_L as the left channel subtraction gain code Cα (step S120-15).
  • the left channel subtraction gain estimation unit 120 uses the number of bits b for coding the left channel difference signals y L (1), y L (2), ..., y L (T) in the stereo coding unit 170.
  • the left channel correction coefficient c L is obtained by the equation (1-7) (step S120-12).
  • the left channel subtraction gain estimation unit 120 then multiplied the quantized value ⁇ r L of the normalized inner product value obtained in step S120-15 by the left channel correction coefficient c L obtained in step S120-12. The value is obtained as the left channel subtraction gain ⁇ (step S120-16).
• The right channel subtraction gain estimation unit 140 first obtains the inner product value E R (0) used in the current frame by Eq. (1-8-2), using the input right channel input sound signal x R (1), x R (2), ..., x R (T), the input downmix signal x M (1), x M (2), ..., x M (T), and the inner product value E R (-1) used in the previous frame (step S140-111). The right channel subtraction gain estimation unit 140 also obtains the energy E M (0) of the downmix signal used in the current frame by Eq. (1-9), using the input downmix signal x M (1), x M (2), ..., x M (T) and the energy E M (-1) of the downmix signal used in the previous frame (step S140-112). The right channel subtraction gain estimation unit 140 then obtains the normalized inner product value r R by Eq. (1-10-2), using the inner product value E R (0) obtained in the current frame in step S140-111 and the energy E M (0) of the downmix signal used in the current frame obtained in step S140-112 (step S140-113). The right channel subtraction gain estimation unit 140 then obtains, of the stored right channel normalized inner product value candidates r Rcand (1), ..., r Rcand (B), the candidate closest to the normalized inner product value r R obtained in step S140-113 as the quantized value ^r R of the normalized inner product value r R, and obtains, of the stored codes C βcand (1), ..., C βcand (B), the code corresponding to the closest candidate ^r R as the right channel subtraction gain code C β (step S140-15). Further, the right channel subtraction gain estimation unit 140 obtains the right channel correction coefficient c R by Eq. (1-7-2), using the number of bits b used for coding the right channel difference signal y R (1), y R (2), ..., y R (T) in the stereo coding unit 170 (step S140-12). The right channel subtraction gain estimation unit 140 then obtains, as the right channel subtraction gain β, the value obtained by multiplying the quantized value ^r R of the normalized inner product value obtained in step S140-15 by the right channel correction coefficient c R obtained in step S140-12 (step S140-16).
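• For illustration, the following Python sketch mirrors the flow of steps S120-111 to S120-16 for the left channel (the right channel is symmetric, with E R, r R, c R, β, and C β). The exact forms of Eqs. (1-7) to (1-10) are not reproduced here; the smoothing factor eps, the candidate and code arrays, and the function and variable names are assumptions made only to show the frame-recursive update, the nearest-candidate quantization, and the final multiplication by the correction coefficient.

```python
import numpy as np

def estimate_left_subtraction_gain(x_L, x_M, state, r_cand, codes, c_L, eps=0.75):
    """Sketch of steps S120-111 to S120-113, S120-15, and S120-16 (illustrative only)."""
    # Step S120-111: inner product carried over from the previous frame (assumed Eq. (1-8) form).
    E_L = eps * state["E_L_prev"] + np.dot(x_L, x_M)
    # Step S120-112: downmix-signal energy carried over from the previous frame (assumed Eq. (1-9) form).
    E_M = eps * state["E_M_prev"] + np.dot(x_M, x_M)
    state["E_L_prev"], state["E_M_prev"] = E_L, E_M   # stored for use in the next frame

    # Step S120-113: normalized inner product value r_L (assumed Eq. (1-10) form).
    r_L = E_L / max(E_M, 1e-12)

    # Step S120-15: nearest stored candidate and its code.
    a = int(np.argmin(np.abs(np.asarray(r_cand) - r_L)))
    r_L_q, C_alpha = r_cand[a], codes[a]

    # Step S120-16: subtraction gain = quantized value times correction coefficient
    # (c_L itself would come from the bit count b via Eq. (1-7), not reproduced here).
    alpha = c_L * r_L_q
    return alpha, C_alpha, state
```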
  • the downmix signal is used.
• In consideration of auditory quality, the left channel subtraction gain α and the right channel subtraction gain β may be made smaller than the values obtained in Example 1. Similarly, the left channel subtraction gain α and the right channel subtraction gain β may be made smaller than the values obtained in Example 2.
• In Example 2, the quantized value of the multiplication value c L × r L of the normalized inner product value r L and the left channel correction coefficient c L was used as the left channel subtraction gain α; instead, the quantized value of the multiplication value λ L × c L × r L of the normalized inner product value r L, the left channel correction coefficient c L, and λ L, which is a predetermined value larger than 0 and smaller than 1, may be used as the left channel subtraction gain α. In this case, as in Example 1 and Example 2, the multiplication value c L × r L may be the target of the coding in the left channel subtraction gain estimation unit 120 and of the decoding in the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code C α represents the quantized value of the multiplication value c L × r L, and the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may obtain the left channel subtraction gain α by multiplying the quantized value of the multiplication value c L × r L by λ L. Alternatively, the multiplication value λ L × c L × r L of the normalized inner product value r L, the left channel correction coefficient c L, and the predetermined value λ L may be the target of the coding by the left channel subtraction gain estimation unit 120 and of the decoding by the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code C α represents the quantized value of the multiplication value λ L × c L × r L.
• Similarly, in Example 2, the quantized value of the multiplication value c R × r R of the normalized inner product value r R and the right channel correction coefficient c R was used as the right channel subtraction gain β; instead, the quantized value of the multiplication value λ R × c R × r R of the normalized inner product value r R, the right channel correction coefficient c R, and λ R, which is a predetermined value larger than 0 and smaller than 1, may be used as the right channel subtraction gain β. In this case, as in Example 1 and Example 2, the multiplication value c R × r R may be the target of the coding in the right channel subtraction gain estimation unit 140 and of the decoding in the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code C β represents the quantized value of the multiplication value c R × r R, and the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may obtain the right channel subtraction gain β by multiplying the quantized value of the multiplication value c R × r R by λ R. Alternatively, the multiplication value λ R × c R × r R of the normalized inner product value r R, the right channel correction coefficient c R, and the predetermined value λ R may be the target of the coding by the right channel subtraction gain estimation unit 140 and of the decoding by the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code C β represents the quantized value of the multiplication value λ R × c R × r R. Note that λ R should be the same value as λ L.
• Moreover, the correction coefficient c L can be calculated to the same value by both the coding device 100 and the decoding device 200. Therefore, as in the modification of Example 1 and the modification of Example 2, the normalized inner product value r L may be the target of the coding by the left channel subtraction gain estimation unit 120 and of the decoding by the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code C α represents the quantized value of the normalized inner product value r L, and the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may obtain the left channel subtraction gain α by multiplying the quantized value of the normalized inner product value r L by the left channel correction coefficient c L and by λ L, which is a predetermined value larger than 0 and smaller than 1. Alternatively, the multiplication value λ L × r L of the normalized inner product value r L and λ L, which is a predetermined value larger than 0 and smaller than 1, may be the target of the coding by the left channel subtraction gain estimation unit 120 and of the decoding by the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code C α represents the quantized value of the multiplication value λ L × r L, and the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may obtain the left channel subtraction gain α by multiplying the quantized value of the multiplication value λ L × r L by the left channel correction coefficient c L.
• Similarly, the normalized inner product value r R may be the target of the coding by the right channel subtraction gain estimation unit 140 and of the decoding by the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code C β represents the quantized value of the normalized inner product value r R, and the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may obtain the right channel subtraction gain β by multiplying the quantized value of the normalized inner product value r R by the right channel correction coefficient c R and by λ R, which is a predetermined value larger than 0 and smaller than 1. Alternatively, the multiplication value λ R × r R of the normalized inner product value r R and λ R, which is a predetermined value larger than 0 and smaller than 1, may be the target of the coding by the right channel subtraction gain estimation unit 140 and of the decoding by the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code C β represents the quantized value of the multiplication value λ R × r R, and the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may obtain the right channel subtraction gain β by multiplying the quantized value of the multiplication value λ R × r R by the right channel correction coefficient c R.
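• The alternatives described above for the left channel differ only in which quantity the left channel subtraction gain code C α represents. Writing Q(·) for quantization to the nearest stored candidate, they can be summarized as in the following sketch (the right channel is analogous with c R, λ R, β, and C β; the four definitions give approximately, though not exactly, the same gain, since Q is applied at different points):

```latex
% Left channel subtraction gain \alpha in Example 3; 0 < \lambda_L < 1.
% Q(\cdot): quantization to the nearest stored candidate.
\begin{align*}
\alpha &= \lambda_L\, Q(c_L\, r_L)        &&\text{($C_\alpha$ represents $Q(c_L r_L)$)}\\
\alpha &= Q(\lambda_L\, c_L\, r_L)        &&\text{($C_\alpha$ represents $Q(\lambda_L c_L r_L)$)}\\
\alpha &= \lambda_L\, c_L\, Q(r_L)        &&\text{($C_\alpha$ represents $Q(r_L)$)}\\
\alpha &= c_L\, Q(\lambda_L\, r_L)        &&\text{($C_\alpha$ represents $Q(\lambda_L r_L)$)}
\end{align*}
```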
• Example 4: The auditory quality problem described at the beginning of Example 3 occurs when the correlation between the left channel input sound signal and the right channel input sound signal is small, and does not occur much when the correlation between the left channel input sound signal and the right channel input sound signal is large. Therefore, in Example 4, the left-right correlation coefficient γ, which is the correlation coefficient between the left channel input sound signal and the right channel input sound signal, is used instead of the predetermined value of Example 3, so that the larger the correlation between the left channel input sound signal and the right channel input sound signal, the more priority is given to reducing the energy of the quantization error of the decoded sound signal, and the smaller the correlation, the more priority is given to suppressing the deterioration of auditory quality. In Example 4, the coding side differs from Example 1 and Example 2, but the decoding side, that is, the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250, is the same as in Example 1 and Example 2. Below, the differences between Example 4 and Examples 1 and 2 will be described.
  • the coding device 100 of Example 4 also includes a left-right relationship information estimation unit 180 as shown by a broken line in FIG.
• The left channel input sound signal input to the coding device 100 and the right channel input sound signal input to the coding device 100 are input to the left-right relationship information estimation unit 180.
  • the left-right relationship information estimation unit 180 obtains the left-right correlation coefficient ⁇ from the input left channel input sound signal and the right channel input sound signal and outputs the left-right correlation coefficient ⁇ (step S180).
• The left-right correlation coefficient γ is a correlation coefficient between the left channel input sound signal and the right channel input sound signal. It may be the correlation coefficient γ 0 between the sample sequence x L (1), x L (2), ..., x L (T) of the left channel input sound signal and the sample sequence x R (1), x R (2), ..., x R (T) of the right channel input sound signal, or it may be a correlation coefficient that takes the time difference into account, for example, the correlation coefficient γ τ between the sample sequence of the left channel input sound signal and the sample sequence of the right channel input sound signal located at a position shifted behind that sample sequence by τ samples. Assuming that the left channel input sound signal is a sound signal obtained by AD conversion of sound picked up by a left channel microphone arranged in a certain space and the right channel input sound signal is a sound signal obtained by AD conversion of sound picked up by a right channel microphone arranged in the space, this τ is information corresponding to the difference between the time it takes the sound to travel from the sound source that mainly emits sound in the space to the left channel microphone and the time it takes the sound to travel from the sound source to the right channel microphone (the so-called arrival time difference), and is hereinafter referred to as the left-right time difference.
• The left-right time difference τ may be obtained by any well-known method, for example by the method used by the left-right relationship information estimation unit 181 of the second reference form. The above-mentioned correlation coefficient γ τ is information corresponding to the correlation coefficient between the sound signal that reaches the left channel microphone from the sound source and is picked up and the sound signal that reaches the right channel microphone from the sound source and is picked up.
• In place of step S120-13, the left channel subtraction gain estimation unit 120 obtains the value obtained by multiplying the normalized inner product value r L obtained in step S120-11 or step S120-113, the left channel correction coefficient c L obtained in step S120-12, and the left-right correlation coefficient γ obtained in step S180 (step S120-13′′). In place of step S120-14, the left channel subtraction gain estimation unit 120 then obtains, of the stored left channel subtraction gain candidates α cand (1), ..., α cand (A), the candidate closest to the multiplication value γ × c L × r L obtained in step S120-13′′ (the quantized value of the multiplication value γ × c L × r L) as the left channel subtraction gain α, and obtains, of the stored codes C αcand (1), ..., C αcand (A), the code corresponding to the left channel subtraction gain α as the left channel subtraction gain code C α (step S120-14′′).
• In place of step S140-13, the right channel subtraction gain estimation unit 140 obtains the value obtained by multiplying the normalized inner product value r R obtained in step S140-11 or step S140-113, the right channel correction coefficient c R obtained in step S140-12, and the left-right correlation coefficient γ obtained in step S180 (step S140-13′′). In place of step S140-14, the right channel subtraction gain estimation unit 140 then obtains, of the stored right channel subtraction gain candidates β cand (1), ..., β cand (B), the candidate closest to the multiplication value γ × c R × r R obtained in step S140-13′′ (the quantized value of the multiplication value γ × c R × r R) as the right channel subtraction gain β, and obtains, of the stored codes C βcand (1), ..., C βcand (B), the code corresponding to the right channel subtraction gain β as the right channel subtraction gain code C β (step S140-14′′).
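• As a sketch of steps S120-13′′ and S120-14′′ (and, symmetrically, S140-13′′ and S140-14′′), the following Python function quantizes the multiplication value γ × c L × r L to the nearest stored candidate and returns that candidate as the subtraction gain together with its code; the candidate and code arrays passed in are illustrative assumptions.

```python
import numpy as np

def left_subtraction_gain_example4(r_L, c_L, gamma, alpha_cand, codes):
    """Sketch of steps S120-13'' and S120-14'' (illustrative only)."""
    # Step S120-13'': multiply the normalized inner product value by the
    # correction coefficient and by the left-right correlation coefficient.
    target = gamma * c_L * r_L
    # Step S120-14'': the closest stored candidate is the subtraction gain;
    # the code stored alongside it is the subtraction gain code C_alpha.
    a = int(np.argmin(np.abs(np.asarray(alpha_cand) - target)))
    return alpha_cand[a], codes[a]

# Example use (all values here are made up for illustration):
# alpha, C_alpha = left_subtraction_gain_example4(0.8, 0.9, 0.6,
#                                                 np.linspace(0.0, 1.0, 16), list(range(16)))
```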
• Since the correction coefficient c L can be calculated to the same value by both the coding device 100 and the decoding device 200, the multiplication value γ × r L of the normalized inner product value r L and the left-right correlation coefficient γ may be the target of the coding by the left channel subtraction gain estimation unit 120 and of the decoding by the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code C α represents the quantized value of the multiplication value γ × r L, and the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may obtain the left channel subtraction gain α by multiplying the quantized value of the multiplication value γ × r L by the left channel correction coefficient c L.
• Similarly, since the correction coefficient c R can be calculated to the same value by both the coding device 100 and the decoding device 200, the multiplication value γ × r R of the normalized inner product value r R and the left-right correlation coefficient γ may be the target of the coding by the right channel subtraction gain estimation unit 140 and of the decoding by the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code C β represents the quantized value of the multiplication value γ × r R, and the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may obtain the right channel subtraction gain β by multiplying the quantized value of the multiplication value γ × r R by the right channel correction coefficient c R.
• The coding device 101 of the second reference form includes a downmix unit 110, a left channel subtraction gain estimation unit 120, a left channel signal subtraction unit 130, a right channel subtraction gain estimation unit 140, a right channel signal subtraction unit 150, a monaural coding unit 160, a stereo coding unit 170, a left-right relationship information estimation unit 181, and a time shift unit 191. The coding device 101 of the second reference form differs from the coding device 100 of the first reference form in that it includes the left-right relationship information estimation unit 181 and the time shift unit 191, in that the signal output by the time shift unit 191 is used by the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 instead of the signal output by the downmix unit 110, and in that the left-right time difference code C τ, which will be described later, is also output.
  • Other configurations and operations of the coding device 101 of the second reference form are the same as those of the coding device 100 of the first reference form.
  • the coding device 101 of the second reference embodiment performs the processes of steps S110 to S191 illustrated in FIG. 11 for each frame.
  • the difference between the coding device 101 of the second reference form and the coding device 100 of the first reference form will be described.
• The left channel input sound signal input to the coding device 101 and the right channel input sound signal input to the coding device 101 are input to the left-right relationship information estimation unit 181. The left-right relationship information estimation unit 181 obtains and outputs, from the left channel input sound signal and the right channel input sound signal, the left-right time difference τ and the left-right time difference code C τ, which is a code representing the left-right time difference τ (step S181).
• Assuming that the sound signal obtained by AD conversion of sound picked up by a left channel microphone arranged in a certain space is the left channel input sound signal and the sound signal obtained by AD conversion of sound picked up by a right channel microphone arranged in the space is the right channel input sound signal, the left-right time difference τ is information corresponding to the difference between the time it takes the sound to travel from the sound source that mainly emits sound in the space to the left channel microphone and the time it takes the sound to travel from the sound source to the right channel microphone (the so-called arrival time difference). The left-right time difference τ can take a positive value or a negative value with reference to one of the input sound signals. That is, the left-right time difference τ is information indicating how far ahead of the other the same sound signal is included in the left channel input sound signal or the right channel input sound signal. When the same sound signal is included in the left channel input sound signal ahead of the right channel input sound signal, it is said that the left channel precedes, and when the same sound signal is included in the right channel input sound signal ahead of the left channel input sound signal, it is said that the right channel precedes.
  • the left-right time difference ⁇ may be obtained by any well-known method.
• For example, for each candidate sample number τ cand from a predetermined τ max to a predetermined τ min (for example, τ max is a positive number and τ min is a negative number), the left-right relationship information estimation unit 181 calculates a value (hereinafter referred to as a correlation value) γ cand indicating the magnitude of the correlation between the sample sequence of the left channel input sound signal and the sample sequence of the right channel input sound signal located at a position shifted behind that sample sequence by the candidate sample number τ cand, and obtains, as the left-right time difference τ, the candidate sample number τ cand that maximizes the correlation value γ cand. In this case, the left-right time difference τ is a positive value when the left channel precedes and a negative value when the right channel precedes, and the absolute value of the left-right time difference τ is a value (the number of preceding samples) indicating by how much the preceding channel precedes the other channel. When τ cand is a positive value, the correlation value γ cand may be calculated, for example, between the partial sample sequence x R (1 + τ cand), x R (2 + τ cand), ..., x R (T) of the right channel input sound signal and the partial sample sequence of the left channel input sound signal located at a position shifted before that partial sample sequence by the candidate sample number τ cand. One or more samples of past input sound signals that are contiguous with the sample sequence of the current frame's input sound signal may also be used to calculate the correlation value γ cand; in that case, the sample sequences of the input sound signals of past frames may be stored in a storage unit (not shown) in the left-right relationship information estimation unit 181 for a predetermined number of frames.
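• A minimal sketch of the candidate-sample search described above is given below; the use of a plain normalized cross-correlation as the correlation value γ cand is an assumption, since the text only requires a value indicating the magnitude of the correlation.

```python
import numpy as np

def estimate_lr_time_difference(x_L, x_R, tau_max, tau_min):
    """Search the candidate sample numbers for the left-right time difference tau (illustrative only)."""
    best_tau, best_gamma = 0, -np.inf
    for tau_cand in range(tau_min, tau_max + 1):
        n = len(x_L) - abs(tau_cand)     # length of the overlapping partial sample sequences
        if n <= 0:
            continue
        if tau_cand >= 0:
            # Left precedes by tau_cand: compare x_L(1..T-tau) with x_R(1+tau..T).
            a, b = x_L[:n], x_R[tau_cand:]
        else:
            # Right precedes by |tau_cand|: compare x_L(1+|tau|..T) with x_R(1..T-|tau|).
            a, b = x_L[-tau_cand:], x_R[:n]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        gamma_cand = float(np.dot(a, b) / denom) if denom > 0 else 0.0
        if gamma_cand > best_gamma:
            best_tau, best_gamma = tau_cand, gamma_cand
    # best_gamma, the maximum correlation value, can also serve as the
    # left-right correlation coefficient gamma as described further below.
    return best_tau, best_gamma
```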
• Alternatively, the correlation value γ cand may be calculated using the phase information of the signals as follows. The left-right relationship information estimation unit 181 first Fourier transforms the left channel input sound signal x L (1), x L (2), ..., x L (T) and the right channel input sound signal x R (1), x R (2), ..., x R (T) to obtain the frequency spectra X L (k) and X R (k) at each frequency k from 0 to T-1. The left-right relationship information estimation unit 181 then obtains the phase difference spectrum φ(k) at each frequency k by the following equation (3-3), and, by inverse Fourier transforming the obtained phase difference spectrum as in the following equation (3-4), obtains the phase difference signal ψ(τ cand) for each candidate sample number τ cand from τ max to τ min. The absolute value of the obtained phase difference signal ψ(τ cand) represents, for each candidate sample number τ cand, a kind of correlation corresponding to the plausibility that τ cand is the time difference between the left channel input sound signal x L (1), x L (2), ..., x L (T) and the right channel input sound signal x R (1), x R (2), ..., x R (T), so the absolute value of the phase difference signal ψ(τ cand) is used as the correlation value γ cand.
  • the left-right relationship information estimation unit 181 obtains the number of candidate samples ⁇ cand that maximizes the correlation value ⁇ cand, which is the absolute value of the phase difference signal ⁇ ( ⁇ cand ), as the left-right time difference ⁇ .
• Instead of using the absolute value of the phase difference signal ψ(τ cand) itself as the correlation value γ cand, a normalized value such as the relative difference from the average of the absolute values of the phase difference signals obtained for the candidate sample numbers may be used. That is, for each τ cand, an average value ψ c (τ cand) may be obtained by the following equation (3-5) using a predetermined positive number τ range, and the normalized correlation value obtained by the following equation (3-6) from the obtained average value ψ c (τ cand) and the phase difference signal ψ(τ cand) may be used as γ cand. The normalized correlation value obtained by equation (3-6) is a value of 0 or more and 1 or less, and has the property of being close to 1 when τ cand is plausible as the left-right time difference and close to 0 when τ cand is not plausible as the left-right time difference.
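• The following sketch shows one way to realize this phase-based calculation. It assumes a GCC-PHAT-style normalized cross-spectrum for the phase difference spectrum of Eq. (3-3), uses only a rough stand-in for the normalization of Eqs. (3-5) and (3-6), and assumes a sign convention in which a positive lag corresponds to the left channel preceding, chosen to match the time-domain search above; the default value of tau_range is likewise an assumption.

```python
import numpy as np

def phase_based_correlation(x_L, x_R, tau_max, tau_min, tau_range=10):
    """Phase-difference-based correlation values gamma_cand and the maximizing lag (illustrative only)."""
    T = len(x_L)
    X_L, X_R = np.fft.fft(x_L), np.fft.fft(x_R)
    # Cross-spectrum ordered so that a peak at a positive lag means the left channel precedes.
    cross = X_R * np.conj(X_L)
    phi = cross / np.maximum(np.abs(cross), 1e-12)   # phase difference spectrum (assumed Eq. (3-3))
    psi = np.fft.ifft(phi)                           # phase difference signal (assumed Eq. (3-4))

    gammas = {}
    for tau_cand in range(tau_min, tau_max + 1):
        abs_psi = abs(psi[tau_cand % T])
        # Average of |psi| over neighbouring lags (rough stand-in for Eq. (3-5)).
        neighbours = [abs(psi[(tau_cand + d) % T]) for d in range(-tau_range, tau_range + 1)]
        avg = float(np.mean(neighbours))
        # Normalized correlation value clamped to [0, 1] (rough stand-in for Eq. (3-6)).
        gammas[tau_cand] = max(0.0, 1.0 - avg / abs_psi) if abs_psi > 0 else 0.0

    tau = max(gammas, key=gammas.get)   # candidate sample number with the largest correlation value
    return tau, gammas
```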
• The left-right relationship information estimation unit 181 encodes the left-right time difference τ by a predetermined coding method to obtain the left-right time difference code C τ, which is a code from which the left-right time difference τ can be uniquely identified. As the predetermined coding method, a well-known coding method such as scalar quantization may be used. Note that both τ max and τ min may be positive numbers, or both τ max and τ min may be negative numbers.
• Furthermore, the left-right relationship information estimation unit 181 outputs, as the left-right correlation coefficient γ, the correlation value between the sample sequence of the left channel input sound signal and the sample sequence of the right channel input sound signal located at a position shifted behind that sample sequence by the left-right time difference τ, that is, the maximum value of the correlation values γ cand calculated for the candidate sample numbers τ cand from τ max to τ min (step S180).
• The downmix signal x M (1), x M (2), ..., x M (T) output by the downmix unit 110 and the left-right time difference τ output by the left-right relationship information estimation unit 181 are input to the time shift unit 191. When the left-right time difference τ indicates that the left channel precedes, the time shift unit 191 outputs the input downmix signal x M (1), x M (2), ..., x M (T) as it is to the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130 (that is, decides that it is to be used by the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130), and outputs the delayed downmix signal x M' (1), x M' (2), ..., x M' (T), which is the downmix signal delayed by the absolute value of the left-right time difference τ, to the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150. Conversely, when the left-right time difference τ indicates that the right channel precedes, the time shift unit 191 outputs the input downmix signal as it is to the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150, and outputs the delayed downmix signal to the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130. Since the time shift unit 191 uses the downmix signals of past frames to obtain the delayed downmix signal, the downmix signals input in past frames are stored in a storage unit (not shown) in the time shift unit 191 for a predetermined number of frames.
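• The following sketch illustrates the routing and buffering performed by the time shift unit 191 as described above (the decoder-side time shift unit 281 mirrors it, operating on the monaural decoded sound signal and feeding the left channel signal addition unit 240 and the right channel signal addition unit 260); the class interface and the buffer length are assumptions for illustration.

```python
from collections import deque
import numpy as np

class TimeShiftUnit:
    """Sketch of the time shift unit 191 (illustrative only)."""

    def __init__(self, frame_length, history_frames=4):
        self.T = frame_length
        # Downmix samples of past frames, kept for building the delayed downmix signal.
        self.buffer = deque(maxlen=history_frames * frame_length)

    def process(self, x_M, tau):
        """Return (signal for the left channel units, signal for the right channel units)."""
        past = np.array(self.buffer)
        self.buffer.extend(x_M)                  # store the current frame for later frames
        delay = abs(int(tau))
        if delay == 0 or len(past) < delay:
            delayed = x_M                        # no shift, or not enough history yet
        else:
            # x_M'(t) = x_M(t - delay), built from stored past samples plus the current frame.
            delayed = np.concatenate((past[-delay:], x_M[:self.T - delay]))
        if tau > 0:      # left channel precedes: the left channel units get the signal as is
            return x_M, delayed
        elif tau < 0:    # right channel precedes: the right channel units get the signal as is
            return delayed, x_M
        return x_M, x_M
```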
• Note that when the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140 obtain the left channel subtraction gain α and the right channel subtraction gain β not by the method based on the principle of minimizing the quantization error but by a well-known method as exemplified in Patent Document 1, a means for obtaining a local decoded signal corresponding to the monaural code CM may be provided in the monaural coding unit 160 of the coding device 101 or in a stage subsequent to the monaural coding unit 160, and the time shift unit 191 may perform the above-described processing using the quantized downmix signal ^x M (1), ^x M (2), ..., ^x M (T), which is the local decoded signal of the monaural coding. In this case, the time shift unit 191 outputs the quantized downmix signal ^x M (1), ^x M (2), ..., ^x M (T) instead of the downmix signal x M (1), x M (2), ..., x M (T), and outputs the delayed quantized downmix signal ^x M' (1), ^x M' (2), ..., ^x M' (T) instead of the delayed downmix signal x M' (1), x M' (2), ..., x M' (T).
• The left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 perform the same operations as described in the first reference form, except that, instead of the downmix signal x M (1), x M (2), ..., x M (T) output by the downmix unit 110, they use the downmix signal x M (1), x M (2), ..., x M (T) or the delayed downmix signal x M' (1), x M' (2), ..., x M' (T) input from the time shift unit 191, that is, the signal determined for each of them by the time shift unit 191 (steps S120, S130, S140, S150).
• When the time shift unit 191 outputs the quantized downmix signal ^x M (1), ^x M (2), ..., ^x M (T) instead of the downmix signal x M (1), x M (2), ..., x M (T) and outputs the delayed quantized downmix signal ^x M' (1), ^x M' (2), ..., ^x M' (T) instead of the delayed downmix signal x M' (1), x M' (2), ..., x M' (T), the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 perform the above operations using the quantized downmix signal ^x M (1), ^x M (2), ..., ^x M (T) or the delayed quantized downmix signal ^x M' (1), ^x M' (2), ..., ^x M' (T) input from the time shift unit 191.
• The decoding device 201 of the second reference form includes a monaural decoding unit 210, a stereo decoding unit 220, a left channel subtraction gain decoding unit 230, a left channel signal addition unit 240, a right channel subtraction gain decoding unit 250, a right channel signal addition unit 260, a left-right time difference decoding unit 271, and a time shift unit 281. The decoding device 201 of the second reference form differs from the decoding device 200 of the first reference form in that the left-right time difference code C τ, which will be described later, is also input in addition to the above-mentioned codes, in that it includes the left-right time difference decoding unit 271 and the time shift unit 281, and in that the left channel signal addition unit 240 and the right channel signal addition unit 260 use the signal output by the time shift unit 281 instead of the signal output by the monaural decoding unit 210.
  • Other configurations and operations of the decoding device 201 of the second reference form are the same as those of the decoding device 200 of the first reference form.
  • the decoding device 201 of the second reference embodiment performs the processes of steps S210 to S281 illustrated in FIG. 13 for each frame.
  • the difference between the decoding device 201 of the second reference form and the decoding device 200 of the first reference form will be described.
  • the left-right time difference code C ⁇ input to the decoding device 201 is input to the left-right time difference decoding unit 271.
  • the left-right time difference decoding unit 271 decodes the left-right time difference code C ⁇ by a predetermined decoding method to obtain the left-right time difference ⁇ and outputs it (step S271).
• As the predetermined decoding method, a decoding method corresponding to the coding method used by the left-right relationship information estimation unit 181 of the corresponding coding device 101 is used. The left-right time difference τ obtained by the left-right time difference decoding unit 271 is the same value as the left-right time difference τ obtained by the left-right relationship information estimation unit 181 of the corresponding coding device 101, and is a value within the range from τ max to τ min.
• The monaural decoded sound signal ^x M (1), ^x M (2), ..., ^x M (T) output by the monaural decoding unit 210 and the left-right time difference τ output by the left-right time difference decoding unit 271 are input to the time shift unit 281.
• When the left-right time difference τ is a positive value (that is, when the left-right time difference τ indicates that the left channel precedes), the time shift unit 281 outputs the monaural decoded sound signal ^x M (1), ^x M (2), ..., ^x M (T) as it is to the left channel signal addition unit 240 (that is, decides that it is to be used by the left channel signal addition unit 240), and outputs the delayed monaural decoded sound signal ^x M' (1), ^x M' (2), ..., ^x M' (T), which is the monaural decoded sound signal delayed by the absolute value of the left-right time difference τ, to the right channel signal addition unit 260. When the left-right time difference τ is a negative value (that is, when the left-right time difference τ indicates that the right channel precedes), the time shift unit 281 outputs the monaural decoded sound signal as it is to the right channel signal addition unit 260 and outputs the delayed monaural decoded sound signal to the left channel signal addition unit 240. When the left-right time difference τ is 0, the time shift unit 281 outputs the monaural decoded sound signal ^x M (1), ^x M (2), ..., ^x M (T) as it is to the left channel signal addition unit 240 and the right channel signal addition unit 260 (that is, decides that it is to be used by the left channel signal addition unit 240 and the right channel signal addition unit 260) (step S281). Since the time shift unit 281 uses the monaural decoded sound signals of past frames in order to obtain the delayed monaural decoded sound signal, the monaural decoded sound signals input in past frames are stored in a storage unit (not shown) in the time shift unit 281 for a predetermined number of frames.
• The left channel signal addition unit 240 and the right channel signal addition unit 260 perform the same operations as described in the first reference form, except that, instead of the monaural decoded sound signal ^x M (1), ^x M (2), ..., ^x M (T) output by the monaural decoding unit 210, they use the monaural decoded sound signal ^x M (1), ^x M (2), ..., ^x M (T) or the delayed monaural decoded sound signal ^x M' (1), ^x M' (2), ..., ^x M' (T) input from the time shift unit 281 (steps S240, S260). That is, the left channel signal addition unit 240 and the right channel signal addition unit 260 perform the same operations as described in the first reference form using the monaural decoded sound signal ^x M (1), ^x M (2), ..., ^x M (T) or the delayed monaural decoded sound signal ^x M' (1), ^x M' (2), ..., ^x M' (T) determined for each of them by the time shift unit 281.
• The first embodiment is obtained by modifying the coding device 101 of the second reference form so as to generate a downmix signal in consideration of the relationship between the left channel input sound signal and the right channel input sound signal. The coding device of the first embodiment will be described below. Since the code obtained by the coding device of the first embodiment can be decoded by the decoding device 201 of the second reference form, the description of the decoding device is omitted. The coding device 102 of the first embodiment includes a downmix unit 112, a left channel subtraction gain estimation unit 120, a left channel signal subtraction unit 130, a right channel subtraction gain estimation unit 140, a right channel signal subtraction unit 150, a monaural coding unit 160, a stereo coding unit 170, a left-right relationship information estimation unit 182, and a time shift unit 191. The coding device 102 of the first embodiment differs from the coding device 101 of the second reference form in that it includes the left-right relationship information estimation unit 182 in place of the left-right relationship information estimation unit 181 and the downmix unit 112 in place of the downmix unit 110, and in that, as shown by the broken lines in FIG., the left-right relationship information estimation unit 182 also obtains and outputs the left-right correlation coefficient γ and the preceding channel information, and the output left-right correlation coefficient γ and preceding channel information are input to and used by the downmix unit 112.
  • Other configurations and operations of the coding device 102 of the first embodiment are the same as those of the coding device 101 of the second reference embodiment.
  • the coding device 102 of the first embodiment performs the processing of steps S112 to S191 illustrated in FIG. 14 for each frame.
  • the difference between the coding device 102 of the first embodiment and the coding device 101 of the second reference embodiment will be described.
• The left channel input sound signal input to the coding device 102 and the right channel input sound signal input to the coding device 102 are input to the left-right relationship information estimation unit 182. The left-right relationship information estimation unit 182 obtains and outputs, from the left channel input sound signal and the right channel input sound signal, the left-right time difference τ, the left-right time difference code C τ, which is a code representing the left-right time difference τ, the left-right correlation coefficient γ, and the preceding channel information (step S182).
  • the process in which the left-right relationship information estimation unit 182 obtains the left-right time difference ⁇ and the left-right time difference code C ⁇ is the same as the left-right relationship information estimation unit 181 of the second reference form.
• The left-right correlation coefficient γ is, under the assumption given in the explanation of the left-right relationship information estimation unit 181 of the second reference form, information corresponding to the correlation coefficient between the sound signal that reaches the left channel microphone from the sound source and is picked up and the sound signal that reaches the right channel microphone from the sound source and is picked up. The preceding channel information is information corresponding to which microphone the sound emitted from the sound source reaches earlier, that is, information indicating which of the left channel input sound signal and the right channel input sound signal contains the same sound signal first, in other words, information indicating which of the left channel and the right channel precedes. For example, the left-right relationship information estimation unit 182 obtains the preceding channel information based on the left-right time difference τ obtained from the sample sequence of the left channel input sound signal and the sample sequence of the right channel input sound signal: if the left-right time difference τ is a positive value, the left-right relationship information estimation unit 182 obtains and outputs, as the preceding channel information, information indicating that the left channel precedes, and if the left-right time difference τ is a negative value, it obtains and outputs, as the preceding channel information, information indicating that the right channel precedes. If the left-right time difference τ is 0, the left-right relationship information estimation unit 182 may obtain and output, as the preceding channel information, information indicating that the left channel precedes, may obtain and output, as the preceding channel information, information indicating that the right channel precedes, or may obtain and output, as the preceding channel information, information indicating that neither channel precedes.
• The left channel input sound signal input to the coding device 102, the right channel input sound signal input to the coding device 102, the left-right correlation coefficient γ output by the left-right relationship information estimation unit 182, and the preceding channel information output by the left-right relationship information estimation unit 182 are input to the downmix unit 112. The downmix unit 112 obtains and outputs a downmix signal by weighting and averaging the left channel input sound signal and the right channel input sound signal so that, the larger the left-right correlation coefficient γ, the more the input sound signal of the preceding channel, of the left channel input sound signal and the right channel input sound signal, is contained in the downmix signal (step S112). Since the obtained left-right correlation coefficient γ is a value of 0 or more and 1 or less, the downmix unit 112 obtains the downmix signal x M (t) by weighting and adding, for each corresponding sample number t, the left channel input sound signal x L (t) and the right channel input sound signal x R (t) with weights determined by the left-right correlation coefficient γ. When the downmix unit 112 obtains the downmix signal in this way, the smaller the left-right correlation coefficient γ, that is, the smaller the correlation between the left channel input sound signal and the right channel input sound signal, the closer the downmix signal is to the signal obtained by averaging the left channel input sound signal and the right channel input sound signal, and the larger the left-right correlation coefficient γ, that is, the larger the correlation between the left channel input sound signal and the right channel input sound signal, the closer the downmix signal is to the input sound signal of the preceding channel, of the left channel input sound signal and the right channel input sound signal. When the preceding channel information indicates that neither channel precedes, it is preferable that the downmix unit 112 obtains and outputs the downmix signal by averaging the left channel input sound signal and the right channel input sound signal so that the left channel input sound signal and the right channel input sound signal are contained in the downmix signal with equal weight. Therefore, when the preceding channel information indicates that neither channel precedes, the downmix unit 112 may use, for each sample number t, the average of the left channel input sound signal x L (t) and the right channel input sound signal x R (t), that is, x M (t) = (x L (t) + x R (t)) / 2, as the downmix signal x M (t).
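• A minimal sketch of the weighting performed by the downmix unit 112 is given below. The particular weights (1 + γ)/2 and (1 - γ)/2 are an assumption: the text only requires that the preceding channel's share grow with the left-right correlation coefficient γ, that γ = 0 give the plain average, and that the plain average be used when neither channel precedes.

```python
import numpy as np

def downmix(x_L, x_R, gamma, preceding):
    """Correlation-weighted downmix of the left and right channel input sound signals (illustrative only)."""
    x_L, x_R = np.asarray(x_L, dtype=float), np.asarray(x_R, dtype=float)
    if preceding == "none":
        # Neither channel precedes: plain average, x_M(t) = (x_L(t) + x_R(t)) / 2.
        return (x_L + x_R) / 2.0
    # The preceding channel's weight grows with gamma (0 <= gamma <= 1).
    w_pre, w_other = (1.0 + gamma) / 2.0, (1.0 - gamma) / 2.0
    if preceding == "left":
        return w_pre * x_L + w_other * x_R
    return w_other * x_L + w_pre * x_R   # right channel precedes
```

• With these assumed weights, γ = 0 reduces to the plain average and γ = 1 leaves only the preceding channel's input sound signal in the downmix signal, matching the behavior described above.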
• The coding device 100 of the first reference form may also be modified so as to generate a downmix signal in consideration of the relationship between the left channel input sound signal and the right channel input sound signal. This will be described as the second embodiment. Since the code obtained by the coding device of the second embodiment can be decoded by the decoding device 200 of the first reference form, the description of the decoding device is omitted. The coding device 103 of the second embodiment includes a downmix unit 112, a left channel subtraction gain estimation unit 120, a left channel signal subtraction unit 130, a right channel subtraction gain estimation unit 140, a right channel signal subtraction unit 150, a monaural coding unit 160, a stereo coding unit 170, and a left-right relationship information estimation unit 183. The coding device 103 of the second embodiment differs from the coding device 100 of the first reference form in that it includes the downmix unit 112 in place of the downmix unit 110 and, as shown by the broken lines in FIG., also includes the left-right relationship information estimation unit 183, which obtains and outputs the left-right correlation coefficient γ and the preceding channel information, and in that the output left-right correlation coefficient γ and preceding channel information are input to and used by the downmix unit 112.
  • Other configurations and operations of the coding device 103 of the second embodiment are the same as those of the coding device 100 of the first reference embodiment. Further, the operation of the downmix unit 112 of the coding device 103 of the second embodiment is the same as the operation of the downmix unit 112 of the coding device 102 of the first embodiment.
  • the coding device 103 of the second embodiment performs the processing of steps S112 to S183 illustrated in FIG. 15 for each frame.
  • the difference between the coding device 103 of the second embodiment and the coding device 100 of the first reference embodiment and the coding device 102 of the first embodiment will be described.
• The left channel input sound signal input to the coding device 103 and the right channel input sound signal input to the coding device 103 are input to the left-right relationship information estimation unit 183.
  • the left-right relationship information estimation unit 183 obtains and outputs the left-right correlation coefficient ⁇ and the preceding channel information from the input left channel input sound signal and the right channel input sound signal (step S183).
  • the left-right correlation coefficient ⁇ and the preceding channel information obtained and output by the left-right relationship information estimation unit 183 are the same as those described in the first embodiment. That is, the left-right relationship information estimation unit 183 may be the same as the left-right relationship information estimation unit 182 except that the left-right time difference ⁇ and the left-right time difference code C ⁇ do not have to be output.
• For example, for each candidate sample number τ cand from τ max to τ min, the left-right relationship information estimation unit 183 calculates the correlation value γ cand between the sample sequence of the left channel input sound signal and the sample sequence of the right channel input sound signal located at a position shifted behind that sample sequence by the candidate sample number τ cand, obtains and outputs the maximum value of the correlation values γ cand as the left-right correlation coefficient γ, and, if the τ cand giving the maximum correlation value is a positive value, obtains and outputs, as the preceding channel information, information indicating that the left channel precedes, while if the τ cand giving the maximum correlation value is a negative value, it obtains and outputs, as the preceding channel information, information indicating that the right channel precedes. If the τ cand giving the maximum correlation value is 0, the left-right relationship information estimation unit 183 may obtain and output, as the preceding channel information, information indicating that the left channel precedes, may obtain and output, as the preceding channel information, information indicating that the right channel precedes, or may obtain and output, as the preceding channel information, information indicating that neither channel precedes.
  • the coding device 104 of the third embodiment includes a left-right relationship information estimation unit 183, a downmix unit 112, a monaural coding unit 160, and a stereo coding unit 174.
  • the coding device 104 of the third embodiment performs the processes of step S183, step S112, step S160, and step S174 illustrated in FIG. 17 for each frame.
  • the coding device 104 of the third embodiment will be described with reference to the description of the second embodiment as appropriate.
  • the left-right relationship information estimation unit 183 is the same as the left-right relationship information estimation unit 183 of the second embodiment.
• The left channel input sound signal input to the coding device 104 and the right channel input sound signal input to the coding device 104 are input to the left-right relationship information estimation unit 183. The left-right relationship information estimation unit 183 obtains and outputs, from the input left channel input sound signal and right channel input sound signal, the left-right correlation coefficient γ, which is the correlation coefficient between the left channel input sound signal and the right channel input sound signal, and the preceding channel information, which is information indicating which of the left channel input sound signal and the right channel input sound signal precedes (step S183).
  • the downmix unit 112 is the same as the downmix unit 112 of the second embodiment.
• The left channel input sound signal input to the coding device 104, the right channel input sound signal input to the coding device 104, the left-right correlation coefficient γ output by the left-right relationship information estimation unit 183, and the preceding channel information output by the left-right relationship information estimation unit 183 are input to the downmix unit 112. The downmix unit 112 obtains and outputs a downmix signal by weighting and averaging the left channel input sound signal and the right channel input sound signal so that, the larger the left-right correlation coefficient γ, the more the input sound signal of the preceding channel, of the left channel input sound signal and the right channel input sound signal, is contained in the downmix signal (step S112). This is done for each sample number t, where the left channel input sound signal is x L (t), the right channel input sound signal is x R (t), and the downmix signal is x M (t).
  • the monaural coding unit 160 is the same as the monaural coding unit 160 of the second embodiment.
  • the downmix signal output by the downmix unit 112 is input to the monaural coding unit 160.
  • the monaural coding unit 160 encodes the input downmix signal to obtain a monaural code CM and outputs it (step S160).
  • Any coding method may be used for the monaural coding unit 160, and for example, a coding method such as the 3GPP EVS standard may be used.
• The coding method may be a coding method that performs the coding process independently of the stereo coding unit 174 described later, that is, a coding method that performs the coding process without using the stereo code CS' obtained by the stereo coding unit 174 or the information obtained in the coding process performed by the stereo coding unit 174, or it may be a coding method that performs the coding process using the stereo code CS' obtained by the stereo coding unit 174 or the information obtained in the coding process performed by the stereo coding unit 174.
• The left channel input sound signal input to the coding device 104 and the right channel input sound signal input to the coding device 104 are input to the stereo coding unit 174. The stereo coding unit 174 encodes the input left channel input sound signal and right channel input sound signal to obtain the stereo code CS' and outputs it (step S174). Any coding method may be used by the stereo coding unit 174; for example, a stereo coding method corresponding to a stereo decoding method of the MPEG-4 AAC standard may be used, or a coding method may be used in which the input left channel input sound signal and the input right channel input sound signal are coded independently and the combination of all the codes obtained by the coding is taken as the stereo code CS'. The coding method may be a coding method that performs the coding process independently of the monaural coding unit 160, that is, a coding method that performs the coding process without using the monaural code CM obtained by the monaural coding unit 160 or the information obtained in the coding process performed by the monaural coding unit 160, or it may be a coding method that performs the coding process using the monaural code CM obtained by the monaural coding unit 160 or the information obtained in the coding process performed by the monaural coding unit 160.
• As long as a coding device at least encodes the downmix signal obtained from the left channel input sound signal and the right channel input sound signal to obtain a code, any coding device may adopt a configuration in which the downmix signal is obtained in consideration of the relationship between the left channel input sound signal and the right channel input sound signal. Not limited to coding devices, as long as a signal processing device at least performs signal processing on the downmix signal obtained from the left channel input sound signal and the right channel input sound signal to obtain a signal processing result, any signal processing device may adopt a configuration in which the downmix signal is obtained in consideration of the relationship between the left channel input sound signal and the right channel input sound signal. A downmix device used in a stage preceding these coding devices and signal processing devices may also adopt a configuration in which the downmix signal is obtained in consideration of the relationship between the left channel input sound signal and the right channel input sound signal.
  • the sound signal coding device 105 of the fourth embodiment includes a left-right relationship information estimation unit 183, a downmix unit 112, and a coding unit 195.
  • the sound signal coding device 105 of the fourth embodiment performs the processes of step S183, step S112, and step S195 illustrated in FIG. 19 for each frame.
  • the sound signal coding device 105 of the fourth embodiment will be described with reference to the description of the second embodiment as appropriate.
• The left-right relationship information estimation unit 183 is the same as the left-right relationship information estimation unit 183 of the second embodiment, and obtains and outputs, from the input left channel input sound signal and right channel input sound signal, the left-right correlation coefficient γ, which is the correlation coefficient between the left channel input sound signal and the right channel input sound signal, and the preceding channel information, which is information indicating which of the left channel input sound signal and the right channel input sound signal precedes (step S183). The downmix unit 112 is the same as the downmix unit 112 of the second embodiment, and obtains and outputs the downmix signal by weighting and averaging the left channel input sound signal and the right channel input sound signal so that, the larger the left-right correlation coefficient γ, the more the input sound signal of the preceding channel, of the left channel input sound signal and the right channel input sound signal, is contained in the downmix signal (step S112).
• At least the downmix signal output by the downmix unit 112 is input to the coding unit 195.
  • the coding unit 195 at least encodes the input downmix signal to obtain a sound signal code and outputs it (step S195).
  • the coding unit 195 may also encode the input sound signal of the left channel and the input sound signal of the right channel, and may include the code obtained by this coding in the sound signal code and output it. In this case, as shown by the broken line in FIG. 18, the input sound signal of the left channel and the input sound signal of the right channel are also input to the coding unit 195.
  • the sound signal processing device 305 of the fourth embodiment includes a left-right relationship information estimation unit 183, a downmix unit 112, and a signal processing unit 315.
  • the sound signal processing device 305 of the fourth embodiment performs the processes of step S183, step S112, and step S315 illustrated in FIG. 21 for each frame.
• At least the downmix signal output by the downmix unit 112 is input to the signal processing unit 315.
  • the signal processing unit 315 at least performs signal processing on the input downmix signal to obtain a signal processing result and output it (step S315).
• The signal processing unit 315 may also perform signal processing on the left channel input sound signal and the right channel input sound signal to obtain the signal processing result. In this case, as shown by the broken lines in FIG. 20, the left channel input sound signal and the right channel input sound signal are also input to the signal processing unit 315.
• For example, the signal processing unit 315 may perform, on the input sound signal of each channel, signal processing that uses the downmix signal to obtain the output sound signal of each channel as the signal processing result, or it may perform this signal processing on the left channel decoded sound signal and the right channel decoded sound signal obtained by decoding the code CS' obtained by the stereo coding unit 174 of the third embodiment with a decoding device provided with a decoding unit corresponding to the stereo coding unit 174. That is, the left channel input sound signal and the right channel input sound signal input to the sound signal processing device 305 may be digital sound signals or acoustic signals obtained by picking up sound with each of two microphones and AD-converting it, may be a left channel decoded sound signal and a right channel decoded sound signal obtained by decoding a code, or may be sound signals obtained in any way as long as they are stereo two-channel sound signals.
  • When the left channel input sound signal and the right channel input sound signal input to the sound signal processing device 305 are, for example, a left channel decoded sound signal and a right channel decoded sound signal obtained by decoding a code with another device, one or both of the left-right correlation coefficient γ and the preceding channel information, the same as those obtained by the left-right relationship information estimation unit 183, may already have been obtained by that other device.
  • In that case, one or both of the left-right correlation coefficient γ and the preceding channel information obtained by the other device may be input to the sound signal processing device 305, and the left-right relationship information estimation unit 183 may obtain only whichever of the left-right correlation coefficient γ and the preceding channel information has not been input to the sound signal processing device 305; when both the left-right correlation coefficient γ and the preceding channel information are input to the sound signal processing device 305, the sound signal processing device 305 does not include the left-right relationship information estimation unit 183 and does not need to perform step S183.
  • That is, as shown by the two-dot chain line in FIG. 20, the sound signal processing device 305 may include a left-right relationship information acquisition unit 185, and the left-right relationship information acquisition unit 185 may obtain and output the left-right correlation coefficient γ, which is the correlation coefficient between the left channel input sound signal and the right channel input sound signal, and the preceding channel information, which is information indicating which of the left channel input sound signal and the right channel input sound signal precedes (step S185). It can be said that the left-right relationship information estimation unit 183 and step S183 of each device described above also fall within the category of the left-right relationship information acquisition unit 185 and step S185.
  • The sound signal downmix device 405 of the fourth embodiment includes the left-right relationship information acquisition unit 185 and the downmix unit 112.
  • The sound signal downmix device 405 performs the processes of steps S185 and S112 illustrated in FIG. 23 for each frame.
  • The sound signal downmix device 405 will be described with reference to the description of the second embodiment as appropriate. As with the sound signal processing device 305, the left channel input sound signal and the right channel input sound signal input to the sound signal downmix device 405 may be digital speech or acoustic signals obtained by picking up sound with each of two microphones and AD-converting it, may be a left channel decoded sound signal and a right channel decoded sound signal obtained by decoding a code, or may be sound signals obtained in any way as long as they are stereo two-channel sound signals.
  • [Left-right relationship information acquisition unit 185] The left-right relationship information acquisition unit 185 obtains and outputs the left-right correlation coefficient γ, which is the correlation coefficient between the left channel input sound signal and the right channel input sound signal, and the preceding channel information, which is information indicating which of the left channel input sound signal and the right channel input sound signal precedes (step S185).
  • When the left-right correlation coefficient γ and the preceding channel information are input to the sound signal downmix device 405 from a separate device, as shown by the dash-dot line in FIG. 22, the left-right relationship information acquisition unit 185 obtains the input left-right correlation coefficient γ and preceding channel information and outputs them to the downmix unit 112.
  • When neither the left-right correlation coefficient γ nor the preceding channel information is input from a separate device, the left-right relationship information acquisition unit 185 includes a left-right relationship information estimation unit 183, as shown by the broken line in FIG. 22, and the left-right relationship information estimation unit 183 obtains the left-right correlation coefficient γ and the preceding channel information from the left channel input sound signal and the right channel input sound signal, in the same manner as the left-right relationship information estimation unit 183 of the second embodiment, and outputs them to the downmix unit 112.
  • When only one of the left-right correlation coefficient γ and the preceding channel information is input from another device, the left-right relationship information acquisition unit 185 likewise includes the left-right relationship information estimation unit 183, as shown by the broken line in FIG. 22. In this case the left-right relationship information estimation unit 183 obtains whichever of the left-right correlation coefficient γ and the preceding channel information has not been obtained by the other device from the left channel input sound signal and the right channel input sound signal, in the same manner as the left-right relationship information estimation unit 183 of the second embodiment, and outputs it to the downmix unit 112, while the left-right relationship information acquisition unit 185 outputs whichever of the left-right correlation coefficient γ and the preceding channel information has been input to the sound signal downmix device 405 from the other device, as shown by the dash-dot line in FIG. 22, to the downmix unit 112.
  • The downmix unit 112 is the same as the downmix unit 112 of the second embodiment: based on the preceding channel information and the left-right correlation coefficient γ acquired by the left-right relationship information acquisition unit 185, it obtains and outputs the downmix signal by weighting and averaging the left channel input sound signal and the right channel input sound signal so that, of the left channel input sound signal and the right channel input sound signal, the input sound signal of the preceding channel is included in the downmix signal in a larger proportion as the left-right correlation coefficient γ is larger (step S112).
  • That is, with the sample number denoted t, the left channel input sound signal xL(t), the right channel input sound signal xR(t), the downmix signal xM(t), and the left-right correlation coefficient γ, the downmix unit 112 obtains, for each sample number t, xM(t) = ((1+γ)/2)×xL(t) + ((1-γ)/2)×xR(t) when the preceding channel information indicates that the left channel precedes, xM(t) = ((1-γ)/2)×xL(t) + ((1+γ)/2)×xR(t) when the preceding channel information indicates that the right channel precedes, and xM(t) = (xL(t) + xR(t))/2 when the preceding channel information indicates that neither channel precedes.
  • Each part of each coding device, each decoding device, the sound signal coding device, the sound signal processing device, and the sound signal downmix device described above may be realized by a computer. In this case, the processing content of the functions that each device should have is described by a program. By loading this program into the storage unit 1020 of the computer 1000 shown in FIG. 24 and executing it with the arithmetic processing unit 1010, the input unit 1030, the output unit 1040, and so on, the various processing functions of each of the above devices are realized on the computer.
  • The program describing this processing content can be recorded on a computer-readable recording medium. The computer-readable recording medium is, for example, a non-transitory recording medium, specifically a magnetic recording device, an optical disk, or the like.
  • This program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or CD-ROM on which the program is recorded. Further, the program may be stored in a storage device of a server computer and distributed by transferring it from the server computer to another computer via a network.
  • A computer that executes such a program first stores, for example, the program recorded on the portable recording medium or the program transferred from the server computer in the auxiliary recording unit 1050, which is its own non-transitory storage device. Then, at the time of executing the processing, the computer reads the program stored in the auxiliary recording unit 1050, its own non-transitory storage device, into the storage unit 1020 and executes the processing according to the read program. As another form of executing the program, the computer may read the program directly from the portable recording medium into the storage unit 1020 and execute the processing according to the program, or may sequentially execute processing according to the received program each time the program is transferred from the server computer to this computer.
  • The above-described processing may also be executed by a so-called ASP (Application Service Provider) type service, in which the processing functions are realized only through execution instructions and acquisition of results, without transferring the program from the server computer to this computer.
  • The program in this embodiment includes information that is used for processing by a computer and is equivalent to a program (such as data that is not a direct command to the computer but has the property of defining the processing of the computer).
  • In this form, the present device is configured by executing a predetermined program on a computer, but at least a part of these processing contents may be realized by hardware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Stereophonic System (AREA)

Abstract

This sound signal downmixing device obtains a downmix signal, which is a signal obtained by mixing a left channel input sound signal and a right channel input sound signal. The device is provided with: a left-right relationship information acquisition unit 185, which acquires preceding channel information, which is information indicating which of the left channel input sound signal and the right channel input sound signal precedes, and a left-right correlation coefficient, which is the correlation coefficient between the left channel input sound signal and the right channel input sound signal; and a downmix unit 112, which obtains the downmix signal by weighted-averaging the left channel input sound signal and the right channel input sound signal, on the basis of the preceding channel information and the left-right correlation coefficient, such that the input sound signal of the preceding channel out of the left channel input sound signal and the right channel input sound signal is included in a larger proportion as the left-right correlation coefficient is larger.

Description

Sound signal downmixing method, sound signal coding method, sound signal downmixing device, sound signal coding device, program, and recording medium
The present invention relates to a technique for obtaining a monaural sound signal from a two-channel sound signal in order to code a sound signal in monaural, code a sound signal by using monaural coding and stereo coding in combination, perform monaural signal processing on a sound signal, or perform signal processing that uses a monaural sound signal on a stereo sound signal.
As a technique for obtaining a monaural sound signal from a two-channel sound signal and embedded-coding/decoding the two-channel sound signal and the monaural sound signal, there is the technique of Patent Document 1. Patent Document 1 discloses a technique in which a monaural signal is obtained by averaging the input left channel sound signal and the input right channel sound signal for each corresponding sample, the monaural signal is coded (monaural coding) to obtain a monaural code, the monaural code is decoded (monaural decoding) to obtain a monaural locally decoded signal, and, for each of the left channel and the right channel, the difference (prediction residual signal) between the input sound signal and a prediction signal obtained from the monaural locally decoded signal is coded. In the technique of Patent Document 1, for each channel, a signal obtained by giving a delay and an amplitude ratio to the monaural locally decoded signal is used as the prediction signal, and either a prediction signal having the delay and amplitude ratio that minimize the error between the input sound signal and the prediction signal is selected, or a prediction signal having the delay difference and amplitude ratio that maximize the cross-correlation between the input sound signal and the monaural locally decoded signal is used; the prediction signal is subtracted from the input sound signal to obtain the prediction residual signal, and the prediction residual signal is made the target of coding/decoding, thereby suppressing degradation of the sound quality of the decoded sound signal of each channel.
WO2006-070751
In the technique of Patent Document 1, the coding efficiency of each channel can be improved by optimizing the delay and the amplitude ratio given to the monaural locally decoded signal when obtaining the prediction signal. However, in the technique of Patent Document 1, the monaural locally decoded signal is obtained by coding and decoding a monaural signal that is itself obtained by averaging the sound signal of the left channel and the sound signal of the right channel. That is, the technique of Patent Document 1 has the problem that no attempt is made to obtain, from a two-channel sound signal, a monaural signal that is useful for signal processing such as coding processing.
An object of the present invention is to provide a technique for obtaining, from a two-channel sound signal, a monaural signal that is useful for signal processing such as coding processing.
One aspect of the present invention is a sound signal downmix method for obtaining a downmix signal, which is a signal obtained by mixing a left channel input sound signal and a right channel input sound signal, the method including: a left-right relationship information acquisition step of obtaining preceding channel information, which is information indicating which of the left channel input sound signal and the right channel input sound signal precedes, and a left-right correlation coefficient, which is the correlation coefficient between the left channel input sound signal and the right channel input sound signal; and a downmix step of obtaining the downmix signal by weighting and averaging the left channel input sound signal and the right channel input sound signal, on the basis of the preceding channel information and the left-right correlation coefficient, so that, of the left channel input sound signal and the right channel input sound signal, the input sound signal of the preceding channel is included in a larger proportion as the left-right correlation coefficient is larger.
One aspect of the present invention is the above sound signal downmix method in which, with the sample number denoted t, the left channel input sound signal xL(t), the right channel input sound signal xR(t), the downmix signal xM(t), and the left-right correlation coefficient γ, the downmix step obtains the downmix signal by xM(t) = ((1+γ)/2)×xL(t) + ((1-γ)/2)×xR(t) for each sample number t when the preceding channel information indicates that the left channel precedes, by xM(t) = ((1-γ)/2)×xL(t) + ((1+γ)/2)×xR(t) for each sample number t when the preceding channel information indicates that the right channel precedes, and by xM(t) = (xL(t) + xR(t))/2 for each sample number t when the preceding channel information indicates that neither channel precedes.
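As a rough illustration only, the weighted averaging of this downmix step might be written as follows in Python; the function name, the string-valued preceding-channel argument, and the assumption that the left-right correlation coefficient γ lies between 0 and 1 are assumptions made for this sketch and are not part of the claimed method.

    import numpy as np

    def weighted_downmix(x_l: np.ndarray, x_r: np.ndarray, gamma: float, preceding: str) -> np.ndarray:
        # Weight the preceding channel by (1 + gamma) / 2 and the other channel by (1 - gamma) / 2;
        # with gamma = 0 this reduces to the plain average (x_L(t) + x_R(t)) / 2.
        if preceding == "left":
            w_l, w_r = (1.0 + gamma) / 2.0, (1.0 - gamma) / 2.0
        elif preceding == "right":
            w_l, w_r = (1.0 - gamma) / 2.0, (1.0 + gamma) / 2.0
        else:  # neither channel precedes
            w_l = w_r = 0.5
        return w_l * x_l + w_r * x_r

For example, with γ = 0.8 and the left channel preceding, the downmix signal is 0.9×xL(t) + 0.1×xR(t), while γ = 0 gives the plain average regardless of which channel precedes.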
One aspect of the present invention includes the above sound signal downmix method as a sound signal downmix step, and further includes: a monaural coding step of coding the downmix signal obtained by the downmix step to obtain a monaural code; and a stereo coding step of coding the left channel input sound signal and the right channel input sound signal to obtain a stereo code.
According to the present invention, a monaural signal useful for signal processing such as coding processing can be obtained from a two-channel sound signal.
FIG. 1 is a block diagram showing an example of the coding device of the first reference embodiment and the second embodiment.
FIG. 2 is a flow chart showing an example of the processing of the coding device of the first reference embodiment.
FIG. 3 is a block diagram showing an example of the decoding device of the first reference embodiment.
FIG. 4 is a flow chart showing an example of the processing of the decoding device of the first reference embodiment.
FIG. 5 is a flow chart showing an example of the processing of the left channel subtraction gain estimation unit and the right channel subtraction gain estimation unit of the first reference embodiment.
FIG. 6 is a flow chart showing an example of the processing of the left channel subtraction gain estimation unit and the right channel subtraction gain estimation unit of the first reference embodiment.
FIG. 7 is a flow chart showing an example of the processing of the left channel subtraction gain decoding unit and the right channel subtraction gain decoding unit of the first reference embodiment.
FIG. 8 is a flow chart showing an example of the processing of the left channel subtraction gain estimation unit and the right channel subtraction gain estimation unit of the first reference embodiment.
FIG. 9 is a flow chart showing an example of the processing of the left channel subtraction gain estimation unit and the right channel subtraction gain estimation unit of the first reference embodiment.
FIG. 10 is a block diagram showing an example of the coding device of the second reference embodiment and the first embodiment.
FIG. 11 is a flow chart showing an example of the processing of the coding device of the second reference embodiment.
FIG. 12 is a block diagram showing an example of the decoding device of the second reference embodiment.
FIG. 13 is a flow chart showing an example of the processing of the decoding device of the second reference embodiment.
FIG. 14 is a flow chart showing an example of the processing of the coding device of the first embodiment.
FIG. 15 is a flow chart showing an example of the processing of the coding device of the second embodiment.
FIG. 16 is a block diagram showing an example of the coding device of the third embodiment.
FIG. 17 is a flow chart showing an example of the processing of the coding device of the third embodiment.
FIG. 18 is a block diagram showing an example of the sound signal coding device of the fourth embodiment.
FIG. 19 is a flow chart showing an example of the processing of the sound signal coding device of the fourth embodiment.
FIG. 20 is a block diagram showing an example of the sound signal processing device of the fourth embodiment.
FIG. 21 is a flow chart showing an example of the processing of the sound signal processing device of the fourth embodiment.
FIG. 22 is a block diagram showing an example of the sound signal downmix device of the fourth embodiment.
FIG. 23 is a flow chart showing an example of the processing of the sound signal downmix device of the fourth embodiment.
FIG. 24 is a diagram showing an example of the functional configuration of a computer that realizes each device in the embodiments of the present invention.
First, the notation used in the specification will be described. A superscript "^" for a character x, as in ^x, should properly be written directly above the "x"; however, due to restrictions on the notation available in the specification, it may be written as ^x.
<First reference embodiment>
Before describing the embodiments of the invention, a coding device and a decoding device of the forms on which the invention of the second embodiment and the invention of the first embodiment are based will be described as a first reference embodiment and a second reference embodiment. In the specification and the claims, the coding device may also be referred to as a sound signal coding device, the coding method as a sound signal coding method, the decoding device as a sound signal decoding device, and the decoding method as a sound signal decoding method.
<<Coding device 100>>
As shown in FIG. 1, the coding device 100 of the first reference embodiment includes a downmix unit 110, a left channel subtraction gain estimation unit 120, a left channel signal subtraction unit 130, a right channel subtraction gain estimation unit 140, a right channel signal subtraction unit 150, a monaural coding unit 160, and a stereo coding unit 170. The coding device 100 codes the input two-channel stereo time-domain sound signal in units of frames of a predetermined time length, for example 20 ms, and obtains and outputs a monaural code CM, a left channel subtraction gain code Cα, a right channel subtraction gain code Cβ, and a stereo code CS, which will be described later. The two-channel stereo time-domain sound signal input to the coding device is, for example, a digital speech or acoustic signal obtained by picking up a sound such as speech or music with each of two microphones and AD-converting it, and consists of a left channel input sound signal and a right channel input sound signal. The codes output by the coding device, that is, the monaural code CM, the left channel subtraction gain code Cα, the right channel subtraction gain code Cβ, and the stereo code CS, are input to the decoding device. The coding device 100 performs the processes of steps S110 to S170 illustrated in FIG. 2 for each frame.
[Downmix unit 110]
The left channel input sound signal input to the coding device 100 and the right channel input sound signal input to the coding device 100 are input to the downmix unit 110. The downmix unit 110 obtains and outputs, from the input left channel input sound signal and right channel input sound signal, a downmix signal that is a signal obtained by mixing the left channel input sound signal and the right channel input sound signal (step S110).
For example, if the number of samples per frame is T, the left channel input sound signals xL(1), xL(2), ..., xL(T) and the right channel input sound signals xR(1), xR(2), ..., xR(T), input to the coding device 100 in units of frames, are input to the downmix unit 110. Here, T is a positive integer; for example, if the frame length is 20 ms and the sampling frequency is 32 kHz, T is 640. The downmix unit 110 obtains and outputs, as the downmix signal xM(1), xM(2), ..., xM(T), the sequence of the averages of the sample values of the input left channel input sound signal and right channel input sound signal for each corresponding sample. That is, with t denoting each sample number, xM(t) = (xL(t) + xR(t))/2.
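As a minimal sketch of step S110, assuming one frame of each channel is held in a NumPy array of length T (the function name is illustrative only):

    import numpy as np

    def downmix_unit_110(x_l: np.ndarray, x_r: np.ndarray) -> np.ndarray:
        # x_M(t) = (x_L(t) + x_R(t)) / 2 for every sample t of the frame
        return 0.5 * (x_l + x_r)

    # Example with a 20 ms frame at 32 kHz (T = 640)
    T = 640
    x_l = np.zeros(T)  # placeholder left channel frame
    x_r = np.zeros(T)  # placeholder right channel frame
    x_m = downmix_unit_110(x_l, x_r)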
[Left channel subtraction gain estimation unit 120]
The left channel input sound signals xL(1), xL(2), ..., xL(T) input to the coding device 100 and the downmix signals xM(1), xM(2), ..., xM(T) output by the downmix unit 110 are input to the left channel subtraction gain estimation unit 120. The left channel subtraction gain estimation unit 120 obtains and outputs, from the input left channel input sound signal and downmix signal, a left channel subtraction gain α and a left channel subtraction gain code Cα, which is a code representing the left channel subtraction gain α (step S120). The left channel subtraction gain estimation unit 120 obtains the left channel subtraction gain α and the left channel subtraction gain code Cα either by a well-known method, such as the method of obtaining the amplitude ratio g and the method of coding that amplitude ratio g exemplified in Patent Document 1, or by a newly devised method based on the principle of minimizing the quantization error. The principle of minimizing the quantization error and the method based on this principle will be described later.
[Left channel signal subtraction unit 130]
The left channel input sound signals xL(1), xL(2), ..., xL(T) input to the coding device 100, the downmix signals xM(1), xM(2), ..., xM(T) output by the downmix unit 110, and the left channel subtraction gain α output by the left channel subtraction gain estimation unit 120 are input to the left channel signal subtraction unit 130. For each corresponding sample t, the left channel signal subtraction unit 130 obtains and outputs, as the left channel difference signal yL(1), yL(2), ..., yL(T), the sequence of the values xL(t) - α×xM(t) obtained by subtracting the value α×xM(t), which is the product of the sample value xM(t) of the downmix signal and the left channel subtraction gain α, from the sample value xL(t) of the left channel input sound signal (step S130). That is, yL(t) = xL(t) - α×xM(t). In the coding device 100, in order not to require the delay and the amount of computation needed to obtain a locally decoded signal, the left channel signal subtraction unit 130 preferably uses the unquantized downmix signal xM(t) obtained by the downmix unit 110 rather than the quantized downmix signal, which is the locally decoded signal of the monaural coding. However, when the left channel subtraction gain estimation unit 120 obtains the left channel subtraction gain α not by the method based on the principle of minimizing the quantization error but by a well-known method such as that exemplified in Patent Document 1, a means for obtaining a locally decoded signal corresponding to the monaural code CM may be provided after the monaural coding unit 160 of the coding device 100 or within the monaural coding unit 160, and the left channel signal subtraction unit 130 may obtain the left channel difference signal using the quantized downmix signal ^xM(1), ^xM(2), ..., ^xM(T), which is the locally decoded signal of the monaural coding, instead of the downmix signal xM(1), xM(2), ..., xM(T), as in conventional coding devices such as that of Patent Document 1.
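A minimal sketch of step S130, again assuming one frame per call as NumPy arrays (the function name is illustrative); the right channel signal subtraction of step S150 described next is the same with β and the right channel input sound signal in place of α and the left channel input sound signal:

    import numpy as np

    def left_channel_signal_subtraction(x_l: np.ndarray, x_m: np.ndarray, alpha: float) -> np.ndarray:
        # y_L(t) = x_L(t) - alpha * x_M(t) for every sample t of the frame,
        # using the unquantized downmix signal x_M as described above
        return x_l - alpha * x_m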
[Right channel subtraction gain estimation unit 140]
The right channel input sound signals xR(1), xR(2), ..., xR(T) input to the coding device 100 and the downmix signals xM(1), xM(2), ..., xM(T) output by the downmix unit 110 are input to the right channel subtraction gain estimation unit 140. The right channel subtraction gain estimation unit 140 obtains and outputs, from the input right channel input sound signal and downmix signal, a right channel subtraction gain β and a right channel subtraction gain code Cβ, which is a code representing the right channel subtraction gain β (step S140). The right channel subtraction gain estimation unit 140 obtains the right channel subtraction gain β and the right channel subtraction gain code Cβ either by a well-known method, such as the method of obtaining the amplitude ratio g and the method of coding that amplitude ratio g exemplified in Patent Document 1, or by a newly devised method based on the principle of minimizing the quantization error. The principle of minimizing the quantization error and the method based on this principle will be described later.
[Right channel signal subtraction unit 150]
The right channel input sound signals xR(1), xR(2), ..., xR(T) input to the coding device 100, the downmix signals xM(1), xM(2), ..., xM(T) output by the downmix unit 110, and the right channel subtraction gain β output by the right channel subtraction gain estimation unit 140 are input to the right channel signal subtraction unit 150. For each corresponding sample t, the right channel signal subtraction unit 150 obtains and outputs, as the right channel difference signal yR(1), yR(2), ..., yR(T), the sequence of the values xR(t) - β×xM(t) obtained by subtracting the value β×xM(t), which is the product of the sample value xM(t) of the downmix signal and the right channel subtraction gain β, from the sample value xR(t) of the right channel input sound signal (step S150). That is, yR(t) = xR(t) - β×xM(t). As with the left channel signal subtraction unit 130, in order not to require the delay and the amount of computation needed to obtain a locally decoded signal in the coding device 100, the right channel signal subtraction unit 150 preferably uses the unquantized downmix signal xM(t) obtained by the downmix unit 110 rather than the quantized downmix signal, which is the locally decoded signal of the monaural coding. However, when the right channel subtraction gain estimation unit 140 obtains the right channel subtraction gain β not by the method based on the principle of minimizing the quantization error but by a well-known method such as that exemplified in Patent Document 1, a means for obtaining a locally decoded signal corresponding to the monaural code CM may be provided after the monaural coding unit 160 of the coding device 100 or within the monaural coding unit 160, and, as with the left channel signal subtraction unit 130, the right channel signal subtraction unit 150 may obtain the right channel difference signal using the quantized downmix signal ^xM(1), ^xM(2), ..., ^xM(T), which is the locally decoded signal of the monaural coding, instead of the downmix signal xM(1), xM(2), ..., xM(T), as in conventional coding devices such as that of Patent Document 1.
[Monaural coding unit 160]
The downmix signals xM(1), xM(2), ..., xM(T) output by the downmix unit 110 are input to the monaural coding unit 160. The monaural coding unit 160 codes the input downmix signal with bM bits by a predetermined coding method to obtain and output a monaural code CM (step S160). That is, it obtains and outputs the bM-bit monaural code CM from the input T-sample downmix signal xM(1), xM(2), ..., xM(T). Any coding method may be used; for example, a coding method such as that of the 3GPP EVS standard may be used.
[Stereo coding unit 170]
The left channel difference signals yL(1), yL(2), ..., yL(T) output by the left channel signal subtraction unit 130 and the right channel difference signals yR(1), yR(2), ..., yR(T) output by the right channel signal subtraction unit 150 are input to the stereo coding unit 170. The stereo coding unit 170 codes the input left channel difference signal and right channel difference signal with a total of bS bits by a predetermined coding method to obtain and output a stereo code CS (step S170). That is, it obtains and outputs the stereo code CS of bS bits in total from the input T-sample left channel difference signal yL(1), yL(2), ..., yL(T) and the input T-sample right channel difference signal yR(1), yR(2), ..., yR(T). Any coding method may be used; for example, a stereo coding method corresponding to the stereo decoding method of the MPEG-4 AAC standard may be used, or a method that codes the input left channel difference signal and right channel difference signal independently of each other may be used, and all the codes obtained by the coding, taken together, may be used as the stereo code CS.
When the input left channel difference signal and right channel difference signal are coded independently of each other, the stereo coding unit 170 codes the left channel difference signal with bL bits and the right channel difference signal with bR bits. That is, the stereo coding unit 170 obtains a bL-bit left channel difference code CL from the input T-sample left channel difference signal yL(1), yL(2), ..., yL(T), obtains a bR-bit right channel difference code CR from the input T-sample right channel difference signal yR(1), yR(2), ..., yR(T), and outputs the combination of the left channel difference code CL and the right channel difference code CR as the stereo code CS. Here, the sum of bL bits and bR bits is bS bits.
When the input left channel difference signal and right channel difference signal are coded together within one coding method, the stereo coding unit 170 codes the left channel difference signal and the right channel difference signal with a total of bS bits. That is, the stereo coding unit 170 obtains and outputs the bS-bit stereo code CS from the input T-sample left channel difference signal yL(1), yL(2), ..., yL(T) and the input T-sample right channel difference signal yR(1), yR(2), ..., yR(T).
<<Decoding device 200>>
As shown in FIG. 3, the decoding device 200 of the first reference embodiment includes a monaural decoding unit 210, a stereo decoding unit 220, a left channel subtraction gain decoding unit 230, a left channel signal addition unit 240, a right channel subtraction gain decoding unit 250, and a right channel signal addition unit 260. The decoding device 200 decodes the input monaural code CM, left channel subtraction gain code Cα, right channel subtraction gain code Cβ, and stereo code CS in units of frames of the same time length as in the corresponding coding device 100, and obtains and outputs two-channel stereo time-domain decoded sound signals in units of frames (a left channel decoded sound signal and a right channel decoded sound signal, described later). As shown by the broken line in FIG. 3, the decoding device 200 may also output a monaural time-domain decoded sound signal (a monaural decoded sound signal, described later). The decoded sound signals output by the decoding device 200 are, for example, DA-converted and reproduced by speakers so that they can be listened to. The decoding device 200 performs the processes of steps S210 to S260 illustrated in FIG. 4 for each frame.
[Monaural decoding unit 210]
The monaural code CM input to the decoding device 200 is input to the monaural decoding unit 210. The monaural decoding unit 210 decodes the input monaural code CM by a predetermined decoding method to obtain and output a monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T) (step S210). As the predetermined decoding method, a decoding method corresponding to the coding method used in the monaural coding unit 160 of the corresponding coding device 100 is used. The number of bits of the monaural code CM is bM.
[Stereo decoding unit 220]
The stereo code CS input to the decoding device 200 is input to the stereo decoding unit 220. The stereo decoding unit 220 decodes the input stereo code CS by a predetermined decoding method to obtain and output a left channel decoded difference signal ^yL(1), ^yL(2), ..., ^yL(T) and a right channel decoded difference signal ^yR(1), ^yR(2), ..., ^yR(T) (step S220). As the predetermined decoding method, a decoding method corresponding to the coding method used in the stereo coding unit 170 of the corresponding coding device 100 is used. The total number of bits of the stereo code CS is bS.
[Left channel subtraction gain decoding unit 230]
The left channel subtraction gain code Cα input to the decoding device 200 is input to the left channel subtraction gain decoding unit 230. The left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code Cα to obtain and output the left channel subtraction gain α (step S230). The left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code Cα by a decoding method corresponding to the method used in the left channel subtraction gain estimation unit 120 of the corresponding coding device 100 to obtain the left channel subtraction gain α. How the left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code Cα to obtain the left channel subtraction gain α when the left channel subtraction gain estimation unit 120 of the corresponding coding device 100 has obtained the left channel subtraction gain α and the left channel subtraction gain code Cα by the method based on the principle of minimizing the quantization error will be described later.
[Left channel signal addition unit 240]
The monaural decoded sound signals ^xM(1), ^xM(2), ..., ^xM(T) output by the monaural decoding unit 210, the left channel decoded difference signals ^yL(1), ^yL(2), ..., ^yL(T) output by the stereo decoding unit 220, and the left channel subtraction gain α output by the left channel subtraction gain decoding unit 230 are input to the left channel signal addition unit 240. For each corresponding sample t, the left channel signal addition unit 240 obtains and outputs, as the left channel decoded sound signal ^xL(1), ^xL(2), ..., ^xL(T), the sequence of the values ^yL(t) + α×^xM(t) obtained by adding the sample value ^yL(t) of the left channel decoded difference signal and the value α×^xM(t), which is the product of the sample value ^xM(t) of the monaural decoded sound signal and the left channel subtraction gain α (step S240). That is, ^xL(t) = ^yL(t) + α×^xM(t).
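A minimal sketch of step S240 on the decoding side, under the same frame-array assumption as the earlier sketches (the function name is illustrative); the right channel signal addition of step S260 described later is analogous with β in place of α:

    import numpy as np

    def left_channel_signal_addition(y_l_dec: np.ndarray, x_m_dec: np.ndarray, alpha: float) -> np.ndarray:
        # ^x_L(t) = ^y_L(t) + alpha * ^x_M(t) for every sample t of the frame
        return y_l_dec + alpha * x_m_dec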
[Right channel subtraction gain decoding unit 250]
The right channel subtraction gain code Cβ input to the decoding device 200 is input to the right channel subtraction gain decoding unit 250. The right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code Cβ to obtain and output the right channel subtraction gain β (step S250). The right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code Cβ by a decoding method corresponding to the method used in the right channel subtraction gain estimation unit 140 of the corresponding coding device 100 to obtain the right channel subtraction gain β. How the right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code Cβ to obtain the right channel subtraction gain β when the right channel subtraction gain estimation unit 140 of the corresponding coding device 100 has obtained the right channel subtraction gain β and the right channel subtraction gain code Cβ by the method based on the principle of minimizing the quantization error will be described later.
[Right channel signal addition unit 260]
The monaural decoded sound signals ^xM(1), ^xM(2), ..., ^xM(T) output by the monaural decoding unit 210, the right channel decoded difference signals ^yR(1), ^yR(2), ..., ^yR(T) output by the stereo decoding unit 220, and the right channel subtraction gain β output by the right channel subtraction gain decoding unit 250 are input to the right channel signal addition unit 260. For each corresponding sample t, the right channel signal addition unit 260 obtains and outputs, as the right channel decoded sound signal ^xR(1), ^xR(2), ..., ^xR(T), the sequence of the values ^yR(t) + β×^xM(t) obtained by adding the sample value ^yR(t) of the right channel decoded difference signal and the value β×^xM(t), which is the product of the sample value ^xM(t) of the monaural decoded sound signal and the right channel subtraction gain β (step S260). That is, ^xR(t) = ^yR(t) + β×^xM(t).
[Principle of minimizing the quantization error]
The principle of minimizing the quantization error is described below. When the left channel difference signal and the right channel difference signal input to the stereo coding unit 170 are coded together within one coding method, the number of bits bL used for coding the left channel difference signal and the number of bits bR used for coding the right channel difference signal may not be explicitly determined; in the following, however, the description assumes that the number of bits used for coding the left channel difference signal is bL and the number of bits used for coding the right channel difference signal is bR. Also, the left channel is mainly described below, but the same applies to the right channel.
The coding device 100 described above codes, with bL bits, the left channel difference signal yL(1), yL(2), ..., yL(T), which consists of the values obtained by subtracting, from each sample value of the left channel input sound signal xL(1), xL(2), ..., xL(T), the value obtained by multiplying each sample value of the downmix signal xM(1), xM(2), ..., xM(T) by the left channel subtraction gain α, and codes the downmix signal xM(1), xM(2), ..., xM(T) with bM bits. The decoding device 200 described above decodes, from the bL-bit code, the left channel decoded difference signal ^yL(1), ^yL(2), ..., ^yL(T) (hereinafter also referred to as the "quantized left channel difference signal") and, from the bM-bit code, the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T) (hereinafter also referred to as the "quantized downmix signal"), and then obtains the left channel decoded sound signal ^xL(1), ^xL(2), ..., ^xL(T), which is the decoded sound signal of the left channel, by adding the value obtained by multiplying each sample value of the quantized downmix signal obtained by the decoding by the left channel subtraction gain α to each sample value of the quantized left channel difference signal obtained by the decoding. The coding device 100 and the decoding device 200 should be designed so that the energy of the quantization error contained in the left channel decoded sound signal obtained by the above processing is small.
In many cases, the energy of the quantization error contained in a decoded signal obtained by coding and decoding an input signal (hereinafter referred to, for convenience, as the "quantization error caused by coding") is roughly proportional to the energy of the input signal and tends to decrease exponentially with the number of bits per sample used for the coding. Therefore, the average energy per sample of the quantization error caused by coding the left channel difference signal can be estimated, using a positive number σ_L^2, as in equation (1-0-1) below, and the average energy per sample of the quantization error caused by coding the downmix signal can be estimated, using a positive number σ_M^2, as in equation (1-0-2) below.

[Equation (1-0-1)]

[Equation (1-0-2)]
Now suppose that the left channel input sound signal xL(1), xL(2), ..., xL(T) and the downmix signal xM(1), xM(2), ..., xM(T) have sample values so close to each other that they can be regarded as the same sequence. This condition corresponds, for example, to a case where the left channel input sound signal xL(1), xL(2), ..., xL(T) and the right channel input sound signal xR(1), xR(2), ..., xR(T) are obtained by picking up, in an environment with little background noise or reverberation, a sound emitted by a sound source equidistant from the two microphones. Under this condition, each sample value of the left channel difference signal yL(1), yL(2), ..., yL(T) is equivalent to the value obtained by multiplying each sample value of the downmix signal xM(1), xM(2), ..., xM(T) by (1-α). Therefore, since the energy of the left channel difference signal can be expressed as (1-α)^2 times the energy of the downmix signal, the above σ_L^2 can be replaced with (1-α)^2 × σ_M^2 using the above σ_M^2, and the average energy per sample of the quantization error caused by coding the left channel difference signal can be estimated as in equation (1-1) below.

[Equation (1-1)]

Also, the average energy per sample of the quantization error contained in the signal that the decoding device adds to the quantized left channel difference signal, that is, the quantization error contained in the sequence of values obtained by multiplying each sample value of the quantized downmix signal obtained by the decoding by the left channel subtraction gain α, can be estimated as in equation (1-2) below.

[Equation (1-2)]
Assuming that the quantization error caused by coding the left channel difference signal and the quantization error contained in the sequence of values obtained by multiplying each sample value of the quantized downmix signal obtained by decoding by the left channel subtraction gain α are uncorrelated with each other, the average energy per sample of the quantization error contained in the decoded sound signal of the left channel is estimated by the sum of equations (1-1) and (1-2). The left channel subtraction gain α that minimizes the energy of the quantization error contained in the decoded sound signal of the left channel is then obtained as in equation (1-3) below.

α = 2^(-2bL/T) / ( 2^(-2bL/T) + 2^(-2bM/T) )   (1-3)
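As a numerical sanity check of the closed form shown above for equation (1-3), the following sketch (the function name and the exponent rule are assumptions carried over from the error model above) reproduces the behaviour described in the next paragraph.

```python
def optimal_alpha_same_series(b_L, b_M, T):
    """Left channel subtraction gain of the form shown for Eq. (1-3):
    it depends only on the two bit budgets b_L, b_M and the frame length T."""
    w_L = 2.0 ** (-2.0 * b_L / T)  # error weight of the left channel difference coding
    w_M = 2.0 ** (-2.0 * b_M / T)  # error weight of the downmix coding
    return w_L / (w_L + w_M)

# 0.5 for equal budgets, below 0.5 when b_L > b_M, above 0.5 when b_M > b_L.
print(optimal_alpha_same_series(640, 640, 320))  # -> 0.5
print(optimal_alpha_same_series(960, 640, 320))  # -> 0.2 (< 0.5)
print(optimal_alpha_same_series(640, 960, 320))  # -> 0.8 (> 0.5)
```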
In other words, in order to minimize the quantization error contained in the decoded sound signal of the left channel under the condition that the sample values of the left channel input sound signal xL(1), xL(2), ..., xL(T) and of the downmix signal xM(1), xM(2), ..., xM(T) are so close that the two can be regarded as the same sequence, the left channel subtraction gain estimation unit 120 may obtain the left channel subtraction gain α by equation (1-3). The left channel subtraction gain α obtained by equation (1-3) is a value greater than 0 and less than 1; it is 0.5 when bL and bM, the numbers of bits used for the two encodings, are equal; it is closer to 0 than to 0.5 the more the number of bits bL for encoding the left channel difference signal exceeds the number of bits bM for encoding the downmix signal; and it is closer to 1 than to 0.5 the more the number of bits bM for encoding the downmix signal exceeds the number of bits bL for encoding the left channel difference signal.

The same applies to the right channel. In order to minimize the quantization error contained in the decoded sound signal of the right channel under the condition that the sample values of the right channel input sound signal xR(1), xR(2), ..., xR(T) and of the downmix signal xM(1), xM(2), ..., xM(T) are so close that the two can be regarded as the same sequence, the right channel subtraction gain estimation unit 140 may obtain the right channel subtraction gain β by equation (1-3-2) below.

β = 2^(-2bR/T) / ( 2^(-2bR/T) + 2^(-2bM/T) )   (1-3-2)

The right channel subtraction gain β obtained by equation (1-3-2) is a value greater than 0 and less than 1; it is 0.5 when bR and bM, the numbers of bits used for the two encodings, are equal; it is closer to 0 than to 0.5 the more the number of bits bR for encoding the right channel difference signal exceeds the number of bits bM for encoding the downmix signal; and it is closer to 1 than to 0.5 the more the number of bits bM for encoding the downmix signal exceeds the number of bits bR for encoding the right channel difference signal.
Next, the principle of minimizing the energy of the quantization error contained in the decoded sound signal of the left channel is described for the general case, including the case where the left channel input sound signal xL(1), xL(2), ..., xL(T) and the downmix signal xM(1), xM(2), ..., xM(T) cannot be regarded as the same sequence.

The normalized inner product value rL of the left channel input sound signal xL(1), xL(2), ..., xL(T) and the downmix signal xM(1), xM(2), ..., xM(T) is expressed by equation (1-4) below.

rL = ( Σ_{t=1}^{T} xL(t) × xM(t) ) / ( Σ_{t=1}^{T} xM(t) × xM(t) )   (1-4)

The normalized inner product value rL obtained by equation (1-4) is a real value, and it is equal to the real value rL' that minimizes the energy of the sequence xL(1)-rL'×xM(1), xL(2)-rL'×xM(2), ..., xL(T)-rL'×xM(T), that is, the sequence obtained as the difference between each sample value of the left channel input sound signal and the sequence of sample values rL'×xM(1), rL'×xM(2), ..., rL'×xM(T) obtained by multiplying each sample value of the downmix signal xM(1), xM(2), ..., xM(T) by the real value rL'.
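The normalized inner product of equation (1-4) is simply the least-squares gain described above; a minimal sketch (function and variable names are our own) is:

```python
def normalized_inner_product(x_ch, x_m):
    """r = <x_ch, x_M> / <x_M, x_M>: the real value r' that minimizes the
    energy of the sequence x_ch(t) - r' * x_M(t) over the frame (cf. Eq. (1-4))."""
    num = sum(a * b for a, b in zip(x_ch, x_m))
    den = sum(b * b for b in x_m)
    return num / den  # a real coder would guard against an all-zero downmix frame
```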
The left channel input sound signal xL(1), xL(2), ..., xL(T) can be decomposed, for each sample number t, as xL(t) = rL×xM(t) + (xL(t) - rL×xM(t)). If the sequence composed of the values xL(t) - rL×xM(t) is called the orthogonal signal xL'(1), xL'(2), ..., xL'(T), then according to this decomposition each sample value yL(t) = xL(t) - α×xM(t) of the left channel difference signal is equivalent to the sum (rL-α)×xM(t) + xL'(t) of the value (rL-α)×xM(t), obtained by multiplying the sample value xM(t) of the downmix signal by (rL-α) using the normalized inner product value rL and the left channel subtraction gain α, and the corresponding sample value xL'(t) of the orthogonal signal. Since the orthogonal signal xL'(1), xL'(2), ..., xL'(T) is orthogonal to the downmix signal xM(1), xM(2), ..., xM(T), that is, their inner product is 0, the energy of the left channel difference signal is expressed as the sum of (rL-α)^2 times the energy of the downmix signal and the energy of the orthogonal signal. Therefore, the average energy per sample of the quantization error caused by coding the left channel difference signal with bL bits can be estimated with a positive number σ^2 as in equation (1-5) below.

σ^2 × 2^(-2bL/T) × ( (rL-α)^2 × Σ_{t=1}^{T} xM(t)^2 + Σ_{t=1}^{T} xL'(t)^2 )   (1-5)
Assuming that the quantization error caused by coding the left channel difference signal and the quantization error contained in the sequence of values obtained by multiplying each sample value of the quantized downmix signal obtained by decoding by the left channel subtraction gain α are uncorrelated with each other, the average energy per sample of the quantization error contained in the decoded sound signal of the left channel is estimated by the sum of equations (1-5) and (1-2). The left channel subtraction gain α that minimizes the energy of the quantization error contained in the decoded sound signal of the left channel is then obtained as in equation (1-6) below.

α = rL × 2^(-2bL/T) / ( 2^(-2bL/T) + 2^(-2bM/T) )   (1-6)
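Combining the two pieces, the gain of equation (1-6) is the normalized inner product scaled by a bit-budget-dependent correction coefficient. A sketch under the same assumptions as above (the closed form of the coefficient is the reconstruction given here, and the names are our own):

```python
def correction_coefficient(b_x, b_m, T):
    """Correction coefficient in (0, 1): 0.5 for b_x == b_m, closer to 0 when
    b_x > b_m, closer to 1 when b_x < b_m (cf. Eq. (1-7) in Example 1 below)."""
    w_x = 2.0 ** (-2.0 * b_x / T)
    w_m = 2.0 ** (-2.0 * b_m / T)
    return w_x / (w_x + w_m)

def left_subtraction_gain(r_l, b_l, b_m, T):
    """Eq. (1-6): alpha = correction coefficient * normalized inner product r_L."""
    return correction_coefficient(b_l, b_m, T) * r_l
```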
In other words, in order to minimize the quantization error contained in the decoded sound signal of the left channel, the left channel subtraction gain estimation unit 120 may obtain the left channel subtraction gain α by equation (1-6). That is, when this principle of minimizing the energy of the quantization error is taken into account, the value used as the left channel subtraction gain α should be the normalized inner product value rL multiplied by a correction coefficient whose value is determined by bL and bM, the numbers of bits used for the coding. This correction coefficient is a value greater than 0 and less than 1; it is 0.5 when the number of bits bL for encoding the left channel difference signal and the number of bits bM for encoding the downmix signal are the same; it is closer to 0 than to 0.5 the more bL exceeds bM; and it is closer to 1 than to 0.5 the more bL falls below bM.

The same applies to the right channel. In order to minimize the quantization error contained in the decoded sound signal of the right channel, the right channel subtraction gain estimation unit 140 may obtain the right channel subtraction gain β by equation (1-6-2) below.

β = rR × 2^(-2bR/T) / ( 2^(-2bR/T) + 2^(-2bM/T) )   (1-6-2)

Here, rR is the normalized inner product value of the right channel input sound signal xR(1), xR(2), ..., xR(T) and the downmix signal xM(1), xM(2), ..., xM(T), and is expressed by equation (1-4-2) below.

rR = ( Σ_{t=1}^{T} xR(t) × xM(t) ) / ( Σ_{t=1}^{T} xM(t) × xM(t) )   (1-4-2)

That is, when this principle of minimizing the energy of the quantization error is taken into account, the value used as the right channel subtraction gain β should be the normalized inner product value rR multiplied by a correction coefficient whose value is determined by bR and bM, the numbers of bits used for the coding. This correction coefficient is a value greater than 0 and less than 1; it is closer to 0 than to 0.5 the more the number of bits bR for encoding the right channel difference signal exceeds the number of bits bM for encoding the downmix signal, and it is closer to 1 than to 0.5 the fewer bits are used for encoding the right channel difference signal than for encoding the downmix signal.
[Estimation and decoding of the subtraction gains based on the principle of minimizing the quantization error]
Specific examples of the estimation and decoding of the subtraction gains based on the principle of minimizing the quantization error described above are given below. Each example describes the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140, which estimate the subtraction gains in the coding apparatus 100, and the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250, which decode the subtraction gains in the decoding apparatus 200.

[[Example 1]]
Example 1 is based on the principle of minimizing the energy of the quantization error contained in the decoded sound signal of the left channel, including the case where the left channel input sound signal xL(1), xL(2), ..., xL(T) and the downmix signal xM(1), xM(2), ..., xM(T) cannot be regarded as the same sequence, and on the principle of minimizing the energy of the quantization error contained in the decoded sound signal of the right channel, including the case where the right channel input sound signal xR(1), xR(2), ..., xR(T) and the downmix signal xM(1), xM(2), ..., xM(T) cannot be regarded as the same sequence.

[[[Left channel subtraction gain estimation unit 120]]]
The left channel subtraction gain estimation unit 120 stores in advance a plurality of pairs (A pairs, a = 1, ..., A) of a left channel subtraction gain candidate αcand(a) and the code Cαcand(a) corresponding to that candidate. The left channel subtraction gain estimation unit 120 performs the following steps S120-11 to S120-14 shown in Fig. 5.
The left channel subtraction gain estimation unit 120 first obtains, from the input left channel input sound signal xL(1), xL(2), ..., xL(T) and the input downmix signal xM(1), xM(2), ..., xM(T), the normalized inner product value rL of the left channel input sound signal and the downmix signal by equation (1-4) (step S120-11). The left channel subtraction gain estimation unit 120 also obtains the left channel correction coefficient cL by equation (1-7) below, using the number of bits bL used by the stereo coding unit 170 to encode the left channel difference signal yL(1), yL(2), ..., yL(T), the number of bits bM used by the monaural coding unit 160 to encode the downmix signal xM(1), xM(2), ..., xM(T), and the number of samples T per frame (step S120-12).

cL = 2^(-2bL/T) / ( 2^(-2bL/T) + 2^(-2bM/T) )   (1-7)

The left channel subtraction gain estimation unit 120 then obtains the value resulting from multiplying the normalized inner product value rL obtained in step S120-11 by the left channel correction coefficient cL obtained in step S120-12 (step S120-13). The left channel subtraction gain estimation unit 120 then obtains, as the left channel subtraction gain α, the candidate closest to the product cL×rL obtained in step S120-13 (that is, the quantized value of the product cL×rL) among the stored left channel subtraction gain candidates αcand(1), ..., αcand(A), and obtains, as the left channel subtraction gain code Cα, the code corresponding to the left channel subtraction gain α among the stored codes Cαcand(1), ..., Cαcand(A) (step S120-14).
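Putting steps S120-11 to S120-14 together, an illustrative sketch follows (the candidate table and code strings are placeholders of our own, and the correction coefficient uses the closed form reconstructed above; this is not the patent's reference implementation). The right channel subtraction gain estimation unit 140 would do the same with xR, bR, and its own table.

```python
def estimate_left_subtraction_gain(x_l, x_m, b_l, b_m, T, alpha_cand, alpha_codes):
    """Sketch of steps S120-11..S120-14 of Example 1 for the left channel."""
    # S120-11: normalized inner product r_L (Eq. (1-4))
    r_l = sum(a * b for a, b in zip(x_l, x_m)) / sum(b * b for b in x_m)
    # S120-12: correction coefficient c_L (Eq. (1-7))
    w_l, w_m = 2.0 ** (-2.0 * b_l / T), 2.0 ** (-2.0 * b_m / T)
    c_l = w_l / (w_l + w_m)
    # S120-13: multiply
    target = c_l * r_l
    # S120-14: nearest stored candidate and its paired code
    a = min(range(len(alpha_cand)), key=lambda i: abs(alpha_cand[i] - target))
    return alpha_cand[a], alpha_codes[a]
```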
Note that when the number of bits bL used by the stereo coding unit 170 to encode the left channel difference signal yL(1), yL(2), ..., yL(T) is not explicitly determined, half of the number of bits bs of the stereo code CS output by the stereo coding unit 170 (that is, bs/2) may be used as the number of bits bL. Further, the left channel correction coefficient cL need not be the value obtained by equation (1-7) itself; it may be any value greater than 0 and less than 1 that is 0.5 when the number of bits bL used to encode the left channel difference signal yL(1), yL(2), ..., yL(T) and the number of bits bM used to encode the downmix signal xM(1), xM(2), ..., xM(T) are the same, closer to 0 than to 0.5 the more bL exceeds bM, and closer to 1 than to 0.5 the more bL falls below bM. The same applies to each of the examples described later.

[[[Right channel subtraction gain estimation unit 140]]]
The right channel subtraction gain estimation unit 140 stores in advance a plurality of pairs (B pairs, b = 1, ..., B) of a right channel subtraction gain candidate βcand(b) and the code Cβcand(b) corresponding to that candidate. The right channel subtraction gain estimation unit 140 performs the following steps S140-11 to S140-14 shown in Fig. 5.
The right channel subtraction gain estimation unit 140 first obtains, from the input right channel input sound signal xR(1), xR(2), ..., xR(T) and the input downmix signal xM(1), xM(2), ..., xM(T), the normalized inner product value rR of the right channel input sound signal and the downmix signal by equation (1-4-2) (step S140-11). The right channel subtraction gain estimation unit 140 also obtains the right channel correction coefficient cR by equation (1-7-2) below, using the number of bits bR used by the stereo coding unit 170 to encode the right channel difference signal yR(1), yR(2), ..., yR(T), the number of bits bM used by the monaural coding unit 160 to encode the downmix signal xM(1), xM(2), ..., xM(T), and the number of samples T per frame (step S140-12).

cR = 2^(-2bR/T) / ( 2^(-2bR/T) + 2^(-2bM/T) )   (1-7-2)

The right channel subtraction gain estimation unit 140 then obtains the value resulting from multiplying the normalized inner product value rR obtained in step S140-11 by the right channel correction coefficient cR obtained in step S140-12 (step S140-13). The right channel subtraction gain estimation unit 140 then obtains, as the right channel subtraction gain β, the candidate closest to the product cR×rR obtained in step S140-13 (that is, the quantized value of the product cR×rR) among the stored right channel subtraction gain candidates βcand(1), ..., βcand(B), and obtains, as the right channel subtraction gain code Cβ, the code corresponding to the right channel subtraction gain β among the stored codes Cβcand(1), ..., Cβcand(B) (step S140-14).
Note that when the number of bits bR used by the stereo coding unit 170 to encode the right channel difference signal yR(1), yR(2), ..., yR(T) is not explicitly determined, half of the number of bits bs of the stereo code CS output by the stereo coding unit 170 (that is, bs/2) may be used as the number of bits bR. Further, the right channel correction coefficient cR need not be the value obtained by equation (1-7-2) itself; it may be any value greater than 0 and less than 1 that is 0.5 when the number of bits bR used to encode the right channel difference signal yR(1), yR(2), ..., yR(T) and the number of bits bM used to encode the downmix signal xM(1), xM(2), ..., xM(T) are the same, closer to 0 than to 0.5 the more bR exceeds bM, and closer to 1 than to 0.5 the more bR falls below bM. The same applies to each of the examples described later.

[[[Left channel subtraction gain decoding unit 230]]]
The left channel subtraction gain decoding unit 230 stores in advance the same plurality of pairs (A pairs, a = 1, ..., A) of a left channel subtraction gain candidate αcand(a) and the corresponding code Cαcand(a) as those stored in the left channel subtraction gain estimation unit 120 of the corresponding coding apparatus 100. The left channel subtraction gain decoding unit 230 obtains, as the left channel subtraction gain α, the left channel subtraction gain candidate corresponding to the input left channel subtraction gain code Cα among the stored codes Cαcand(1), ..., Cαcand(A) (step S230-11).
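On the decoding side, step S230-11 amounts to a plain table lookup, for example (names and table layout are our own placeholders):

```python
def decode_left_subtraction_gain(code_c_alpha, alpha_cand, alpha_codes):
    """Step S230-11: return the stored candidate paired with the received code.
    alpha_cand / alpha_codes must be the same tables as in the encoder."""
    return alpha_cand[alpha_codes.index(code_c_alpha)]
```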
[[[Right channel subtraction gain decoding unit 250]]]
The right channel subtraction gain decoding unit 250 stores in advance the same plurality of pairs (B pairs, b = 1, ..., B) of a right channel subtraction gain candidate βcand(b) and the corresponding code Cβcand(b) as those stored in the right channel subtraction gain estimation unit 140 of the corresponding coding apparatus 100. The right channel subtraction gain decoding unit 250 obtains, as the right channel subtraction gain β, the right channel subtraction gain candidate corresponding to the input right channel subtraction gain code Cβ among the stored codes Cβcand(1), ..., Cβcand(B) (step S250-11).

Note that the same subtraction gain candidates and codes may be used for the left channel and the right channel; that is, with the above A and B set to the same value, the pairs of the left channel subtraction gain candidates αcand(a) and their corresponding codes Cαcand(a) stored in the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may be made identical to the pairs of the right channel subtraction gain candidates βcand(b) and their corresponding codes Cβcand(b) stored in the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250.
[[Modification of Example 1]]
The number of bits bL used by the coding apparatus 100 to encode the left channel difference signal is the number of bits used by the decoding apparatus 200 to decode the left channel difference signal, and the value of the number of bits bM used by the coding apparatus 100 to encode the downmix signal is the number of bits used by the decoding apparatus 200 to decode the downmix signal, so the correction coefficient cL can be computed to the same value in the coding apparatus 100 and in the decoding apparatus 200. Therefore, the normalized inner product value rL may be made the target of coding and decoding, and the coding apparatus 100 and the decoding apparatus 200 may each obtain the left channel subtraction gain α by multiplying the quantized value ^rL of the normalized inner product value by the correction coefficient cL. The same applies to the right channel. This form is described as the modification of Example 1.

[[[Left channel subtraction gain estimation unit 120]]]
The left channel subtraction gain estimation unit 120 stores in advance a plurality of pairs (A pairs, a = 1, ..., A) of a candidate rLcand(a) for the normalized inner product value of the left channel and the code Cαcand(a) corresponding to that candidate. As shown in Fig. 6, the left channel subtraction gain estimation unit 120 performs step S120-11 and step S120-12 described in Example 1 and the following steps S120-15 and S120-16.
The left channel subtraction gain estimation unit 120 first obtains, as in step S120-11 of the left channel subtraction gain estimation unit 120 of Example 1, the normalized inner product value rL of the left channel input sound signal and the downmix signal from the input left channel input sound signal xL(1), xL(2), ..., xL(T) and the input downmix signal xM(1), xM(2), ..., xM(T) by equation (1-4) (step S120-11). The left channel subtraction gain estimation unit 120 then obtains, among the stored candidates rLcand(1), ..., rLcand(A) for the normalized inner product value of the left channel, the candidate ^rL closest to the normalized inner product value rL obtained in step S120-11 (that is, the quantized value of the normalized inner product value rL), and obtains, as the left channel subtraction gain code Cα, the code corresponding to that closest candidate ^rL among the stored codes Cαcand(1), ..., Cαcand(A) (step S120-15). The left channel subtraction gain estimation unit 120 also obtains, as in step S120-12 of the left channel subtraction gain estimation unit 120 of Example 1, the left channel correction coefficient cL by equation (1-7), using the number of bits bL used by the stereo coding unit 170 to encode the left channel difference signal yL(1), yL(2), ..., yL(T), the number of bits bM used by the monaural coding unit 160 to encode the downmix signal xM(1), xM(2), ..., xM(T), and the number of samples T per frame (step S120-12). The left channel subtraction gain estimation unit 120 then obtains, as the left channel subtraction gain α, the value resulting from multiplying the quantized value ^rL of the normalized inner product value obtained in step S120-15 by the left channel correction coefficient cL obtained in step S120-12 (step S120-16).
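A sketch of the encoder side of this modification (steps S120-11, S120-15, S120-12, S120-16); the candidate table for rL and the code strings are our own placeholders, and the correction coefficient again uses the closed form assumed earlier:

```python
def estimate_left_gain_modified(x_l, x_m, b_l, b_m, T, r_cand, r_codes):
    """Modification of Example 1, left channel: quantize r_L itself and send its code."""
    # S120-11: normalized inner product r_L (Eq. (1-4))
    r_l = sum(a * b for a, b in zip(x_l, x_m)) / sum(b * b for b in x_m)
    # S120-15: nearest stored candidate ^r_L and its code C_alpha
    a = min(range(len(r_cand)), key=lambda i: abs(r_cand[i] - r_l))
    r_hat, code = r_cand[a], r_codes[a]
    # S120-12: correction coefficient c_L (Eq. (1-7))
    w_l, w_m = 2.0 ** (-2.0 * b_l / T), 2.0 ** (-2.0 * b_m / T)
    c_l = w_l / (w_l + w_m)
    # S120-16: alpha = c_L * ^r_L
    return c_l * r_hat, code
```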
[[[Right channel subtraction gain estimation unit 140]]]
The right channel subtraction gain estimation unit 140 stores in advance a plurality of pairs (B pairs, b = 1, ..., B) of a candidate rRcand(b) for the normalized inner product value of the right channel and the code Cβcand(b) corresponding to that candidate. As shown in Fig. 6, the right channel subtraction gain estimation unit 140 performs step S140-11 and step S140-12 described in Example 1 and the following steps S140-15 and S140-16.

The right channel subtraction gain estimation unit 140 first obtains, as in step S140-11 of the right channel subtraction gain estimation unit 140 of Example 1, the normalized inner product value rR of the right channel input sound signal and the downmix signal from the input right channel input sound signal xR(1), xR(2), ..., xR(T) and the input downmix signal xM(1), xM(2), ..., xM(T) by equation (1-4-2) (step S140-11). The right channel subtraction gain estimation unit 140 then obtains, among the stored candidates rRcand(1), ..., rRcand(B) for the normalized inner product value of the right channel, the candidate ^rR closest to the normalized inner product value rR obtained in step S140-11 (that is, the quantized value of the normalized inner product value rR), and obtains, as the right channel subtraction gain code Cβ, the code corresponding to that closest candidate ^rR among the stored codes Cβcand(1), ..., Cβcand(B) (step S140-15). The right channel subtraction gain estimation unit 140 also obtains, as in step S140-12 of the right channel subtraction gain estimation unit 140 of Example 1, the right channel correction coefficient cR by equation (1-7-2), using the number of bits bR used by the stereo coding unit 170 to encode the right channel difference signal yR(1), yR(2), ..., yR(T), the number of bits bM used by the monaural coding unit 160 to encode the downmix signal xM(1), xM(2), ..., xM(T), and the number of samples T per frame (step S140-12). The right channel subtraction gain estimation unit 140 then obtains, as the right channel subtraction gain β, the value resulting from multiplying the quantized value ^rR of the normalized inner product value obtained in step S140-15 by the right channel correction coefficient cR obtained in step S140-12 (step S140-16).
[[[Left channel subtraction gain decoding unit 230]]]
The left channel subtraction gain decoding unit 230 stores in advance the same plurality of pairs (A pairs, a = 1, ..., A) of a candidate rLcand(a) for the normalized inner product value of the left channel and the corresponding code Cαcand(a) as those stored in the left channel subtraction gain estimation unit 120 of the corresponding coding apparatus 100. The left channel subtraction gain decoding unit 230 performs the following steps S230-12 to S230-14 shown in Fig. 7.

The left channel subtraction gain decoding unit 230 obtains, as the decoded value ^rL of the normalized inner product value of the left channel, the candidate for the normalized inner product value of the left channel corresponding to the input left channel subtraction gain code Cα among the stored codes Cαcand(1), ..., Cαcand(A) (step S230-12). The left channel subtraction gain decoding unit 230 also obtains the left channel correction coefficient cL by equation (1-7), using the number of bits bL used by the stereo decoding unit 220 to decode the left channel decoded difference signal ^yL(1), ^yL(2), ..., ^yL(T), the number of bits bM used by the monaural decoding unit 210 to decode the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T), and the number of samples T per frame (step S230-13). The left channel subtraction gain decoding unit 230 then obtains, as the left channel subtraction gain α, the value resulting from multiplying the decoded value ^rL of the normalized inner product value obtained in step S230-12 by the left channel correction coefficient cL obtained in step S230-13 (step S230-14).
Note that when the stereo code CS is the combination of the left channel difference code CL and the right channel difference code CR, the number of bits bL used by the stereo decoding unit 220 to decode the left channel decoded difference signal ^yL(1), ^yL(2), ..., ^yL(T) is the number of bits of the left channel difference code CL. When the number of bits bL used by the stereo decoding unit 220 to decode the left channel decoded difference signal ^yL(1), ^yL(2), ..., ^yL(T) is not explicitly determined, half of the number of bits bs of the stereo code CS input to the stereo decoding unit 220 (that is, bs/2) may be used as the number of bits bL. The number of bits bM used by the monaural decoding unit 210 to decode the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T) is the number of bits of the monaural code CM. The left channel correction coefficient cL need not be the value obtained by equation (1-7) itself; it may be any value greater than 0 and less than 1 that is 0.5 when the number of bits bL used to decode the left channel decoded difference signal ^yL(1), ^yL(2), ..., ^yL(T) and the number of bits bM used to decode the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T) are the same, closer to 0 than to 0.5 the more bL exceeds bM, and closer to 1 than to 0.5 the more bL falls below bM.
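The corresponding decoder side (steps S230-12 to S230-14) recomputes the same correction coefficient from bit counts it already knows. A sketch under the same assumptions as the encoder-side sketch above:

```python
def decode_left_gain_modified(code_c_alpha, b_l, b_m, T, r_cand, r_codes):
    """Modification of Example 1, left channel decoding."""
    # S230-12: decoded normalized inner product ^r_L from the received code
    r_hat = r_cand[r_codes.index(code_c_alpha)]
    # S230-13: the same correction coefficient c_L as on the encoder side (Eq. (1-7))
    w_l, w_m = 2.0 ** (-2.0 * b_l / T), 2.0 ** (-2.0 * b_m / T)
    c_l = w_l / (w_l + w_m)
    # S230-14: alpha = c_L * ^r_L
    return c_l * r_hat
```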
[[[Right channel subtraction gain decoding unit 250]]]
The right channel subtraction gain decoding unit 250 stores in advance the same plurality of pairs (B pairs, b = 1, ..., B) of a candidate rRcand(b) for the normalized inner product value of the right channel and the corresponding code Cβcand(b) as those stored in the right channel subtraction gain estimation unit 140 of the corresponding coding apparatus 100. The right channel subtraction gain decoding unit 250 performs the following steps S250-12 to S250-14 shown in Fig. 7.

The right channel subtraction gain decoding unit 250 obtains, as the decoded value ^rR of the normalized inner product value of the right channel, the candidate for the normalized inner product value of the right channel corresponding to the input right channel subtraction gain code Cβ among the stored codes Cβcand(1), ..., Cβcand(B) (step S250-12). The right channel subtraction gain decoding unit 250 also obtains the right channel correction coefficient cR by equation (1-7-2), using the number of bits bR used by the stereo decoding unit 220 to decode the right channel decoded difference signal ^yR(1), ^yR(2), ..., ^yR(T), the number of bits bM used by the monaural decoding unit 210 to decode the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T), and the number of samples T per frame (step S250-13). The right channel subtraction gain decoding unit 250 then obtains, as the right channel subtraction gain β, the value resulting from multiplying the decoded value ^rR of the normalized inner product value obtained in step S250-12 by the right channel correction coefficient cR obtained in step S250-13 (step S250-14).

Note that when the stereo code CS is the combination of the left channel difference code CL and the right channel difference code CR, the number of bits bR used by the stereo decoding unit 220 to decode the right channel decoded difference signal ^yR(1), ^yR(2), ..., ^yR(T) is the number of bits of the right channel difference code CR. When the number of bits bR used by the stereo decoding unit 220 to decode the right channel decoded difference signal ^yR(1), ^yR(2), ..., ^yR(T) is not explicitly determined, half of the number of bits bs of the stereo code CS input to the stereo decoding unit 220 (that is, bs/2) may be used as the number of bits bR. The number of bits bM used by the monaural decoding unit 210 to decode the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T) is the number of bits of the monaural code CM. The right channel correction coefficient cR need not be the value obtained by equation (1-7-2) itself; it may be any value greater than 0 and less than 1 that is 0.5 when the number of bits bR used to decode the right channel decoded difference signal and the number of bits bM used to decode the monaural decoded sound signal are the same, closer to 0 than to 0.5 the more bR exceeds bM, and closer to 1 than to 0.5 the more bR falls below bM.
Note that the same candidates for the normalized inner product value and the same codes may be used for the left channel and the right channel; that is, with the above A and B set to the same value, the pairs of the candidates rLcand(a) for the normalized inner product value of the left channel and their corresponding codes Cαcand(a) stored in the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may be made identical to the pairs of the candidates rRcand(b) for the normalized inner product value of the right channel and their corresponding codes Cβcand(b) stored in the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250.

Note also that the code Cα is called the left channel subtraction gain code because it is substantially a code corresponding to the left channel subtraction gain α and in order to keep the wording consistent between the descriptions of the coding apparatus 100 and the decoding apparatus 200; since it represents a normalized inner product value, it may also be called, for example, a left channel inner product code. The same applies to the code Cβ, which may also be called, for example, a right channel inner product code.
[[Example 2]]
Example 2 uses, as the normalized inner product value, a value that also takes the inputs of past frames into account. In Example 2, optimality within the frame, that is, minimization of the energy of the quantization error contained in the decoded sound signal of the left channel and minimization of the energy of the quantization error contained in the decoded sound signal of the right channel, is not strictly guaranteed; instead, abrupt frame-to-frame fluctuation of the left channel subtraction gain α and abrupt frame-to-frame fluctuation of the right channel subtraction gain β are reduced, which reduces the noise that such fluctuation causes in the decoded sound signals. In other words, Example 2 takes into account not only keeping the energy of the quantization error contained in the decoded sound signals small but also the perceptual quality of the decoded sound signals.

Example 2 differs from Example 1 on the coding side, that is, in the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140, but is the same as Example 1 on the decoding side, that is, in the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250. The description below focuses on the points in which Example 2 differs from Example 1.

[[[Left channel subtraction gain estimation unit 120]]]
As shown in Fig. 8, the left channel subtraction gain estimation unit 120 performs the following steps S120-111 to S120-113 and steps S120-12 to S120-14 described in Example 1.
The left channel subtraction gain estimation unit 120 first obtains the inner product value EL(0) to be used in the current frame by equation (1-8) below, using the input left channel input sound signal xL(1), xL(2), ..., xL(T), the input downmix signal xM(1), xM(2), ..., xM(T), and the inner product value EL(-1) used in the previous frame (step S120-111).

EL(0) = εL × EL(-1) + (1-εL) × Σ_{t=1}^{T} xL(t) × xM(t)   (1-8)

Here, εL is a predetermined value greater than 0 and less than 1 and is stored in advance in the left channel subtraction gain estimation unit 120. The left channel subtraction gain estimation unit 120 stores the obtained inner product value EL(0) within the left channel subtraction gain estimation unit 120 so that it can be used in the next frame as "the inner product value EL(-1) used in the previous frame".
The left channel subtraction gain estimation unit 120 also obtains the energy EM(0) of the downmix signal to be used in the current frame by equation (1-9) below, using the input downmix signal xM(1), xM(2), ..., xM(T) and the energy EM(-1) of the downmix signal used in the previous frame (step S120-112).

EM(0) = εM × EM(-1) + (1-εM) × Σ_{t=1}^{T} xM(t) × xM(t)   (1-9)

Here, εM is a predetermined value greater than 0 and less than 1 and is stored in advance in the left channel subtraction gain estimation unit 120. The left channel subtraction gain estimation unit 120 stores the obtained energy EM(0) of the downmix signal within the left channel subtraction gain estimation unit 120 so that it can be used in the next frame as "the energy EM(-1) of the downmix signal used in the previous frame".
The left channel subtraction gain estimation unit 120 then obtains the normalized inner product value rL by equation (1-10) below, using the inner product value EL(0) for the current frame obtained in step S120-111 and the energy EM(0) of the downmix signal for the current frame obtained in step S120-112 (step S120-113).

rL = EL(0) / EM(0)   (1-10)
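A sketch of steps S120-111 to S120-113 follows. The recursive smoothing form written here is our reading of equations (1-8) and (1-9); only the qualitative behaviour, more smoothing as εL and εM approach 1, is stated in the text, and the names are our own.

```python
def smoothed_normalized_inner_product(x_l, x_m, e_l_prev, e_m_prev, eps_l, eps_m):
    """Steps S120-111..S120-113: frame-recursive inner product / energy and the
    resulting r_L. Returns (r_L, E_L(0), E_M(0)); the last two must be kept as
    E_L(-1), E_M(-1) for the next frame."""
    inner = sum(a * b for a, b in zip(x_l, x_m))
    energy = sum(b * b for b in x_m)
    e_l_cur = eps_l * e_l_prev + (1.0 - eps_l) * inner    # Eq. (1-8), as read here
    e_m_cur = eps_m * e_m_prev + (1.0 - eps_m) * energy   # Eq. (1-9), as read here
    return e_l_cur / e_m_cur, e_l_cur, e_m_cur            # Eq. (1-10)
```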
The left channel subtraction gain estimation unit 120 also performs step S120-12, then performs step S120-13 using the normalized inner product value rL obtained in step S120-113 described above in place of the normalized inner product value rL obtained in step S120-11, and further performs step S120-14.

Note that the closer the above εL and εM are to 1, the more readily the normalized inner product value rL reflects the influence of the left channel input sound signals and the downmix signals of past frames, and the smaller the frame-to-frame fluctuation of the normalized inner product value rL, and of the left channel subtraction gain α obtained from the normalized inner product value rL, becomes.
[[[Right channel subtraction gain estimation unit 140]]]
As shown in Fig. 8, the right channel subtraction gain estimation unit 140 performs the following steps S140-111 to S140-113 and steps S140-12 to S140-14 described in Example 1.

The right channel subtraction gain estimation unit 140 first obtains the inner product value ER(0) to be used in the current frame by equation (1-8-2) below, using the input right channel input sound signal xR(1), xR(2), ..., xR(T), the input downmix signal xM(1), xM(2), ..., xM(T), and the inner product value ER(-1) used in the previous frame (step S140-111).

ER(0) = εR × ER(-1) + (1-εR) × Σ_{t=1}^{T} xR(t) × xM(t)   (1-8-2)

Here, εR is a predetermined value greater than 0 and less than 1 and is stored in advance in the right channel subtraction gain estimation unit 140. The right channel subtraction gain estimation unit 140 stores the obtained inner product value ER(0) within the right channel subtraction gain estimation unit 140 so that it can be used in the next frame as "the inner product value ER(-1) used in the previous frame".
The right channel subtraction gain estimation unit 140 also obtains the energy EM(0) of the downmix signal to be used in the current frame by equation (1-9), using the input downmix signal xM(1), xM(2), ..., xM(T) and the energy EM(-1) of the downmix signal used in the previous frame (step S140-112). The right channel subtraction gain estimation unit 140 stores the obtained energy EM(0) of the downmix signal within the right channel subtraction gain estimation unit 140 so that it can be used in the next frame as "the energy EM(-1) of the downmix signal used in the previous frame". Since the left channel subtraction gain estimation unit 120 also obtains the energy EM(0) of the downmix signal for the current frame by equation (1-9), only one of step S120-112 performed by the left channel subtraction gain estimation unit 120 and step S140-112 performed by the right channel subtraction gain estimation unit 140 may be performed.

The right channel subtraction gain estimation unit 140 then obtains the normalized inner product value rR by equation (1-10-2) below, using the inner product value ER(0) for the current frame obtained in step S140-111 and the energy EM(0) of the downmix signal for the current frame obtained in step S140-112 (step S140-113).

rR = ER(0) / EM(0)   (1-10-2)
The right channel subtraction gain estimation unit 140 also performs step S140-12, then performs step S140-13 using the normalized inner product value rR obtained in step S140-113 described above in place of the normalized inner product value rR obtained in step S140-11, and further performs step S140-14.

Note that the closer the above εR and εM are to 1, the more readily the normalized inner product value rR reflects the influence of the right channel input sound signals and the downmix signals of past frames, and the smaller the frame-to-frame fluctuation of the normalized inner product value rR, and of the right channel subtraction gain β obtained from the normalized inner product value rR, becomes.
[[Modification of Example 2]]
Example 2 can be modified in the same way that Example 1 is modified into the modification of Example 1. This form is described as the modification of Example 2. In the modification of Example 2, the coding side, that is, the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140, differs from the modification of Example 1, while the decoding side, that is, the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250, is the same as in the modification of Example 1. The points in which the modification of Example 2 differs from the modification of Example 1 are the same as in Example 2, so the modification of Example 2 is described below with reference to the modification of Example 1 and Example 2 as appropriate.

[[[Left channel subtraction gain estimation unit 120]]]
As in the left channel subtraction gain estimation unit 120 of the modification of Example 1, the left channel subtraction gain estimation unit 120 stores in advance a plurality of pairs (A pairs, a = 1, ..., A) of a candidate rLcand(a) for the normalized inner product value of the left channel and the code Cαcand(a) corresponding to that candidate. As shown in Fig. 9, the left channel subtraction gain estimation unit 120 performs steps S120-111 to S120-113, which are the same as in Example 2, and steps S120-12, S120-15, and S120-16, which are the same as in the modification of Example 1. Specifically, the processing is as follows.
The left channel subtraction gain estimation unit 120 first obtains the inner product value EL(0) to be used in the current frame by equation (1-8), using the input left channel input sound signal xL(1), xL(2), ..., xL(T), the input downmix signal xM(1), xM(2), ..., xM(T), and the inner product value EL(-1) used in the previous frame (step S120-111). The left channel subtraction gain estimation unit 120 also obtains the energy EM(0) of the downmix signal to be used in the current frame by equation (1-9), using the input downmix signal xM(1), xM(2), ..., xM(T) and the energy EM(-1) of the downmix signal used in the previous frame (step S120-112). The left channel subtraction gain estimation unit 120 then obtains the normalized inner product value rL by equation (1-10), using the inner product value EL(0) for the current frame obtained in step S120-111 and the energy EM(0) of the downmix signal for the current frame obtained in step S120-112 (step S120-113). The left channel subtraction gain estimation unit 120 then obtains, among the stored candidates rLcand(1), ..., rLcand(A) for the normalized inner product value of the left channel, the candidate ^rL closest to the normalized inner product value rL obtained in step S120-113 (that is, the quantized value of the normalized inner product value rL), and obtains, as the left channel subtraction gain code Cα, the code corresponding to that closest candidate ^rL among the stored codes Cαcand(1), ..., Cαcand(A) (step S120-15). The left channel subtraction gain estimation unit 120 also obtains the left channel correction coefficient cL by equation (1-7), using the number of bits bL used by the stereo coding unit 170 to encode the left channel difference signal yL(1), yL(2), ..., yL(T), the number of bits bM used by the monaural coding unit 160 to encode the downmix signal xM(1), xM(2), ..., xM(T), and the number of samples T per frame (step S120-12). The left channel subtraction gain estimation unit 120 then obtains, as the left channel subtraction gain α, the value resulting from multiplying the quantized value ^rL of the normalized inner product value obtained in step S120-15 by the left channel correction coefficient cL obtained in step S120-12 (step S120-16).
[[[Right channel subtraction gain estimation unit 140]]]
As in the modified example of Example 1, the right channel subtraction gain estimation unit 140 stores in advance a plurality of pairs (B pairs, b = 1, ..., B), each consisting of a candidate r_Rcand(b) for the normalized inner product value of the right channel and the code Cβ_cand(b) corresponding to that candidate. As shown in FIG. 9, the right channel subtraction gain estimation unit 140 performs steps S140-111 to S140-113, which are the same as in Example 2, and steps S140-12, S140-15, and S140-16, which are the same as in the modified example of Example 1. Specifically, it operates as follows.
The right channel subtraction gain estimation unit 140 first obtains the inner product value E_R(0) to be used in the current frame by equation (1-8-2), using the input right channel input sound signal x_R(1), x_R(2), ..., x_R(T), the input downmix signal x_M(1), x_M(2), ..., x_M(T), and the inner product value E_R(-1) used in the previous frame (step S140-111). The right channel subtraction gain estimation unit 140 also obtains the energy E_M(0) of the downmix signal to be used in the current frame by equation (1-9), using the input downmix signal x_M(1), x_M(2), ..., x_M(T) and the energy E_M(-1) of the downmix signal used in the previous frame (step S140-112). The right channel subtraction gain estimation unit 140 then obtains the normalized inner product value r_R by equation (1-10-2), using the inner product value E_R(0) for the current frame obtained in step S140-111 and the energy E_M(0) of the downmix signal for the current frame obtained in step S140-112 (step S140-113). Next, the right channel subtraction gain estimation unit 140 obtains, from the stored candidates r_Rcand(1), ..., r_Rcand(B) for the normalized inner product value of the right channel, the candidate closest to the normalized inner product value r_R obtained in step S140-113 (that is, the quantized value ^r_R of the normalized inner product value r_R), and obtains, as the right channel subtraction gain code Cβ, the code corresponding to that closest candidate ^r_R among the stored codes Cβ_cand(1), ..., Cβ_cand(B) (step S140-15). The right channel subtraction gain estimation unit 140 also obtains the right channel correction coefficient c_R by equation (1-7-2), using the number of bits b_R used by the stereo coding unit 170 to encode the right channel difference signal y_R(1), y_R(2), ..., y_R(T), the number of bits b_M used by the monaural coding unit 160 to encode the downmix signal x_M(1), x_M(2), ..., x_M(T), and the number of samples T per frame (step S140-12). The right channel subtraction gain estimation unit 140 then obtains, as the right channel subtraction gain β, the value obtained by multiplying the quantized value ^r_R of the normalized inner product value obtained in step S140-15 by the right channel correction coefficient c_R obtained in step S140-12 (step S140-16).
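The following is a minimal sketch of the subtraction gain estimation described above, applicable symmetrically to the left channel (α, Cα) and the right channel (β, Cβ). Equations (1-7) to (1-10) are not reproduced in this text, so the recursive forms assumed for the inner product and energy updates (exponential smoothing with a hypothetical factor `forget`) and the ratio form assumed for the normalized inner product are illustrative assumptions; the correction coefficient of equation (1-7) is taken here as a precomputed input.
```python
# A minimal sketch, assuming simple recursive forms for equations (1-8)-(1-10);
# not the patent's exact formulas.
import numpy as np

def estimate_subtraction_gain(x_ch, x_m, prev_inner, prev_energy,
                              c_ch, r_cand, forget=0.5):
    """Return (gain, code_index, inner, energy) for one channel and one frame.

    x_ch        : channel input sound signal of the current frame (length T)
    x_m         : downmix signal of the current frame (length T)
    prev_inner  : inner product value carried over from the previous frame
    prev_energy : downmix energy carried over from the previous frame
    c_ch        : correction coefficient (eq. (1-7), computed from b_ch, b_M, T)
    r_cand      : stored candidates for the normalized inner product value
    """
    # assumed form of eq. (1-8): smoothed inner product <x_ch, x_m>
    inner = forget * prev_inner + np.dot(x_ch, x_m)
    # assumed form of eq. (1-9): smoothed downmix energy <x_m, x_m>
    energy = forget * prev_energy + np.dot(x_m, x_m)
    # assumed form of eq. (1-10): normalized inner product value
    r = inner / max(energy, 1e-12)
    # step S120-15 / S140-15: pick the closest stored candidate (quantization);
    # the returned index stands in for the corresponding stored code
    code_index = int(np.argmin(np.abs(np.asarray(r_cand) - r)))
    r_q = r_cand[code_index]
    # step S120-16 / S140-16: subtraction gain = correction coefficient x quantized r
    gain = c_ch * r_q
    return gain, code_index, inner, energy
```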
[[Example 3]]
For example, when the sound (speech, music, and so on) contained in the left channel input sound signal differs from the sound contained in the right channel input sound signal, the downmix signal may contain both a component of the left channel input sound signal and a component of the right channel input sound signal. Consequently, the larger the value used as the left channel subtraction gain α, the more the left channel decoded sound signal sounds as if it contains sound derived from the right channel input sound signal that should not be audible there, and the larger the value used as the right channel subtraction gain β, the more the right channel decoded sound signal sounds as if it contains sound derived from the left channel input sound signal that should not be audible there. Therefore, although minimization of the energy of the quantization error of the decoded sound signal is then no longer strictly guaranteed, the left channel subtraction gain α and the right channel subtraction gain β may, in consideration of auditory quality, be set to values smaller than those obtained by Example 1. Similarly, they may be set to values smaller than those obtained by Example 2.
Specifically, for the left channel, whereas Examples 1 and 2 used as the left channel subtraction gain α the quantized value of the product c_L × r_L of the normalized inner product value r_L and the left channel correction coefficient c_L, Example 3 uses as the left channel subtraction gain α the quantized value of the product λ_L × c_L × r_L of the normalized inner product value r_L, the left channel correction coefficient c_L, and a predetermined value λ_L that is greater than 0 and smaller than 1. Accordingly, the product c_L × r_L may be the target of encoding in the left channel subtraction gain estimation unit 120 and of decoding in the left channel subtraction gain decoding unit 230, as in Examples 1 and 2, so that the left channel subtraction gain code Cα represents the quantized value of the product c_L × r_L, and the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 multiply the quantized value of the product c_L × r_L by λ_L to obtain the left channel subtraction gain α. Alternatively, the product λ_L × c_L × r_L of the normalized inner product value r_L, the left channel correction coefficient c_L, and the predetermined value λ_L may be the target of encoding in the left channel subtraction gain estimation unit 120 and of decoding in the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code Cα represents the quantized value of the product λ_L × c_L × r_L.
Similarly, for the right channel, whereas Examples 1 and 2 used as the right channel subtraction gain β the quantized value of the product c_R × r_R of the normalized inner product value r_R and the right channel correction coefficient c_R, Example 3 uses as the right channel subtraction gain β the quantized value of the product λ_R × c_R × r_R of the normalized inner product value r_R, the right channel correction coefficient c_R, and a predetermined value λ_R that is greater than 0 and smaller than 1. Accordingly, the product c_R × r_R may be the target of encoding in the right channel subtraction gain estimation unit 140 and of decoding in the right channel subtraction gain decoding unit 250, as in Examples 1 and 2, so that the right channel subtraction gain code Cβ represents the quantized value of the product c_R × r_R, and the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 multiply the quantized value of the product c_R × r_R by λ_R to obtain the right channel subtraction gain β. Alternatively, the product λ_R × c_R × r_R of the normalized inner product value r_R, the right channel correction coefficient c_R, and the predetermined value λ_R may be the target of encoding in the right channel subtraction gain estimation unit 140 and of decoding in the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code Cβ represents the quantized value of the product λ_R × c_R × r_R. Note that λ_R may suitably be set to the same value as λ_L.
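The following is a minimal sketch of the two equivalent arrangements described in Example 3, for one channel. The helper `quantize` stands in for the candidate-based quantization used elsewhere in this description, and all names are illustrative.
```python
# A minimal sketch, assuming candidate-based quantization; illustrative only.
def quantize(value, candidates):
    return min(candidates, key=lambda c: abs(c - value))

def gain_option_a(r, c, lam, candidates):
    # The code represents the quantized value of c x r; both encoder and
    # decoder multiply that quantized value by the predetermined λ (0 < λ < 1).
    return lam * quantize(c * r, candidates)

def gain_option_b(r, c, lam, candidates):
    # The code directly represents the quantized value of λ x c x r.
    return quantize(lam * c * r, candidates)
```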
[[Modified example of Example 3]]
As described above, the correction coefficient c_L can be computed to the same value by both the coding device 100 and the decoding device 200. Therefore, as in the modified examples of Examples 1 and 2, the normalized inner product value r_L may be the target of encoding in the left channel subtraction gain estimation unit 120 and of decoding in the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code Cα represents the quantized value of the normalized inner product value r_L, and the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 multiply the quantized value of the normalized inner product value r_L by the left channel correction coefficient c_L and by λ_L, a predetermined value greater than 0 and smaller than 1, to obtain the left channel subtraction gain α. Alternatively, the product λ_L × r_L of the normalized inner product value r_L and the predetermined value λ_L, which is greater than 0 and smaller than 1, may be the target of encoding in the left channel subtraction gain estimation unit 120 and of decoding in the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code Cα represents the quantized value of the product λ_L × r_L, and the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 multiply the quantized value of the product λ_L × r_L by the left channel correction coefficient c_L to obtain the left channel subtraction gain α.
The same applies to the right channel: the correction coefficient c_R can be computed to the same value by both the coding device 100 and the decoding device 200. Therefore, as in the modified examples of Examples 1 and 2, the normalized inner product value r_R may be the target of encoding in the right channel subtraction gain estimation unit 140 and of decoding in the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code Cβ represents the quantized value of the normalized inner product value r_R, and the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 multiply the quantized value of the normalized inner product value r_R by the right channel correction coefficient c_R and by λ_R, a predetermined value greater than 0 and smaller than 1, to obtain the right channel subtraction gain β. Alternatively, the product λ_R × r_R of the normalized inner product value r_R and the predetermined value λ_R, which is greater than 0 and smaller than 1, may be the target of encoding in the right channel subtraction gain estimation unit 140 and of decoding in the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code Cβ represents the quantized value of the product λ_R × r_R, and the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 multiply the quantized value of the product λ_R × r_R by the right channel correction coefficient c_R to obtain the right channel subtraction gain β.
[[Example 4]]
The auditory quality problem described at the beginning of Example 3 arises when the correlation between the left channel input sound signal and the right channel input sound signal is small; it hardly arises when that correlation is large. Therefore, in Example 4, the left-right correlation coefficient γ, which is the correlation coefficient between the left channel input sound signal and the right channel input sound signal, is used instead of the predetermined value of Example 3. In this way, the larger the correlation between the left channel input sound signal and the right channel input sound signal, the higher the priority given to reducing the energy of the quantization error of the decoded sound signal, and the smaller that correlation, the higher the priority given to suppressing the degradation of auditory quality.
In Example 4, the encoding side differs from Examples 1 and 2, but the decoding side, that is, the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250, is the same as in Examples 1 and 2. The points in which Example 4 differs from Examples 1 and 2 are described below.
[[[Left-right relationship information estimation unit 180]]]
The coding device 100 of Example 4 also includes a left-right relationship information estimation unit 180, as indicated by the broken line in FIG. 1. The left channel input sound signal input to the coding device 100 and the right channel input sound signal input to the coding device 100 are input to the left-right relationship information estimation unit 180. The left-right relationship information estimation unit 180 obtains the left-right correlation coefficient γ from the input left channel input sound signal and right channel input sound signal and outputs it (step S180).
The left-right correlation coefficient γ is the correlation coefficient between the left channel input sound signal and the right channel input sound signal. It may be the correlation coefficient γ_0 between the sample sequence x_L(1), x_L(2), ..., x_L(T) of the left channel input sound signal and the sample sequence x_R(1), x_R(2), ..., x_R(T) of the right channel input sound signal, or it may be a correlation coefficient that takes a time difference into account, for example the correlation coefficient γ_τ between the sample sequence of the left channel input sound signal and the sample sequence of the right channel input sound signal located τ samples behind it.
This τ is information corresponding to the difference (the so-called arrival time difference) between the time it takes for the sound from the sound source that is mainly emitting sound in a given space to reach the left channel microphone and the time it takes for that sound to reach the right channel microphone, under the assumption that the left channel input sound signal is the sound signal obtained by AD-converting the sound picked up by a left channel microphone placed in that space and the right channel input sound signal is the sound signal obtained by AD-converting the sound picked up by a right channel microphone placed in that space; hereinafter it is referred to as the left-right time difference. The left-right time difference τ may be obtained by any well-known method, for example by the method described for the left-right relationship information estimation unit 181 of the second reference embodiment. That is, the correlation coefficient γ_τ mentioned above corresponds to the correlation coefficient between the sound signal that reaches the left channel microphone from the sound source and is picked up there and the sound signal that reaches the right channel microphone from that sound source and is picked up there.
[[[Left channel subtraction gain estimation unit 120]]]
Instead of step S120-13, the left channel subtraction gain estimation unit 120 obtains the value obtained by multiplying the normalized inner product value r_L obtained in step S120-11 or step S120-113, the left channel correction coefficient c_L obtained in step S120-12, and the left-right correlation coefficient γ obtained in step S180 (step S120-13''). Then, instead of step S120-14, the left channel subtraction gain estimation unit 120 obtains, as the left channel subtraction gain α, the candidate closest to the product γ × c_L × r_L obtained in step S120-13'' (that is, the quantized value of the product γ × c_L × r_L) among the stored left channel subtraction gain candidates α_cand(1), ..., α_cand(A), and obtains, as the left channel subtraction gain code Cα, the code corresponding to the left channel subtraction gain α among the stored codes Cα_cand(1), ..., Cα_cand(A) (step S120-14'').
[[[Right channel subtraction gain estimation unit 140]]]
Instead of step S140-13, the right channel subtraction gain estimation unit 140 obtains the value obtained by multiplying the normalized inner product value r_R obtained in step S140-11 or step S140-113, the right channel correction coefficient c_R obtained in step S140-12, and the left-right correlation coefficient γ obtained in step S180 (step S140-13''). Then, instead of step S140-14, the right channel subtraction gain estimation unit 140 obtains, as the right channel subtraction gain β, the candidate closest to the product γ × c_R × r_R obtained in step S140-13'' (that is, the quantized value of the product γ × c_R × r_R) among the stored right channel subtraction gain candidates β_cand(1), ..., β_cand(B), and obtains, as the right channel subtraction gain code Cβ, the code corresponding to the right channel subtraction gain β among the stored codes Cβ_cand(1), ..., Cβ_cand(B) (step S140-14'').
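The following is a minimal sketch of the Example 4 gain selection for one channel; it differs from the Example 3 arrangement only in that the adaptive left-right correlation coefficient γ of step S180 takes the place of the fixed λ. The names `gain_cand` and `code_cand` stand in for the stored candidate/code pairs and are illustrative.
```python
# A minimal sketch of steps S120-13''/S120-14'' (and S140-13''/S140-14'');
# illustrative only.
def select_gain_and_code(r, c, gamma, gain_cand, code_cand):
    target = gamma * c * r                      # step S120-13'' / S140-13''
    idx = min(range(len(gain_cand)),
              key=lambda i: abs(gain_cand[i] - target))
    return gain_cand[idx], code_cand[idx]       # step S120-14'' / S140-14''
```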
[[Modified example of Example 4]]
As described above, the correction coefficient c_L can be computed to the same value by both the coding device 100 and the decoding device 200. Therefore, the product γ × r_L of the normalized inner product value r_L and the left-right correlation coefficient γ may be the target of encoding in the left channel subtraction gain estimation unit 120 and of decoding in the left channel subtraction gain decoding unit 230, so that the left channel subtraction gain code Cα represents the quantized value of the product γ × r_L, and the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 multiply the quantized value of the product γ × r_L by the left channel correction coefficient c_L to obtain the left channel subtraction gain α.
The same applies to the right channel: the correction coefficient c_R can be computed to the same value by both the coding device 100 and the decoding device 200. Therefore, the product γ × r_R of the normalized inner product value r_R and the left-right correlation coefficient γ may be the target of encoding in the right channel subtraction gain estimation unit 140 and of decoding in the right channel subtraction gain decoding unit 250, so that the right channel subtraction gain code Cβ represents the quantized value of the product γ × r_R, and the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 multiply the quantized value of the product γ × r_R by the right channel correction coefficient c_R to obtain the right channel subtraction gain β.
<Second reference embodiment>
The coding device and the decoding device of the second reference embodiment will be described.
<< Encoding device 101 >>
As shown in FIG. 10, the coding device 101 of the second reference embodiment includes a downmix unit 110, a left channel subtraction gain estimation unit 120, a left channel signal subtraction unit 130, a right channel subtraction gain estimation unit 140, a right channel signal subtraction unit 150, a monaural coding unit 160, a stereo coding unit 170, a left-right relationship information estimation unit 181, and a time shift unit 191. The coding device 101 of the second reference embodiment differs from the coding device 100 of the first reference embodiment in that it includes the left-right relationship information estimation unit 181 and the time shift unit 191, in that the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 use the signal output by the time shift unit 191 instead of the signal output by the downmix unit 110, and in that, in addition to the codes described above, the left-right time difference code Cτ described below is also output. The other configurations and operations of the coding device 101 of the second reference embodiment are the same as those of the coding device 100 of the first reference embodiment. For each frame, the coding device 101 of the second reference embodiment performs the processing of steps S110 to S191 illustrated in FIG. 11. The points in which the coding device 101 of the second reference embodiment differs from the coding device 100 of the first reference embodiment are described below.
[Left-right relationship information estimation unit 181]
The left channel input sound signal input to the coding device 101 and the right channel input sound signal input to the coding device 101 are input to the left-right relationship information estimation unit 181. From the input left channel input sound signal and right channel input sound signal, the left-right relationship information estimation unit 181 obtains and outputs the left-right time difference τ and the left-right time difference code Cτ, which is a code representing the left-right time difference τ (step S181).
The left-right time difference τ is information corresponding to the difference (the so-called arrival time difference) between the time it takes for the sound from the sound source that is mainly emitting sound in a given space to reach the left channel microphone and the time it takes for that sound to reach the right channel microphone, under the assumption that the left channel input sound signal is the sound signal obtained by AD-converting the sound picked up by a left channel microphone placed in that space and the right channel input sound signal is the sound signal obtained by AD-converting the sound picked up by a right channel microphone placed in that space. In order for the left-right time difference τ to convey not only the arrival time difference but also which microphone the sound reaches earlier, the left-right time difference τ can take both positive and negative values with one of the input sound signals as the reference. That is, the left-right time difference τ is information indicating in which of the left channel input sound signal and the right channel input sound signal the same sound signal is contained earlier, and by how much. In the following, when the same sound signal is contained in the left channel input sound signal earlier than in the right channel input sound signal, the left channel is said to precede, and when the same sound signal is contained in the right channel input sound signal earlier than in the left channel input sound signal, the right channel is said to precede.
The left-right time difference τ may be obtained by any well-known method. For example, for each candidate number of samples τ_cand from a predetermined τ_max to a predetermined τ_min (for example, τ_max is a positive number and τ_min is a negative number), the left-right relationship information estimation unit 181 computes a value representing the magnitude of the correlation (hereinafter referred to as the correlation value) γ_cand between the sample sequence of the left channel input sound signal and the sample sequence of the right channel input sound signal located τ_cand samples behind it, and obtains, as the left-right time difference τ, the candidate number of samples τ_cand that maximizes the correlation value γ_cand. That is, in this example, the left-right time difference τ is a positive value when the left channel precedes and a negative value when the right channel precedes, and the absolute value of the left-right time difference τ indicates by how much the preceding channel precedes the other channel (the number of samples by which it precedes). For example, when the correlation value γ_cand is computed using only the samples within the frame, then, when τ_cand is a positive value, the absolute value of the correlation coefficient between the partial sample sequence x_R(1+τ_cand), x_R(2+τ_cand), ..., x_R(T) of the right channel input sound signal and the partial sample sequence x_L(1), x_L(2), ..., x_L(T-τ_cand) of the left channel input sound signal located τ_cand samples ahead of it is computed as the correlation value γ_cand, and when τ_cand is a negative value, the absolute value of the correlation coefficient between the partial sample sequence x_L(1-τ_cand), x_L(2-τ_cand), ..., x_L(T) of the left channel input sound signal and the partial sample sequence x_R(1), x_R(2), ..., x_R(T+τ_cand) of the right channel input sound signal located -τ_cand samples ahead of it is computed as the correlation value γ_cand.
Of course, one or more samples of the past input sound signal that immediately precede the sample sequence of the input sound signal of the current frame may also be used to compute the correlation value γ_cand; in this case, the sample sequences of the input sound signals of past frames are stored for a predetermined number of frames in a storage unit (not shown) in the left-right relationship information estimation unit 181.
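The following is a minimal sketch of the in-frame computation described above: τ is chosen as the candidate lag that maximizes the absolute correlation coefficient. The lag range is an illustrative assumption, and 0-based array indexing replaces the 1-based sample notation of the text.
```python
# A minimal sketch, assuming in-frame computation only; illustrative names.
import numpy as np

def estimate_time_difference(x_l, x_r, tau_min=-32, tau_max=32):
    """Return (tau, gamma): the lag maximizing |corrcoef| and that correlation value.

    Positive tau means the left channel precedes; negative tau means the right
    channel precedes.
    """
    best_tau, best_gamma = 0, -1.0
    for tau_cand in range(tau_min, tau_max + 1):
        if tau_cand >= 0:
            a = x_r[tau_cand:]              # x_R(1+tau), ..., x_R(T)
            b = x_l[:len(x_l) - tau_cand]   # x_L(1), ..., x_L(T-tau)
        else:
            a = x_l[-tau_cand:]             # x_L(1-tau), ..., x_L(T)
            b = x_r[:len(x_r) + tau_cand]   # x_R(1), ..., x_R(T+tau)
        if len(a) < 2:
            continue
        gamma_cand = np.corrcoef(a, b)[0, 1]
        if np.isnan(gamma_cand):
            continue
        gamma_cand = abs(gamma_cand)
        if gamma_cand > best_gamma:
            best_tau, best_gamma = tau_cand, gamma_cand
    return best_tau, best_gamma
```
The maximum correlation value returned alongside τ corresponds to the left-right correlation coefficient γ that is additionally output in step S180, as noted later in this description.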
As another example, instead of the absolute value of the correlation coefficient, the correlation value γ_cand may be computed using phase information of the signals, as follows. In this example, the left-right relationship information estimation unit 181 first obtains the frequency spectra X_L(k) and X_R(k) at each frequency k from 0 to T-1 by Fourier-transforming the left channel input sound signal x_L(1), x_L(2), ..., x_L(T) and the right channel input sound signal x_R(1), x_R(2), ..., x_R(T), respectively, as in equations (3-1) and (3-2).
[Equations (3-1) and (3-2): Fourier transforms of the left and right channel input sound signals]
Using the obtained frequency spectra X_L(k) and X_R(k), the left-right relationship information estimation unit 181 obtains the spectrum φ(k) of the phase difference at each frequency k by equation (3-3).
[Equation (3-3): phase difference spectrum φ(k)]
By inverse Fourier-transforming the obtained phase difference spectrum, a phase difference signal ψ(τ_cand) is obtained for each candidate number of samples τ_cand from τ_max to τ_min, as in equation (3-4).
[Equation (3-4): phase difference signal ψ(τ_cand)]
Since the absolute value of the obtained phase difference signal ψ(τ_cand) represents a kind of correlation corresponding to the plausibility of τ_cand as the time difference between the left channel input sound signal x_L(1), x_L(2), ..., x_L(T) and the right channel input sound signal x_R(1), x_R(2), ..., x_R(T), the absolute value of this phase difference signal ψ(τ_cand) for each candidate number of samples τ_cand is used as the correlation value γ_cand. The left-right relationship information estimation unit 181 obtains, as the left-right time difference τ, the candidate number of samples τ_cand that maximizes the correlation value γ_cand, which is the absolute value of the phase difference signal ψ(τ_cand). Note that, instead of using the absolute value of the phase difference signal ψ(τ_cand) as the correlation value γ_cand as it is, a normalized value may be used, for example, for each τ_cand, the relative difference between the absolute value of the phase difference signal ψ(τ_cand) and the average of the absolute values of the phase difference signals obtained for a plurality of candidate numbers of samples before and after τ_cand. That is, for each τ_cand, using a predetermined positive number τ_range, an average value may be obtained by equation (3-5), and the normalized correlation value obtained by equation (3-6) from the obtained average value ψ_c(τ_cand) and the phase difference signal ψ(τ_cand) may be used as γ_cand.
[Equations (3-5) and (3-6): average ψ_c(τ_cand) over neighboring candidates and the normalized correlation value]
The normalized correlation value obtained by equation (3-6) is a value between 0 and 1 inclusive, and has the property of being closer to 1 the more plausible τ_cand is as the left-right time difference and closer to 0 the less plausible τ_cand is as the left-right time difference.
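The following is a minimal sketch of the phase-based correlation value described above. Equations (3-1) to (3-6) are given only as images in the original, so the DFT, the normalized cross-spectrum phase, the inverse transform, and the neighborhood normalization below are assumed forms consistent with the prose (a GCC-PHAT-style computation); the lag range and `tau_range` are illustrative.
```python
# A minimal sketch, assuming standard DFT/GCC-PHAT-style forms for
# equations (3-1)-(3-6); not the patent's exact formulas.
import numpy as np

def phase_correlation_values(x_l, x_r, tau_min=-32, tau_max=32, tau_range=4):
    T = len(x_l)
    X_L = np.fft.fft(x_l)                            # assumed form of eq. (3-1)
    X_R = np.fft.fft(x_r)                            # assumed form of eq. (3-2)
    cross = X_L * np.conj(X_R)
    phi = cross / np.maximum(np.abs(cross), 1e-12)   # assumed form of eq. (3-3)
    psi = np.fft.ifft(phi)                           # assumed form of eq. (3-4)
    taus = np.arange(tau_min, tau_max + 1)
    abs_psi = np.abs(psi[taus % T])                  # |psi(tau_cand)| per candidate
    # assumed forms of eqs. (3-5)/(3-6): normalize by the average over neighbors
    gammas = np.empty_like(abs_psi)
    for i in range(len(taus)):
        lo, hi = max(0, i - tau_range), min(len(taus), i + tau_range + 1)
        avg = abs_psi[lo:hi].mean()
        gammas[i] = max(0.0, (abs_psi[i] - avg) / max(abs_psi[i], 1e-12))
    tau = int(taus[np.argmax(gammas)])
    return tau, dict(zip(taus.tolist(), gammas.tolist()))
```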
Further, the left-right relationship information estimation unit 181 encodes the left-right time difference τ with a predetermined coding method to obtain the left-right time difference code Cτ, a code from which the left-right time difference τ can be uniquely identified. As the predetermined coding method, a well-known coding method such as scalar quantization may be used. Note that the predetermined candidate numbers of samples may be the integer values from τ_max to τ_min, may include fractional or decimal values between τ_max and τ_min, and need not include every integer value between τ_max and τ_min. Also, τ_max = -τ_min may or may not hold. Furthermore, when targeting special input sound signals in which one particular channel always precedes, both τ_max and τ_min may be positive numbers, or both may be negative numbers.
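The following is a minimal sketch of one way to realize the scalar-quantization-style encoding and decoding of τ over a predetermined candidate set; the candidate set and the index-based code are illustrative assumptions.
```python
# A minimal sketch, assuming the code Cτ is the index of the closest candidate.
def make_candidates(tau_min=-32, tau_max=32):
    return list(range(tau_min, tau_max + 1))

def encode_tau(tau, candidates):
    # Cτ uniquely identifies τ as the index of the closest candidate.
    return min(range(len(candidates)), key=lambda i: abs(candidates[i] - tau))

def decode_tau(c_tau, candidates):
    return candidates[c_tau]
```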
Note that, when the coding device 101 estimates the subtraction gains based on the principle of minimizing the quantization error as in Example 4 or the modified example of Example 4 described in the first reference embodiment, the left-right relationship information estimation unit 181 further outputs, as the left-right correlation coefficient γ, the correlation value between the sample sequence of the left channel input sound signal and the sample sequence of the right channel input sound signal located τ samples (the left-right time difference) behind it, that is, the maximum of the correlation values γ_cand computed for the candidate numbers of samples τ_cand from τ_max to τ_min (step S180).
[Time shift unit 191]
The downmix signal x_M(1), x_M(2), ..., x_M(T) output by the downmix unit 110 and the left-right time difference τ output by the left-right relationship information estimation unit 181 are input to the time shift unit 191. When the left-right time difference τ is a positive value (that is, when the left-right time difference τ indicates that the left channel precedes), the time shift unit 191 outputs the downmix signal x_M(1), x_M(2), ..., x_M(T) as it is to the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130 (that is, determines that it is to be used by the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130), and outputs the delayed downmix signal x_M'(1), x_M'(2), ..., x_M'(T), which is the signal x_M(1-|τ|), x_M(2-|τ|), ..., x_M(T-|τ|) obtained by delaying the downmix signal by |τ| samples (the number of samples corresponding to the absolute value of the left-right time difference τ, that is, to the magnitude represented by the left-right time difference τ), to the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150 (that is, determines that it is to be used by the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150). When the left-right time difference τ is a negative value (that is, when the left-right time difference τ indicates that the right channel precedes), the time shift unit 191 outputs the delayed downmix signal x_M'(1), x_M'(2), ..., x_M'(T), which is the signal x_M(1-|τ|), x_M(2-|τ|), ..., x_M(T-|τ|) obtained by delaying the downmix signal by |τ| samples, to the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130 (that is, determines that it is to be used by the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130), and outputs the downmix signal x_M(1), x_M(2), ..., x_M(T) as it is to the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150 (that is, determines that it is to be used by the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150). When the left-right time difference τ is 0 (that is, when the left-right time difference τ indicates that neither channel precedes), the time shift unit 191 outputs the downmix signal x_M(1), x_M(2), ..., x_M(T) as it is to the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 (that is, determines that it is to be used by all four of these units) (step S191). In other words, for whichever of the left channel and the right channel has the shorter arrival time mentioned above, the input downmix signal is output as it is to the subtraction gain estimation unit and the signal subtraction unit of that channel, and for the channel with the longer arrival time, the signal obtained by delaying the input downmix signal by the absolute value |τ| of the left-right time difference τ is output to the subtraction gain estimation unit and the signal subtraction unit of that channel.
Since the time shift unit 191 uses the downmix signals of past frames to obtain the delayed downmix signal, the downmix signals input in past frames are stored for a predetermined number of frames in a storage unit (not shown) in the time shift unit 191. Further, when the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140 obtain the left channel subtraction gain α and the right channel subtraction gain β not by a method based on the principle of minimizing the quantization error but by a well-known method such as that exemplified in Patent Document 1, the coding device 101 may be provided, at a stage subsequent to the monaural coding unit 160 or within the monaural coding unit 160, with means for obtaining the local decoded signal corresponding to the monaural code CM, and the time shift unit 191 may perform the above-described processing using, instead of the downmix signal x_M(1), x_M(2), ..., x_M(T), the quantized downmix signal ^x_M(1), ^x_M(2), ..., ^x_M(T), which is the local decoded signal of the monaural coding. In this case, the time shift unit 191 outputs the quantized downmix signal ^x_M(1), ^x_M(2), ..., ^x_M(T) instead of the downmix signal x_M(1), x_M(2), ..., x_M(T), and outputs the delayed quantized downmix signal ^x_M'(1), ^x_M'(2), ..., ^x_M'(T) instead of the delayed downmix signal x_M'(1), x_M'(2), ..., x_M'(T).
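The following is a minimal sketch of the routing performed by the time shift unit 191. The `history` buffer is an illustrative stand-in for the stored downmix samples of past frames that allow the delayed signal to reach back before the start of the current frame; it must hold at least |τ| samples.
```python
# A minimal sketch of step S191; illustrative names, 0-based indexing.
import numpy as np

def time_shift_downmix(x_m, history, tau):
    """Return (signal_for_left_units, signal_for_right_units) for the current frame.

    x_m     : downmix signal of the current frame, length T
    history : concatenated downmix samples of preceding frames (len >= |tau|)
    tau     : left-right time difference (positive: left channel precedes)
    """
    T = len(x_m)
    buf = np.concatenate([history, x_m])
    # downmix delayed by |tau| samples: x_M(1-|tau|), ..., x_M(T-|tau|)
    delayed = buf[len(buf) - T - abs(tau): len(buf) - abs(tau)]
    if tau > 0:        # left precedes: right-channel units get the delayed signal
        return x_m, delayed
    if tau < 0:        # right precedes: left-channel units get the delayed signal
        return delayed, x_m
    return x_m, x_m    # tau == 0: neither channel precedes
```
The time shift unit 281 on the decoding side, described later, applies the same routing to the monaural decoded sound signal instead of the downmix signal.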
[Left channel subtraction gain estimation unit 120, left channel signal subtraction unit 130, right channel subtraction gain estimation unit 140, right channel signal subtraction unit 150]
The left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 perform the same operations as described in the first reference embodiment, using, instead of the downmix signal x_M(1), x_M(2), ..., x_M(T) output by the downmix unit 110, the downmix signal x_M(1), x_M(2), ..., x_M(T) or the delayed downmix signal x_M'(1), x_M'(2), ..., x_M'(T) input from the time shift unit 191 (steps S120, S130, S140, S150). That is, the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 perform the same operations as described in the first reference embodiment, using the downmix signal x_M(1), x_M(2), ..., x_M(T) or the delayed downmix signal x_M'(1), x_M'(2), ..., x_M'(T) determined for them by the time shift unit 191. When the time shift unit 191 outputs the quantized downmix signal ^x_M(1), ^x_M(2), ..., ^x_M(T) instead of the downmix signal x_M(1), x_M(2), ..., x_M(T) and the delayed quantized downmix signal ^x_M'(1), ^x_M'(2), ..., ^x_M'(T) instead of the delayed downmix signal x_M'(1), x_M'(2), ..., x_M'(T), these units perform the above-described processing using the quantized downmix signal ^x_M(1), ^x_M(2), ..., ^x_M(T) or the delayed quantized downmix signal ^x_M'(1), ^x_M'(2), ..., ^x_M'(T) input from the time shift unit 191.
<< Decoding device 201 >>
As shown in FIG. 12, the decoding device 201 of the second reference embodiment includes a monaural decoding unit 210, a stereo decoding unit 220, a left channel subtraction gain decoding unit 230, a left channel signal addition unit 240, a right channel subtraction gain decoding unit 250, a right channel signal addition unit 260, a left-right time difference decoding unit 271, and a time shift unit 281. The decoding device 201 of the second reference embodiment differs from the decoding device 200 of the first reference embodiment in that, in addition to the codes described above, the left-right time difference code Cτ described below is also input, in that it includes the left-right time difference decoding unit 271 and the time shift unit 281, and in that the left channel signal addition unit 240 and the right channel signal addition unit 260 use the signal output by the time shift unit 281 instead of the signal output by the monaural decoding unit 210. The other configurations and operations of the decoding device 201 of the second reference embodiment are the same as those of the decoding device 200 of the first reference embodiment. For each frame, the decoding device 201 of the second reference embodiment performs the processing of steps S210 to S281 illustrated in FIG. 13. The points in which the decoding device 201 of the second reference embodiment differs from the decoding device 200 of the first reference embodiment are described below.
[Left-right time difference decoding unit 271]
The left-right time difference code Cτ input to the decoding device 201 is input to the left-right time difference decoding unit 271. The left-right time difference decoding unit 271 decodes the left-right time difference code Cτ with a predetermined decoding method to obtain and output the left-right time difference τ (step S271). As the predetermined decoding method, a decoding method corresponding to the coding method used by the left-right relationship information estimation unit 181 of the corresponding coding device 101 is used. The left-right time difference τ obtained by the left-right time difference decoding unit 271 is the same value as the left-right time difference τ obtained by the left-right relationship information estimation unit 181 of the corresponding coding device 101, and is a value within the range from τ_max to τ_min.
[Time shift unit 281]
The monaural decoded sound signal ^x_M(1), ^x_M(2), ..., ^x_M(T) output by the monaural decoding unit 210 and the left-right time difference τ output by the left-right time difference decoding unit 271 are input to the time shift unit 281. When the left-right time difference τ is a positive value (that is, when the left-right time difference τ indicates that the left channel precedes), the time shift unit 281 outputs the monaural decoded sound signal ^x_M(1), ^x_M(2), ..., ^x_M(T) as it is to the left channel signal addition unit 240 (that is, determines that it is to be used by the left channel signal addition unit 240), and outputs the delayed monaural decoded sound signal ^x_M'(1), ^x_M'(2), ..., ^x_M'(T), which is the signal ^x_M(1-|τ|), ^x_M(2-|τ|), ..., ^x_M(T-|τ|) obtained by delaying the monaural decoded sound signal by |τ| samples, to the right channel signal addition unit 260 (that is, determines that it is to be used by the right channel signal addition unit 260). When the left-right time difference τ is a negative value (that is, when the left-right time difference τ indicates that the right channel precedes), the time shift unit 281 outputs the delayed monaural decoded sound signal ^x_M'(1), ^x_M'(2), ..., ^x_M'(T), which is the signal ^x_M(1-|τ|), ^x_M(2-|τ|), ..., ^x_M(T-|τ|) obtained by delaying the monaural decoded sound signal by |τ| samples, to the left channel signal addition unit 240 (that is, determines that it is to be used by the left channel signal addition unit 240), and outputs the monaural decoded sound signal ^x_M(1), ^x_M(2), ..., ^x_M(T) as it is to the right channel signal addition unit 260 (that is, determines that it is to be used by the right channel signal addition unit 260). When the left-right time difference τ is 0 (that is, when the left-right time difference τ indicates that neither channel precedes), the time shift unit 281 outputs the monaural decoded sound signal ^x_M(1), ^x_M(2), ..., ^x_M(T) as it is to the left channel signal addition unit 240 and the right channel signal addition unit 260 (that is, determines that it is to be used by both units) (step S281). Since the time shift unit 281 uses the monaural decoded sound signals of past frames to obtain the delayed monaural decoded sound signal, the monaural decoded sound signals input in past frames are stored for a predetermined number of frames in a storage unit (not shown) in the time shift unit 281.
[Left channel signal addition unit 240, right channel signal addition unit 260]
The left channel signal addition unit 240 and the right channel signal addition unit 260 perform the same operations as described in the first reference embodiment, except that, instead of the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T) output by the monaural decoding unit 210, they use the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T) or the delayed monaural decoded sound signal ^xM'(1), ^xM'(2), ..., ^xM'(T) input from the time shift unit 281 (steps S240, S260). That is, the left channel signal addition unit 240 and the right channel signal addition unit 260 perform the same operations as described in the first reference embodiment using whichever of the monaural decoded sound signal ^xM(1), ^xM(2), ..., ^xM(T) and the delayed monaural decoded sound signal ^xM'(1), ^xM'(2), ..., ^xM'(T) the time shift unit 281 has decided they should use.
<First Embodiment>
The first embodiment is a modification of the coding device 101 of the second reference embodiment in which the downmix signal is generated in consideration of the relationship between the input sound signal of the left channel and the input sound signal of the right channel. The coding device of the first embodiment is described below. Since the code obtained by the coding device of the first embodiment can be decoded by the decoding device 201 of the second reference embodiment, the description of the decoding device is omitted.
<<Coding device 102>>
As shown in FIG. 10, the coding device 102 of the first embodiment includes a downmix unit 112, a left channel subtraction gain estimation unit 120, a left channel signal subtraction unit 130, a right channel subtraction gain estimation unit 140, a right channel signal subtraction unit 150, a monaural coding unit 160, a stereo coding unit 170, a left-right relationship information estimation unit 182, and a time shift unit 191. The coding device 102 of the first embodiment differs from the coding device 101 of the second reference embodiment in that it includes the left-right relationship information estimation unit 182 in place of the left-right relationship information estimation unit 181 and the downmix unit 112 in place of the downmix unit 110, and in that, as indicated by the broken lines in FIG. 10, the left-right relationship information estimation unit 182 obtains and outputs the left-right correlation coefficient γ and the preceding channel information, which are input to and used by the downmix unit 112. The other configurations and operations of the coding device 102 of the first embodiment are the same as those of the coding device 101 of the second reference embodiment. For each frame, the coding device 102 of the first embodiment performs the processing of steps S112 to S191 illustrated in FIG. 14. The points in which the coding device 102 of the first embodiment differs from the coding device 101 of the second reference embodiment are described below.
[Left-right relationship information estimation unit 182]
The input sound signal of the left channel input to the coding device 102 and the input sound signal of the right channel input to the coding device 102 are input to the left-right relationship information estimation unit 182. From the input sound signal of the left channel and the input sound signal of the right channel, the left-right relationship information estimation unit 182 obtains and outputs the left-right time difference τ, the left-right time difference code Cτ, which is a code representing the left-right time difference τ, the left-right correlation coefficient γ, and the preceding channel information (step S182). The process by which the left-right relationship information estimation unit 182 obtains the left-right time difference τ and the left-right time difference code Cτ is the same as in the left-right relationship information estimation unit 181 of the second reference embodiment.
The left-right correlation coefficient γ is information corresponding to the correlation coefficient between the sound signal that, under the assumption described above for the left-right relationship information estimation unit 181 of the second reference embodiment, travels from the sound source to the microphone for the left channel and is picked up there, and the sound signal that travels from the same sound source to the microphone for the right channel and is picked up there. The preceding channel information is information corresponding to which microphone the sound emitted by the sound source reaches earlier; that is, it is information indicating which of the input sound signal of the left channel and the input sound signal of the right channel contains the same sound signal earlier, in other words, information indicating which of the left channel and the right channel precedes.
In the example described above for the left-right relationship information estimation unit 181 of the second reference embodiment, the left-right relationship information estimation unit 182 obtains and outputs, as the left-right correlation coefficient γ, the correlation value between the sample sequence of the input sound signal of the left channel and the sample sequence of the input sound signal of the right channel located τ samples behind it, that is, the maximum of the correlation values γcand computed for the candidate sample numbers τcand from τmax to τmin. Further, when the left-right time difference τ is a positive value, the left-right relationship information estimation unit 182 obtains and outputs, as the preceding channel information, information indicating that the left channel precedes; when the left-right time difference τ is a negative value, it obtains and outputs, as the preceding channel information, information indicating that the right channel precedes. When the left-right time difference τ is 0, the left-right relationship information estimation unit 182 may obtain and output, as the preceding channel information, information indicating that the left channel precedes, or information indicating that the right channel precedes, but it is preferable to obtain and output information indicating that neither channel precedes.
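As an illustrative sketch only, the search over candidate lags that yields the left-right time difference, the left-right correlation coefficient γ, and the preceding channel information can be written as follows in Python; the normalized absolute-correlation measure, the integer candidate range, the per-frame segmentation, the function name, and the string encoding of the preceding channel information are all assumptions of the example, not of the specification.

import numpy as np

def estimate_left_right_relation(x_left, x_right, tau_min, tau_max):
    # x_left, x_right: input sound signals of the current frame (equal length);
    # samples of neighbouring frames are ignored in this simplified sketch.
    # Score every candidate lag tau_cand in [tau_min, tau_max] by a normalized
    # absolute correlation gamma_cand (a value between 0 and 1), keep the lag
    # with the maximum score, and derive the preceding channel from the sign of
    # that lag (positive: left channel precedes).
    best_tau, best_gamma = 0, -1.0
    for tau_cand in range(tau_min, tau_max + 1):
        # pair xL(t) with xR(t + tau_cand), i.e. the right channel sample
        # sequence shifted tau_cand samples behind the left one
        if tau_cand >= 0:
            left_seg = x_left[:len(x_left) - tau_cand]
            right_seg = x_right[tau_cand:]
        else:
            left_seg = x_left[-tau_cand:]
            right_seg = x_right[:len(x_right) + tau_cand]
        denom = np.linalg.norm(left_seg) * np.linalg.norm(right_seg)
        gamma_cand = abs(float(np.dot(left_seg, right_seg))) / denom if denom > 0 else 0.0
        if gamma_cand > best_gamma:
            best_tau, best_gamma = tau_cand, gamma_cand
    if best_tau > 0:
        preceding = "left"      # left channel precedes
    elif best_tau < 0:
        preceding = "right"     # right channel precedes
    else:
        preceding = "none"      # neither channel precedes
    return best_tau, best_gamma, preceding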
[Downmix unit 112]
The input sound signal of the left channel input to the coding device 102, the input sound signal of the right channel input to the coding device 102, the left-right correlation coefficient γ output by the left-right relationship information estimation unit 182, and the preceding channel information output by the left-right relationship information estimation unit 182 are input to the downmix unit 112. The downmix unit 112 obtains and outputs a downmix signal by taking a weighted average of the input sound signal of the left channel and the input sound signal of the right channel such that, of the two, the input sound signal of the preceding channel is included in the downmix signal more heavily the larger the left-right correlation coefficient γ is (step S112).
For example, if the absolute value of the correlation coefficient or a normalized value is used as the correlation value, as in the example described above for the left-right relationship information estimation unit 181 of the second reference embodiment, the obtained left-right correlation coefficient γ is a value between 0 and 1, so the downmix unit 112 may set, for each corresponding sample number t, the downmix signal xM(t) to the weighted sum of the input sound signal xL(t) of the left channel and the input sound signal xR(t) of the right channel with weights determined by the left-right correlation coefficient γ. Specifically, the downmix unit 112 may obtain the downmix signal xM(t) as xM(t) = ((1+γ)/2) × xL(t) + ((1-γ)/2) × xR(t) when the preceding channel information indicates that the left channel precedes, and as xM(t) = ((1-γ)/2) × xL(t) + ((1+γ)/2) × xR(t) when the preceding channel information indicates that the right channel precedes. When the downmix unit 112 obtains the downmix signal in this way, the smaller the left-right correlation coefficient γ, that is, the smaller the correlation between the input sound signal of the left channel and the input sound signal of the right channel, the closer the downmix signal is to the average of the two input sound signals; and the larger the left-right correlation coefficient γ, that is, the larger the correlation between the two input sound signals, the closer the downmix signal is to the input sound signal of the preceding channel.
Note that, when neither channel precedes, the downmix unit 112 preferably obtains and outputs the downmix signal by averaging the input sound signal of the left channel and the input sound signal of the right channel so that both are included in the downmix signal with the same weight. Therefore, when the preceding channel information indicates that neither channel precedes, the downmix unit 112 sets, for each sample number t, the downmix signal xM(t) to the average of the input sound signal xL(t) of the left channel and the input sound signal xR(t) of the right channel, that is, xM(t) = (xL(t) + xR(t))/2.
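A minimal sketch of this weighting rule follows, assuming γ lies between 0 and 1 as described above; the function name and the string encoding of the preceding channel information are assumptions of the sketch, not of the specification.

import numpy as np

def downmix(x_left, x_right, gamma, preceding):
    # Weighted average of step S112: the preceding channel gets weight
    # (1 + gamma) / 2 and the other channel (1 - gamma) / 2, so the weights
    # always sum to 1; with gamma = 0 both weights are 0.5 (plain average),
    # and with gamma = 1 the downmix equals the preceding channel's signal.
    if preceding == "left":
        w_left, w_right = (1.0 + gamma) / 2.0, (1.0 - gamma) / 2.0
    elif preceding == "right":
        w_left, w_right = (1.0 - gamma) / 2.0, (1.0 + gamma) / 2.0
    else:  # neither channel precedes
        w_left = w_right = 0.5
    return w_left * np.asarray(x_left) + w_right * np.asarray(x_right)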
<Second Embodiment>
The coding device 100 of the first reference embodiment may likewise be modified to generate the downmix signal in consideration of the relationship between the input sound signal of the left channel and the input sound signal of the right channel; this form is described as the second embodiment. Since the code obtained by the coding device of the second embodiment can be decoded by the decoding device 200 of the first reference embodiment, the description of the decoding device is omitted.
<<Coding device 103>>
As shown in FIG. 1, the coding device 103 of the second embodiment includes a downmix unit 112, a left channel subtraction gain estimation unit 120, a left channel signal subtraction unit 130, a right channel subtraction gain estimation unit 140, a right channel signal subtraction unit 150, a monaural coding unit 160, a stereo coding unit 170, and a left-right relationship information estimation unit 183. The coding device 103 of the second embodiment differs from the coding device 100 of the first reference embodiment in that it includes the downmix unit 112 in place of the downmix unit 110 and, as indicated by the broken lines in FIG. 1, also includes the left-right relationship information estimation unit 183, which obtains and outputs the left-right correlation coefficient γ and the preceding channel information, and in that the output left-right correlation coefficient γ and preceding channel information are input to and used by the downmix unit 112. The other configurations and operations of the coding device 103 of the second embodiment are the same as those of the coding device 100 of the first reference embodiment. The operation of the downmix unit 112 of the coding device 103 of the second embodiment is the same as that of the downmix unit 112 of the coding device 102 of the first embodiment. For each frame, the coding device 103 of the second embodiment performs the processing of steps S112 to S183 illustrated in FIG. 15. The points in which the coding device 103 of the second embodiment differs from both the coding device 100 of the first reference embodiment and the coding device 102 of the first embodiment are described below.
[Left-right relationship information estimation unit 183]
The input sound signal of the left channel input to the coding device 103 and the input sound signal of the right channel input to the coding device 103 are input to the left-right relationship information estimation unit 183. From the input sound signal of the left channel and the input sound signal of the right channel, the left-right relationship information estimation unit 183 obtains and outputs the left-right correlation coefficient γ and the preceding channel information (step S183).
The left-right correlation coefficient γ and the preceding channel information obtained and output by the left-right relationship information estimation unit 183 are the same as those described in the first embodiment. That is, the left-right relationship information estimation unit 183 may be the same as the left-right relationship information estimation unit 182, except that it does not need to obtain and output the left-right time difference τ and the left-right time difference code Cτ.
For example, the left-right relationship information estimation unit 183 obtains and outputs, as the left-right correlation coefficient γ, the maximum of the correlation values γcand computed, for each candidate sample number τcand from τmax to τmin, between the sample sequence of the input sound signal of the left channel and the sample sequence of the input sound signal of the right channel located τcand samples behind it. When the τcand giving the maximum correlation value is a positive value, it obtains and outputs, as the preceding channel information, information indicating that the left channel precedes; when the τcand giving the maximum correlation value is a negative value, it obtains and outputs, as the preceding channel information, information indicating that the right channel precedes. When the τcand giving the maximum correlation value is 0, the left-right relationship information estimation unit 183 may obtain and output, as the preceding channel information, information indicating that the left channel precedes, or information indicating that the right channel precedes, but it is preferable to obtain and output information indicating that neither channel precedes.
<Third Embodiment>
A configuration that obtains the downmix signal in consideration of the relationship between the input sound signal of the left channel and the input sound signal of the right channel may also be adopted in a coding device that stereo-encodes the input sound signals of the channels themselves rather than the difference signals of the channels; this form is described as the third embodiment.
<<Coding device 104>>
As shown in FIG. 16, the coding device 104 of the third embodiment includes a left-right relationship information estimation unit 183, a downmix unit 112, a monaural coding unit 160, and a stereo coding unit 174. For each frame, the coding device 104 of the third embodiment performs the processing of steps S183, S112, S160 and S174 illustrated in FIG. 17. The coding device 104 of the third embodiment is described below with reference to the description of the second embodiment as appropriate.
[Left-right relationship information estimation unit 183]
The left-right relationship information estimation unit 183 is the same as the left-right relationship information estimation unit 183 of the second embodiment. The input sound signal of the left channel input to the coding device 104 and the input sound signal of the right channel input to the coding device 104 are input to the left-right relationship information estimation unit 183. From the input sound signal of the left channel and the input sound signal of the right channel, the left-right relationship information estimation unit 183 obtains and outputs the left-right correlation coefficient γ, which is the correlation coefficient between the input sound signal of the left channel and the input sound signal of the right channel, and the preceding channel information, which is information indicating which of the input sound signal of the left channel and the input sound signal of the right channel precedes (step S183).
[Downmix unit 112]
The downmix unit 112 is the same as the downmix unit 112 of the second embodiment. The input sound signal of the left channel input to the coding device 104, the input sound signal of the right channel input to the coding device 104, the left-right correlation coefficient γ output by the left-right relationship information estimation unit 183, and the preceding channel information output by the left-right relationship information estimation unit 183 are input to the downmix unit 112. The downmix unit 112 obtains and outputs a downmix signal by taking a weighted average of the input sound signal of the left channel and the input sound signal of the right channel such that, of the two, the input sound signal of the preceding channel is included in the downmix signal more heavily the larger the left-right correlation coefficient γ is (step S112).
For example, where t is the sample number, xL(t) is the input sound signal of the left channel, xR(t) is the input sound signal of the right channel, and xM(t) is the downmix signal, the downmix unit 112 obtains the downmix signal, for each sample number t, by xM(t) = ((1+γ)/2) × xL(t) + ((1-γ)/2) × xR(t) when the preceding channel information indicates that the left channel precedes, by xM(t) = ((1-γ)/2) × xL(t) + ((1+γ)/2) × xR(t) when the preceding channel information indicates that the right channel precedes, and by xM(t) = (xL(t) + xR(t))/2 when the preceding channel information indicates that neither channel precedes.
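As a concrete illustration of these weights, with γ = 0.8 and the left channel preceding, the downmix becomes xM(t) = 0.9 × xL(t) + 0.1 × xR(t); with γ = 0, both weights are 0.5 and the formula reduces to the plain average used when neither channel precedes.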
[Monaural coding unit 160]
The monaural coding unit 160 is the same as the monaural coding unit 160 of the second embodiment. The downmix signal output by the downmix unit 112 is input to the monaural coding unit 160. The monaural coding unit 160 encodes the input downmix signal to obtain and output a monaural code CM (step S160). The monaural coding unit 160 may use any coding scheme, for example a coding scheme such as the 3GPP EVS standard. The coding scheme may be one that performs the coding process independently of the stereo coding unit 174 described later, that is, without using the stereo code CS' obtained by the stereo coding unit 174 or the information obtained in the coding process performed by the stereo coding unit 174, or it may be one that performs the coding process using the stereo code CS' obtained by the stereo coding unit 174 or the information obtained in the coding process performed by the stereo coding unit 174.
[Stereo coding unit 174]
The input sound signal of the left channel input to the coding device 104 and the input sound signal of the right channel input to the coding device 104 are input to the stereo coding unit 174. The stereo coding unit 174 encodes the input sound signal of the left channel and the input sound signal of the right channel to obtain and output a stereo code CS' (step S174). The stereo coding unit 174 may use any coding scheme; for example, it may use a stereo coding scheme corresponding to the stereo decoding scheme of the MPEG-4 AAC standard, or a coding scheme that encodes the input sound signal of the left channel and the input sound signal of the right channel independently of each other, in which case all the codes obtained by the coding are combined into the stereo code CS'. The coding scheme may be one that performs the coding process independently of the monaural coding unit 160, that is, without using the monaural code CM obtained by the monaural coding unit 160 or the information obtained in the coding process performed by the monaural coding unit 160, or it may be one that performs the coding process using the monaural code CM obtained by the monaural coding unit 160 or the information obtained in the coding process performed by the monaural coding unit 160.
<Fourth Embodiment>
As is clear from the description of the embodiments above, any coding device that at least encodes a downmix signal obtained from the input sound signal of the left channel and the input sound signal of the right channel to obtain a code may adopt a configuration that obtains the downmix signal in consideration of the relationship between the input sound signal of the left channel and the input sound signal of the right channel. Likewise, not only a coding device but any signal processing device that at least processes a downmix signal obtained from the input sound signal of the left channel and the input sound signal of the right channel to obtain a signal processing result may adopt a configuration that obtains the downmix signal in consideration of that relationship. Further, a downmix device used upstream of such coding devices or signal processing devices may also adopt a configuration that obtains the downmix signal in consideration of the relationship between the input sound signal of the left channel and the input sound signal of the right channel. These forms are described as the fourth embodiment.
<<Sound signal coding device 105>>
As shown in FIG. 18, the sound signal coding device 105 of the fourth embodiment includes a left-right relationship information estimation unit 183, a downmix unit 112, and a coding unit 195. For each frame, the sound signal coding device 105 of the fourth embodiment performs the processing of steps S183, S112 and S195 illustrated in FIG. 19. The sound signal coding device 105 of the fourth embodiment is described below with reference to the description of the second embodiment as appropriate.
[Left-right relationship information estimation unit 183]
The left-right relationship information estimation unit 183 is the same as the left-right relationship information estimation unit 183 of the second embodiment; from the input sound signal of the left channel and the input sound signal of the right channel, it obtains and outputs the left-right correlation coefficient γ, which is the correlation coefficient between the input sound signal of the left channel and the input sound signal of the right channel, and the preceding channel information, which is information indicating which of the input sound signal of the left channel and the input sound signal of the right channel precedes (step S183).
[Downmix unit 112]
The downmix unit 112 is the same as the downmix unit 112 of the second embodiment; it obtains and outputs a downmix signal by taking a weighted average of the input sound signal of the left channel and the input sound signal of the right channel such that, of the two, the input sound signal of the preceding channel is included in the downmix signal more heavily the larger the left-right correlation coefficient γ is (step S112).
[Coding unit 195]
At least the downmix signal output by the downmix unit 112 is input to the coding unit 195. The coding unit 195 at least encodes the input downmix signal to obtain and output a sound signal code (step S195). The coding unit 195 may also encode the input sound signal of the left channel and the input sound signal of the right channel, and may include the codes obtained by this coding in the sound signal code it outputs. In this case, as indicated by the broken lines in FIG. 18, the input sound signal of the left channel and the input sound signal of the right channel are also input to the coding unit 195.
<<Sound signal processing device 305>>
As shown in FIG. 20, the sound signal processing device 305 of the fourth embodiment includes a left-right relationship information estimation unit 183, a downmix unit 112, and a signal processing unit 315. For each frame, the sound signal processing device 305 of the fourth embodiment performs the processing of steps S183, S112 and S315 illustrated in FIG. 21. The points in which the sound signal processing device 305 of the fourth embodiment differs from the sound signal coding device 105 of the fourth embodiment are described below.
[Signal processing unit 315]
At least the downmix signal output by the downmix unit 112 is input to the signal processing unit 315. The signal processing unit 315 at least processes the input downmix signal to obtain and output a signal processing result (step S315). The signal processing unit 315 may also process the input sound signal of the left channel and the input sound signal of the right channel to obtain the signal processing result; in this case, as indicated by the broken lines in FIG. 20, the input sound signal of the left channel and the input sound signal of the right channel are also input to the signal processing unit 315. For example, the signal processing unit 315 may perform signal processing using the downmix signal on the input sound signal of each channel and obtain the output sound signal of each channel as the signal processing result, or it may perform this signal processing on the decoded sound signal of the left channel and the decoded sound signal of the right channel obtained by decoding the code CS' obtained by the stereo coding unit 174 of the third embodiment with a decoding device including a decoding unit corresponding to the stereo coding unit 174. That is, it is not essential that the input sound signal of the left channel and the input sound signal of the right channel input to the sound signal processing device 305 be digital speech or acoustic signals obtained by picking up sound with two microphones and AD-converting it; they may be decoded sound signals of the left and right channels obtained by decoding codes, or sound signals obtained in any manner as long as they are two-channel stereo sound signals.
When, for example, the input sound signals of the left and right channels input to the sound signal processing device 305 are decoded sound signals of the left and right channels obtained by decoding codes in another device, one or both of the same left-right correlation coefficient γ and preceding channel information as the left-right relationship information estimation unit 183 would obtain may already have been obtained by that other device. When one or both of the left-right correlation coefficient γ and the preceding channel information have been obtained by another device, the left-right correlation coefficient γ and/or the preceding channel information obtained by the other device may be input to the sound signal processing device 305, as indicated by the dash-dot line in FIG. 20. In this case, the left-right relationship information estimation unit 183 only has to obtain whichever of the left-right correlation coefficient γ and the preceding channel information was not input to the sound signal processing device 305; when both the left-right correlation coefficient γ and the preceding channel information are input to the sound signal processing device 305, the sound signal processing device 305 need not include the left-right relationship information estimation unit 183 and need not perform step S183. That is, as indicated by the two-dot chain line in FIG. 20, the sound signal processing device 305 may include a left-right relationship information acquisition unit 185, and the left-right relationship information acquisition unit 185 may obtain and output the left-right correlation coefficient γ, which is the correlation coefficient between the input sound signal of the left channel and the input sound signal of the right channel, and the preceding channel information, which is information indicating which of the input sound signal of the left channel and the input sound signal of the right channel precedes (step S185). Note that the left-right relationship information estimation unit 183 and step S183 of each device described above can also be regarded as falling within the category of the left-right relationship information acquisition unit 185 and step S185.
<<Sound signal downmix device 405>>
As shown in FIG. 22, the sound signal downmix device 405 of the fourth embodiment includes a left-right relationship information acquisition unit 185 and a downmix unit 112. For each frame, the sound signal downmix device 405 performs the processing of steps S185 and S112 illustrated in FIG. 23. The sound signal downmix device 405 is described below with reference to the description of the second embodiment as appropriate. As with the sound signal processing device 305, the input sound signal of the left channel and the input sound signal of the right channel input to the sound signal downmix device 405 may be digital speech or acoustic signals obtained by picking up sound with two microphones and AD-converting it, decoded sound signals of the left and right channels obtained by decoding codes, or sound signals obtained in any manner as long as they are two-channel stereo sound signals.
[Left-right relationship information acquisition unit 185]
The left-right relationship information acquisition unit 185 obtains and outputs the left-right correlation coefficient γ, which is the correlation coefficient between the input sound signal of the left channel and the input sound signal of the right channel, and the preceding channel information, which is information indicating which of the input sound signal of the left channel and the input sound signal of the right channel precedes (step S185).
When both the left-right correlation coefficient γ and the preceding channel information have been obtained by another device, the left-right relationship information acquisition unit 185 obtains the left-right correlation coefficient γ and the preceding channel information input to the sound signal downmix device 405 from the other device and outputs them to the downmix unit 112, as indicated by the dash-dot line in FIG. 22.
When neither the left-right correlation coefficient γ nor the preceding channel information has been obtained by another device, the left-right relationship information acquisition unit 185 includes a left-right relationship information estimation unit 183, as indicated by the broken line in FIG. 22. The left-right relationship information estimation unit 183 obtains the left-right correlation coefficient γ and the preceding channel information from the input sound signal of the left channel and the input sound signal of the right channel, in the same manner as the left-right relationship information estimation unit 183 of the second embodiment, and outputs them to the downmix unit 112.
When one of the left-right correlation coefficient γ and the preceding channel information has not been obtained by another device, the left-right relationship information acquisition unit 185 includes a left-right relationship information estimation unit 183, as indicated by the broken line in FIG. 22. The left-right relationship information estimation unit 183 of the left-right relationship information acquisition unit 185 obtains whichever of the left-right correlation coefficient γ and the preceding channel information has not been obtained by the other device from the input sound signal of the left channel and the input sound signal of the right channel, in the same manner as the left-right relationship information estimation unit 183 of the second embodiment, and outputs it to the downmix unit 112. As for the left-right correlation coefficient γ or the preceding channel information that has been obtained by the other device, the left-right relationship information acquisition unit 185 outputs to the downmix unit 112 the left-right correlation coefficient γ or the preceding channel information input to the sound signal downmix device 405 from the other device, as indicated by the dash-dot line in FIG. 22.
[Downmix unit 112]
The downmix unit 112 is the same as the downmix unit 112 of the second embodiment; based on the preceding channel information and the left-right correlation coefficient obtained by the left-right relationship information acquisition unit 185, it obtains and outputs a downmix signal by taking a weighted average of the input sound signal of the left channel and the input sound signal of the right channel such that, of the two, the input sound signal of the preceding channel is included in the downmix signal more heavily the larger the left-right correlation coefficient γ is (step S112).
For example, where t is the sample number, xL(t) is the input sound signal of the left channel, xR(t) is the input sound signal of the right channel, and xM(t) is the downmix signal, the downmix unit 112 obtains the downmix signal, for each sample number t, by xM(t) = ((1+γ)/2) × xL(t) + ((1-γ)/2) × xR(t) when the preceding channel information indicates that the left channel precedes, by xM(t) = ((1-γ)/2) × xL(t) + ((1+γ)/2) × xR(t) when the preceding channel information indicates that the right channel precedes, and by xM(t) = (xL(t) + xR(t))/2 when the preceding channel information indicates that neither channel precedes.
<Programs and recording media>
The processing of each part of the coding devices, decoding devices, sound signal coding device, sound signal processing device, and sound signal downmix device described above may be realized by a computer, in which case the processing content of the functions that each device should have is described by a program. By loading this program into the storage unit 1020 of the computer 1000 shown in FIG. 24 and operating the arithmetic processing unit 1010, the input unit 1030, the output unit 1040, and so on, the various processing functions of each of the above devices are realized on the computer.
The program describing this processing content can be recorded on a computer-readable recording medium. The computer-readable recording medium is, for example, a non-transitory recording medium, specifically a magnetic recording device, an optical disc, or the like.
This program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or CD-ROM on which the program is recorded. The program may also be distributed by storing it in a storage device of a server computer and transferring it from the server computer to other computers via a network.
A computer that executes such a program first stores, for example, the program recorded on the portable recording medium or the program transferred from the server computer in its own non-transitory storage device, the auxiliary recording unit 1050. When executing the processing, the computer reads the program stored in its auxiliary recording unit 1050 into the storage unit 1020 and executes the processing according to the read program. As another form of execution, the computer may read the program directly from the portable recording medium into the storage unit 1020 and execute the processing according to the program, or it may sequentially execute processing according to the received program each time the program is transferred to it from the server computer. The above processing may also be executed by a so-called ASP (Application Service Provider) type service that realizes the processing functions only through execution instructions and result acquisition, without transferring the program from the server computer to the computer. The program in this embodiment includes information that is used for processing by an electronic computer and is equivalent to a program (data that is not a direct command to the computer but has the property of defining the processing of the computer, and the like).
In this embodiment, the device is configured by executing a predetermined program on a computer, but at least a part of the processing content may be realized by hardware.
Needless to say, other modifications may be made as appropriate without departing from the spirit of the present invention.

Claims (10)

1. A sound signal downmix method for obtaining a downmix signal that is a signal in which a left channel input sound signal and a right channel input sound signal are mixed, the method comprising:
    a left-right relationship information acquisition step of obtaining preceding channel information, which is information indicating which of the left channel input sound signal and the right channel input sound signal precedes, and a left-right correlation coefficient, which is a correlation coefficient between the left channel input sound signal and the right channel input sound signal; and
    a downmix step of obtaining the downmix signal by taking, based on the preceding channel information and the left-right correlation coefficient, a weighted average of the left channel input sound signal and the right channel input sound signal such that, of the left channel input sound signal and the right channel input sound signal, the input sound signal of the preceding channel is included in the downmix signal more heavily the larger the left-right correlation coefficient is.
2. The sound signal downmix method according to claim 1, wherein,
    with t being a sample number, xL(t) being the left channel input sound signal, xR(t) being the right channel input sound signal, xM(t) being the downmix signal, and γ being the left-right correlation coefficient,
    the downmix step
    obtains the downmix signal, for each sample number t, by xM(t) = ((1+γ)/2) × xL(t) + ((1-γ)/2) × xR(t) when the preceding channel information indicates that the left channel precedes,
    obtains the downmix signal, for each sample number t, by xM(t) = ((1-γ)/2) × xL(t) + ((1+γ)/2) × xR(t) when the preceding channel information indicates that the right channel precedes, and
    obtains the downmix signal, for each sample number t, by xM(t) = (xL(t) + xR(t))/2 when the preceding channel information indicates that neither channel precedes.
3. A sound signal coding method comprising the sound signal downmix method according to claim 1 or 2 as a sound signal downmix step, and further comprising:
    a monaural coding step of encoding the downmix signal obtained by the downmix step to obtain a monaural code; and
    a stereo coding step of encoding the left channel input sound signal and the right channel input sound signal to obtain a stereo code.
4. A sound signal downmix device for obtaining a downmix signal that is a signal in which a left channel input sound signal and a right channel input sound signal are mixed, the device comprising:
    a left-right relationship information acquisition unit that obtains preceding channel information, which is information indicating which of the left channel input sound signal and the right channel input sound signal precedes, and a left-right correlation coefficient, which is a correlation coefficient between the left channel input sound signal and the right channel input sound signal; and
    a downmix unit that obtains the downmix signal by taking, based on the preceding channel information and the left-right correlation coefficient, a weighted average of the left channel input sound signal and the right channel input sound signal such that, of the left channel input sound signal and the right channel input sound signal, the input sound signal of the preceding channel is included in the downmix signal more heavily the larger the left-right correlation coefficient is.
5. The sound signal downmix device according to claim 4, wherein,
    with t being a sample number, xL(t) being the left channel input sound signal, xR(t) being the right channel input sound signal, xM(t) being the downmix signal, and γ being the left-right correlation coefficient,
    the downmix unit
    obtains the downmix signal, for each sample number t, by xM(t) = ((1+γ)/2) × xL(t) + ((1-γ)/2) × xR(t) when the preceding channel information indicates that the left channel precedes,
    obtains the downmix signal, for each sample number t, by xM(t) = ((1-γ)/2) × xL(t) + ((1+γ)/2) × xR(t) when the preceding channel information indicates that the right channel precedes, and
    obtains the downmix signal, for each sample number t, by xM(t) = (xL(t) + xR(t))/2 when the preceding channel information indicates that neither channel precedes.
  6. A sound signal coding device comprising the sound signal downmix device according to claim 4 or 5 as a sound signal downmix unit,
    the device being characterized by further comprising:
    a monaural coding unit that encodes the downmix signal obtained by the downmix unit to obtain a monaural code; and
    a stereo coding unit that encodes the left channel input sound signal and the right channel input sound signal to obtain a stereo code.
  7. A program for causing a computer to execute the processing of each step of the sound signal downmix method according to claim 1 or 2.
  8. A program for causing a computer to execute the processing of each step of the sound signal coding method according to claim 3.
  9. A computer-readable recording medium having recorded thereon a program for causing a computer to execute the processing of each step of the sound signal downmix method according to claim 1 or 2.
  10. A computer-readable recording medium having recorded thereon a program for causing a computer to execute the processing of each step of the sound signal coding method according to claim 3.
PCT/JP2020/041216 2020-03-09 2020-11-04 Sound signal downmixing method, sound signal coding method, sound signal downmixing device, sound signal coding device, program, and recording medium WO2021181746A1 (en)

Priority Applications (17)

Application Number Priority Date Filing Date Title
JP2022505754A JP7396459B2 (en) 2020-03-09 2020-11-04 Sound signal downmix method, sound signal encoding method, sound signal downmix device, sound signal encoding device, program and recording medium
EP20924291.6A EP4120250A4 (en) 2020-03-09 2020-11-04 Sound signal downmixing method, sound signal coding method, sound signal downmixing device, sound signal coding device, program, and recording medium
US17/909,666 US20230319498A1 (en) 2020-03-09 2020-11-04 Sound signal downmixing method, sound signal coding method, sound signal downmixing apparatus, sound signal coding apparatus, program and recording medium
CN202080098232.9A CN115280411A (en) 2020-03-09 2020-11-04 Audio signal down-mixing method, audio signal encoding method, audio signal down-mixing device, audio signal encoding device, program, and recording medium
PCT/JP2021/004640 WO2021181975A1 (en) 2020-03-09 2021-02-08 Sound signal downmixing method, sound signal encoding method, sound signal downmixing device, sound signal encoding device, program, and recording medium
US17/909,690 US20230108927A1 (en) 2020-03-09 2021-02-08 Sound signal downmixing method, sound signal coding method, sound signal downmixing apparatus, sound signal coding apparatus, program and recording medium
JP2022505844A JP7380835B2 (en) 2020-03-09 2021-02-08 Sound signal downmix method, sound signal encoding method, sound signal downmix device, sound signal encoding device, program and recording medium
US17/908,965 US20230106764A1 (en) 2020-03-09 2021-02-08 Sound signal downmixing method, sound signal coding method, sound signal downmixing apparatus, sound signal coding apparatus, program and recording medium
PCT/JP2021/004642 WO2021181977A1 (en) 2020-03-09 2021-02-08 Sound signal downmix method, sound signal coding method, sound signal downmix device, sound signal coding device, program, and recording medium
JP2022505842A JP7380833B2 (en) 2020-03-09 2021-02-08 Sound signal downmix method, sound signal encoding method, sound signal downmix device, sound signal encoding device, program and recording medium
PCT/JP2021/004639 WO2021181974A1 (en) 2020-03-09 2021-02-08 Sound signal downmixing method, sound signal coding method, sound signal downmixing device, sound signal coding device, program, and recording medium
PCT/JP2021/004641 WO2021181976A1 (en) 2020-03-09 2021-02-08 Sound signal down-mixing method, sound signal encoding method, sound signal down-mixing device, sound signal encoding device, program, and recording medium
US17/909,677 US20230106832A1 (en) 2020-03-09 2021-02-08 Sound signal downmixing method, sound signal coding method, sound signal downmixing apparatus, sound signal coding apparatus, program and recording medium
JP2022505843A JP7380834B2 (en) 2020-03-09 2021-02-08 Sound signal downmix method, sound signal encoding method, sound signal downmix device, sound signal encoding device, program and recording medium
US17/909,698 US20230107976A1 (en) 2020-03-09 2021-02-08 Sound signal downmixing method, sound signal coding method, sound signal downmixing apparatus, sound signal coding apparatus, program and recording medium
JP2022505845A JP7380836B2 (en) 2020-03-09 2021-02-08 Sound signal downmix method, sound signal encoding method, sound signal downmix device, sound signal encoding device, program and recording medium
JP2023203361A JP2024023484A (en) 2020-03-09 2023-11-30 Sound signal downmixing method, sound signal downmixing device and program

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JPPCT/JP2020/010081 2020-03-09
PCT/JP2020/010080 WO2021181472A1 (en) 2020-03-09 2020-03-09 Sound signal encoding method, sound signal decoding method, sound signal encoding device, sound signal decoding device, program, and recording medium
JPPCT/JP2020/010080 2020-03-09
PCT/JP2020/010081 WO2021181473A1 (en) 2020-03-09 2020-03-09 Sound signal encoding method, sound signal decoding method, sound signal encoding device, sound signal decoding device, program, and recording medium

Publications (1)

Publication Number Publication Date
WO2021181746A1 true WO2021181746A1 (en) 2021-09-16

Family

ID=77671498

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/JP2020/041216 WO2021181746A1 (en) 2020-03-09 2020-11-04 Sound signal downmixing method, sound signal coding method, sound signal downmixing device, sound signal coding device, program, and recording medium
PCT/JP2021/004640 WO2021181975A1 (en) 2020-03-09 2021-02-08 Sound signal downmixing method, sound signal encoding method, sound signal downmixing device, sound signal encoding device, program, and recording medium
PCT/JP2021/004642 WO2021181977A1 (en) 2020-03-09 2021-02-08 Sound signal downmix method, sound signal coding method, sound signal downmix device, sound signal coding device, program, and recording medium
PCT/JP2021/004641 WO2021181976A1 (en) 2020-03-09 2021-02-08 Sound signal down-mixing method, sound signal encoding method, sound signal down-mixing device, sound signal encoding device, program, and recording medium

Family Applications After (3)

Application Number Title Priority Date Filing Date
PCT/JP2021/004640 WO2021181975A1 (en) 2020-03-09 2021-02-08 Sound signal downmixing method, sound signal encoding method, sound signal downmixing device, sound signal encoding device, program, and recording medium
PCT/JP2021/004642 WO2021181977A1 (en) 2020-03-09 2021-02-08 Sound signal downmix method, sound signal coding method, sound signal downmix device, sound signal coding device, program, and recording medium
PCT/JP2021/004641 WO2021181976A1 (en) 2020-03-09 2021-02-08 Sound signal down-mixing method, sound signal encoding method, sound signal down-mixing device, sound signal encoding device, program, and recording medium

Country Status (1)

Country Link
WO (4) WO2021181746A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023032065A1 * 2021-09-01 2023-03-09 Nippon Telegraph and Telephone Corporation Sound signal downmixing method, sound signal encoding method, sound signal downmixing device, sound signal encoding device, and program
WO2024142360A1 * 2022-12-28 2024-07-04 Nippon Telegraph and Telephone Corporation Sound signal processing device, sound signal processing method, and program
WO2024142359A1 * 2022-12-28 2024-07-04 Nippon Telegraph and Telephone Corporation Audio signal processing device, audio signal processing method, and program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010525403A * 2007-04-26 2010-07-22 Dolby International AB Output signal synthesis apparatus and synthesis method
WO2010097748A1 * 2009-02-27 2010-09-02 Koninklijke Philips Electronics N.V. Parametric stereo encoding and decoding
WO2010140350A1 * 2009-06-02 2010-12-09 Panasonic Corporation Down-mixing device, encoder, and method therefor
JP2011522472A * 2008-05-23 2011-07-28 Koninklijke Philips Electronics N.V. Parametric stereo upmix device, parametric stereo decoder, parametric stereo downmix device, and parametric stereo encoder
JP2018533056A * 2015-09-25 2018-11-08 VoiceAge Corporation Method and system for using a long-term correlation difference between a left channel and a right channel to time-domain downmix a stereo audio signal into a primary channel and a secondary channel
JP2019536112A * 2016-11-08 2019-12-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for encoding or decoding a multi-channel signal using side gain and residual gain

Also Published As

Publication number Publication date
WO2021181977A1 (en) 2021-09-16
WO2021181976A1 (en) 2021-09-16
WO2021181975A1 (en) 2021-09-16

Similar Documents

Publication Publication Date Title
WO2021181746A1 (en) Sound signal downmixing method, sound signal coding method, sound signal downmixing device, sound signal coding device, program, and recording medium
JP2024023484A (en) Sound signal downmixing method, sound signal downmixing device and program
JP5737077B2 (en) Audio encoding apparatus, audio encoding method, and audio encoding computer program
US20230395092A1 (en) Sound signal high frequency compensation method, sound signal post processing method, sound signal decode method, apparatus thereof, program, and storage medium
WO2021181473A1 (en) Sound signal encoding method, sound signal decoding method, sound signal encoding device, sound signal decoding device, program, and recording medium
WO2021181472A1 (en) Sound signal encoding method, sound signal decoding method, sound signal encoding device, sound signal decoding device, program, and recording medium
JP4714075B2 (en) Multi-channel signal encoding method, apparatus using the method, program, and recording medium
WO2023032065A1 (en) Sound signal downmixing method, sound signal encoding method, sound signal downmixing device, sound signal encoding device, and program
US20230386482A1 (en) Sound signal refinement method, sound signal decode method, apparatus thereof, program, and storage medium
US20230402044A1 (en) Sound signal refining method, sound signal decoding method, apparatus thereof, program, and storage medium
US20230395080A1 (en) Sound signal refining method, sound signal decoding method, apparatus thereof, program, and storage medium
US20230410832A1 (en) Sound signal high frequency compensation method, sound signal post processing method, sound signal decode method, apparatus thereof, program, and storage medium
US20230377585A1 (en) Sound signal refinement method, sound signal decode method, apparatus thereof, program, and storage medium
US20230386480A1 (en) Sound signal refinement method, sound signal decode method, apparatus thereof, program, and storage medium
US20230402051A1 (en) Sound signal high frequency compensation method, sound signal post processing method, sound signal decode method, apparatus thereof, program, and storage medium
US20240119947A1 (en) Sound signal refinement method, sound signal decode method, apparatus thereof, program, and storage medium
US20230395081A1 (en) Sound signal high frequency compensation method, sound signal post processing method, sound signal decode method, apparatus thereof, program, and storage medium
WO2024142357A1 (en) Sound signal processing device, sound signal processing method, and program
WO2024142359A1 (en) Audio signal processing device, audio signal processing method, and program
US20230386481A1 (en) Sound signal refinement method, sound signal decode method, apparatus thereof, program, and storage medium
WO2024142360A1 (en) Sound signal processing device, sound signal processing method, and program
WO2024142358A1 (en) Sound-signal-processing device, sound-signal-processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20924291

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022505754

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 202217050665

Country of ref document: IN

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020924291

Country of ref document: EP

Effective date: 20221010