EP2070389B1 - Dialogue enhancement techniques - Google Patents

Dialogue enhancement techniques

Info

Publication number
EP2070389B1
EP2070389B1 (application EP07802317A)
Authority
EP
European Patent Office
Prior art keywords
component signal
signal
gain
speech component
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP07802317A
Other languages
German (de)
English (en)
Other versions
EP2070389A1 (fr)
Inventor
Hyen-O Oh
Yang Won Jung
Christof Faller
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of EP2070389A1 publication Critical patent/EP2070389A1/fr
Application granted granted Critical
Publication of EP2070389B1 publication Critical patent/EP2070389B1/fr
Not-in-force legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S3/00: Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008: Systems employing more than two channels, e.g. quadraphonic, in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008: Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00: Stereophonic arrangements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S5/00: Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208: Noise filtering
    • G10L21/0216: Noise filtering characterised by the method used for estimating noise
    • G10L21/0232: Processing in the frequency domain
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S2400/00: Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/05: Generation or adaptation of centre channel in multi-channel audio systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S2420/00: Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/03: Application of parametric coding in stereophonic audio systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S2420/00: Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/07: Synergistic effects of band splitting and sub-band processing

Definitions

  • Audio enhancement techniques are often used in home entertainment systems, stereos and other consumer electronic devices to enhance bass frequencies and to simulate various listening environments (e.g., concert halls).
  • US Patent Publication No. 2005/117761 discloses a headphone apparatus allowing the listener to distinguish between two or more kinds of audio signals (e.g., video game audio and a player's voice) by supplying to a listener a first audio signal that is sensed as being located inside the head of the listener and a second sound signal subjected to signal processing such that the sound signal is sensed as being located outside of the listener's head.
  • Japanese Patent Publication No. 2006-222686 discloses front and rear auxiliary speakers arranged opposite a listener and performs "sound image localization control" for localizing sound images outputted from the auxiliary speakers at designated positions and "effect sound addition control" for adding effect sounds outputted from the auxiliary speakers to direct sounds outputted from main speakers.
  • Other techniques attempt to make movie dialogue more transparent by adding more high frequencies, for example. None of these techniques, however, address enhancing dialogue relative to ambient and other component signals.
  • a plural-channel audio signal (e.g., a stereo audio) is processed to modify a gain (e.g., a volume or loudness) of a speech component signal (e.g., dialogue spoken by actors in a movie) relative to an ambient component signal (e.g., reflected or reverberated sound) or other component signals.
  • the speech component signal is identified and modified.
  • the speech component signal is identified by assuming that the speech source (e.g., the actor currently speaking) is in the center of a stereo sound image of the plural-channel audio signal and by considering the spectral content of the speech component signal.
  • FIG. 1 is a block diagram of a mixing model for dialogue enhancement techniques.
  • FIG. 2 is a graph illustrating a decomposition of stereo signals using time-frequency tiles.
  • FIG. 3A is a graph of a function for computing a gain as a function of a decomposition gain factor for dialogue that is centered in a sound image.
  • FIG. 3B is a graph of a function for computing gain as a function of a decomposition gain factor for dialogue which is not centered.
  • FIG. 4 is a block diagram of an example dialogue enhancement system.
  • FIG. 5 is a flow diagram of an example dialogue enhancement process.
  • FIG. 6 is a block diagram of a digital television system for implementing the features and processes described in reference to FIGS. 1-5 .
  • FIG. 1 is a block diagram of a mixing model 100 for dialogue enhancement techniques.
  • a listener receives audio signals from left and right channels.
  • An audio signal s corresponds to localized sound from a direction determined by a factor a .
  • Independent audio signals n 1 and n 2 correspond to laterally reflected or reverberated sound, often referred to as ambient sound or ambience.
  • Stereo signals can be recorded or mixed such that for a given audio source the source audio signal goes coherently into the left and right audio signal channels with specific directional cues (e.g., level difference, time difference), and the laterally reflected or reverberated independent signals n 1 and n 2 go into channels determining auditory event width and listener envelopment cues.
  • the model 100 can be represented mathematically as a perceptually motivated decomposition of a stereo signal with one audio source capturing the localization of the audio source and ambience.
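Written out under the assumptions above, the decomposition reads as follows (a reconstruction from the surrounding definitions; the patent's own numbered equations are not reproduced in this extract):

```latex
% Time-domain model: one localized source s plus independent ambience n_1, n_2
x_1(n) = s(n) + n_1(n), \qquad x_2(n) = a\,s(n) + n_2(n)

% Subband (time-frequency tile) version, with decomposition gain factor A(i,k):
X_1(i,k) = S(i,k) + N_1(i,k), \qquad X_2(i,k) = A(i,k)\,S(i,k) + N_2(i,k)
```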
  • FIG. 2 is a graph illustrating a decomposition of a stereo signal using time-frequency tiles.
  • the signals S , N 1 , N 2 and decomposition gain factor A can be estimated independently.
  • the subband and time indices i and k are ignored in the following description.
  • the bandwidth of a subband can be chosen to be equal to one critical band.
  • S , N 1 , N 2 , and A can be estimated approximately every t milliseconds (e.g., 20 ms) in each subband.
  • the subband decomposition can be computed using a short-time Fourier transform (STFT), implemented, for example, with the fast Fourier transform (FFT).
  • the power of N 1 and N 2 is assumed to be the same, i.e., it is assumed that the amount of lateral independent sound is the same for left and right channels.
  • the power ( P X1 , P X2 ) and the normalized cross-correlation can be determined.
  • A, P S , and P N can be computed as a function of the estimated P X1 , P X2 , and the normalized cross-correlation Φ.
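One closed form consistent with these definitions, with R denoting the un-normalized cross-correlation E{X1 X2*} (a reconstruction from the stated model; the patent's numbered equations may be arranged differently):

```latex
% From P_{X1} = P_S + P_N, \; P_{X2} = A^2 P_S + P_N, \; R = E\{X_1 X_2^*\} = A\,P_S:
A = \frac{P_{X2} - P_{X1} + \sqrt{(P_{X1} - P_{X2})^2 + 4R^2}}{2R},
\qquad P_S = \frac{R}{A}, \qquad P_N = P_{X1} - P_S
```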
  • the least squares estimates of S , N 1 and N 2 are computed as a function of A , P s , and P N .
  • a signal that is similar to the original stereo signal can be obtained by applying [2] at each time and for each subband and converting the subbands back to the time domain.
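A minimal NumPy sketch of this per-subband estimation and least-squares decomposition, assuming the mixing model above (the function name, block averaging, and eps regularization are illustrative; the weights correspond to the w1-w6 mentioned later but may be arranged differently in the patent):

```python
import numpy as np

def estimate_decomposition(X1, X2, eps=1e-12):
    """Estimate A, P_S, P_N and least-squares S, N1, N2 for one subband.

    X1, X2: complex STFT coefficients of the left/right channels over a
    short block (e.g., ~20 ms). Assumes X1 = S + N1, X2 = A*S + N2 with
    equal ambience power P_N in both channels.
    """
    # Short-time powers and (un-normalized) cross-correlation over the block.
    Px1 = np.mean(np.abs(X1) ** 2)
    Px2 = np.mean(np.abs(X2) ** 2)
    R = np.real(np.mean(X1 * np.conj(X2)))   # equals A * P_S under the model

    # Solve Px1 = Ps + Pn, Px2 = A^2 Ps + Pn, R = A Ps in closed form.
    A = (Px2 - Px1 + np.sqrt((Px2 - Px1) ** 2 + 4.0 * R ** 2)) / (2.0 * R + eps)
    Ps = R / (A + eps)
    Pn = max(Px1 - Ps, 0.0)

    # Least-squares estimates via the orthogonality principle (weights w1..w6).
    D = (1.0 + A ** 2) * Ps + Pn + eps
    S = (Ps / D) * (X1 + A * X2)
    N1 = ((A ** 2 * Ps + Pn) * X1 - A * Ps * X2) / D
    N2 = (-A * Ps * X1 + (Ps + Pn) * X2) / D
    return A, Ps, Pn, S, N1, N2
```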
  • g ( i , k ) is set to 0 dB at very low frequencies and above 8 kHz, so that the stereo signal is modified as little as possible.
  • An example of a suitable function f is illustrated in FIG. 3A.
  • the relation between f and A ( i,k ) is plotted using logarithmic (dB) scale, but A ( i , k ) and f are otherwise defined in linear scale.
  • the constant W is related to the directional sensitivity of the dialogue gain.
  • a value of W = 6 dB, for example, gives good results for most signals. But it is noted that for different signals a different W may be optimal.
  • the function f can be shifted such that its center corresponds to the dialogue position.
  • An example of a shifted function f is illustrated in FIG. 3B .
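One plausible shape for f, sketched in Python under the assumptions above (the raised-cosine roll-off and the default values are illustrative, not the exact curve of FIGS. 3A-3B; shifting center_db moves the center of f to a non-centered dialogue position, as in FIG. 3B):

```python
import numpy as np

def dialogue_gain_db(A, Gd_db=6.0, W_db=6.0, center_db=0.0):
    """Illustrative gain curve g = f(A): full dialogue gain Gd near the
    assumed dialogue position, rolling off to 0 dB over a region whose
    width W controls the directional sensitivity (all values in dB)."""
    a_db = 20.0 * np.log10(np.maximum(A, 1e-12)) - center_db
    # Full gain within +/- W/2 of the dialogue position, then a cosine
    # roll-off to 0 dB over the next W/2.
    x = (np.abs(a_db) - W_db / 2.0) / (W_db / 2.0)
    roll = 0.5 * (1.0 + np.cos(np.pi * np.clip(x, 0.0, 1.0)))
    return Gd_db * roll
```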
  • the identification of dialogue component signals based on a center assumption (or, more generally, a position assumption) and the spectral range of speech is simple and works well in many cases.
  • the dialogue identification can be modified and potentially improved.
  • One possibility is to exploit more features of speech, such as formants, harmonic structure, and transients, to detect dialogue component signals.
  • a different shape of the gain function may be optimal.
  • a signal adaptive gain function may be used.
  • Dialogue gain control can also be implemented for home cinema systems with surround sound.
  • One important aspect of dialogue gain control is to detect whether dialogue is in the center channel or not. One way of doing this is to detect if the center has sufficient signal energy such that it is likely that dialogue is in the center channel. If dialogue is in the center channel, then gain can be added to the center channel to control the dialogue volume. If dialogue is not in the center channel (e.g., if the surround system plays back stereo content), then a two-channel dialogue gain control can be applied as previously described in reference to FIGS. 1-3 .
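A hedged sketch of such a decision (the energy comparison over short frames and the threshold are assumptions, not taken from the patent):

```python
import numpy as np

def dialogue_in_center(center, left, right, thresh_db=-6.0):
    """Return True if the center channel carries enough energy relative to
    the L/R average that dialogue is likely in the center channel. If True,
    the dialogue gain can be applied directly to the center channel;
    otherwise fall back to the two-channel processing of FIGS. 1-3."""
    pc = np.mean(np.square(center)) + 1e-12
    plr = 0.5 * (np.mean(np.square(left)) + np.mean(np.square(right))) + 1e-12
    return 10.0 * np.log10(pc / plr) > thresh_db
```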
  • a plural-channel audio signal can include a speech component signal (e.g., a dialogue signal) and other component signals (e.g., reverberation).
  • the other component signals can be modified (e.g., attenuated) based on a location of the speech component signal in a sound image of the plural-channel audio signal and the speech component signal can be left unchanged.
  • FIG. 4 is a block diagram of an example dialogue enhancement system 400.
  • the system 400 includes an analysis filterbank 402, a power estimator 404, a signal estimator 406, a post-scaling module 408, a signal synthesis module 410 and a synthesis filterbank 412. While the components 402-412 of system 400 are shown as separate processes, the processes of two or more components can be combined into a single component.
  • for each time k, a plural-channel signal is decomposed by the analysis filterbank 402 into i subband signals.
  • left and right channels x 1 ( n ), x 2 ( n ) of a stereo signal are decomposed by the analysis filterbank 402 into i subbands X 1 (i, k ), X 2 (i , k).
  • the power estimator 404 generates power estimates of P̂ S , Â, and P̂ N , which have been previously described in reference to FIGS. 1 and 2.
  • the signal estimator 406 generates the estimated signals Ŝ, N̂ 1 , and N̂ 2 from the power estimates.
  • the post-scaling module 408 scales the signal estimates to provide Ŝ′, N̂′ 1 , and N̂′ 2 .
  • the signal synthesis module 410 receives the post-scaled signal estimates, the decomposition gain factor A, the constant W and the desired dialogue gain G d , and synthesizes left and right subband signals, which are input to the synthesis filterbank 412 to provide left and right time domain signals with dialogue gain modified according to G d .
  • FIG. 5 is a flow diagram of an example dialogue enhancement process 500.
  • the process 500 begins by decomposing a plural-channel audio signal into frequency subband signals (502).
  • the decomposition can be performed by a filterbank using various known transforms, including but not limited to: polyphase filterbank, quadrature mirror filterbank (QMF), hybrid filterbank, discrete Fourier transform (DFT), and modified discrete cosine transform (MDCT).
  • a first set of powers of two or more channels of the audio signal is estimated using the subband signals (504).
  • a cross-correlation is determined using the first set of powers (506).
  • a decomposition gain factor is estimated using the first set of powers and the cross-correlation (508). The decomposition gain factor provides a location cue for the dialogue source in the sound image.
  • a second set of powers for a speech component signal and an ambience component signal is estimated using the first set of powers and the cross-correlation (510).
  • Speech and ambience component signals are estimated using the second set of powers and the decomposition gain factor (512).
  • the estimated speech and ambience component signals are post-scaled (514).
  • Subband signals are synthesized with modified dialogue gain using the post-scaled estimated speech and ambience component signals and a desired dialogue gain (516).
  • the desired dialogue gain can be set automatically or specified by a user.
  • the synthesized subband signals are converted into a time domain audio signal with modified dialogue gain (518), using a synthesis filterbank, for example.
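Putting the steps of process 500 together, a compact end-to-end sketch (assuming an STFT filterbank, with SciPy's stft/istft standing in for the analysis and synthesis filterbanks, and reusing the estimate_decomposition and dialogue_gain_db helpers sketched earlier; block size and overlap are illustrative):

```python
import numpy as np
from scipy.signal import stft, istft

def enhance_dialogue(x1, x2, fs, Gd_db=6.0, W_db=6.0, nperseg=1024, block=8):
    """Boost (or attenuate) the dialogue component of a stereo signal.

    Per-bin processing over blocks of `block` STFT frames approximates the
    "every t milliseconds in each subband" of the text; a real system would
    group bins into critical-band subbands and smooth estimates over time.
    """
    f, _, X1 = stft(x1, fs, nperseg=nperseg)
    _, _, X2 = stft(x2, fs, nperseg=nperseg)
    Y1, Y2 = X1.copy(), X2.copy()
    speech = (f >= 70.0) & (f <= 8000.0)         # g = 0 dB outside this range

    for i in np.flatnonzero(speech):             # subband loop
        for k0 in range(0, X1.shape[1], block):  # time-block loop
            sl = slice(k0, k0 + block)
            A, Ps, Pn, S, N1, N2 = estimate_decomposition(X1[i, sl], X2[i, sl])
            g = 10.0 ** (dialogue_gain_db(A, Gd_db, W_db) / 20.0)
            # Re-synthesize the mixture with the modified speech component.
            Y1[i, sl] = g * S + N1
            Y2[i, sl] = g * A * S + N2

    _, y1 = istft(Y1, fs, nperseg=nperseg)
    _, y2 = istft(Y2, fs, nperseg=nperseg)
    return y1, y2
```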
  • the dialogue boosting effect is compensated by normalizing, using the weights w 1 - w 6 , with g norm .
  • the normalization factor g norm can take the same value as the modified dialogue gain, 10^( g ( i , k )/20).
  • g norm can be modified.
  • the normalization can be performed in both the frequency domain and the time domain. When performed in the frequency domain, the normalization can be applied to the frequency band where the dialogue gain applies, for example, between 70 Hz and 8 kHz.
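A frequency-domain variant of this compensation, sketched under the assumption of a single uniform dialogue gain g_db over the boosted band (the helper name and band edges mirror the text; a per-tile g(i,k) would be divided out tile by tile instead):

```python
import numpy as np

def normalize_loudness(Y, f, g_db, f_lo=70.0, f_hi=8000.0):
    """Divide the boosted band of the subband signal Y (bins x frames) by
    g_norm = 10**(g_db/20) so that overall loudness stays comparable."""
    g_norm = 10.0 ** (g_db / 20.0)
    band = (f >= f_lo) & (f <= f_hi)
    out = Y.copy()
    out[band, :] /= g_norm
    return out
```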
  • when the input signals X 1 ( i , k ) and X 2 ( i , k ) are substantially similar, e.g., the input is a mono-like signal, almost every portion of the input might be regarded as S, and a desired dialogue gain provided by a user would simply increase the volume of the whole signal. To prevent this, it is desirable to use a separate dialogue volume (SDV) technique that observes the characteristics of the input signals.
  • the normalized cross-correlation of stereo signals is calculated
  • the normalized cross-correlation can be used as a metric for mono signal detection.
  • if Φ in [4] exceeds a given threshold, the input signal can be regarded as a mono signal, and separate dialogue volume can be automatically turned off.
  • otherwise, the input signal can be regarded as a stereo signal, and separate dialogue volume can be automatically turned on.
  • time smoothing techniques can be incorporated to obtain Φ( i , k ).
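A sketch of this mono detection with one-pole time smoothing (alpha and the threshold are assumed constants; the exact definition of Φ is the patent's equation [4], which is not reproduced in this extract):

```python
import numpy as np

def mono_detector(X1, X2, phi_prev=0.0, alpha=0.9, threshold=0.95, eps=1e-12):
    """Update the smoothed normalized cross-correlation phi for one frame of
    subband coefficients and decide whether SDV should be active.

    Returns (sdv_on, phi): mono-like input (phi above threshold) turns
    separate dialogue volume off; otherwise it stays on.
    """
    num = np.abs(np.sum(X1 * np.conj(X2)))
    den = np.sqrt(np.sum(np.abs(X1) ** 2) * np.sum(np.abs(X2) ** 2)) + eps
    phi = alpha * phi_prev + (1.0 - alpha) * (num / den)
    return (phi <= threshold), phi
```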
  • FIG. 6 is a block diagram of an example digital television system 600 for implementing the features and processes described in reference to FIGS. 1-5.
  • Digital television is a telecommunication system for broadcasting and receiving moving pictures and sound by means of digital signals.
  • DTV uses digitally modulated data, which is digitally compressed and requires decoding by a specially designed television set, a standard receiver with a set-top box, or a PC fitted with a television card.
  • although the system in FIG. 6 is a DTV system, the disclosed implementations for dialogue enhancement can also be applied to analog TV systems or any other systems capable of dialogue enhancement.
  • the system 600 can include an interface 602, a demodulator 604, a decoder 606, an audio/visual output 608, a user input interface 610, one or more processors 612 (e.g., Intel® processors) and one or more computer-readable mediums 614 (e.g., RAM, ROM, SDRAM, hard disk, optical disk, flash memory, SAN, etc.). Each of these components is coupled to one or more communication channels 616 (e.g., buses).
  • the interface 602 includes various circuits for obtaining an audio signal or a combined audio/video signal.
  • an interface can include antenna electronics, a tuner or mixer, a radio frequency (RF) amplifier, a local oscillator, an intermediate frequency (IF) amplifier, one or more filters, a demodulator, an audio amplifier, etc.
  • the tuner 602 can be a DTV tuner for receiving a digital television signal including video and audio content.
  • the demodulator 604 extracts video and audio signals from the digital television signal. If the video and audio signals are encoded (e.g., MPEG encoded), the decoder 606 decodes those signals.
  • the A/V output 608 can be any device capable of displaying video and playing audio (e.g., TV display, computer monitor, LCD, speakers, audio systems).
  • dialogue volume levels can be displayed to the user using a display device on a remote controller or an On Screen Display (OSD), for example.
  • the dialogue volume level can be relative to the master volume level.
  • One or more graphical objects can be used for displaying dialogue volume level, and dialogue volume level relative to master volume. For example, a first graphical object (e.g., a bar) can be displayed for indicating master volume and a second graphical object (e.g., a line) can be displayed with or composited on the first graphical object to indicate dialogue volume level.
  • the user input interface can include circuitry (e.g., a wireless or infrared receiver) and/ or software for receiving and decoding infrared or wireless signals generated by a remote controller.
  • a remote controller can include a separate dialogue volume control key or button, or a separate dialogue volume control select key for changing the state of a master volume control key or button, so that the master volume control can be used to control either the master volume or the separate dialogue volume.
  • the dialogue volume or master volume key can change its visible appearance to indicate its function.
  • the one or more processors can execute code stored in the computer-readable medium 614 to implement the features and operations 618, 620, 622, 624, 626, 628, 630 and 632, as described in reference to FIGS. 1-5 .
  • the computer-readable medium further includes an operating system 618, analysis/synthesis filterbanks 620, a power estimator 622, a signal estimator 624, a post-scaling module 626 and a signal synthesizer 628.
  • the term "computer-readable medium" refers to any medium that participates in providing instructions to a processor 612 for execution, including without limitation, non-volatile media (e.g., optical or magnetic disks), volatile media (e.g., memory) and transmission media.
  • Transmission media includes, without limitation, coaxial cables, copper wire and fiber optics. Transmission media can also take the form of acoustic, light or radio frequency waves.
  • the operating system 618 can be multi-user, multiprocessing, multitasking, multithreading, real time, etc.
  • the operating system 618 performs basic tasks, including but not limited to: recognizing input from the user input interface 610; keeping track and managing files and directories on computer-readable medium 614 (e.g., memory or a storage device); controlling peripheral devices; and managing traffic on the one or more communication channels 616.
  • the described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device.
  • a computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result.
  • a computer program can be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer.
  • a processor will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data.
  • a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks.
  • Storage devices suitable for tangibly embodying computer program instructions and data include all forms of nonvolatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
  • the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
  • the features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them.
  • the components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
  • the computer system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Stereophonic System (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Image Processing (AREA)
  • Preparation Of Compounds By Using Micro-Organisms (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
  • Medicines Containing Material From Animals Or Micro-Organisms (AREA)
  • Manufacture, Treatment Of Glass Fibers (AREA)
  • Separation By Low-Temperature Treatments (AREA)
  • Electrotherapy Devices (AREA)
  • Input Circuits Of Receivers And Coupling Of Receivers And Audio Equipment (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Tone Control, Compression And Expansion, Limiting Amplitude (AREA)

Claims (15)

  1. A method of processing an audio signal, comprising:
    obtaining a plural-channel audio signal including a speech component signal and an other component signal;
    determining gain values for at least two channels of the plural-channel audio signal, the gain values representing a level for each channel of the at least two channels;
    determining a cross-correlation between the at least two channels;
    determining a spatial location of the speech component signal using at least one of the cross-correlation and the gain values;
    identifying the speech component signal based on the spatial location of the speech component signal;
    modifying the speech component signal by applying a gain to the speech component signal; and
    generating a modified audio signal including the modified speech component signal.
  2. The method of claim 1, where modifying the speech component signal further comprises:
    identifying the speech component signal based on a spectral domain of the speech component signal.
  3. The method of claim 1, where the gain is a function of the location of the speech component signal and a desired gain for the speech component signal.
  4. The method of claim 3, where the function is a signal-adaptive gain function having a gain region that relates to a directional sensitivity of the gain factor.
  5. The method of any of claims 1, 2, 3 and 4, further comprising:
    normalizing the plural-channel audio signal with a normalization factor in a time domain or a frequency domain.
  6. The method of any of claims 1, 2, 3, 4 and 5, further comprising:
    comparing the cross-correlation with one or more threshold values;
    determining whether the plural-channel audio signal is substantially mono based on results of the comparison; and
    modifying the speech component signal when the plural-channel audio signal is not substantially mono.
  7. The method of any of claims 1, 2, 3, 4, 5 and 6, further comprising:
    decomposing (502) the plural-channel audio signal into a number of frequency subband signals, wherein:
    determining the gain values comprises estimating (504) a first set of powers for the at least two channels using the subband signals,
    determining the cross-correlation comprises determining (506) the cross-correlation using the first set of estimated powers, and
    determining the spatial location of the speech component signal comprises estimating (508) a decomposition gain factor using the first set of estimated powers and the cross-correlation, wherein the decomposition gain factor provides a location cue for the speech component signal.
  8. The method of claim 7, wherein the bandwidth of at least one subband is chosen to be equal to a critical band of the human auditory system.
  9. The method of claim 7, further comprising:
    estimating (510) a second set of powers for the speech component signal and an ambience component signal from the first set of powers and the cross-correlation, wherein the other component signal includes the ambience component signal.
  10. The method of claim 9, further comprising:
    estimating (512) the speech component signal and the ambience component signal using the second set of powers and the decomposition gain factor.
  11. The method of claim 9, where the estimated speech and ambience component signals are determined using least squares estimation.
  12. The method of claim 10, further comprising normalizing the cross-correlation.
  13. The method of claim 11 or 12, further comprising post-scaling (514) the estimated speech component signal and the estimated ambience component signal.
  14. The method of any of claims 10 to 13, further comprising:
    synthesizing (516) the subband signals using the second estimated powers and a user-specified gain, wherein the gain includes the user-specified gain, and generating the modified audio signal comprises converting (518) the synthesized subband signals into a time domain audio signal having a speech component signal that is modified by the user-specified gain.
  15. An apparatus for processing an audio signal, comprising:
    an interface (602) configurable to obtain a plural-channel audio signal including a speech component signal and an other component signal;
    a user input interface (610) configurable to receive information relating to a gain for controlling a level of the speech component signal;
    a power estimator (622) configurable to determine gain values for at least two channels of the plural-channel audio signal, the gain values representing a level for each channel of the at least two channels;
    a signal estimator (624) configurable to:
    determine a cross-correlation between the at least two channels,
    determine a spatial location of the speech component signal using at least one of the cross-correlation and the gain values, and
    identify the speech component signal based on the spatial location of the speech component signal;
    a signal synthesizer (628) coupled to the signal estimator and configurable to:
    modify the speech component signal by applying a gain value to the speech component signal, and
    generate a modified audio signal including the modified speech component signal; and
    an output unit (608) configurable to output the modified audio signal.
EP07802317A 2006-09-14 2007-09-14 Dialogue enhancement techniques Not-in-force EP2070389B1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US84480606P 2006-09-14 2006-09-14
US88459407P 2007-01-11 2007-01-11
US94326807P 2007-06-11 2007-06-11
PCT/EP2007/008028 WO2008031611A1 (fr) 2006-09-14 2007-09-14 Dialogue enhancement techniques

Publications (2)

Publication Number Publication Date
EP2070389A1 (fr) 2009-06-17
EP2070389B1 true EP2070389B1 (fr) 2011-05-18

Family

ID=38853226

Family Applications (3)

Application Number Title Priority Date Filing Date
EP07825374.7A Not-in-force EP2064915B1 (fr) 2006-09-14 2007-09-14 Controller and user interface for dialogue enhancement techniques
EP07858967A Not-in-force EP2070391B1 (fr) 2006-09-14 2007-09-14 Dialogue enhancement techniques
EP07802317A Not-in-force EP2070389B1 (fr) 2006-09-14 2007-09-14 Dialogue enhancement techniques

Family Applications Before (2)

Application Number Title Priority Date Filing Date
EP07825374.7A Not-in-force EP2064915B1 (fr) 2006-09-14 2007-09-14 Controller and user interface for dialogue enhancement techniques
EP07858967A Not-in-force EP2070391B1 (fr) 2006-09-14 2007-09-14 Dialogue enhancement techniques

Country Status (11)

Country Link
US (3) US8275610B2 (fr)
EP (3) EP2064915B1 (fr)
JP (3) JP2010515290A (fr)
KR (3) KR101061132B1 (fr)
AT (2) ATE487339T1 (fr)
AU (1) AU2007296933B2 (fr)
BR (1) BRPI0716521A2 (fr)
CA (1) CA2663124C (fr)
DE (1) DE602007010330D1 (fr)
MX (1) MX2009002779A (fr)
WO (3) WO2008035227A2 (fr)

Families Citing this family (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010515290A (ja) 2006-09-14 2010-05-06 LG Electronics Inc. Controller and user interface for dialogue enhancement techniques
KR101238731B1 (ko) * 2008-04-18 2013-03-06 Dolby Laboratories Licensing Corporation Method and apparatus for maintaining speech audibility in multi-channel audio with minimal impact on the surround experience
CN102113315B (zh) * 2008-07-29 2013-03-13 LG Electronics Inc. Method and apparatus for processing an audio signal
JP4826625B2 (ja) 2008-12-04 2011-11-30 Sony Corporation Volume correction device, volume correction method, volume correction program and electronic apparatus
JP4844622B2 (ja) 2008-12-05 2011-12-28 Sony Corporation Volume correction device, volume correction method, volume correction program, electronic apparatus and acoustic device
JP5120288B2 (ja) 2009-02-16 2013-01-16 Sony Corporation Volume correction device, volume correction method, volume correction program and electronic apparatus
JP5564803B2 (ja) * 2009-03-06 2014-08-06 Sony Corporation Acoustic device and acoustic processing method
JP5577787B2 (ja) * 2009-05-14 2014-08-27 Yamaha Corporation Signal processing device
JP2010276733A (ja) * 2009-05-27 2010-12-09 Sony Corp Information display device, information display method and information display program
EP2484127B1 (fr) * 2009-09-30 2020-02-12 Nokia Technologies Oy Method, software and apparatus for audio signal processing
EP2532178A1 (fr) 2010-02-02 2012-12-12 Koninklijke Philips Electronics N.V. Spatial sound reproduction
TWI459828B (zh) 2010-03-08 2014-11-01 Dolby Lab Licensing Corp 在多頻道音訊中決定語音相關頻道的音量降低比例的方法及系統
US8473287B2 (en) 2010-04-19 2013-06-25 Audience, Inc. Method for jointly optimizing noise reduction and voice quality in a mono or multi-microphone system
US8538035B2 (en) 2010-04-29 2013-09-17 Audience, Inc. Multi-microphone robust noise suppression
US8781137B1 (en) 2010-04-27 2014-07-15 Audience, Inc. Wind noise detection and suppression
JP5736124B2 (ja) * 2010-05-18 2015-06-17 Sharp Corporation Audio signal processing device, method, program and recording medium
RU2551792C2 (ru) * 2010-06-02 2015-05-27 Koninklijke Philips Electronics N.V. System and method for sound processing
US8447596B2 (en) 2010-07-12 2013-05-21 Audience, Inc. Monaural noise suppression based on computational auditory scene analysis
US8761410B1 (en) * 2010-08-12 2014-06-24 Audience, Inc. Systems and methods for multi-channel dereverberation
WO2012025431A2 (fr) * 2010-08-24 2012-03-01 Dolby International Ab Concealment of intermittent mono reception of stereo FM radio receivers
US8611559B2 (en) 2010-08-31 2013-12-17 Apple Inc. Dynamic adjustment of master and individual volume controls
US9620131B2 (en) 2011-04-08 2017-04-11 Evertz Microsystems Ltd. Systems and methods for adjusting audio levels in a plurality of audio signals
US20120308042A1 (en) * 2011-06-01 2012-12-06 Visteon Global Technologies, Inc. Subwoofer Volume Level Control
FR2976759B1 (fr) * 2011-06-16 2013-08-09 Jean Luc Haurais Method for processing an audio signal for improved playback.
US9497560B2 (en) 2013-03-13 2016-11-15 Panasonic Intellectual Property Management Co., Ltd. Audio reproducing apparatus and method
US9729992B1 (en) 2013-03-14 2017-08-08 Apple Inc. Front loudspeaker directivity for surround sound systems
CN104683933A (zh) * 2013-11-29 2015-06-03 Dolby Laboratories Licensing Corporation Audio object extraction
EP2945303A1 (fr) * 2014-05-16 2015-11-18 Thomson Licensing Method and apparatus for selecting or removing audio component types
JP6683618B2 (ja) * 2014-09-08 2020-04-22 Japan Broadcasting Corporation (NHK) Audio signal processing device
RU2701055C2 (ru) 2014-10-02 2019-09-24 Долби Интернешнл Аб Способ декодирования и декодер для усиления диалога
CN107004427B (zh) * 2014-12-12 2020-04-14 Huawei Technologies Co., Ltd. Signal processing device for enhancing a speech component within a multi-channel audio signal
WO2016130954A1 (fr) * 2015-02-13 2016-08-18 Fideliquest Llc Digital audio supplementation
JP6436573B2 (ja) * 2015-03-27 2018-12-12 Sharp Corporation Receiving device, receiving method and program
KR20240093802A (ko) * 2015-06-17 2024-06-24 Sony Group Corporation Transmitting device, transmitting method, receiving device and receiving method
US10251016B2 (en) 2015-10-28 2019-04-02 Dts, Inc. Dialog audio signal balancing in an object-based audio program
US10225657B2 (en) 2016-01-18 2019-03-05 Boomcloud 360, Inc. Subband spatial and crosstalk cancellation for audio reproduction
AU2017208916B2 (en) 2016-01-19 2019-01-31 Boomcloud 360, Inc. Audio enhancement for head-mounted speakers
WO2017132396A1 (fr) 2016-01-29 2017-08-03 Dolby Laboratories Licensing Corporation Binaural dialogue enhancement
GB2547459B (en) * 2016-02-19 2019-01-09 Imagination Tech Ltd Dynamic gain controller
US10375489B2 (en) * 2017-03-17 2019-08-06 Robert Newton Rountree, SR. Audio system with integral hearing test
US10258295B2 (en) 2017-05-09 2019-04-16 LifePod Solutions, Inc. Voice controlled assistance for monitoring adverse events of a user and/or coordinating emergency actions such as caregiver communication
US10313820B2 (en) * 2017-07-11 2019-06-04 Boomcloud 360, Inc. Sub-band spatial audio enhancement
CN110998724B (zh) 2017-08-01 2021-05-21 Dolby Laboratories Licensing Corporation Audio object classification based on location metadata
US10511909B2 (en) * 2017-11-29 2019-12-17 Boomcloud 360, Inc. Crosstalk cancellation for opposite-facing transaural loudspeaker systems
US10764704B2 (en) 2018-03-22 2020-09-01 Boomcloud 360, Inc. Multi-channel subband spatial processing for loudspeakers
CN108877787A (zh) * 2018-06-29 2018-11-23 北京智能管家科技有限公司 Speech recognition method, apparatus, server and storage medium
US11335357B2 (en) * 2018-08-14 2022-05-17 Bose Corporation Playback enhancement in audio systems
FR3087606B1 (fr) * 2018-10-18 2020-12-04 Connected Labs Improved television decoder
JP7001639B2 (ja) * 2019-06-27 2022-01-19 Maxell, Ltd. System
US10841728B1 (en) 2019-10-10 2020-11-17 Boomcloud 360, Inc. Multi-channel crosstalk processing
CN115668372A (zh) * 2020-05-15 2023-01-31 Dolby International AB Method and apparatus for improving dialogue intelligibility during playback of audio data
US11288036B2 (en) 2020-06-03 2022-03-29 Microsoft Technology Licensing, Llc Adaptive modulation of audio content based on background noise
US11404062B1 (en) 2021-07-26 2022-08-02 LifePod Solutions, Inc. Systems and methods for managing voice environments and voice routines
US11410655B1 (en) 2021-07-26 2022-08-09 LifePod Solutions, Inc. Systems and methods for managing voice environments and voice routines
CN114023358B (zh) * 2021-11-26 2023-07-18 掌阅科技股份有限公司 Audio generation method for a dialogue novel, electronic device and storage medium

Family Cites Families (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1054242A (fr) * 1961-05-08 1900-01-01
GB1522599A (en) * 1974-11-16 1978-08-23 Dolby Laboratories Inc Centre channel derivation for stereophonic cinema sound
NL8200555A (nl) * 1982-02-13 1983-09-01 Rotterdamsche Droogdok Mij Clamping device.
US4897878A (en) * 1985-08-26 1990-01-30 Itt Corporation Noise compensation in speech recognition apparatus
JPH03118519A (ja) 1989-10-02 1991-05-21 Hitachi Ltd Liquid crystal display element
JPH03118519U (fr) * 1990-03-20 1991-12-06
JPH03285500A (ja) 1990-03-31 1991-12-16 Mazda Motor Corp Acoustic device
JPH04249484A (ja) 1991-02-06 1992-09-04 Hitachi Ltd Audio circuit for a television receiver
US5142403A (en) 1991-04-01 1992-08-25 Xerox Corporation ROS scanner incorporating cylindrical mirror in pre-polygon optics
JPH05183997A (ja) 1992-01-04 1993-07-23 Matsushita Electric Ind Co Ltd Automatic discrimination device for sound effect addition
JPH05292592A (ja) 1992-04-10 1993-11-05 Toshiba Corp Sound quality correction device
JP2950037B2 (ja) 1992-08-19 1999-09-20 NEC Corporation Front 3-channel matrix surround processor
DE69423922T2 (de) 1993-01-27 2000-10-05 Koninkl Philips Electronics Nv Audio signal processing arrangement for deriving a center channel signal, and audiovisual reproduction system with such a processing arrangement
US5572591A (en) 1993-03-09 1996-11-05 Matsushita Electric Industrial Co., Ltd. Sound field controller
JPH06335093A (ja) 1993-05-21 1994-12-02 Fujitsu Ten Ltd Sound field expansion device
JP3118519B2 (ja) 1993-12-27 2000-12-18 Nippon Yakin Kogyo Co., Ltd. Metal honeycomb carrier for exhaust gas purification and method for manufacturing the same
JPH07115606A (ja) 1993-10-19 1995-05-02 Sharp Corp Automatic audio mode switching device
JPH08222979A (ja) 1995-02-13 1996-08-30 Sony Corp Audio signal processing device, audio signal processing method, and television receiver
US5737331A (en) * 1995-09-18 1998-04-07 Motorola, Inc. Method and apparatus for conveying audio signals using digital packets
KR100206333B1 (ko) * 1996-10-08 1999-07-01 Yun Jong-yong Multi-channel audio reproduction apparatus and method using two speakers
US5912976A (en) * 1996-11-07 1999-06-15 Srs Labs, Inc. Multi-channel audio enhancement system for use in recording and playback and methods for providing same
US7085387B1 (en) * 1996-11-20 2006-08-01 Metcalf Randall B Sound system and method for capturing and reproducing sounds originating from a plurality of sound sources
US7016501B1 (en) * 1997-02-07 2006-03-21 Bose Corporation Directional decoding
US6243476B1 (en) * 1997-06-18 2001-06-05 Massachusetts Institute Of Technology Method and apparatus for producing binaural audio for a moving listener
US5890125A (en) * 1997-07-16 1999-03-30 Dolby Laboratories Licensing Corporation Method and apparatus for encoding and decoding multiple audio channels at low bit rates using adaptive selection of encoding method
US6111755A (en) * 1998-03-10 2000-08-29 Park; Jae-Sung Graphic audio equalizer for personal computer system
JPH11289600A (ja) 1998-04-06 1999-10-19 Matsushita Electric Ind Co Ltd Acoustic device
MXPA00010027A (es) * 1998-04-14 2004-03-10 Hearing Enhancement Co Llc User-adjustable volume control that adapts to the hearing range.
WO1999053721A1 (fr) * 1998-04-14 1999-10-21 Hearing Enhancement Company, L.L.C. Improved hearing enhancement system and associated method
US6311155B1 (en) * 2000-02-04 2001-10-30 Hearing Enhancement Company Llc Use of voice-to-remaining audio (VRA) in consumer applications
US6990205B1 (en) * 1998-05-20 2006-01-24 Agere Systems, Inc. Apparatus and method for producing virtual acoustic sound
US6170087B1 (en) * 1998-08-25 2001-01-09 Garry A. Brannon Article storage for hats
JP2000115897A (ja) 1998-10-05 2000-04-21 Nippon Columbia Co Ltd Acoustic processing device
GB2353926B (en) 1999-09-04 2003-10-29 Central Research Lab Ltd Method and apparatus for generating a second audio signal from a first audio signal
JP2001245237A (ja) 2000-02-28 2001-09-07 Victor Co Of Japan Ltd Broadcast receiving device
US6879864B1 (en) 2000-03-03 2005-04-12 Tektronix, Inc. Dual-bar audio level meter for digital audio with dynamic range control
JP4474806B2 (ja) * 2000-07-21 2010-06-09 Sony Corporation Input device, playback device and volume adjustment method
JP3670562B2 (ja) 2000-09-05 2005-07-13 Nippon Telegraph and Telephone Corporation Stereo audio signal processing method and device, and recording medium storing a stereo audio signal processing program
US6813600B1 (en) * 2000-09-07 2004-11-02 Lucent Technologies Inc. Preclassification of audio material in digital audio compression applications
US7010480B2 (en) * 2000-09-15 2006-03-07 Mindspeed Technologies, Inc. Controlling a weighting filter based on the spectral content of a speech signal
JP3755739B2 (ja) 2001-02-15 2006-03-15 Nippon Telegraph and Telephone Corporation Stereo audio signal processing method and device, program, and recording medium
US6804565B2 (en) * 2001-05-07 2004-10-12 Harman International Industries, Incorporated Data-driven software architecture for digital sound processing and equalization
EP1425738A2 (fr) * 2001-09-12 2004-06-09 Bitwave Private Limited System and apparatus for voice communication and speech recognition
JP2003084790A (ja) 2001-09-17 2003-03-19 Matsushita Electric Ind Co Ltd Dialogue component emphasis device
DE10242558A1 (de) * 2002-09-13 2004-04-01 Audi Ag Audio system, in particular for a motor vehicle
WO2004032351A1 (fr) * 2002-09-30 2004-04-15 Electro Products Inc System and method for integral transfer of acoustic events
JP4694763B2 (ja) 2002-12-20 2011-06-08 Pioneer Corporation Headphone device
US7076072B2 (en) * 2003-04-09 2006-07-11 Board Of Trustees For The University Of Illinois Systems and methods for interference-suppression with directional sensing patterns
JP2004343590A (ja) 2003-05-19 2004-12-02 Nippon Telegr & Teleph Corp <Ntt> Stereo audio signal processing method, device, program and storage medium
JP2005086462A (ja) 2003-09-09 2005-03-31 Victor Co Of Japan Ltd Vocal band emphasis circuit for an audio signal reproduction device
US7307807B1 (en) * 2003-09-23 2007-12-11 Marvell International Ltd. Disk servo pattern writing
JP4317422B2 (ja) 2003-10-22 2009-08-19 Clarion Co., Ltd. Electronic device and control method therefor
JP4765289B2 (ja) * 2003-12-10 2011-09-07 Sony Corporation Method for detecting the positional relationship of speaker devices in an acoustic system, acoustic system, server device and speaker device
CN1939089B (zh) 2004-04-06 2011-01-12 Rohm Co., Ltd. Volume control circuit, semiconductor integrated circuit and sound source device
KR20060003444A (ko) * 2004-07-06 2006-01-11 Samsung Electronics Co., Ltd. Apparatus and method for crosstalk cancellation in a mobile device
US7383179B2 (en) * 2004-09-28 2008-06-03 Clarity Technologies, Inc. Method of cascading noise reduction algorithms to avoid speech distortion
US7502112B2 (en) * 2004-12-23 2009-03-10 Brytech Inc. Colorimetric device and colour determination process
SG124306A1 (en) * 2005-01-20 2006-08-30 St Microelectronics Asia A system and method for expanding multi-speaker playback
JP2006222686A (ja) 2005-02-09 2006-08-24 Fujitsu Ten Ltd Audio device
KR100608025B1 (ko) * 2005-03-03 2006-08-02 Samsung Electronics Co., Ltd. Method and apparatus for generating stereophonic sound for two-channel headphones
EP1961263A1 (fr) * 2005-12-16 2008-08-27 TC Electronic A/S Method for performing measurements by means of an audio system comprising passive loudspeakers
JP2010515290A (ja) 2006-09-14 2010-05-06 LG Electronics Inc. Controller and user interface for dialogue enhancement techniques

Also Published As

Publication number Publication date
WO2008035227A2 (fr) 2008-03-27
JP2010504008A (ja) 2010-02-04
US20080165975A1 (en) 2008-07-10
JP2010515290A (ja) 2010-05-06
KR20090053951A (ko) 2009-05-28
AU2007296933A1 (en) 2008-03-20
US20080165286A1 (en) 2008-07-10
ATE510421T1 (de) 2011-06-15
DE602007010330D1 (de) 2010-12-16
BRPI0716521A2 (pt) 2013-09-24
AU2007296933B2 (en) 2011-09-22
US8275610B2 (en) 2012-09-25
EP2070391A2 (fr) 2009-06-17
EP2070391B1 (fr) 2010-11-03
KR101061415B1 (ko) 2011-09-01
WO2008035227A3 (fr) 2008-08-07
EP2064915A2 (fr) 2009-06-03
WO2008031611A1 (fr) 2008-03-20
US8184834B2 (en) 2012-05-22
KR20090053950A (ko) 2009-05-28
EP2064915B1 (fr) 2014-08-27
WO2008032209A3 (fr) 2008-07-24
KR101137359B1 (ko) 2012-04-25
CA2663124A1 (fr) 2008-03-20
MX2009002779A (es) 2009-03-30
EP2070391A4 (fr) 2009-11-11
CA2663124C (fr) 2013-08-06
US8238560B2 (en) 2012-08-07
US20080167864A1 (en) 2008-07-10
ATE487339T1 (de) 2010-11-15
EP2064915A4 (fr) 2012-09-26
WO2008032209A2 (fr) 2008-03-20
KR101061132B1 (ko) 2011-08-31
KR20090074191A (ko) 2009-07-06
JP2010518655A (ja) 2010-05-27
EP2070389A1 (fr) 2009-06-17

Similar Documents

Publication Publication Date Title
EP2070389B1 (fr) Dialogue enhancement techniques
CN101518100B (zh) Dialogue enhancement techniques
EP1803117B1 (fr) Temporal envelope shaping of individual channels for binaural-cue coding schemes and the like
CN102113314B (zh) Method and device for processing an audio signal
US8705769B2 (en) Two-to-three channel upmix for center channel derivation
RU2408164C1 (ru) Dialogue enhancement methods

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090406

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

RIN1 Information on inventor provided before grant (corrected)

Inventor name: OH, HYEN-O

Inventor name: JUNG, YANG, WON

Inventor name: FALLER, CHRISTOF

17Q First examination report despatched

Effective date: 20090824

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: LG ELECTRONICS INC.

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: LG ELECTRONICS INC.

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

DAX Request for extension of the european patent (deleted)
GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602007014715

Country of ref document: DE

Effective date: 20110630

REG Reference to a national code

Ref country code: NL

Ref legal event code: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110919

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110829

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110918

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110819

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20120221

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110930

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602007014715

Country of ref document: DE

Effective date: 20120221

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110930

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110914

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110914

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110818

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20110518

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20170810

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20170810

Year of fee payment: 11

Ref country code: IT

Payment date: 20170919

Year of fee payment: 11

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20181001

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20180914

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181001

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180914

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180914

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20190805

Year of fee payment: 13

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602007014715

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210401