WO2011033924A1 - Echo removal device, echo removal method, and program for echo removal device - Google Patents

Echo removal device, echo removal method, and program for echo removal device

Info

Publication number
WO2011033924A1
WO2011033924A1 (application PCT/JP2010/064678)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
reference signal
voice
input
output
Prior art date
Application number
PCT/JP2010/064678
Other languages
French (fr)
Japanese (ja)
Inventor
島津宝浩
Original Assignee
ブラザー工業株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ブラザー工業株式会社
Publication of WO2011033924A1 publication Critical patent/WO2011033924A1/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04M — TELEPHONIC COMMUNICATION
    • H04M 9/00 — Arrangements for interconnection not involving centralised switching
    • H04M 9/08 — Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic
    • H04M 9/082 — Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic, using echo cancellers

Definitions

  • the present invention relates to an echo removal apparatus, an echo removal method, and a program for an echo removal apparatus for removing an acoustic echo component from an audio signal transmitted to a communication destination apparatus.
  • a video conference system in which audio signals and video signals are transmitted and received between terminal devices installed at a plurality of bases, and a conference can be performed by exchanging audio and video between users in real time.
  • the voice uttered by a user returns, slightly delayed, to the user's own site via the speaker and microphone at the remote site, producing a so-called acoustic echo in which the uttered voice reverberates.
  • the voice uttered by the user at the local site is transmitted to the other site and output from the speaker.
  • when the output voice is picked up by the microphone at the other site, it is transmitted back to the local site and output from the local site's speaker.
  • the positional relationship between the microphone and the speaker may change during the conference, for example, when the user moves from his seat to the front of the whiteboard with a microphone and gives an explanation.
  • information on the time lag and the level lag is hereinafter also referred to as the "parameters" of the acoustic echo component.
  • in a conventional echo canceller, these parameters are constantly updated so that the acoustic echo component is always obtained based on the latest parameters (see, for example, Patent Document 1).
  • as a result, the echo canceller is continuously subjected to the load of calculating those parameters. If the parameters are updated when there is no change in the positional relationship between the microphone and the speaker, the parameters before and after the update are the same or differ only slightly, so performing the update only places a wasteful load on the echo canceller.
  • An object of the present invention is to provide an echo removal apparatus, an echo removal method, and a program for an echo removal apparatus that can newly obtain time shift information and level shift information when the arrangement position of the input means or the output means changes, and remove the acoustic echo component based on the latest information.
  • An echo removal apparatus according to a first aspect includes: an output means that converts a received voice signal, which is a voice signal received from a communication destination apparatus, into voice and outputs it; an input means that converts surrounding voice into a transmission voice signal, which is a voice signal to be transmitted to the communication destination apparatus; a position detection means that detects that a change has occurred in the arrangement position of at least one of the output means and the input means; a generating means that generates, when the position detection means detects a change in the arrangement position, a reference signal used as a reference for removing from the transmission voice signal an acoustic echo component generated when the voice output from the output means is input to the input means; a superimposing means that superimposes the reference signal on the received voice signal; an extraction means that performs a filtering process on the transmission voice signal converted by the input means to extract the reference signal; a calculation means that compares the generated reference signal with the extracted reference signal and obtains information on the time shift between the generation timing and the extraction timing and information on the level shift between the signal levels at those timings; a removing means that generates the acoustic echo component by applying the time shift information and the level shift information to the received voice signal, subtracts it from the transmission voice signal, and thereby generates a removed voice signal from which the acoustic echo component has been removed; and a transmitting means that transmits the removed voice signal to the communication destination apparatus as the transmission voice signal.
  • the reference signal generated when obtaining the time shift information and the level shift information necessary for generating the acoustic echo component can be superimposed on the received voice signal and output from the output means. Therefore, even during transmission and reception of voice signals to and from the communication destination apparatus (hereinafter referred to as "in operation"), the time shift information and the level shift information can be obtained and updated using the reference signal. As a result, even if, during operation, a change occurs in the arrangement position of the output means or the input means so that an appropriate acoustic echo component can no longer be generated with the time shift information and the level shift information used so far, new time shift information and level shift information can be obtained and updated immediately.
  • the reference signal can be generated when it is detected that a change has occurred in the arrangement position of at least one of the output means and the input means. In other words, if there is no change in the arrangement position of the output means or the input means, the reference signal is not generated and the calculation for obtaining the time shift information and the level shift information is not performed. That is, the time shift information and the level shift information are updated only when a situation that requires it occurs (when there is a change in the arrangement position of the output means or the input means), so compared with the case where they are updated constantly or periodically, no wasteful load is placed on the echo canceller.
  • the position detection means detects a change in the arrangement position of the output means and the input means; it detects not only a change in the relative positional relationship between the output means and the input means but also a change in the absolute arrangement position of each. Therefore, it is possible to reliably detect a change in a situation that may affect the generation accuracy of the acoustic echo component.
  • the first aspect may further include a photographing means that photographs an image including at least one of the output means and the input means from a fixed position, and an analysis means that analyzes the position of at least one of the output means and the input means in the image photographed by the photographing means.
  • the position detection means may detect that a change has occurred in the arrangement position based on the analysis result of the analysis means. If the photographing means is used and the output means and the input means are photographed from a fixed position, an absolute change in the arrangement position of at least one of the output means and the input means can be detected easily and reliably simply by analyzing the photographed image and grasping the positions of both in it.
  • the first aspect may further include an acceleration detection unit that detects an acceleration applied to at least one of the output unit and the input unit.
  • based on the detection result of the acceleration detection means, the position detection means may detect that a change has occurred in the arrangement position. An acceleration detection means can easily be provided integrally with the output means or the input means, and if there is a change in the arrangement position of the output means or the input means, acceleration is applied to the acceleration detection means. Therefore, if the presence or absence of movement of the output means or the input means is grasped based on the detection result of the acceleration detection means, an absolute change in the arrangement position of at least one of the output means and the input means can be detected reliably.
  • the generating means may generate a signal whose audio waveform has a frequency in the non-audible region as the reference signal. If the frequency of the audio waveform of the reference signal is in the non-audible region, the user cannot hear the sound based on the reference signal even when it is superimposed on the received voice signal and output from the output means. In this case, the user hears only the voice based on the received voice signal, so even if the reference signal is output during operation, the user's utterance and listening are not hindered by it. Therefore, if there is a change in the arrangement position of the output means or the input means, new time shift information and level shift information can be obtained and updated immediately.
  • the first aspect may further include a determination unit that determines whether or not the received audio signal is in a silent state.
  • when the position detection means detects a change in the arrangement position and the determination means determines that the received audio signal is silent, the generation means may generate a signal whose audio waveform has a frequency in the audible region as the reference signal.
  • an audible frequency signal has a wider directivity than a non-audible frequency signal.
  • the frequency of the acoustic echo component is also the frequency in the audible region.
  • the generation accuracy of the acoustic echo component can be further improved by obtaining the time shift information and the level shift information using a reference signal with a frequency in the audible region, which has wide directivity and frequency characteristics close to those of the acoustic echo component.
  • a reference signal having a frequency in the audible region is superimposed on the received sound signal and output from the output means, the user can hear the sound based on the reference signal together with the sound based on the received sound signal.
  • there is a risk that the user's utterance and listening will be hindered by the reference signal. Therefore, it is preferable to generate the reference signal with a frequency in the audible region when the received audio signal is in a silent state.
  • An echo removal method according to a second aspect includes: an output step in which a received voice signal, which is a voice signal received from a communication destination apparatus, is converted into voice and output from an output means; an input step in which surrounding voice is converted by an input means into a transmission voice signal, which is a voice signal to be transmitted to the communication destination apparatus; a position detection step in which a change in the arrangement position of at least one of the output means and the input means is detected; a generation step in which a reference signal, which serves as a reference for removing from the transmission voice signal an acoustic echo component generated when the voice output from the output means is input to the input means, is generated when a change is detected in the position detection step; a superimposition step in which the reference signal is superimposed on the received voice signal; an extraction step in which the transmission voice signal converted in the input step is filtered and the reference signal is extracted; a calculation step in which the generation reference signal, which is the reference signal generated in the generation step, and the extraction reference signal, which is the reference signal extracted in the extraction step, are compared to obtain information on the time lag between the generation timing of the generation reference signal and the extraction timing of the extraction reference signal, and information on the level lag between the signal level of the generation reference signal at the generation timing and the signal level of the extraction reference signal at the extraction timing; a removal step in which the acoustic echo component is generated from the received voice signal based on the time lag information and the level lag information and subtracted from the transmission voice signal to generate a removed voice signal; and a transmission step in which the removed voice signal is transmitted to the communication destination apparatus as the transmission voice signal.
  • the reference signal generated when obtaining the time lag information and the level lag information necessary for generating the acoustic echo component can be superimposed on the received audio signal and output from the output means. Therefore, even during transmission and reception of audio signals to and from the communication destination apparatus (during operation), the time lag information and the level lag information can be obtained and updated using the reference signal. As a result, even if, during operation, a change occurs in the arrangement position of the output means or the input means so that an appropriate acoustic echo component can no longer be generated with the information used so far, new time lag information and level lag information can be obtained and updated immediately.
  • the reference signal can be generated when it is detected that a change has occurred in the arrangement position of at least one of the output means and the input means.
  • the reference signal is not generated, and the calculation for obtaining the time shift information and the level shift information is not performed.
  • the time lag information and the level lag information are updated appropriately when a necessary situation occurs (when there is a change in the arrangement position of the output means or the input means). Compared to the case where it is regularly updated, there is no unnecessary load on the echo canceller.
  • a change in the arrangement position of the output means and the input means is detected; not only a change in the relative positional relationship between the output means and the input means but also a change in the absolute arrangement position of each is detected. Therefore, it is possible to reliably detect a change in a situation that may affect the generation accuracy of the acoustic echo component.
  • the program of the echo removal apparatus causes a computer to function as various processing means of the echo removal apparatus according to claim 1.
  • by causing a computer to execute the program of the echo removal apparatus, the effect of the invention described in claim 1 can be achieved.
  • the echo canceller is used for a terminal device of a video conference system in which users at remote locations (multiple locations) can exchange audio and video in real time via a network and proceed with a conference or the like.
  • the echo canceller is provided as a device that controls processing related to sound in the terminal device, and is incorporated in the terminal device of the video conference system as part of the hardware circuit.
  • the video conference system is a system that can transmit and receive audio signals and video signals between terminal devices 2 to 4 connected to each other via a network 1.
  • Each of the terminal devices 2 to 4 plays a role as a client or a host in the video conference system depending on the situation.
  • the terminal devices 2 to 4 may be used as clients.
  • the terminal devices 2 to 4 are all video conference dedicated terminals having the same configuration, and the details of the echo removal device will be described by taking the echo removal unit 8 of the terminal device 2 as an example.
  • three terminal devices 2 to 4 are connected to the network 1, but the number of terminal devices constituting the video conference system is not limited to three.
  • the terminal device 2 includes a known CPU 80 that controls the entire terminal device 2.
  • a ROM 82, a RAM 84, and an input / output interface 88 are connected to the CPU 80 via a bus 86.
  • An operation unit 92, a video processing unit 94, an audio processing unit 10, and a communication unit 46 are connected to the input / output interface 88.
  • the ROM 82 stores various programs and data for operating the terminal device 2.
  • the CPU 80 controls the operation of the terminal device 2 according to the program stored in the ROM 82.
  • the RAM 84 temporarily stores various data.
  • the operation unit 92 is an input device for a user to operate the terminal device 2.
  • the communication unit 46 connects the terminal device 2 at its own site to the terminal devices 3 and 4 at the other sites via the network 1, and exchanges various signals (control signals, audio signals, video signals, etc.) converted into the communication protocol between the terminals. Furthermore, the communication unit 46 exchanges audio signals and video signals with the audio processing unit 10 and the video processing unit 94 via the input/output interface 88.
  • the terminal device 2 also includes a codec, and compresses a signal to be transmitted and decompresses a received signal.
  • a video input device 96 and a video output device 98 are connected to the video processing unit 94.
  • the video processing unit 94 processes video captured by the video input device 96 (for example, a camera) and generates a video signal to be transmitted to the terminal devices 3 and 4.
  • the video processing unit 94 processes video signals received from the terminal devices 3 and 4 and displays the video on a video output device 98 (for example, a monitor).
  • a voice input device 60 and a voice output device 70 are connected to the voice processing unit 10.
  • the audio processing unit 10 processes audio input to the microphone 64 of the audio input device 60 and generates an audio signal (hereinafter referred to as “transmission audio signal”) to be transmitted to the terminal devices 3 and 4.
  • the audio processing unit 10 processes audio signals received from the terminal devices 3 and 4 (hereinafter referred to as “received audio signals”), and outputs audio from the speaker 74 of the audio output device 70.
  • the audio processing unit 10, the audio input device 60, the audio output device 70, the communication unit 46, and the components that control these processing units (the CPU 80, ROM 82, RAM 84, etc.) constitute the echo removal unit 8.
  • the voice input device 60 includes a microphone 64 and an acceleration sensor 62, and is configured as a movable device.
  • the microphone 64 converts input ambient sound into an electric signal (analog sound signal).
  • the acceleration sensor 62 detects acceleration applied to the voice input device 60.
  • the audio output device 70 includes a speaker 74 and an acceleration sensor 72, and is configured as a movable device like the audio input device 60.
  • the speaker 74 converts an input electric signal (analog audio signal) into a sound and outputs the sound.
  • the acceleration sensor 72 detects acceleration applied to the audio output device 70.
  • the voice input device 60 and the voice output device 70 are provided separately from the terminal device 2 so that the installation location (arrangement position) can be changed independently.
  • the voice processing unit 10 includes a movement detection unit 12, reference signal generation units 14 and 16, a switch (SW) 18, a switch control unit 22, an adder 24, an A / D converter 26, a D / A converter 28, A / D converter 30, digital filter 34, signal comparison unit 36, delay processing unit 38, attenuation processing unit 40, subtractor 42, timer 44, and distributors 20 and 32.
  • An acceleration sensor 62 of the voice input device 60 and an acceleration sensor 72 of the voice output device 70 are connected to the movement detection unit 12 via the A / D converter 26.
  • the movement detection unit 12 detects that movement from the current position has occurred in at least one of the voice input device 60 and the voice output device 70 based on the detection results of acceleration by the acceleration sensors 62 and 72. That is, the movement detection unit 12 can detect not only a change in the relative positional relationship between the audio input device 60 and the audio output device 70 but also a change in each absolute arrangement position.
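  • As an illustration only, movement detection from the acceleration sensors could be sketched as below; the sampling arrangement, the threshold value, and the function names are assumptions and are not taken from the embodiment.

```python
import numpy as np

def movement_detected(accel_samples, threshold=0.5):
    """Decide whether a device has moved, given recent accelerometer samples.

    accel_samples: array of shape (N, 3) with x/y/z acceleration in m/s^2,
                   already digitised by the A/D converter.
    threshold:     assumed magnitude change (m/s^2) above which the device is
                   treated as having moved from its current position.
    """
    magnitudes = np.linalg.norm(accel_samples, axis=1)
    # At rest the magnitude stays near 1 g; a large deviation suggests motion.
    return np.max(np.abs(magnitudes - np.median(magnitudes))) > threshold

# The movement detection unit would check both devices and trigger
# reference-signal generation if either one has moved.
def arrangement_changed(mic_accel, spk_accel):
    return movement_detected(mic_accel) or movement_detected(spk_accel)
```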
  • the inputs of the reference signal generators 14 and 16 are connected to the movement detector 12 respectively. Further, the outputs of the reference signal generation units 14 and 16 are connected to an adder 24 and a signal comparison unit 36 (described later) via the switch 18 and the distributor 20, respectively.
  • the reference signal generation unit 14 generates a signal whose frequency of the audio waveform is an audible frequency (1 KHz in the present embodiment) as a reference signal, and outputs the signal to the adder 24 and the signal comparison unit 36.
  • the reference signal generation unit 16 generates a signal having a frequency of the sound waveform in the non-audible region (100 kHz in the present embodiment) as the reference signal, and outputs the signal to the adder 24 and the signal comparison unit 36.
  • the switch 18 selectively connects either the reference signal generation unit 14 or the reference signal generation unit 16 to the adder 24 and the signal comparison unit 36. More specifically, the switch 18 is controlled by the switch control unit 22 to switch between a connection (A side in FIG. 1) through which the 1 kHz reference signal is input to the adder 24 and the signal comparison unit 36, and a connection (B side in FIG. 1) through which the 100 kHz reference signal is input.
  • the switch 18 is shown as a contact type switch for the sake of convenience. However, a contactless type switch using a transistor or the like is preferable.
  • the switch control unit 22 is provided on the path through which the received audio signal is input to the adder 24. More specifically, the received audio signal received from the terminal devices 3 and 4 by the communication unit 46 is input to the audio processing unit 10 via the input/output interface 88, and the switch control unit 22 is provided between the input/output interface 88 and the adder 24.
  • the switch control unit 22 determines whether or not the received audio signal passing through the switch control unit 22 is in a silent state.
  • the silent state refers to a state in which the signal level of the received audio signal (the amplitude of the audio waveform) is 0 or less than a predetermined threshold; when the received audio signal itself is not input, the signal level is 0, so this case is also regarded as silent.
  • the switch control unit 22 performs control so that the switch 18 is switched to the A side when the received audio signal is silent, and the switch 18 is switched to the B side when it is in a sound state.
  • the silent state may be determined when the received audio signal passes through, as described above. However, to improve accuracy, it is better to judge that the signal is silent only when the state where the signal level is less than the threshold continues for a predetermined time (for example, 1 second).
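  • A minimal sketch of such a silence determination is shown below; the sampling rate, the amplitude threshold, and the function name are assumptions, since the embodiment specifies only the idea of a threshold held for about 1 second.

```python
import numpy as np

SAMPLE_RATE = 16_000          # assumed sampling rate of the received signal
SILENCE_THRESHOLD = 0.01      # assumed amplitude threshold (full scale = 1.0)
HOLD_TIME_S = 1.0             # signal must stay below threshold this long

def is_silent(frames):
    """Return True if the received audio signal has been silent for HOLD_TIME_S.

    frames: 1-D array of the most recent samples. Absence of a received
    signal can be represented as an all-zero array, which is also judged
    silent, as in the description above.
    """
    needed = int(HOLD_TIME_S * SAMPLE_RATE)
    if len(frames) < needed:
        return False                      # not enough history yet
    recent = np.abs(frames[-needed:])
    return np.max(recent) < SILENCE_THRESHOLD
```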
  • the switch control unit 22 also transmits, to the digital filter 34 described later, an instruction to switch to the filter setting corresponding to the reference signal that is generated according to the signal level of the received audio signal.
  • the input of the adder 24 is connected to the reference signal generation units 14 and 16 through the switch 18 and the communication unit 46 through the switch control unit 22 and the input / output interface 88.
  • a D / A converter 28 and a delay processing unit 38 are connected to the output of the adder 24, respectively.
  • the adder 24 superimposes the reference signal input from the reference signal generation unit 14 or 16 on the received audio signal input from the communication unit 46 (that is, combines the received audio signal and the reference signal), and outputs the result as the output audio signal to the D/A converter 28 and the delay processing unit 38.
  • the reference signal is not always generated.
  • when no reference signal is generated, the adder 24 passes the received audio signal through as it is and outputs it to the D/A converter 28 and the delay processing unit 38.
  • the reference signal may be generated even when the received audio signal is in a silent state (including no input).
  • the adder 24 passes the reference signal as it is and outputs it to the D / A converter 28 and the delay processing unit 38.
  • these signals output from the adder 24 are also referred to as output audio signals.
  • the speaker 74 of the audio output device 70 is connected to the output of the D / A converter 28 via an amplifier (not shown).
  • the D / A converter 28 converts the output audio signal into an analog audio signal and outputs the analog audio signal to the speaker 74.
  • the speaker 74 converts an input audio signal into audio and outputs it.
  • the microphone 64 of the voice input device 60 is connected to the input of the A / D converter 30.
  • the sound around the sound input device 60 is input to the microphone 64 and converted into an analog sound signal, and further converted into a digital sound signal (hereinafter referred to as “input sound signal”) by the A / D converter 30.
  • the output of the A/D converter 30 is connected to the digital filter 34 and the subtractor 42 via the distributor 32.
  • the digital filter 34 performs a filtering process on the input audio signal input from the A / D converter 30 and extracts a reference signal included in the input audio signal.
  • a band pass filter (BPF) that can be set to selectively extract a 1 KHz or 100 KHz signal is adopted as the digital filter 34.
  • the digital filter 34 is configured to switch the frequency setting of the audio waveform to be extracted in accordance with the instruction from the switch control unit 22. More specifically, the filter setting of the digital filter 34 is switched so that the 1 kHz reference signal is extracted when the received audio signal passing through the switch control unit 22 is silent, and the 100 kHz reference signal is extracted when it is not silent.
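  • Purely as an illustration, this switchable band-pass filtering could be sketched as follows; the sampling rate, filter order, and bandwidth are assumptions (a rate above 200 kHz is assumed only so that the 100 kHz tone can be represented), and scipy is used here in place of whatever hardware or DSP implementation the embodiment actually employs.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 384_000  # assumed sampling rate, high enough to carry the 100 kHz tone

def make_bandpass(center_hz, bandwidth_hz=400, order=2, fs=FS):
    """Design a band-pass filter around the selected reference frequency."""
    low = (center_hz - bandwidth_hz / 2) / (fs / 2)
    high = (center_hz + bandwidth_hz / 2) / (fs / 2)
    return butter(order, [low, high], btype="bandpass")

# Filter settings corresponding to the two reference signals.
BPF_1KHZ = make_bandpass(1_000)
BPF_100KHZ = make_bandpass(100_000)

def extract_reference(input_signal, received_is_silent):
    """Extract the reference signal matching the current switch setting."""
    b, a = BPF_1KHZ if received_is_silent else BPF_100KHZ
    return lfilter(b, a, input_signal)
```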
  • the output of the digital filter 34 is connected to the signal comparison unit 36. That is, two types of reference signals are input to the signal comparison unit 36.
  • One is a reference signal (hereinafter referred to as “generated reference signal”) that is generated by the reference signal generation units 14 and 16 and is input as it is (without deterioration).
  • the other is the reference signal that is generated by the reference signal generation units 14 and 16, passes through the adder 24, the D/A converter 28, the speaker 74, the microphone 64, and the A/D converter 30, and is then extracted in deteriorated form from the input audio signal by the digital filter 34 (hereinafter referred to as the "extraction reference signal").
  • the timer 44, which provides the count value T used for calculating the time lag between the input timing of the generation reference signal (that is, the generation timing of the reference signal) and the extraction timing of the extraction reference signal, is connected to the signal comparison unit 36.
  • the signal comparison unit 36 compares the sound waveform of the generated reference signal with the sound waveform of the extracted reference signal, and obtains a time shift (delay) and a level shift (attenuation) of the extracted reference signal with respect to the generated reference signal.
  • the output of the signal comparison unit 36 is connected to a delay processing unit 38 and an attenuation processing unit 40.
  • the delay processing unit 38 receives the output audio signal output from the adder 24 and the time shift information (P) obtained by the signal comparison unit 36.
  • the delay processing unit 38 performs a process of delaying and outputting (delaying) the input output audio signal based on the time lag information.
  • the attenuation processing unit 40 receives the output audio signal that has been subjected to delay processing and is output from the delay processing unit 38 and the level shift information (L) obtained by the signal comparison unit 36 as described above.
  • the attenuation processing unit 40 performs a process of lowering (attenuating) the signal level of the output audio signal subjected to the delay process based on the level shift information.
  • the input of the subtractor 42 is connected to the attenuation processing unit 40 and the microphone 64 via the distributor 32 and the A / D converter 30. That is, two types of audio signals are input to the subtractor 42.
  • One audio signal is an output audio signal (hereinafter referred to as “acoustic echo component”) output from the adder 24 and subjected to delay processing and attenuation processing via the delay processing unit 38 and the attenuation processing unit 40.
  • the other audio signal is the aforementioned input audio signal, that is, the output audio signal that is output from the adder 24, converted into sound by the speaker 74, picked up by the microphone 64 together with the surrounding sound, and converted into an audio signal again.
  • the subtractor 42 superimposes the inverted waveform of the acoustic echo component on the waveform of the input speech signal, thereby removing the acoustic echo component from the input speech signal and generating a voice signal from which the echo has been removed (hereinafter referred to as the "removed speech signal").
  • the output of the subtracter 42 is connected to the communication unit 46 via the input / output interface 88.
  • the removed audio signal is transmitted as a transmission audio signal from the communication unit 46 to the terminal devices 3 and 4 via the network 1.
  • the removed audio signal obtained by removing the acoustic echo component from the input audio signal based on the audio input to the microphone 64 is transmitted to the terminal devices 3 and 4 as a transmission audio signal.
  • the flow of processing will be described with reference to FIGS. For convenience, each step in the flowchart is abbreviated as “S”.
  • the terminal device 2 shown in FIG. 1 is driven when the power is turned on. That is, the CPU 80 drives the terminal device 2 by causing each processing unit to execute a sequence at the start of driving in accordance with a program stored in the ROM 82 and controlling transmission / reception of signals between the processing units (devices). .
  • the communication unit 46 negotiates with the terminal devices 3 and 4 via the network 1 to establish communication.
  • the initialization process (S9) shown in FIG. 2 is then performed, and the parameters (time shift information (P) and level shift information (L)) necessary for removing the acoustic echo component are set.
  • the initialization process is performed according to the process flow shown in FIG. First, the timer 44 is started (S61), and the count value of the internal timer is incremented at regular intervals.
  • a reference signal is generated (S63).
  • the initialization process is performed in a state where no received audio signal is input from the terminal devices 3 and 4 (a state where communication is not established or is interrupted). Therefore, the switch control unit 22 shown in FIG. 1 determines that the received audio signal is in a silent state, and the connection of the switch 18 is switched to the A side. Accordingly, in S63, the reference signal generation unit 14 is driven, and a reference signal whose audio waveform has a frequency in the audible region (1 kHz) is generated.
  • the reference signal is generated as a signal in which a 1 kHz tone is intermittently repeated at regular intervals (in the figure, the audio waveform of the reference signal (generation reference signal) is shown by a solid line).
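  • A sketch of generating such an intermittently repeated tone is shown below; the burst and gap durations and the sampling rate are assumptions, since the embodiment specifies only the 1 kHz frequency and the intermittent repetition.

```python
import numpy as np

FS = 48_000  # assumed sampling rate for the audible 1 kHz reference signal

def make_reference_burst(freq_hz=1_000, burst_s=0.05, gap_s=0.2, repeats=5, fs=FS):
    """Generate a reference signal as a tone repeated intermittently at
    regular intervals (durations are illustrative assumptions)."""
    t = np.arange(int(burst_s * fs)) / fs
    burst = np.sin(2 * np.pi * freq_hz * t)
    gap = np.zeros(int(gap_s * fs))
    return np.concatenate([np.concatenate([burst, gap]) for _ in range(repeats)])
```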
  • the generated reference signal is input to the signal comparison unit 36 as a generated reference signal via the distributor 20.
  • the signal comparison unit 36 obtains the count value T of the timer 44 in response to the input of the generation reference signal, and uses this timing as a reference for determining the delay of the reference signal as a reference signal generation timing T0 (see FIG. 4). Hold. Further, the signal comparison unit 36 obtains the signal level of the generation reference signal and holds it as the generation level L0 (see FIG. 4).
  • the generated reference signal is output as sound from the speaker 74 of the sound output device 70 via the distributor 20, the adder 24, and the D / A converter 28 (S65). Since the received audio signal is silent, the reference signal passes through the adder 24 as it is and is output as an output audio signal, and the speaker 74 outputs an audible sound based on the 1 KHz reference signal.
  • the microphone 64 of the voice input device 60 is in a voice input waiting state (S67: NO).
  • when the 1 kHz sound output from the speaker 74 is input to the microphone 64 (S67: YES), it is converted into an input sound signal and input to the digital filter 34 via the A/D converter 30 and the distributor 32.
  • at this point the filter setting made by the switch control unit 22 corresponds to the silent state of the received audio signal, that is, the setting for selectively extracting the 1 kHz signal. Therefore, even if the input sound signal includes not only the reference signal but also signals based on the sound around the microphone 64, the 1 kHz reference signal is extracted from the input sound signal and input to the signal comparison unit 36 as the extraction reference signal (S69).
  • the signal comparison unit 36 acquires the count value T of the timer 44 in response to the input of the extraction reference signal, and holds it as the reference signal extraction timing T1, as shown in FIG.
  • in the figure, the voice waveform of the extraction reference signal is indicated by a solid line and the voice waveform of the generation reference signal by a dotted line. Further, the signal comparison unit 36 obtains the signal level of the extraction reference signal and holds it as the extraction level L1.
  • the signal comparison unit 36 calculates T1-T0 and obtains the time shift P (S71). This time shift information (P) is transmitted to the delay processing unit 38 and set as a parameter for the delay processing. Similarly, the signal comparison unit 36 calculates L1 / L0 and obtains the level deviation L (S73). This level shift information (L) is transmitted to the attenuation processing unit 40 and set as a parameter for the attenuation processing. This is the end of the initialization process (S9).
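  • The calculation of S71 and S73 amounts to simple arithmetic on the held timings and levels; the following illustrative Python sketch (the function name and example values are assumptions) shows it.

```python
def compute_parameters(t0, l0, t1, l1):
    """Compare the generation reference signal with the extraction reference
    signal and derive the echo parameters, as in S71 and S73.

    t0, l0: generation timing (timer count) and signal level of the
            generation reference signal.
    t1, l1: extraction timing and signal level of the extraction
            reference signal.
    """
    p = t1 - t0        # time shift (delay) of the echo path
    l = l1 / l0        # level shift (attenuation) of the echo path
    return p, l

# Example: reference generated at count 1000 with level 1.0, extracted at
# count 1012 with level 0.25 -> P = 12 counts, L = 0.25.
p, l = compute_parameters(1000, 1.0, 1012, 0.25)
```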
  • a series of processes for removing acoustic echoes using the set parameters (P, L) are performed.
  • transmission / reception of audio signals (reception of reception audio signals and transmission of transmission audio signals) is performed by communication with the terminal devices 3 and 4 via the network 1 (S11).
  • in the audio processing unit 10, as described above, if there is a change (movement) in the arrangement position of the audio input device 60 (microphone 64) or the audio output device 70 (speaker 74), the movement detection unit 12 detects it and causes the reference signal generation unit 14 or 16 to generate a reference signal.
  • the reference signal is not generated.
  • the received audio signal received from the terminal devices 3 and 4 passes through the adder 24 as it is and is output as an output audio signal, and is output as audio from the speaker 74 of the audio output device 70 via the D / A converter 28. (S15).
  • when the sound output from the speaker 74 is input to the microphone 64, which has been in a voice input waiting state (S17: NO), it is converted into an input sound signal and input to the subtractor 42 via the A/D converter 30 and the distributor 32.
  • the input audio signal is also input to the digital filter 34 via the distributor 32, but since no reference signal is generated, no processing is performed in the signal comparison unit 36 after the signal passes through the digital filter 34. Alternatively, when the reference signal is not generated, the input path from the distributor 32 to the digital filter 34 may be blocked.
  • the output audio signal output from the adder 24 (the received audio signal on which the reference signal is not superimposed here) is also input to the delay processing unit 38.
  • the delay processing unit 38 holds the time lag information (P) transmitted from the signal comparison unit 36, delays the output audio signal input from the adder 24 by P time, and outputs it to the attenuation processing unit 40.
  • the attenuation processing unit 40 holds the level shift information (L) transmitted from the signal comparison unit 36, and attenuates the output audio signal input from the delay processing unit 38 by the factor L to generate the acoustic echo component (S21).
  • the subtractor 42 receives the input audio signal input from the microphone 64 and the acoustic echo component generated by performing the delay processing and the attenuation processing on the output audio signal.
  • the subtractor 42 cancels the acoustic echo component contained in the input speech signal by superimposing the inverted waveform of the acoustic echo component on the waveform of the input speech signal, and generates the removed speech signal from which the acoustic echo has been removed (S23).
  • after S23, the process returns to S11, and the generated removed audio signal is transmitted as a transmission audio signal from the communication unit 46 to the terminal devices 3 and 4 via the network 1 (S11).
  • this transmission audio signal does not include, among the sound around the terminal device 2 input to the microphone 64, the audio output from the speaker 74 based on the received audio signals from the terminal devices 3 and 4; it is based only on the voice newly uttered on the terminal device 2 side. Therefore, even if the sound based on this transmission audio signal is output from the speakers of the terminal devices 3 and 4, no acoustic echo is generated. Thereafter, as long as there is no change in the arrangement position of the voice input device 60 or the voice output device 70 (S13: NO), S11, S13, and S15 to S23 are repeated, and the acoustic echo is removed using the parameters (P, L) obtained in the initialization process.
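  • The steady-state echo removal of S19 to S23 can be pictured with the following minimal Python sketch; it treats P as a delay expressed in samples and L as an attenuation factor, which is an assumption about units (the embodiment holds P as a timer count), and the function name is illustrative only.

```python
import numpy as np

def remove_acoustic_echo(output_signal, input_signal, p_samples, l_factor):
    """Generate the acoustic echo component by delaying the output audio
    signal by P samples and attenuating it by the factor L, then subtract
    it from the input audio signal (corresponding to S19, S21, S23)."""
    # Delay processing (delay processing unit 38): shift the output signal by P.
    delayed = np.zeros(len(input_signal))
    n = min(len(input_signal) - p_samples, len(output_signal))
    if n > 0:
        delayed[p_samples:p_samples + n] = output_signal[:n]
    # Attenuation processing (attenuation processing unit 40): scale by L.
    echo_component = l_factor * delayed
    # Subtractor 42: cancel the echo contained in the input audio signal.
    return input_signal - echo_component
```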
  • when a change in the arrangement position is detected (S13: YES) and the received voice signal is determined to be silent (S31: YES), the 1 kHz reference signal is generated as in the initialization process (S33); the signal comparison unit 36 acquires the count value T of the timer 44 when the generation reference signal is input and holds it as the reference signal generation timing T0, and also obtains the signal level of the generation reference signal and holds it as the generation level L0.
  • the adder 24 passes the input reference signal as it is, and outputs this reference signal as an output audio signal to the D / A converter 28 and the delay processing unit 38.
  • the output audio signal is converted into an analog audio signal via the D / A converter 28, and output as an audible sound based on the 1 KHz reference signal from the speaker 74 of the audio output device 70 (S39).
  • the switch control unit 22 determines that the received voice signal is not silent (S31: NO), as described above, a reference signal whose frequency of the voice waveform is inaudible (100 KHz) is generated. (S35).
  • the signal comparison unit 36 holds the count value T of the timer 44 as the generation timing T0 of the reference signal, and holds the signal level as the generation level L0.
  • the adder 24 superimposes the reference signal on the input received audio signal, and outputs it as an output audio signal to the D / A converter 28 and the delay processing unit 38 (S37).
  • the audio based on the received audio signal is output from the speaker 74 of the audio output device 70 together with the inaudible sound based on the reference signal ( S39).
  • the presence / absence of voice input detection is determined by the microphone 64 of the voice input device 60 (S41). And when input detection is not performed, it is in a waiting state (S41: NO).
  • when the sound output from the speaker 74 is input to the microphone 64 (S41: YES), the sound is converted into an input sound signal, which is input to the digital filter 34 via the A/D converter 30 and the distributor 32.
  • the digital filter 34 has been set by the switch control unit 22 to selectively extract the 1 kHz signal when the received audio signal is silent and the 100 kHz signal when it is not silent. Therefore, whether the reference signal contained in the input audio signal has a frequency in the non-audible region or in the audible region, the reference signal corresponding to the filter setting is extracted by passing through the digital filter 34 (S43).
  • the extracted reference signal (extracted reference signal) is input to the signal comparison unit 36.
  • the signal comparison unit 36 obtains the extraction timing T1 and the extraction level L1 of the extraction reference signal, and obtains the time shift P and the level shift L based on the generation timing T0 and the generation level L0 obtained from the generation reference signal.
  • the processing of S45 and S47 is the same as the processing of S71 and S73 described above.
  • the newly obtained parameters (P, L) are transmitted to the delay processing unit 38 and the attenuation processing unit 40, respectively, and the parameters already held (those obtained in previous processing such as the initialization process) are updated.
  • the delay processing unit 38 delays the output audio signal input from the adder 24 by P (S49), and the attenuation processing unit 40 attenuates the output audio signal input from the delay processing unit 38 by the factor L to generate the acoustic echo component (S51); these are the same as the processing of S19 and S21 described above.
  • the processing in the subtractor 42, in which the removed voice signal is generated by superimposing the inverted waveform of the acoustic echo component on the voice waveform of the input voice signal (S53), is likewise the same as the processing of S23 described above.
  • the process returns to S11, and the removed audio signal generated using the new parameter is transmitted as a transmission audio signal from the communication unit 46 to the terminal devices 3 and 4 via the network 1 (S11).
  • when the arrangement position of the voice input device 60 or the voice output device 70 changes, the path by which the audio output from the speaker 74 reaches the microphone 64 changes, and the parameters for generating the acoustic echo component also change. Therefore, when a change in the arrangement position of at least one of the voice input device 60 and the voice output device 70 is detected, the parameters are updated, so that the acoustic echo component can be removed reliably in accordance with the (current) environment after the change. Consequently, if the removed audio signal generated using the new parameters is transmitted to the terminal devices 3 and 4 as the transmission audio signal, no acoustic echo is produced even when the audio based on that signal is output from the speakers of the terminal devices 3 and 4.
  • the reference signal generated when obtaining the parameters (time shift information (P) and level shift information (L)) necessary for generating the acoustic echo component can be superimposed on the received audio signal and output from the speaker 74. Accordingly, even while the video conference system is operating and audio signals are being transmitted and received between the terminal device 2 and the terminal devices 3 and 4 (in operation), the parameters can be obtained and updated using the reference signal. Thereby, even if, during operation, a change occurs in the arrangement position of the audio input device 60 (microphone 64) or the audio output device 70 (speaker 74) so that an appropriate acoustic echo component cannot be generated with the parameters used so far, new parameters can be obtained and updated immediately.
  • in this way, an appropriate acoustic echo component can be generated in response to a change in a situation that may affect the generation accuracy of the acoustic echo component during operation, and the accuracy of removing the acoustic echo component from the transmission voice signal can be maintained.
  • the reference signal can be generated when it is detected that a change has occurred in at least one arrangement position of the audio input device 60 and the audio output device 70.
  • the reference signal is not generated, and the calculation for obtaining the parameters (information about time shift and level shift) is not performed.
  • the parameter is updated appropriately when a necessary situation occurs (when the arrangement position of the audio input device 60 or the audio output device 70 is changed), and is updated constantly or periodically. Compared to the case where the echo canceling unit 8 is used, no unnecessary load is applied to the echo removing unit 8.
  • the movement detection unit 12 detects a change in the arrangement position of the voice input device 60 and the voice output device 70, but not only changes in the relative positional relationship between the voice input device 60 and the voice output device 70, Changes in the respective absolute positions are detected. Therefore, it is possible to reliably detect a change in the situation that may affect the generation accuracy of the acoustic echo component.
  • the acceleration sensors 62 and 72 can be easily provided integrally with the microphone 64 and the speaker 74. If there is a change in the arrangement position of the voice input device 60 in which the acceleration sensor 62 and the microphone 64 are integrated, or the voice output device 70 in which the acceleration sensor 72 and the speaker 74 are integrated, acceleration is applied to the acceleration sensors 62 and 72. . Therefore, if the presence or absence of movement of the voice input device 60 or the voice output device 70 is grasped based on the detection results of the acceleration sensors 62 and 72, at least one of the voice input device 60 and the voice output device 70 can be easily and reliably obtained. It is possible to detect an absolute change in the arrangement position.
  • since the frequency of the sound waveform of the reference signal is in the non-audible region, the user cannot hear the sound (reference sound) based on the reference signal even when it is superimposed on the received sound signal and output from the speaker 74. In this case, the user hears only the voice based on the received voice signal, so even if the reference signal is output during operation, the user's utterance and listening are not hindered by it. Therefore, when a change occurs in the arrangement position of the voice input device 60 or the voice output device 70, new parameters can be obtained and updated immediately.
  • as a result, an appropriate acoustic echo component can be generated in response to a change in a situation that may affect the generation accuracy of the acoustic echo component during operation, and the accuracy of removing the acoustic echo component from the transmission voice signal can be maintained.
  • signals in the audible frequency range have a wider directivity than signals in the non-audible frequency range.
  • the frequency of the acoustic echo component is also a frequency in the audible region. Therefore, if the parameters (time shift information and level shift information) are obtained using an audible-frequency reference signal, which has wide directivity and frequency characteristics close to those of the acoustic echo component, the accuracy of generating the acoustic echo component can be increased.
  • a reference signal having a frequency in the audible region is superimposed on the received sound signal and output from the speaker 74, the user can hear the sound based on the reference signal together with the sound based on the received sound signal. There is a risk that the person's utterance and listening will be hindered by the reference signal. Therefore, it is preferable to generate the reference signal having a frequency in the audible region when the received audio signal is in a silent state.
  • the speaker 74 corresponds to the "output means" of the first aspect, and the microphone 64 corresponds to the "input means".
  • the movement detection unit 12 corresponds to the "position detection means", and the reference signal generation units 14 and 16 correspond to the "generation means".
  • the adder 24 corresponds to the "superimposing means", and the digital filter 34 corresponds to the "extraction means".
  • the signal comparison unit 36 corresponds to the "calculation means", and the delay processing unit 38, the attenuation processing unit 40, and the subtractor 42 correspond to the "removal means".
  • the communication unit 46 corresponds to the "transmission means".
  • the acceleration sensors 62 and 72 correspond to the "acceleration detection means", and the switch control unit 22 corresponds to the "determination means".
  • FIG. 6 shows a configuration example of an echo removal apparatus when a personal computer (PC) 102 is used as the terminal apparatus 2.
  • a portion that functions as an echo removal device is an echo removal unit 108.
  • components equivalent to those of the terminal device 2 are denoted by the same reference numerals.
  • the PC 102 includes a known CPU 180, and a ROM 82, a RAM 84, and an input / output interface 88 are connected to the CPU 180 via a bus 86.
  • the input / output interface 88 includes an operation input device 92 such as a mouse and a keyboard, an external storage device 90 such as a hard disk drive (HDD), a flash memory drive (SSD), and a DVD-ROM drive, a video processing unit 94, and a communication unit 46. Is connected.
  • a video input device 96 such as a web camera and a video output device 98 such as a monitor are connected to the video processing unit 94.
  • An audio input device 60 including a microphone 64 and an acceleration sensor 62 and an audio output device 70 including a speaker 74 and an acceleration sensor 72 are also connected to the input / output interface 88.
  • the speaker 74 is connected to the input/output interface 88 via the D/A converter 28, the microphone 64 via the A/D converter 30, and the acceleration sensors 62 and 72 via the A/D converter 26.
  • the audio input device 60, the audio output device 70, the operation input device 92, the video input device 96, and the video output device 98 are provided as external devices of the PC 102.
  • the echo removing unit 108 includes a voice input device 60, a voice output device 70, a communication unit 46, an external storage device 90, and various components (CPU 180, ROM 82, RAM 84, etc.) for controlling these processing units (each device). Consists of.
  • the PC 102 is connected to the network 1 via the communication unit 46, and the video conference system is constructed together with the terminal devices 3 and 4 connected through the network 1 as in the present embodiment.
  • the CPU 180 executes a program installed in the external storage device 90, so that the CPU 180 can perform processing equivalent to that of the audio processing unit 10 of the present embodiment. That is, it is only necessary to design a sound processing unit 110 that combines known modules for realizing the processes in the flowcharts of FIGS. 2 and 3 and can process a sound signal according to the process flow shown in the flowcharts as a program. Note that each processing unit constituting the audio processing unit 110 is a function realized by the CPU 180, and in FIG. 6, it is shown as a virtual processing unit so that it can be compared with the one in this embodiment (see FIG. 1). However, the same reference numerals are given in parentheses.
  • the CPU 180 that performs the process of S39 realizes the "output step" of the second and third aspects, and the CPU 180 that performs the process of S41 realizes the "input step".
  • the CPU 180 that performs the process of S13 realizes the "position detection step", and the CPU 180 that performs the process of S33 or S35 realizes the "generation step".
  • the CPU 180 that performs the process of S37 realizes the "superimposition step", and the CPU 180 that performs the process of S43 realizes the "extraction step".
  • the CPU 180 that performs the processes of S45 and S47 realizes the "calculation step", and the CPU 180 that performs the processes of S49, S51, and S53 realizes the "removal step".
  • the CPU 180 that performs the process of S11 realizes the "transmission step".
  • in a modification, a change in the arrangement position of the voice input device 260 or the voice output device 270 may be detected by photographing these devices from a fixed position and analyzing the photographed image.
  • the voice input device 260 and the voice output device 270 are configured as movable devices each including a microphone 64 and a speaker 74 without including an acceleration sensor.
  • the output of the camera 250 for photographing the voice input device 260 and the voice output device 270 is input to the input / output interface 88.
  • an image analysis unit 252 that performs a known image analysis process is provided, and the positions (for example, coordinates) of the audio input device 260 and the audio output device 270 in the image captured by the camera 250 are specified.
  • the image analysis unit 252 may be realized by, for example, the CPU 280 executing a program and performing a known image analysis process.
  • the analysis result of the image analysis unit 252 (for example, coordinate information of the audio input device 260 and the audio output device 270) is input to the movement detection unit 12. Note that, in the terminal device 202 of this modification, a portion that functions as an echo removal device is indicated as an echo removal unit 208.
  • the echo removal unit 208 comprises the sound processing unit 210 (which may have the same configuration as the sound processing unit 10 of the present embodiment except that the A/D converter 26 is omitted), the sound input device 260, the sound output device 270, the communication unit 46, and the components that control these processing units.
  • the terminal device 202 is configured in this way, and the camera 250 is installed at an appropriate fixed position overlooking the movable range of the voice input device 260 and the voice output device 270. The image captured by the camera 250 is analyzed by the image analysis unit 252, and the positions of the voice input device 260 and the voice output device 270 in the captured image are specified. Based on the analysis result, the movement detection unit 12 determines whether a change has occurred in the arrangement position of the voice input device 260 or the voice output device 270. Thus, if the voice input device 260 and the voice output device 270 are photographed from a fixed position using the camera 250, an absolute change in the arrangement position of at least one of them can be detected easily and reliably simply by analyzing the captured image and grasping the positions of both devices in it.
  • the camera 250 corresponds to the “photographing means” of the first aspect.
  • the CPU 280, which realizes the image analysis unit 252 that performs a known image analysis process and specifies the positions of the voice input device 260 and the voice output device 270 in the captured image of the camera 250, functions as the "analysis means".
  • an identification marker may be written on each of the voice input device 260 and the voice output device 270, and the marker position (coordinates) may be specified in the image captured by the camera 250 fixed at a fixed position. In this way, the arrangement positions of both devices can be specified in the captured image without performing shape recognition of the voice input device 260 and the voice output device 270, and the image analysis process can be simplified.
  • a change in the arrangement position of the voice input device or the voice output device may also be detected from a phase shift, or a reflected-wave phase shift, observed when radio waves, infrared rays, laser beams, or the like are emitted from two or more fixed points and received by the voice input device or the voice output device.
  • devices with digital input/output may be used for the speaker 74, the microphone 64, and the acceleration sensors 62 and 72.
  • the voice input device 60 or the voice output device 70 may be provided with an A/D converter or a D/A converter.
  • the count value T may be acquired from the CPU 80 by using an interval timer of the CPU 80 instead of the timer 44.
  • although a band pass filter is used as the digital filter 34, a high pass filter (HPF), a low pass filter (LPF), or a combination of these filters may be used instead.
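The following is a minimal Python sketch of the kind of decision the movement detection unit 12 could make from coordinates reported by an image analysis step such as the marker detection described above. The function name, coordinate format, and pixel threshold are illustrative assumptions, not values from the patent.

```python
# Illustrative only: decide whether a device has moved, given its pixel
# coordinates in two successive images captured from a fixed position.
def has_moved(prev_xy, curr_xy, threshold_px=10.0):
    """True if the position changed by more than threshold_px pixels."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold_px

# Example: coordinates of the voice input device in two captured frames.
previous = (412.0, 233.0)
current = (487.0, 240.0)
if has_moved(previous, current):
    print("arrangement position changed -> regenerate the reference signal")
```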

Abstract

Disclosed are an echo removal device, an echo removal method, and a program for the echo removal device which make it possible to newly obtain time-difference information and level-difference information when the arrangement position of an input means or an output means is changed, and to remove acoustic echo components based on the latest information. When a change in the arrangement position of a voice input device (60) or a voice output device (70) is detected by a movement detection section (12), a reference signal is generated by reference signal generation sections (14, 16), superimposed on the voice signal received from terminal devices (3, 4) by an adder (24), and output from a speaker (74). In order to suppress the acoustic echo, the reference signal is extracted by a digital filter (34) from the voice picked up by a microphone (64), and an acoustic echo component, which is the delayed and attenuated received voice signal, is generated on the basis of the time-difference information and the level-difference information obtained by comparison with the original reference signal; the acoustic echo component is then removed from the voice signal to be transmitted by a subtractor (42), and the voice signal is transmitted to the terminal devices (3, 4).

Description

Echo removal apparatus, echo removal method, and program for echo removal apparatus
 本発明は、通信先装置に送信する音声信号から音響エコー成分を除去するエコー除去装置、エコー除去方法、およびエコー除去装置のプログラムに関する。 The present invention relates to an echo removal apparatus, an echo removal method, and an echo removal apparatus program for removing an acoustic echo component from an audio signal transmitted to a communication destination apparatus.
 複数の拠点に設置された端末装置間で音声信号や映像信号などの送受信を行い、利用者間でリアルタイムに音声や映像を交わして会議を進行することができるテレビ会議システムが知られている。こうしたテレビ会議システムの音声面において、利用者の発した音声が、すこし遅れて、遠隔地の利用者がいる拠点のスピーカとマイクを経由し、利用者のいる拠点に戻ってきて、利用者の発した音声が反響する、いわゆる音響エコーが発生することも知られている。例えば、自拠点において利用者の発した音声は、他拠点に送信されスピーカから出力されるが、出力された音声が他拠点のマイクに拾われると、再び自拠点に送信され、自拠点のスピーカから出力されることになる。音響エコーは、利用者の発した音声がこのような経路をたどる間に、もとの音声に対し、遅延(タイミングの遅れ(時間ずれ))や減衰(信号レベルの低下(レベルずれ))を生ずるために、発生する。このような音響エコーの影響を軽減できるように、例えば自拠点において、マイクに入力された音声を音声信号に変換して他拠点に送信する際に、他拠点から受信した音声信号をもとに音響エコー成分を求め、送信する音声信号から音響エコー成分を除去するエコー除去装置が知られている。 A video conference system is known in which audio signals and video signals are transmitted and received between terminal devices installed at a plurality of bases, and a conference can be performed by exchanging audio and video between users in real time. On the audio side of such a video conference system, the voice uttered by the user is slightly delayed and returns to the user's site via the speaker and microphone at the site where the user is located at a remote location. It is also known that a so-called acoustic echo is generated in which the emitted voice reverberates. For example, the voice uttered by the user at the local site is transmitted to the other site and output from the speaker. However, when the output voice is picked up by the microphone at the other site, it is transmitted again to the local site and the speaker at the local site. Will be output. Acoustic echo is delayed (timing delay (time shift)) and attenuated (decrease in signal level (level shift)) with respect to the original voice while the voice emitted by the user follows such a path. To occur. In order to reduce the effects of such acoustic echoes, for example, at the local site, when the voice input to the microphone is converted into a voice signal and transmitted to the other site, the voice signal received from the other site is used as a basis. 2. Description of the Related Art An echo removal apparatus that obtains an acoustic echo component and removes the acoustic echo component from a transmitted voice signal is known.
 もっとも、マイクやスピーカが設置される環境は様々である。例えば、広い会議室において、スピーカから出力された音声が室内の壁による反射を経由してマイクに入力される場合と、狭い会議室における同様の場合とでは、音声がスピーカからマイクに至るまでの経路に差があり、時間ずれやレベルずれの程度が異なる。そこで従来は、エコー除去装置を使用する前に、スピーカから基準となる音(基準音)を出力しつつマイクで拾い、基準音の時間ずれやレベルずれを測定し、測定結果に基づいてマイクおよびスピーカの設置場所に対応した音響エコー成分を求めていた。 However, there are various environments where microphones and speakers are installed. For example, in a large meeting room, when sound output from a speaker is input to a microphone via reflection by an indoor wall, and in a similar case in a narrow meeting room, the sound reaches from the speaker to the microphone. There are differences in paths, and the degree of time shift and level shift is different. Therefore, conventionally, before using the echo canceller, the reference sound (reference sound) is output from the speaker and picked up by the microphone, and the time shift or level shift of the reference sound is measured. The acoustic echo component corresponding to the installation location of the speaker was obtained.
 しかし、例えば利用者がマイクを持って自席からホワイトボード前に移動し説明を行う場合など、会議中に、マイクとスピーカとの位置関係が変わる場合がある。このような事例に対応するには、時間ずれの情報やレベルずれの情報(以下、「(音響エコー成分の)パラメータ」ともいう。)を、常時あるいは定期的に求めて更新し、音響エコー成分が常に最新のパラメータに基づき求められるようにするとよい(例えば特許文献1参照。)。また、基準音の音声波形の周波数を非可聴領域の周波数とすれば、会議中にパラメータの更新が行われ、基準音が利用者の発した音声と重なったとしても、利用者が、自己の発声や他者の音声の聞き取りを妨げられることがない(例えば特許文献2参照。)。 However, the positional relationship between the microphone and the speaker may change during the conference, for example, when the user moves from his seat to the front of the whiteboard with a microphone and gives an explanation. In order to deal with such cases, information on time lag and level lag (hereinafter also referred to as “(acoustic echo component) parameter”) is obtained and updated constantly or periodically, and the acoustic echo component is updated. Is always obtained based on the latest parameters (see, for example, Patent Document 1). In addition, if the frequency of the sound waveform of the reference sound is set to the frequency of the non-audible region, even if the parameter is updated during the conference and the reference sound overlaps with the sound emitted by the user, the user can There is no hindrance to utterance or listening to the voices of others (for example, see Patent Document 2).
Patent Document 1: JP 2008-261923 A; Patent Document 2: JP 2008-259032 A
 しかしながら、時間ずれの情報やレベルずれの情報を常時あるいは定期的に求めることによって、エコー除去装置には、それらパラメータを計算するための負荷が、継続的に、かかってしまう。また、マイクとスピーカとの位置関係に変化がない場合にパラメータを更新しても、更新前パラメータと更新後のパラメータとは同一であるか、あるいはほとんど差がなく、こうした場合にもパラメータの更新を行うことは、エコー除去装置に無駄な負荷がかかるだけであった。 However, by obtaining information on time lag and level lag constantly or periodically, the echo canceller is continuously subjected to a load for calculating those parameters. Even if the parameters are updated when there is no change in the positional relationship between the microphone and the speaker, the parameters before the update and the parameters after the update are the same or there is almost no difference. Performing this only puts a wasteful load on the echo canceller.
 The present invention has been made to solve the above-described problems, and an object thereof is to provide an echo removal apparatus, an echo removal method, and a program for an echo removal apparatus that can newly obtain time-shift information and level-shift information when the arrangement position of the input means or the output means changes, and can remove the acoustic echo component based on the latest information.
 An echo removal apparatus according to a first aspect of the present invention includes: output means for converting a received voice signal, which is a voice signal received from a communication destination apparatus, into voice and outputting the voice; input means for converting input ambient voice into a transmission voice signal, which is a voice signal to be transmitted to the communication destination apparatus; position detection means for detecting that a change has occurred in the arrangement position of at least one of the output means and the input means; generation means for generating, when the position detection means detects a change in the arrangement position, a reference signal serving as a reference for removing from the transmission voice signal an acoustic echo component that arises when the voice output from the output means is input to the input means; superimposition means for superimposing the reference signal on the received voice signal; extraction means for performing filtering processing on the transmission voice signal converted by the input means and extracting the reference signal; calculation means for comparing the generated reference signal, which is the reference signal as generated by the generation means, with the extracted reference signal, which is the reference signal as extracted by the extraction means, to obtain time-shift information between the generation timing of the generated reference signal and the extraction timing of the extracted reference signal, and level-shift information between the signal level of the generated reference signal at the generation timing and the signal level of the extracted reference signal at the extraction timing; removal means for performing an operation based on the time-shift information and the level-shift information on the received voice signal to generate the acoustic echo component, and subtracting the acoustic echo component from the transmission voice signal to generate a removed voice signal from which the acoustic echo component has been removed; and transmission means for transmitting the removed voice signal as the transmission voice signal to be transmitted to the communication destination apparatus.
 According to the first aspect, the reference signal generated when obtaining the time-shift information and the level-shift information necessary for generating the acoustic echo component can be superimposed on the received voice signal and output from the output means. Therefore, even while voice signals are being exchanged with the communication destination apparatus (hereinafter referred to as “during operation”), the time-shift information and the level-shift information can be obtained and updated using the reference signal. As a result, even if the arrangement position of the output means or the input means changes during operation and an appropriate acoustic echo component can no longer be generated with the time-shift information and the level-shift information used so far, new time-shift information and level-shift information can be obtained and updated immediately. Consequently, an appropriate acoustic echo component can be generated in response to changes in the situation that may occur during operation and affect the generation accuracy of the acoustic echo component, and the accuracy of removing the acoustic echo component from the transmission voice signal can be maintained.
 Further, in the first aspect, the reference signal can be generated when it is detected that a change has occurred in the arrangement position of at least one of the output means and the input means. In other words, if there is no change in the arrangement position of the output means or the input means, the reference signal is not generated, and the calculation for obtaining the time-shift information and the level-shift information is not performed either. That is, since the time-shift information and the level-shift information are updated precisely when the situation that requires it arises (when there is a change in the arrangement position of the output means or the input means), no wasteful load is placed on the echo removal apparatus compared with the case where they are updated constantly or periodically.
 Further, the position detection means detects a change in the arrangement position of the output means or the input means; it detects not only a change in the relative positional relationship between the output means and the input means but also a change in the absolute arrangement position of each. Therefore, a change in the situation that may affect the generation accuracy of the acoustic echo component can be detected reliably.
 The first aspect may further include photographing means for photographing, from a fixed position, an image containing at least one of the output means and the input means, and analysis means for analyzing the position of at least one of the output means and the input means in the image captured by the photographing means; in this case, the position detection means may detect, based on the analysis result of the analysis means, that a change has occurred in the arrangement position. If the output means and the input means are photographed from a fixed position using the photographing means, an absolute change in the arrangement position of at least one of them can be detected easily and reliably simply by analyzing the captured image and grasping their positions within it.
 The first aspect may further include acceleration detection means for detecting acceleration applied to at least one of the output means and the input means; in this case, the position detection means may detect, based on the detection result of the acceleration detection means, that a change has occurred in the arrangement position. Acceleration detection means can easily be provided integrally with the output means or the input means. If there is a change in the arrangement position of the output means or the input means, acceleration is applied to the acceleration detection means, so an absolute change in the arrangement position of at least one of them can be detected easily and reliably by grasping, from the detection result of the acceleration detection means, whether the output means or the input means has moved.
 In the first aspect, the generation means may generate, as the reference signal, a signal whose voice waveform has a frequency in the non-audible range. If the frequency of the voice waveform of the reference signal is in the non-audible range, the user cannot hear the sound based on the reference signal even when the reference signal is superimposed on the received voice signal and output from the output means; substantially all the user hears is the sound based on the received voice signal. Therefore, even if the reference signal is output during operation, the user's speaking and listening are not hindered by it, so that when a change occurs in the arrangement position of the output means or the input means, new time-shift information and level-shift information can be obtained and updated immediately. Consequently, an appropriate acoustic echo component can be generated in response to changes in the situation that may occur during operation and affect the generation accuracy of the acoustic echo component, and the accuracy of removing the acoustic echo component from the transmission voice signal can be maintained.
 The first aspect may further include determination means for determining whether or not the received voice signal is in a silent state; in this case, when the position detection means detects a change in the arrangement position and the determination means determines that the received voice signal is silent, the generation means may generate, as the reference signal, a signal whose voice waveform has a frequency in the audible range. In general, a voice-waveform signal at an audible frequency has wider directivity than one at a non-audible frequency, and the frequencies of the acoustic echo component also lie in the audible range. Hence, if the time-shift information and the level-shift information are obtained using an audible-range reference signal, which has wide directivity and frequency characteristics close to those of the acoustic echo component, the generation accuracy of the acoustic echo component can be further improved. However, if a reference signal with an audible frequency is superimposed on the received voice signal and output from the output means, the user will hear the sound based on the reference signal together with the sound based on the received voice signal, and the user's speaking and listening may be hindered by the reference signal. It is therefore preferable to generate a reference signal with an audible frequency when the received voice signal is in a silent state.
 An echo removal method according to a second aspect of the present invention includes: an output step in which a received voice signal, which is a voice signal received from a communication destination apparatus, is converted into voice and output from output means; an input step in which ambient voice is input to input means and converted into a transmission voice signal, which is a voice signal to be transmitted to the communication destination apparatus; a position detection step in which it is detected that a change has occurred in the arrangement position of at least one of the output means and the input means; a generation step in which a reference signal, serving as a reference for removing from the transmission voice signal an acoustic echo component that arises when the voice output from the output means is input to the input means, is generated when the change in the arrangement position is detected in the position detection step; a superimposition step in which the reference signal is superimposed on the received voice signal; an extraction step in which filtering processing is performed on the transmission voice signal converted in the input step and the reference signal is extracted; a calculation step in which the generated reference signal, which is the reference signal as generated in the generation step, is compared with the extracted reference signal, which is the reference signal as extracted in the extraction step, and time-shift information between the generation timing of the generated reference signal and the extraction timing of the extracted reference signal, and level-shift information between the signal level of the generated reference signal at the generation timing and the signal level of the extracted reference signal at the extraction timing, are obtained; a removal step in which an operation based on the time-shift information and the level-shift information is performed on the received voice signal to generate the acoustic echo component, and the acoustic echo component is subtracted from the transmission voice signal to generate a removed voice signal from which the acoustic echo component has been removed; and a transmission step in which the removed voice signal is transmitted as the transmission voice signal to be transmitted to the communication destination apparatus.
 According to the second aspect, the reference signal generated when obtaining the time-shift information and the level-shift information necessary for generating the acoustic echo component can be superimposed on the received voice signal and output from the output means. Therefore, even while voice signals are being exchanged with the communication destination apparatus (during operation), the time-shift information and the level-shift information can be obtained and updated using the reference signal. As a result, even if the arrangement position of the output means or the input means changes during operation and an appropriate acoustic echo component can no longer be generated with the information used so far, new time-shift information and level-shift information can be obtained and updated immediately. Consequently, an appropriate acoustic echo component can be generated in response to changes in the situation that may occur during operation and affect the generation accuracy of the acoustic echo component, and the accuracy of removing the acoustic echo component from the transmission voice signal can be maintained.
 また、第2態様では、基準信号を、出力手段および入力手段の少なくとも一方の配置位置に変化が生じたことが検出された場合に、生成することができる。換言すると、出力手段や入力手段の配置位置に変化がなければ、基準信号の生成が行われず、時間ずれの情報やレベルずれの情報とを求める演算も行われない。つまり、時間ずれの情報とレベルずれの情報との更新は、必要とされる状況が生じた場合(出力手段や入力手段の配置位置に変化があった場合)に適切になされるので、常時あるいは定期的に更新される場合と比べ、エコー除去装置に無駄な負荷がかかることがない。 In the second aspect, the reference signal can be generated when it is detected that a change has occurred in the arrangement position of at least one of the output means and the input means. In other words, if there is no change in the arrangement positions of the output means and the input means, the reference signal is not generated, and the calculation for obtaining the time shift information and the level shift information is not performed. In other words, the time lag information and the level lag information are updated appropriately when a necessary situation occurs (when there is a change in the arrangement position of the output means or the input means). Compared to the case where it is regularly updated, there is no unnecessary load on the echo canceller.
 Further, in the position detection step, a change in the arrangement position of the output means or the input means is detected; not only a change in the relative positional relationship between the output means and the input means but also a change in the absolute arrangement position of each is detected. Therefore, a change in the situation that may affect the generation accuracy of the acoustic echo component can be detected reliably.
 また、本発明の第3態様のエコー除去装置のプログラムは、請求項1に記載のエコー除去装置の各種処理手段として、コンピュータを機能させることを特徴とする。エコー除去装置のプログラムをコンピュータに実行させることにより、請求項1に記載の発明の効果を奏することができる。 Further, the program of the echo removal apparatus according to the third aspect of the present invention causes a computer to function as various processing means of the echo removal apparatus according to claim 1. By causing the computer to execute the program of the echo removal apparatus, the effect of the invention described in claim 1 can be achieved.
FIG. 1 is a block diagram showing the electrical configuration of a terminal device 2 that realizes the functions of an echo removal apparatus with a hardware circuit.
FIG. 2 is a flowchart showing the flow of processing performed in the echo removal apparatus.
FIG. 3 is a flowchart showing the flow of processing executed in the initialization process.
FIG. 4 is a diagram showing an example of the voice waveform of the reference signal.
FIG. 5 is a diagram showing an example of the voice waveform of the reference signal that has been delayed and attenuated by being output as sound from the speaker and re-input to the microphone.
FIG. 6 is a block diagram showing the electrical configuration of a PC 102 that realizes the functions of the echo removal apparatus under software control.
FIG. 7 is a block diagram showing the electrical configuration of a terminal device 202 as a modification.
 Hereinafter, an embodiment of an echo removal apparatus according to the present invention will be described with reference to the drawings. The drawings referred to are used to explain technical features that the present invention can adopt, and the device configurations, flowcharts of the various processes, and the like described therein are merely illustrative examples and are not intended to be limiting unless otherwise specified.
 In this embodiment, the echo removal apparatus is used in a terminal device of a video conference system in which users at remote locations (a plurality of sites) can exchange audio and video in real time via a network and hold a conference or the like. Specifically, in this embodiment, the echo removal apparatus is provided as the part of the terminal device that handles processing related to audio, and is incorporated in the terminal device of the video conference system as part of its hardware circuit. In the following, the portion of the terminal device 2 of the video conference system shown in FIG. 1 that functions as the echo removal apparatus is described as the echo removal unit 8.
 As shown in FIG. 1, in this embodiment the video conference system is provided as a system that can transmit and receive audio signals and video signals between terminal devices 2 to 4 connected to each other via a network 1. Each of the terminal devices 2 to 4 plays the role of a client or a host in the video conference system depending on the situation. When a video conference system using an MCU (multi-point control unit) is constructed, the terminal devices 2 to 4 may be used as clients. Here, it is assumed that the terminal devices 2 to 4 are all video conference dedicated terminals of the same configuration, and the details of the echo removal apparatus will be described taking the echo removal unit 8 of the terminal device 2 as an example. Although three terminal devices 2 to 4 are connected to the network 1 in FIG. 1, the number of terminal devices constituting the video conference system is not limited to three.
 端末装置2は、端末装置2の全体の制御を司る、公知のCPU80を備えている。CPU80には、バス86を介し、ROM82、RAM84、入出力インターフェイス88が接続されている。入出力インターフェイス88には、操作部92、映像処理部94、音声処理部10、通信部46が接続されている。 The terminal device 2 includes a known CPU 80 that controls the entire terminal device 2. A ROM 82, a RAM 84, and an input / output interface 88 are connected to the CPU 80 via a bus 86. An operation unit 92, a video processing unit 94, an audio processing unit 10, and a communication unit 46 are connected to the input / output interface 88.
 The ROM 82 stores various programs and data for operating the terminal device 2. The CPU 80 controls the operation of the terminal device 2 according to the programs stored in the ROM 82. The RAM 84 temporarily stores various data. The operation unit 92 is an input device with which a user operates the terminal device 2. The communication unit 46 connects the terminal device 2 at the local site with the terminal devices 3 and 4 at the other sites via the network 1, and transmits and receives between the terminals various signals (control signals, audio signals, video signals, and the like) converted into a communication protocol. Furthermore, the communication unit 46 exchanges audio signals and video signals with the audio processing unit 10 and the video processing unit 94 via the input/output interface 88. Although not shown, the terminal device 2 also includes a codec, which compresses signals to be transmitted and decompresses received signals.
 映像処理部94には、映像入力装置96および映像出力装置98が接続されている。映像処理部94は、映像入力装置96(例えばカメラ)により撮影された映像を処理し、端末装置3、4に送信する映像信号を生成する。また、映像処理部94は、端末装置3、4から受信した映像信号を処理し、映像出力装置98(例えばモニタ)に映像を表示する。 A video input device 96 and a video output device 98 are connected to the video processing unit 94. The video processing unit 94 processes video captured by the video input device 96 (for example, a camera) and generates a video signal to be transmitted to the terminal devices 3 and 4. The video processing unit 94 processes video signals received from the terminal devices 3 and 4 and displays the video on a video output device 98 (for example, a monitor).
 A voice input device 60 and a voice output device 70 are connected to the audio processing unit 10. The audio processing unit 10 processes the voice input to the microphone 64 of the voice input device 60 and generates an audio signal to be transmitted to the terminal devices 3 and 4 (hereinafter referred to as the “transmission audio signal”). The audio processing unit 10 also processes the audio signals received from the terminal devices 3 and 4 (hereinafter referred to as the “received audio signal”) and outputs sound from the speaker 74 of the voice output device 70. The details of the audio processing unit 10 will be described later; the echo removal unit 8 is constituted by the audio processing unit 10, the voice input device 60, the voice output device 70, the communication unit 46, and the components that control these processing units and devices (the CPU 80, ROM 82, RAM 84, and so on).
 上記の音声入力装置60は、マイク64と加速度センサ62とを備え、移動可能な装置として構成されている。マイク64は、入力される周囲の音声を電気信号(アナログの音声信号)に変換する。加速度センサ62は、音声入力装置60に加わる加速度を検出する。音声出力装置70は、スピーカ74と加速度センサ72とを備え、音声入力装置60と同様に移動可能な装置として構成されている。スピーカ74は、入力される電気信号(アナログの音声信号)を音声に変換して出力する。加速度センサ72は、音声出力装置70に加わる加速度を検出する。音声入力装置60と音声出力装置70とは、設置場所(配置位置)をそれぞれ独立に変更できるように、端末装置2とは別体に設けられている。 The voice input device 60 includes a microphone 64 and an acceleration sensor 62, and is configured as a movable device. The microphone 64 converts input ambient sound into an electric signal (analog sound signal). The acceleration sensor 62 detects acceleration applied to the voice input device 60. The audio output device 70 includes a speaker 74 and an acceleration sensor 72, and is configured as a movable device like the audio input device 60. The speaker 74 converts an input electric signal (analog audio signal) into a sound and outputs the sound. The acceleration sensor 72 detects acceleration applied to the audio output device 70. The voice input device 60 and the voice output device 70 are provided separately from the terminal device 2 so that the installation location (arrangement position) can be changed independently.
 次に、音声処理部10は、移動検出部12、基準信号生成部14,16、スイッチ(SW)18、スイッチ制御部22、加算器24、A/Dコンバータ26、D/Aコンバータ28、A/Dコンバータ30、デジタルフィルタ34、信号比較部36、遅延処理部38、減衰処理部40、減算器42、タイマ44、分配器20,32を備える。移動検出部12には、A/Dコンバータ26を介し、音声入力装置60の加速度センサ62と、音声出力装置70の加速度センサ72とが接続されている。移動検出部12は、加速度センサ62,72による加速度の検出結果に基づき、音声入力装置60および音声出力装置70の少なくとも一方に、現在位置からの移動が生じたことを検出する。すなわち、移動検出部12は、音声入力装置60と音声出力装置70との相対的な位置関係の変化だけでなく、それぞれの絶対的な配置位置の変化についても、検出することができる。 Next, the voice processing unit 10 includes a movement detection unit 12, reference signal generation units 14 and 16, a switch (SW) 18, a switch control unit 22, an adder 24, an A / D converter 26, a D / A converter 28, A / D converter 30, digital filter 34, signal comparison unit 36, delay processing unit 38, attenuation processing unit 40, subtractor 42, timer 44, and distributors 20 and 32. An acceleration sensor 62 of the voice input device 60 and an acceleration sensor 72 of the voice output device 70 are connected to the movement detection unit 12 via the A / D converter 26. The movement detection unit 12 detects that movement from the current position has occurred in at least one of the voice input device 60 and the voice output device 70 based on the detection results of acceleration by the acceleration sensors 62 and 72. That is, the movement detection unit 12 can detect not only a change in the relative positional relationship between the audio input device 60 and the audio output device 70 but also a change in each absolute arrangement position.
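As an illustration of this idea only, the following Python sketch judges that a device has moved when the magnitude of its measured acceleration deviates from gravity by more than a threshold. The sampling format, the threshold, and the use of the gravity magnitude are assumptions made for the sketch, not values from the embodiment.

```python
import numpy as np

GRAVITY = 9.81    # m/s^2
THRESHOLD = 0.5   # m/s^2, illustrative

def detect_movement(accel_samples):
    """accel_samples: array of shape (N, 3) holding x/y/z acceleration in m/s^2.
    Returns True when any sample deviates from rest (gravity only) by more
    than THRESHOLD, i.e. the device is being moved."""
    magnitudes = np.linalg.norm(accel_samples, axis=1)
    return bool(np.any(np.abs(magnitudes - GRAVITY) > THRESHOLD))

# A short burst of samples while the microphone unit is picked up and moved.
samples = np.array([[0.0, 0.1, 9.8], [0.3, 1.2, 10.9], [0.1, 0.2, 9.7]])
print(detect_movement(samples))   # True -> trigger reference signal generation
```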
 基準信号生成部14,16の入力が、それぞれ移動検出部12に接続されている。また、基準信号生成部14,16の出力が、スイッチ18および分配器20を介し、加算器24と信号比較部36(後述)とのそれぞれに接続されている。基準信号生成部14は、基準信号として、音声波形の周波数が可聴領域の周波数(本実施の形態では1KHz)の信号を生成し、加算器24と信号比較部36とに出力する。基準信号生成部16も同様に、基準信号として、音声波形の周波数が非可聴領域の周波数(本実施の形態では100KHz)の信号を生成し、加算器24と信号比較部36とに出力する。 The inputs of the reference signal generators 14 and 16 are connected to the movement detector 12 respectively. Further, the outputs of the reference signal generation units 14 and 16 are connected to an adder 24 and a signal comparison unit 36 (described later) via the switch 18 and the distributor 20, respectively. The reference signal generation unit 14 generates a signal whose frequency of the audio waveform is an audible frequency (1 KHz in the present embodiment) as a reference signal, and outputs the signal to the adder 24 and the signal comparison unit 36. Similarly, the reference signal generation unit 16 generates a signal having a frequency of the sound waveform in the non-audible region (100 kHz in the present embodiment) as the reference signal, and outputs the signal to the adder 24 and the signal comparison unit 36.
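The reference signal of the embodiment is an intermittently repeated tone (see FIG. 4). The following sketch generates such a signal for either the 1 kHz audible case or the 100 kHz non-audible case; the burst and gap lengths, the amplitude, and the sampling rates are illustrative assumptions (the 100 kHz tone simply requires a sampling rate above 200 kHz).

```python
import numpy as np

def make_reference(freq_hz, fs_hz, burst_ms=20, gap_ms=80, n_bursts=5, amplitude=0.5):
    """Sine bursts separated by silence, repeated at a regular interval."""
    n_burst = int(fs_hz * burst_ms / 1000)
    burst = amplitude * np.sin(2 * np.pi * freq_hz * np.arange(n_burst) / fs_hz)
    gap = np.zeros(int(fs_hz * gap_ms / 1000))
    return np.concatenate([np.concatenate([burst, gap]) for _ in range(n_bursts)])

audible_ref = make_reference(1_000, fs_hz=48_000)        # 1 kHz, audible range
inaudible_ref = make_reference(100_000, fs_hz=256_000)   # 100 kHz, non-audible range
```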
 The switch 18 selectively switches the connection between either the reference signal generation unit 14 or the reference signal generation unit 16 and the adder 24 and signal comparison unit 36. More specifically, the switch 18 is controlled by the switch control unit 22 and switches between a connection (side A in FIG. 1) in which the 1 kHz reference signal is input to the adder 24 and the signal comparison unit 36 and a connection (side B in FIG. 1) in which the 100 kHz reference signal is input to them. Although the switch 18 is shown in FIG. 1 as a contact-type switch for convenience, a contactless switch using transistors or the like is preferable.
 The switch control unit 22 is provided on the path through which the received audio signal is input to the adder 24. More specifically, the received audio signal received from the terminal devices 3 and 4 by the communication unit 46 is input to the audio processing unit 10 via the input/output interface 88, and the switch control unit 22 is provided between the input/output interface 88 and the adder 24. The switch control unit 22 determines whether or not the received audio signal passing through it is in a silent state. The silent state refers to a state in which the signal level of the received audio signal (the amplitude of the voice waveform) is zero or below a predetermined threshold; when no received audio signal is being input at all, the signal level is also zero and the state is regarded as silent. The switch control unit 22 switches the switch 18 to side A when the received audio signal is silent, and to side B when it contains sound. The silence decision may be made as the received audio signal passes through, as described above, but for higher accuracy it is preferable to judge the state as silent only when the signal level has remained below the threshold for a predetermined time (for example, one second). The switch control unit 22 also instructs the digital filter 34, described later, to switch to the filter setting corresponding to the reference signal that is generated according to the signal level of the received audio signal.
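A minimal sketch of this silence decision and the resulting switch position is given below. The level threshold, the frame length, and the one-second hold time follow the idea described above, but the concrete values are assumptions.

```python
import numpy as np

def is_silent(received, fs_hz, threshold=0.01, hold_s=1.0, frame_ms=20):
    """True when the received signal level stays below the threshold for hold_s."""
    frame = int(fs_hz * frame_ms / 1000)
    needed = int(hold_s * 1000 / frame_ms)
    quiet_frames = 0
    for start in range(0, len(received) - frame + 1, frame):
        level = np.max(np.abs(received[start:start + frame]))
        quiet_frames = quiet_frames + 1 if level < threshold else 0
        if quiet_frames >= needed:
            return True
    return False

def switch_position(received, fs_hz):
    # Side A: 1 kHz (audible) reference while the far end is silent;
    # side B: 100 kHz (non-audible) reference while far-end speech is present.
    return "A" if is_silent(received, fs_hz) else "B"
```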
 加算器24の入力には、スイッチ18を介して基準信号生成部14,16と、スイッチ制御部22および入出力インターフェイス88を介して通信部46とが接続されている。加算器24の出力には、D/Aコンバータ28と遅延処理部38とがそれぞれ接続されている。加算器24は、通信部46から入力される受信音声信号に、基準信号生成部14,16から入力される基準信号を重畳(すなわち、受信音声信号と基準信号とを合成)し、出力音声信号として、D/Aコンバータ28と遅延処理部38とに出力する。 The input of the adder 24 is connected to the reference signal generation units 14 and 16 through the switch 18 and the communication unit 46 through the switch control unit 22 and the input / output interface 88. A D / A converter 28 and a delay processing unit 38 are connected to the output of the adder 24, respectively. The adder 24 superimposes the reference signal input from the reference signal generation units 14 and 16 on the received audio signal input from the communication unit 46 (that is, combines the received audio signal and the reference signal), and outputs the output audio signal. To the D / A converter 28 and the delay processing unit 38.
 Note that, as will be described later, the reference signal is not always generated. When the reference signal is not generated, the adder 24 passes the received audio signal through unchanged and outputs it to the D/A converter 28 and the delay processing unit 38. In this embodiment, the reference signal may also be generated when the received audio signal is in a silent state (including when there is no input at all). In that case, the adder 24 passes the reference signal through unchanged and outputs it to the D/A converter 28 and the delay processing unit 38. For convenience, these signals output from the adder 24 are also referred to as output audio signals.
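The adder's behaviour, including the pass-through cases just described, can be summarised by the small sketch below (illustrative only; signal lengths and types are assumptions).

```python
import numpy as np

def superimpose(received=None, reference=None):
    """Adder 24: sum of the received audio signal and the reference signal.
    With no reference the received signal passes through unchanged; with no
    received signal the reference alone becomes the output audio signal."""
    parts = [p for p in (received, reference) if p is not None]
    if not parts:
        return np.zeros(0)
    out = np.zeros(max(len(p) for p in parts))
    for p in parts:
        out[:len(p)] += p
    return out
```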
 D/Aコンバータ28の出力には、図示しない増幅器を介して、音声出力装置70のスピーカ74が接続されている。D/Aコンバータ28は、出力音声信号をアナログの音声信号に変換し、スピーカ74に出力する。スピーカ74は、入力される音声信号を音声に変換し、出力する。 The speaker 74 of the audio output device 70 is connected to the output of the D / A converter 28 via an amplifier (not shown). The D / A converter 28 converts the output audio signal into an analog audio signal and outputs the analog audio signal to the speaker 74. The speaker 74 converts an input audio signal into audio and outputs it.
 The microphone 64 of the voice input device 60 is connected to the input of the A/D converter 30. The sound around the voice input device 60 is input to the microphone 64 and converted into an analog audio signal, and is further converted by the A/D converter 30 into a digital audio signal (hereinafter referred to as the “input audio signal”). The output of the A/D converter 30 is connected to the digital filter 34 and the subtractor 42 via the distributor 32.
 The digital filter 34 performs filtering processing on the input audio signal supplied from the A/D converter 30 and extracts the reference signal contained in the input audio signal. In this embodiment, a 1 kHz or 100 kHz signal is generated as the reference signal, so a band pass filter (BPF) that can be set to selectively extract a 1 kHz or a 100 kHz signal is adopted as the digital filter 34 (alternatively, two BPFs may be switched between). The digital filter 34 is configured to switch the frequency setting of the voice waveform to be extracted in accordance with instructions from the switch control unit 22. More specifically, the digital filter 34 is set so that the 1 kHz reference signal is extracted when the received audio signal passing through the switch control unit 22 is silent, and the 100 kHz reference signal is extracted when it contains sound.
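A band-pass extraction of this kind could look like the following sketch, here using SciPy's Butterworth design; the filter order and the relative bandwidth are illustrative choices, not values from the patent.

```python
from scipy.signal import butter, sosfilt

def extract_reference(input_signal, fs_hz, center_hz, rel_bw=0.1, order=4):
    """Band-pass the microphone (input audio) signal around the reference
    frequency, e.g. center_hz=1_000 with fs_hz=48_000, or center_hz=100_000
    with fs_hz=256_000."""
    low = center_hz * (1.0 - rel_bw / 2.0)
    high = center_hz * (1.0 + rel_bw / 2.0)
    sos = butter(order, [low, high], btype="bandpass", fs=fs_hz, output="sos")
    return sosfilt(sos, input_signal)
```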
 The output of the digital filter 34 is connected to the signal comparison unit 36. Two kinds of reference signal are thus input to the signal comparison unit 36. One is the reference signal generated by the reference signal generation units 14 and 16 and input as it is, without degradation (hereinafter referred to as the “generated reference signal”). The other is the reference signal that is generated by the reference signal generation units 14 and 16, passes through the adder 24, the D/A converter 28, the speaker 74, the microphone 64, and the A/D converter 30, and is then extracted (in degraded form) from the input audio signal by the digital filter 34 (hereinafter referred to as the “extracted reference signal”). The signal comparison unit 36 is also connected to the timer 44, from which it acquires the count value T used to calculate the time shift between the input timing of the generated reference signal (that is, the generation timing of the reference signal) and the extraction timing of the extracted reference signal. The signal comparison unit 36 compares the voice waveform of the generated reference signal with that of the extracted reference signal and obtains the time shift (delay) and the level shift (attenuation) of the extracted reference signal with respect to the generated reference signal.
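One common way to realize such a waveform comparison, shown purely as an illustration (the embodiment itself uses the timer counts T0/T1 and the levels L0/L1 described later), is to estimate the delay from the peak of the cross-correlation and the attenuation from the amplitude ratio:

```python
import numpy as np

def compare_reference(generated, extracted, fs_hz):
    """Estimate delay (seconds) and level ratio of the extracted reference
    relative to the generated reference."""
    corr = np.correlate(extracted, generated, mode="full")
    lag = int(np.argmax(corr)) - (len(generated) - 1)   # delay in samples
    delay_s = lag / fs_hz
    level_ratio = np.max(np.abs(extracted)) / np.max(np.abs(generated))
    return delay_s, level_ratio
```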
 信号比較部36の出力は、遅延処理部38と、減衰処理部40とに接続されている。遅延処理部38には、加算器24から出力される出力音声信号と、上記の信号比較部36にて求められる時間ずれの情報(P)とが入力される。遅延処理部38は、時間ずれの情報に基づき、入力された出力音声信号を遅らせて出力する(遅延させる)処理を行う。減衰処理部40には、遅延処理部38から出力される、遅延処理がなされた出力音声信号と、上記同様、信号比較部36にて求められるレベルずれの情報(L)とが入力される。減衰処理部40は、レベルずれの情報に基づき、遅延処理がなされた出力音声信号の信号レベルを下げる(減衰させる)処理を行う。 The output of the signal comparison unit 36 is connected to a delay processing unit 38 and an attenuation processing unit 40. The delay processing unit 38 receives the output audio signal output from the adder 24 and the time shift information (P) obtained by the signal comparison unit 36. The delay processing unit 38 performs a process of delaying and outputting (delaying) the input output audio signal based on the time lag information. The attenuation processing unit 40 receives the output audio signal that has been subjected to delay processing and is output from the delay processing unit 38 and the level shift information (L) obtained by the signal comparison unit 36 as described above. The attenuation processing unit 40 performs a process of lowering (attenuating) the signal level of the output audio signal subjected to the delay process based on the level shift information.
 The input of the subtractor 42 is connected to the attenuation processing unit 40 and, via the distributor 32 and the A/D converter 30, to the microphone 64. Two kinds of audio signal are thus input to the subtractor 42. One is the output audio signal that is output from the adder 24 and has undergone delay processing and attenuation processing in the delay processing unit 38 and the attenuation processing unit 40 (hereinafter referred to as the “acoustic echo component”). The other is the aforementioned input audio signal, which is output from the adder 24, converted into sound by the speaker 74 and output, then picked up by the microphone 64 together with the surrounding sound and converted back into an audio signal. The subtractor 42 superimposes the inverse-phase waveform of the acoustic echo component onto the waveform of the input audio signal, thereby generating an audio signal from which the acoustic echo component has been removed (hereinafter referred to as the “removed audio signal”).
 減算器42の出力は、入出力インターフェイス88を介して通信部46に接続されている。除去音声信号は、送信音声信号として、通信部46からネットワーク1を介して端末装置3、4に送信される。 The output of the subtracter 42 is connected to the communication unit 46 via the input / output interface 88. The removed audio signal is transmitted as a transmission audio signal from the communication unit 46 to the terminal devices 3 and 4 via the network 1.
 次に、本実施の形態の端末装置2において、マイク64に入力された音声に基づく入力音声信号から音響エコー成分を除去した除去音声信号を、送信音声信号として、端末装置3、4に送信する処理の流れについて、図1~図5を参照して説明する。なお、便宜上、フローチャートにおける各ステップを「S」と略記する。 Next, in the terminal device 2 of the present embodiment, the removed audio signal obtained by removing the acoustic echo component from the input audio signal based on the audio input to the microphone 64 is transmitted to the terminal devices 3 and 4 as a transmission audio signal. The flow of processing will be described with reference to FIGS. For convenience, each step in the flowchart is abbreviated as “S”.
 図1に示す、端末装置2は、電源投入を契機に、駆動される。すなわち、CPU80が、ROM82に記憶されたプログラムに従い、各処理部に駆動開始時のシーケンスを実行させ、各処理部(装置)間における信号の送受信を制御することによって、端末装置2は駆動される。例えば通信部46では、ネットワーク1を介して端末装置3、4とのネゴシエーションが図られ、通信が確立される。 The terminal device 2 shown in FIG. 1 is driven when the power is turned on. That is, the CPU 80 drives the terminal device 2 by causing each processing unit to execute a sequence at the start of driving in accordance with a program stored in the ROM 82 and controlling transmission / reception of signals between the processing units (devices). . For example, the communication unit 46 negotiates with the terminal devices 3 and 4 via the network 1 to establish communication.
 In the echo removal unit 8, the initialization process (S9) shown in FIG. 2 is performed and the parameters necessary for removing the acoustic echo component (the time-shift information (P) and the level-shift information (L)) are set. The details of the initialization process follow the flow shown in FIG. 3. First, the timer 44 is started (S61), and the count value of the internal timer is incremented at regular intervals.
 次に、基準信号が生成される(S63)。初期化処理は、端末装置3、4からの受信音声信号の入力がない状態(通信が確立されてない状態あるいは通信が遮断されている状態)で行われる。よって図1に示すスイッチ制御部22では、受信音声信号が無音状態にあると判断され、スイッチ18の接続がA側に切り換えられる。これに伴いS63では基準信号生成部14が駆動され、音声波形の周波数が可聴領域の周波数(1KHz)の基準信号が生成される。基準信号は、図4に示すように、周波数1KHzの信号が一定間隔で間欠的に繰り返されてなる信号として生成される(基準信号(生成基準信号)の音声波形を図4において実線で示す。)。生成された基準信号は、図1に示すように、分配器20を介し、生成基準信号として、信号比較部36に入力される。信号比較部36は、生成基準信号の入力を契機にタイマ44のカウント値Tを取得し、このタイミングを基準信号の遅延を求める基準とすべく、基準信号の生成タイミングT0(図4参照)として保持する。さらに、信号比較部36は、生成基準信号の信号レベルを求め、生成レベルL0(図4参照)として保持する。 Next, a reference signal is generated (S63). The initialization process is performed in a state where no received audio signal is input from the terminal devices 3 and 4 (a state where communication is not established or a state where communication is interrupted). Therefore, the switch control unit 22 shown in FIG. 1 determines that the received audio signal is in a silent state, and the connection of the switch 18 is switched to the A side. Accordingly, in S63, the reference signal generation unit 14 is driven, and a reference signal whose frequency of the audio waveform is the frequency of the audible region (1 KHz) is generated. As shown in FIG. 4, the reference signal is generated as a signal in which a signal having a frequency of 1 KHz is intermittently repeated at regular intervals (the sound waveform of the reference signal (generated reference signal) is shown by a solid line in FIG. ). As shown in FIG. 1, the generated reference signal is input to the signal comparison unit 36 as a generated reference signal via the distributor 20. The signal comparison unit 36 obtains the count value T of the timer 44 in response to the input of the generation reference signal, and uses this timing as a reference for determining the delay of the reference signal as a reference signal generation timing T0 (see FIG. 4). Hold. Further, the signal comparison unit 36 obtains the signal level of the generation reference signal and holds it as the generation level L0 (see FIG. 4).
 また、生成された基準信号は、分配器20、加算器24、およびD/Aコンバータ28を介し、音声出力装置70のスピーカ74から音声として出力される(S65)。受信音声信号が無音状態であるので、基準信号は加算器24をそのまま通過し出力音声信号として出力され、スピーカ74からは、1KHzの基準信号に基づく可聴音が出力される。 Further, the generated reference signal is output as sound from the speaker 74 of the sound output device 70 via the distributor 20, the adder 24, and the D / A converter 28 (S65). Since the received audio signal is silent, the reference signal passes through the adder 24 as it is and is output as an output audio signal, and the speaker 74 outputs an audible sound based on the 1 KHz reference signal.
 Meanwhile, the microphone 64 of the voice input device 60 waits for sound input (S67: NO). When the 1 kHz sound output from the speaker 74 is input to the microphone 64 (S67: YES), it is converted into an input audio signal and input to the digital filter 34 via the A/D converter 30 and the distributor 32. At this point the digital filter 34 has been set by the switch control unit 22 for the case where the received audio signal is silent, that is, to selectively extract a 1 kHz signal. Therefore, even if the input audio signal contains not only the reference signal but also signals based on the sound around the microphone 64, the 1 kHz reference signal is extracted from the input audio signal and input to the signal comparison unit 36 as the extracted reference signal (S69). Triggered by the input of the extracted reference signal, the signal comparison unit 36 acquires the count value T of the timer 44 and holds it as the extraction timing T1 of the reference signal, as shown in FIG. 5. In FIG. 5, the voice waveform of the extracted reference signal is shown by a solid line and that of the generated reference signal by a dotted line. The signal comparison unit 36 further obtains the signal level of the extracted reference signal and holds it as the extraction level L1.
 Then, as shown in FIG. 3, the signal comparison unit 36 computes T1 - T0 to obtain the time shift P (S71). This time-shift information (P) is passed to the delay processing unit 38 and set as the parameter for the delay processing. Likewise, the signal comparison unit 36 computes L1 / L0 to obtain the level shift L (S73). This level-shift information (L) is passed to the attenuation processing unit 40 and set as the parameter for the attenuation processing. The initialization process (S9) then ends.
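 The parameter calculation itself is a single subtraction and a single division, as the following sketch (with purely illustrative values) shows.

    def estimate_echo_parameters(T0, L0, T1, L1):
        # Time shift P = T1 - T0 (S71); level shift L = L1 / L0 (S73).
        return T1 - T0, L1 / L0

    # Illustrative values only: a 12 ms acoustic path delay and a 0.35x level ratio.
    P, L = estimate_echo_parameters(T0=0.0, L0=1.0, T1=0.012, L1=0.35)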
 As shown in FIG. 2, once the initialization process has finished, a series of processes (S11, S13, S15 to S23) removes acoustic echoes using the set parameters (P, L). The communication unit 46 exchanges audio signals with the terminal devices 3 and 4 over the network 1, receiving the received audio signal and transmitting the transmission audio signal (S11). In the audio processing unit 10, as described above, if the placement of the audio input device 60 (microphone 64) or the audio output device 70 (speaker 74) changes (moves), the movement detection unit 12 detects this and makes the reference signal generation units 14 and 16 generate a reference signal. In other words, if the placements of the audio input device 60 and the audio output device 70 have not changed (S13: NO), no reference signal is generated. In this case, the received audio signal from the terminal devices 3 and 4 passes through the adder 24 unchanged, is output as the output audio signal, and is emitted as sound from the speaker 74 of the audio output device 70 via the D/A converter 28 (S15).
 Meanwhile, when the sound output from the speaker 74 enters the microphone 64 that is waiting for audio input (S17: NO), the microphone converts it into an input audio signal (S17: YES), which is fed to the subtractor 42 via the A/D converter 30. The input audio signal is also fed to the digital filter 34 via the distributor 32, but since no reference signal has been generated, the signal comparison unit 36 that receives the filter output performs no processing. Alternatively, when no reference signal is generated, the input path from the distributor 32 to the digital filter 34 may be blocked.
 The output audio signal from the adder 24 (here, the received audio signal with no reference signal superimposed) is also fed to the delay processing unit 38. The delay processing unit 38 holds the time-shift information (P) passed from the signal comparison unit 36, delays the output audio signal from the adder 24 by the time P, and outputs it to the attenuation processing unit 40 (S19). The attenuation processing unit 40 holds the level-shift information (L) passed from the signal comparison unit 36, multiplies the output audio signal from the delay processing unit 38 by L to attenuate it, thereby generating the acoustic echo component, and outputs it to the subtractor 42 (S21).
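 A minimal sketch of this replica generation, assuming the same NumPy conventions as above; zero-padding stands in for the delay line of the delay processing unit 38.

    import numpy as np

    def make_echo_replica(output_signal, P, L, fs=16000):
        # Delay the output audio signal by P seconds (delay unit 38), then scale it by L (attenuation unit 40).
        delay_samples = int(round(P * fs))
        delayed = np.concatenate([np.zeros(delay_samples), np.asarray(output_signal)])[:len(output_signal)]
        return L * delayed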
 The subtractor 42 thus receives the input audio signal from the microphone 64 and the acoustic echo component generated by applying the delay and attenuation processing to the output audio signal. The subtractor 42 superimposes the inverse-phase waveform of the acoustic echo component on the waveform of the input audio signal, thereby cancelling the acoustic echo component contained in the input audio signal and generating a removed audio signal from which the acoustic echo has been removed (S23). After S23, the process returns to S11, and the generated removed audio signal is transmitted as the transmission audio signal from the communication unit 46 to the terminal devices 3 and 4 via the network 1 (S11). Of the sounds around the terminal device 2 picked up by the microphone 64, this transmission audio signal does not contain the sound output from the speaker 74 based on the received audio signals from the terminal devices 3 and 4; it is based only on sounds newly produced on the terminal device 2 side. Therefore, even when sound based on this transmission audio signal is output from the speakers of the terminal devices 3 and 4, no acoustic echo arises. Thereafter, as long as the placements of the audio input device 60 and the audio output device 70 do not change (S13: NO), S11, S13, and S15 to S23 are repeated, and acoustic echoes are removed using the parameters (P, L) obtained in the initialization process.
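 Reusing make_echo_replica from the sketch above, the cancellation of S23 reduces to a sample-wise subtraction, since adding the inverse-phase waveform is equivalent to subtracting the replica.

    import numpy as np

    def cancel_echo(mic_signal, output_signal, P, L, fs=16000):
        # Subtracting the replica is the same as adding its inverse-phase waveform (subtractor 42).
        replica = make_echo_replica(output_signal, P, L, fs)
        return np.asarray(mic_signal) - replica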
 Next, while S11, S13, and S15 to S23 are being repeated, if a change in the placement of at least one of the audio input device 60 and the audio output device 70 is detected (S13: YES), a series of processes (S31 to S53) sets new parameters and removes acoustic echoes. As described above, when the switch control unit 22 determines that the received audio signal is silent (S31: YES), a reference signal whose audio waveform has a frequency in the audible range (1 kHz) is generated, as before (S33). The generated reference signal is input to the signal comparison unit 36 and the adder 24 via the distributor 20. As before, the signal comparison unit 36 acquires the count value T of the timer 44 upon receiving the generated reference signal and holds it as the generation timing T0 of the reference signal, and also obtains the signal level of the generated reference signal and holds it as the generation level L0.
 The adder 24 passes the input reference signal through unchanged and outputs it as the output audio signal to the D/A converter 28 and the delay processing unit 38. The output audio signal is converted into an analog audio signal by the D/A converter 28 and is output from the speaker 74 of the audio output device 70 as an audible tone based on the 1 kHz reference signal (S39).
 On the other hand, when the switch control unit 22 determines that the received audio signal is not silent (S31: NO), a reference signal whose audio waveform has a frequency in the non-audible range (100 kHz) is generated, as described above (S35). As before, the signal comparison unit 36 holds the count value T of the timer 44 as the generation timing T0 of the reference signal and holds its signal level as the generation level L0. The adder 24 superimposes the reference signal on the incoming received audio signal and outputs the result as the output audio signal to the D/A converter 28 and the delay processing unit 38 (S37). The output audio signal converted into an analog audio signal by the D/A converter 28 is emitted from the speaker 74 of the audio output device 70 as the sound based on the received audio signal together with the inaudible tone based on the reference signal (S39).
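 A sketch of this frequency selection and superposition, assuming a sinusoidal reference of arbitrary amplitude; note that actually representing a 100 kHz tone digitally would require a sample rate above 200 kHz, which is an assumption outside anything stated in the embodiment.

    import numpy as np

    def build_output_signal(received_signal, fs, silent, audible_hz=1000.0, inaudible_hz=100000.0):
        # Choose the reference frequency by silence state (S31/S33/S35), then superimpose it (adder 24, S37).
        # The 0.1 amplitude is an arbitrary choice; a 100 kHz tone needs fs > 200 kHz.
        freq = audible_hz if silent else inaudible_hz
        t = np.arange(len(received_signal)) / fs
        reference = 0.1 * np.sin(2 * np.pi * freq * t)
        output = reference if silent else np.asarray(received_signal) + reference
        return output, reference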
 The microphone 64 of the audio input device 60 determines whether audio input has been detected (S41); if no input is detected, it remains in a waiting state (S41: NO). When the sound output from the speaker 74 enters the microphone 64 (S41: YES), it is converted into an input audio signal, which is fed to the digital filter 34 via the A/D converter 30 and the distributor 32. The switch control unit 22 sets the digital filter 34 to selectively extract the 1 kHz signal when the received audio signal is silent and the 100 kHz signal when it is not. Therefore, whether the reference signal contained in the input audio signal has a frequency in the non-audible range or in the audible range, passing through the digital filter 34 extracts the reference signal according to the filter setting (S43).
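 Building on the extract_reference sketch above, this filter switching could be expressed as follows; the 100 kHz branch again assumes a sufficiently high sample rate.

    def extract_by_state(mic_signal, fs, silent):
        # Switch digital filter 34 between the 1 kHz and 100 kHz bands according to the silence state (S43).
        freq = 1000.0 if silent else 100000.0
        return extract_reference(mic_signal, freq_hz=freq, fs=fs)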
 The extracted reference signal is input to the signal comparison unit 36. The signal comparison unit 36 obtains the extraction timing T1 and the extraction level L1 of the extracted reference signal and, from the generation timing T0 and the generation level L0 obtained from the generated reference signal, computes the time shift P and the level shift L (S45, S47), just as in S71 and S73 described above. The newly obtained parameters (P, L) are passed to the delay processing unit 38 and the attenuation processing unit 40, respectively, and the parameters already held (those obtained in previous processing, such as the initialization process) are updated. Using the updated parameters, the delay processing unit 38 delays the output audio signal from the adder 24 by the time P (S49), and the attenuation processing unit 40 multiplies the output audio signal from the delay processing unit 38 by L to attenuate it and generate the acoustic echo component (S51), just as in S19 and S21 described above. Further, the subtractor 42 superimposes the inverse-phase waveform of the acoustic echo component on the waveform of the input audio signal to generate the removed audio signal (S53), just as in S23 described above. After S53, the process returns to S11, and the removed audio signal generated with the new parameters is transmitted as the transmission audio signal from the communication unit 46 to the terminal devices 3 and 4 via the network 1 (S11).
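 Tying the earlier sketches together, one pass of S43 through S53 might look like the following; the state dictionary holding T0, L0, P, and L is a hypothetical container, not a structure defined in the embodiment.

    def update_parameters_and_cancel(mic_signal, output_signal, fs, silent, state):
        # S43 to S53 in one pass: extract the reference, re-estimate (P, L), then cancel the echo.
        extracted = extract_by_state(mic_signal, fs, silent)
        T1, L1 = measure_timing_and_level(extracted, fs)
        state["P"], state["L"] = estimate_echo_parameters(state["T0"], state["L0"], T1, L1)
        return cancel_echo(mic_signal, output_signal, state["P"], state["L"], fs)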
 When the placement of the audio input device 60 or the audio output device 70 changes, the path that the sound output from the speaker 74 takes before entering the microphone 64 changes, and the parameters used to generate the acoustic echo component change as well. Therefore, by updating the parameters whenever a change in the placement of at least one of the audio input device 60 and the audio output device 70 is detected, the acoustic echo component can be removed reliably in accordance with the (current) environment after the change. Thus, if the removed audio signal generated with the new parameters is transmitted to the terminal devices 3 and 4 as the transmission audio signal, no acoustic echo arises even when sound based on that signal is output from the speakers of the terminal devices 3 and 4.
 Thereafter, as long as the placements of the audio input device 60 and the audio output device 70 do not change (S13: NO), acoustic echoes are removed using the existing parameters; if they change (S13: YES), acoustic echoes are removed while the parameters are updated again.
 As described above, in the present embodiment, the reference signal generated when obtaining the parameters needed to generate the acoustic echo component (the time-shift information (P) and the level-shift information (L)) can be superimposed on the received audio signal and output from the speaker 74. Therefore, the parameters can be obtained and updated using the reference signal even while the video conference system is in operation and audio signals are being exchanged between the terminal device 2 and the terminal devices 3 and 4. As a result, even if the placement of the audio input device 60 (microphone 64) or the audio output device 70 (speaker 74) changes during operation and the parameters used so far can no longer generate an appropriate acoustic echo component, new parameters can be obtained and set immediately. An appropriate acoustic echo component can thus be generated in response to changes in circumstances during operation that may affect its accuracy, and the accuracy of removing the acoustic echo component from the transmission audio signal is maintained.
 Further, in the present embodiment, the reference signal can be generated when a change in the placement of at least one of the audio input device 60 and the audio output device 70 is detected. In other words, if the placements of the audio input device 60 and the audio output device 70 have not changed, no reference signal is generated and no calculation is performed to obtain the parameters (the time-shift information and the level-shift information). That is, the parameters are updated only when a situation that requires it arises (when the placement of the audio input device 60 or the audio output device 70 has changed), so no wasteful load is placed on the echo removal unit 8 compared with updating them constantly or periodically.
 The movement detection unit 12 detects changes in the placement of the audio input device 60 and the audio output device 70, detecting not only changes in the relative positional relationship between the two devices but also changes in the absolute placement of each. Therefore, changes in circumstances that may affect the accuracy of generating the acoustic echo component can be detected reliably.
 The acceleration sensors 62 and 72 are also easy to integrate with the microphone 64 and the speaker 74. If the placement of the audio input device 60, in which the acceleration sensor 62 and the microphone 64 are integrated, or the audio output device 70, in which the acceleration sensor 72 and the speaker 74 are integrated, changes, acceleration is applied to the acceleration sensors 62 and 72. Therefore, by determining whether the audio input device 60 or the audio output device 70 has moved from the detection results of the acceleration sensors 62 and 72, a change in the absolute placement of at least one of the audio input device 60 and the audio output device 70 can be detected easily and reliably.
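 A minimal sketch of such a movement decision, assuming raw (x, y, z) accelerometer readings and an arbitrary 0.5 m/s^2 threshold; the real movement detection unit 12 is not specified at this level of detail.

    import numpy as np

    def movement_detected(accel_samples, threshold=0.5):
        # Flag a placement change when consecutive (x, y, z) readings differ by more than the threshold.
        # Comparing consecutive samples avoids reacting to the constant gravity component.
        a = np.asarray(accel_samples, dtype=float)
        if len(a) < 2:
            return False
        return bool(np.any(np.linalg.norm(np.diff(a, axis=0), axis=1) > threshold))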
 Further, if the audio waveform of the reference signal has a frequency in the non-audible range, the user cannot hear the sound based on the reference signal (the reference tone) even when the reference signal is superimposed on the received audio signal and output from the speaker 74. In this case, what the user can hear is essentially only the sound based on the received audio signal. Therefore, even if the reference signal is output during operation, it does not hinder the user's speaking or listening, so new parameters can be obtained and set immediately when the placement of the audio input device 60 or the audio output device 70 changes. An appropriate acoustic echo component can thus be generated in response to changes in circumstances during operation that may affect its accuracy, and the accuracy of removing the acoustic echo component from the transmission audio signal is maintained.
 In general, among audio waveform signals, a signal with a frequency in the audible range has wider directivity than one in the non-audible range, and the frequency of the acoustic echo component also lies in the audible range. Hence, obtaining the parameters (the time-shift information and the level-shift information) with a reference signal of an audible frequency, which has wide directivity and frequency characteristics close to those of the acoustic echo component, can further improve the accuracy of generating the acoustic echo component. However, if a reference signal with an audible frequency is superimposed on the received audio signal and output from the speaker 74, the user hears the sound based on the reference signal together with the sound based on the received audio signal, and the reference signal may hinder the user's speaking or listening. The reference signal with an audible frequency is therefore preferably generated when the received audio signal is silent.
 It is known that, among sounds with audible frequencies, sounds on the low-frequency side in particular are hard for people to hear even at a somewhat high signal level. Hence, using a reference signal with a frequency in such a low-frequency part of the audible range is even more preferable, since it is unlikely to annoy the user even if it is emitted during operation while the received audio signal is silent.
 In the above embodiment, the speaker 74 corresponds to the "output means" of the first aspect, and the microphone 64 corresponds to the "input means". The movement detection unit 12 corresponds to the "position detection means", and the reference signal generation units 14 and 16 correspond to the "generation means". The adder 24 corresponds to the "superimposing means", and the digital filter 34 corresponds to the "extraction means". The signal comparison unit 36 corresponds to the "calculation means", and the delay processing unit 38, the attenuation processing unit 40, and the subtractor 42 correspond to the "removal means". The communication unit 46 corresponds to the "transmission means". The acceleration sensors 62 and 72 correspond to the "acceleration detection means", and the switch control unit 22 corresponds to the "determination means".
 The configuration of the echo removal apparatus shown in the above embodiment is merely an example, and it goes without saying that the present invention can be modified in various ways. For example, the functions of the audio processing unit of the echo removal apparatus may be provided not by a hardware circuit but by software control realized by a CPU executing a program. FIG. 6 shows an example configuration of the echo removal apparatus in which a personal computer (PC) 102 is used as the terminal device 2. In the PC 102 of this modification, the part that performs the functions of the echo removal apparatus is referred to as the echo removal unit 108. In the following description, parts with the same configuration as in the terminal device 2 are given the same reference signs, and their description is omitted or simplified.
 The PC 102 includes a known CPU 180, to which a ROM 82, a RAM 84, and an input/output interface 88 are connected via a bus 86. Connected to the input/output interface 88 are an operation input device 92 such as a mouse and keyboard, an external storage device 90 such as a hard disk drive (HDD), a flash memory drive (SSD), or a DVD-ROM drive, a video processing unit 94, and the communication unit 46. A video input device 96 such as a web camera and a video output device 98 such as a monitor are connected to the video processing unit 94. The audio input device 60 including the microphone 64 and the acceleration sensor 62, and the audio output device 70 including the speaker 74 and the acceleration sensor 72, are also connected to the input/output interface 88. More specifically, the speaker 74 is connected to the input/output interface 88 via the D/A converter 28, the microphone 64 via the A/D converter 30, and the acceleration sensors 62 and 72 via the A/D converter 26. The audio input device 60, the audio output device 70, the operation input device 92, the video input device 96, and the video output device 98 are provided as external devices of the PC 102. The echo removal unit 108 comprises the audio input device 60, the audio output device 70, the communication unit 46, the external storage device 90, and the components for controlling these processing units (devices), such as the CPU 180, the ROM 82, and the RAM 84. The PC 102 is connected to the network 1 via the communication unit 46 and, together with the terminal devices 3 and 4 connected via the network 1, forms a video conference system, as in the present embodiment.
 In the PC 102 with this configuration, the CPU 180 can perform processing equivalent to that of the audio processing unit 10 of the present embodiment by executing a program installed in the external storage device 90. That is, the audio processing unit 110, which combines known modules realizing the processes in the flowcharts of FIGS. 2 and 3 and processes audio signals according to the flow shown in those flowcharts, need only be designed as a program. Each processing unit of the audio processing unit 110 is a function realized by the CPU 180; in FIG. 6 they are shown merely as virtual processing units, with the same reference signs in parentheses, so that they can be compared with those of the present embodiment (see FIG. 1).
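 As a rough illustration of how such a program might organize one pass of the flow of FIGS. 2 and 3, the following sketch reuses the earlier helper functions; the io object and its methods are hypothetical placeholders for the audio and network interfaces, and the state dictionary (holding T0, L0, P, L) is likewise an assumption, not an API defined by the embodiment.

    def audio_processing_step(io, state, fs=16000):
        # One pass of the software flow; `io` is a hypothetical audio/network interface object.
        received = io.receive_audio()                              # S11
        moved = io.placement_changed()                             # S13
        silent = io.is_silent(received)                            # S31
        if moved:
            output, _ = build_output_signal(received, fs, silent)  # S33/S35 and S37
        else:
            output = received                                      # S15
        io.play(output)                                            # S39 / S15
        mic = io.record()                                          # S41 / S17
        if moved:
            cleaned = update_parameters_and_cancel(mic, output, fs, silent, state)   # S43 to S53
        else:
            cleaned = cancel_echo(mic, output, state["P"], state["L"], fs)           # S19 to S23
        io.send_audio(cleaned)                                     # back to S11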
 In the above modification, the CPU 180 performing the process of S39 functions as the "output step" of the second and third aspects, and the CPU 180 performing the process of S41 functions as the "input step". The CPU 180 performing the process of S13 functions as the "position detection step", and the CPU 180 performing the process of S33 or S35 functions as the "generation step". The CPU 180 performing the process of S37 functions as the "superimposition step", and the CPU 180 performing the process of S43 functions as the "extraction step". The CPU 180 performing the processes of S45 and S47 functions as the "calculation step", and the CPU 180 performing the processes of S49, S51, and S53 functions as the "removal step". The CPU 180 performing the process of S11 functions as the "transmission step".
 A change in the placement of the audio input device or the audio output device may also be detected by photographing the devices from a fixed position and analyzing the captured image. For example, in the terminal device 202 shown in FIG. 7, the audio input device 260 and the audio output device 270 are configured as movable devices that include the microphone 64 and the speaker 74, respectively, but no acceleration sensors. The output of the camera 250 that photographs the audio input device 260 and the audio output device 270 is fed to the input/output interface 88. An image analysis unit 252 that performs known image analysis processing is also provided to identify the positions (for example, coordinates) of the audio input device 260 and the audio output device 270 in the image captured by the camera 250. The image analysis unit 252 may be realized, for example, by the CPU 280 executing a program that performs known image analysis processing. The analysis result of the image analysis unit 252 (for example, the coordinate information of the audio input device 260 and the audio output device 270) is fed to the movement detection unit 12. In the terminal device 202 of this modification, the part that performs the functions of the echo removal apparatus is shown as the echo removal unit 208. The echo removal unit 208 comprises an audio processing unit 210 (which may have the same configuration as the audio processing unit 10 of the present embodiment except for the A/D converter 26), the audio input device 260, the audio output device 270, the communication unit 46, the camera 250, and the components for controlling these processing units (devices), such as the CPU 280, the ROM 82, and the RAM 84.
 With the terminal device 202 configured in this way, the camera 250 is installed at an appropriate fixed position overlooking the range within which the audio input device 260 and the audio output device 270 can move. The image captured by the camera 250 is analyzed by the image analysis unit 252 to identify the positions of the audio input device 260 and the audio output device 270 in the captured image, and based on the analysis result, the movement detection unit 12 determines whether the placement of the audio input device 260 or the audio output device 270 has changed. In this way, by photographing the audio input device 260 and the audio output device 270 from a fixed position with the camera 250, a change in the absolute placement of at least one of them can be detected easily and reliably simply by analyzing the captured image and locating both devices within it.
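 A minimal sketch of such a coordinate comparison, assuming the image analysis has already reduced each device to an (x, y) pixel position; the device names and the 10-pixel tolerance are illustrative assumptions.

    def coordinates_changed(prev_coords, curr_coords, tolerance_px=10):
        # Compare device coordinates found in consecutive captured images (movement detection unit 12).
        # prev_coords / curr_coords map a device name to its (x, y) pixel position.
        for name, (cx, cy) in curr_coords.items():
            px, py = prev_coords[name]
            if abs(cx - px) > tolerance_px or abs(cy - py) > tolerance_px:
                return True
        return False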
 In the above modification, the camera 250 corresponds to the "photographing means" of the first aspect, and the CPU 280, which realizes the image analysis unit 252 performing known image analysis processing and can identify the positions of the audio input device 260 and the audio output device 270 in the image captured by the camera 250, functions as the "analysis means".
 Alternatively, for example, identification markers may be placed on the audio input device 260 and the audio output device 270, and the positions (coordinates) of the markers may be identified within the image captured by the camera 250 fixed at a fixed position. In this way, the placement of both devices in the captured image can be identified without performing shape recognition of the audio input device 260 and the audio output device 270, which simplifies the image analysis processing. Although not illustrated, changes in the placement of the audio input device or the audio output device may also be detected from, for example, the phase shift observed when radio waves, infrared light, laser light, or the like emitted from two, three, or more fixed points is received by the audio input device or the audio output device, or from the phase shift of the reflected waves.
 Digital-output devices may also be used for the speaker 74, the microphone 64, and the acceleration sensors 62 and 72. Alternatively, the A/D converters and the D/A converter may be provided in the audio input device 60 and the audio output device 70. An interval timer of the CPU 80 or the like may be used instead of the timer 44, with the count value T acquired from the CPU 80. Furthermore, although a band-pass filter is used as the digital filter 34, a high-pass filter (HPF), a low-pass filter (LPF), or a combination of these various filters may be used instead.
 2 to 4, 202  Terminal device
 8, 108, 208  Echo removal unit
 12  Movement detection unit
 14, 16  Reference signal generation unit
 22  Switch control unit
 24  Adder
 34  Digital filter
 36  Signal comparison unit
 38  Delay processing unit
 40  Attenuation processing unit
 42  Subtractor
 46  Communication unit
 62, 72  Acceleration sensor
 64  Microphone
 74  Speaker
 102  PC
 180  CPU
 250  Camera
 252  Image analysis unit

Claims (7)

  1.  An echo removal apparatus comprising:
     output means for converting a received audio signal, which is an audio signal received from a communication destination apparatus, into sound and outputting the sound;
     input means for converting input ambient sound into a transmission audio signal, which is an audio signal to be transmitted to the communication destination apparatus;
     position detection means for detecting that a change has occurred in the placement of at least one of the output means and the input means;
     generation means for generating, when the position detection means detects the change in placement, a reference signal serving as a reference for removing, from the transmission audio signal, an acoustic echo component that arises when the sound output from the output means is input to the input means;
     superimposing means for superimposing the reference signal on the received audio signal;
     extraction means for performing filtering processing on the transmission audio signal converted by the input means and extracting the reference signal;
     calculation means for comparing a generated reference signal, which is the reference signal as generated by the generation means, with an extracted reference signal, which is the reference signal as extracted by the extraction means, and obtaining information on a time shift between the generation timing of the generated reference signal and the extraction timing of the extracted reference signal, and information on a level shift between the signal level of the generated reference signal at the generation timing and the signal level of the extracted reference signal at the extraction timing;
     removal means for performing, on the received audio signal, a calculation based on the time-shift information and the level-shift information to generate the acoustic echo component, and subtracting it from the transmission audio signal to generate a removed audio signal from which the acoustic echo component has been removed; and
     transmission means for transmitting the removed audio signal as the transmission audio signal to be transmitted to the communication destination apparatus.
  2.  The echo removal apparatus according to claim 1, further comprising:
     photographing means for photographing, from a fixed position, an image including at least one of the output means and the input means; and
     analysis means for analyzing a position of at least one of the output means and the input means in the image captured by the photographing means,
     wherein the position detection means detects that the change in placement has occurred based on an analysis result of the analysis means.
  3.  The echo removal apparatus according to claim 1, further comprising acceleration detection means for detecting acceleration applied to at least one of the output means and the input means,
     wherein the position detection means detects that the change in placement has occurred based on a detection result of the acceleration detection means.
  4.  The echo removal apparatus according to claim 1, wherein the generation means generates, as the reference signal, a signal whose audio waveform has a frequency in a non-audible range.
  5.  The echo removal apparatus according to claim 1, further comprising determination means for determining whether the received audio signal is silent,
     wherein the generation means generates, as the reference signal, a signal whose audio waveform has a frequency in an audible range when the position detection means detects the change in placement and the determination means determines that the received audio signal is silent.
  6.  An echo removal method comprising:
     an output step in which a received audio signal, which is an audio signal received from a communication destination apparatus, is converted into sound and output from output means;
     an input step in which ambient sound is input to input means and converted into a transmission audio signal, which is an audio signal to be transmitted to the communication destination apparatus;
     a position detection step in which it is detected that a change has occurred in the placement of at least one of the output means and the input means;
     a generation step in which, when the change in placement is detected in the position detection step, a reference signal is generated that serves as a reference for removing, from the transmission audio signal, an acoustic echo component that arises when the sound output from the output means is input to the input means;
     a superimposition step in which the reference signal is superimposed on the received audio signal;
     an extraction step in which filtering processing is performed on the transmission audio signal converted in the input step and the reference signal is extracted;
     a calculation step in which a generated reference signal, which is the reference signal as generated in the generation step, is compared with an extracted reference signal, which is the reference signal as extracted in the extraction step, to obtain information on a time shift between the generation timing of the generated reference signal and the extraction timing of the extracted reference signal, and information on a level shift between the signal level of the generated reference signal at the generation timing and the signal level of the extracted reference signal at the extraction timing;
     a removal step in which a calculation based on the time-shift information and the level-shift information is performed on the received audio signal to generate the acoustic echo component, which is subtracted from the transmission audio signal to generate a removed audio signal from which the acoustic echo component has been removed; and
     a transmission step in which the removed audio signal is transmitted as the transmission audio signal to be transmitted to the communication destination apparatus.
  7.  A program for an echo removal apparatus, the program causing a computer to function as the various processing means of the echo removal apparatus according to claim 1.
PCT/JP2010/064678 2009-09-17 2010-08-30 Echo removal device, echo removal method, and program for echo removal device WO2011033924A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009215283A JP2011066668A (en) 2009-09-17 2009-09-17 Echo canceler, echo canceling method, and program of echo canceler
JP2009-215283 2009-09-17

Publications (1)

Publication Number Publication Date
WO2011033924A1 true WO2011033924A1 (en) 2011-03-24

Family

ID=43758533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/064678 WO2011033924A1 (en) 2009-09-17 2010-08-30 Echo removal device, echo removal method, and program for echo removal device

Country Status (2)

Country Link
JP (1) JP2011066668A (en)
WO (1) WO2011033924A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5666063B2 (en) * 2012-08-03 2015-02-12 三菱電機株式会社 Telephone device
US9131041B2 (en) 2012-10-19 2015-09-08 Blackberry Limited Using an auxiliary device sensor to facilitate disambiguation of detected acoustic environment changes
JP6347029B2 (en) * 2014-03-19 2018-06-27 アイホン株式会社 Intercom system
KR20210108232A (en) * 2020-02-25 2021-09-02 삼성전자주식회사 Apparatus and method for echo cancelling
KR20220017775A (en) * 2020-08-05 2022-02-14 삼성전자주식회사 Audio signal processing apparatus and method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0983412A (en) * 1995-09-08 1997-03-28 Ricoh Co Ltd Digital echo canceller
JP2001119470A (en) * 1999-10-15 2001-04-27 Fujitsu Ten Ltd Telephone voice processor
JP2006080660A (en) * 2004-09-07 2006-03-23 Oki Electric Ind Co Ltd Communication terminal having echo canceler and echo cancellation method
JP2007072351A (en) * 2005-09-09 2007-03-22 Mitsubishi Electric Corp Speech recognition device
JP2007336364A (en) * 2006-06-16 2007-12-27 Oki Electric Ind Co Ltd Echo canceler

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013078002A1 (en) * 2011-11-23 2013-05-30 Qualcomm Incorporated Acoustic echo cancellation based on ultrasound motion detection
CN103988487A (en) * 2011-11-23 2014-08-13 高通股份有限公司 Acoustic echo cancellation based on ultrasound motion detection
US9363386B2 (en) 2011-11-23 2016-06-07 Qualcomm Incorporated Acoustic echo cancellation based on ultrasound motion detection

Also Published As

Publication number Publication date
JP2011066668A (en) 2011-03-31

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10817037

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10817037

Country of ref document: EP

Kind code of ref document: A1