WO2023074654A1 - Information processing device, information processing method, program, and recording medium - Google Patents

Information processing device, information processing method, program, and recording medium

Info

Publication number
WO2023074654A1
Authority
WO
WIPO (PCT)
Prior art keywords
filter
sound
unit
information processing
sound collection
Prior art date
Application number
PCT/JP2022/039616
Other languages
French (fr)
Japanese (ja)
Inventor
洋人 河内
壮志 中川
Original Assignee
パイオニア株式会社 (Pioneer Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 (Pioneer Corporation)
Priority to JP2023556447A (publication JPWO2023074654A1)
Publication of WO2023074654A1

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/20: Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G10L 21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208: Noise filtering

Definitions

  • the present invention relates to an information processing device, an information processing method, a program, and a recording medium.
  • In the prior art described above, sound containing the user's uttered voice and surrounding noise is picked up, and filter characteristics for removing the noise are determined based on the picked-up sound.
  • However, immediately after the device is powered on, the sound needed to determine the filter characteristics has not yet been collected, so appropriate filter characteristics for removing noise cannot be determined.
  • One example of the resulting problem is that the words uttered by the user cannot be detected correctly.
  • A main object of the present invention is to provide an information processing device, an information processing method, a program, and a recording medium that can determine optimal filter characteristics for removing noise, and remove that noise, even immediately after the device is powered on.
  • The invention according to claim 1 is an information processing device comprising: a filter characteristic calculation unit that calculates, based on collected sound, filter characteristics for removing noise from the sound; a filter unit that removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a sound collection environment detection unit that detects the sound collection environment of the sound based on sensor information; and a filter control unit that acquires the sound collection environment at startup and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
  • The invention according to claim 7 is an information processing method in an information processing apparatus including a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising: a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound; a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and a fourth step in which the filter control unit acquires the sound collection environment at startup and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
  • The invention according to claim 8 is a program for causing a computer to execute an information processing method in an information processing apparatus including a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising: a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound; a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and a fourth step in which the filter control unit acquires the sound collection environment at startup and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
  • The invention according to claim 9 is a computer-readable non-transitory recording medium recording a program for causing a computer to execute an information processing method in an information processing apparatus including a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising: a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound; a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and a fourth step in which the filter control unit acquires the sound collection environment at startup and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
  • FIG. 2 is a diagram illustrating an example of the filter table generated and referred to by the filter control unit of the information processing apparatus according to the embodiment of the present invention. FIG. 3 is a diagram showing the processing flow of the filter control unit of the information processing apparatus according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating the processing performed when the filter control unit of the information processing apparatus according to the embodiment of the present invention refers to the filter table.
  • FIG. 5 is a diagram illustrating the processing performed when the filter control unit of the information processing apparatus according to the embodiment of the present invention adds a sound collection environment and filter characteristics to the filter table.
  • FIG. 6 is a diagram illustrating sensor information acquired by the sensor unit of an information processing apparatus according to another embodiment of the present invention.
  • The information processing apparatus according to the present embodiment includes: a filter characteristic calculation unit that calculates, based on collected sound, filter characteristics for removing noise from the sound; a filter unit that removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a sound collection environment detection unit that detects the sound collection environment of the sound based on sensor information; and a filter control unit that acquires the sound collection environment at startup and sets filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
  • the filter characteristic calculator calculates filter characteristics for removing noise from the sound based on the collected sound.
  • the filter section removes noise from the picked-up sound based on the filter characteristics calculated by the filter characteristic calculation section.
  • the sound-collecting environment detection unit detects a sound-collecting environment in which sound is being collected based on sensor information such as camera images and vehicle sensors.
  • The filter control unit acquires the sound collection environment when the information processing apparatus is started, determines the filter characteristics to be set in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment, and sets those filter characteristics in the filter unit.
  • If the sound collection environment at startup of the information processing device is in the filter table, the filter control unit sets the filter characteristics associated with that environment in the filter unit; if it is not in the filter table, the filter control unit sets the filter characteristics that were last set in the filter unit.
  • When the information processing apparatus is started, the sound for determining the filter characteristics has not yet been collected, so the filter control unit determines the filter characteristics to be set in the filter unit based on the sound collection environment at startup and the filter table. As a result, even at startup, when no sound for determining the filter characteristics has been collected, filter characteristics based on the sound collection environment at startup can be set, so noise can be removed from the sound appropriately.
  • During periods other than startup, the filter control unit sets the filter characteristics calculated by the filter characteristic calculation unit in the filter unit. That is, except at startup, the filter characteristics calculated by the filter characteristic calculation unit from the sound collected by the sound collection unit are set in the filter unit. As a result, during periods other than startup the filter characteristics are calculated from the sound of the space in which the user speaks, so optimal filter characteristics can be set.
  • Even when the sound collection environment at startup is not in the filter table, the filter characteristics last set in the filter unit are set, so filter characteristics better suited to the space in which the user speaks can be set.
  • The filter characteristics last set in the filter unit are the filter characteristics that were set in the filter unit when the information processing apparatus stopped operating, for example because the power was turned off; when the apparatus is next started and the sound collection environment at startup is not in the filter table, those filter characteristics are set in the filter unit. In other words, even at startup, filter characteristics with a track record, calculated from the sound of the space in which the user speaks, are set, so filter characteristics more appropriate for that space can be set.
  • The filter control unit also acquires the sound collection environment at the time the filter characteristic calculation unit calculates filter characteristics, and if that environment is not in the filter table, adds it to the filter table in association with the calculated filter characteristics. As a result, optimal filter characteristics for each sound collection environment accumulate in the filter table simply by operating the information processing device, so the filter control unit can set optimal filter characteristics in the filter unit at startup by referring to the filter table.
  • An information processing apparatus 1 according to the present example will be described with reference to FIGS. 1 to 5.
  • the information processing device 1 includes at least a sound pickup unit 10, a filter unit 20, a filter characteristic calculation unit 30, a sensor unit 40, a sound pickup environment detection unit 50, and a filter control unit 60.
  • the sound pickup unit 10 is configured by, for example, a microphone, picks up sound in the vehicle interior, and transmits the picked sound to the filter unit 20 and the filter characteristic calculation unit 30 .
  • The sound picked up by the sound collection unit 10 includes the user's uttered voice and the noise generated around the microphone.
  • Specifically, the sound picked up by a microphone installed in the vehicle cabin includes engine sound during driving, wind noise, road noise, the operating sound of the air conditioner, music output from the speakers, and the like.
  • the microphone may be configured by using, for example, a microphone for hands-free calling installed in the vehicle, as long as it can pick up the voice inside the vehicle.
  • the filter unit 20 removes noise from the sound picked up by the sound pickup unit 10 based on filter characteristics received from the filter control unit 60, which will be described later.
  • the noise-removed voice is input to a voice recognition engine (not shown) to detect words uttered by the user.
  • The filter characteristic calculation unit 30 calculates filter characteristics for removing noise from the sound. Specifically, the filter characteristic calculation unit 30 divides the sound picked up by the sound collection unit 10 into segments of, for example, 20 seconds of sound data, and calculates filter characteristics for removing noise for each segment. The filter characteristics calculated by the filter characteristic calculation unit 30 are transmitted to the filter control unit 60, which will be described later.
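  • The embodiment does not specify how the filter characteristics are computed from each segment; the following is a minimal sketch, assuming a spectral-subtraction-style noise profile estimated per 20-second segment. The frame size, hop size, and percentile are hypothetical values chosen only for illustration.

```python
# Sketch only: estimate one "filter characteristic" (a per-bin noise magnitude
# profile) per 20-second segment of microphone samples. The concrete method is
# an assumption; the patent only states that sound is split into segments and
# that a noise-removal filter characteristic is calculated for each segment.
import numpy as np

SEGMENT_SEC = 20   # segment length mentioned in the embodiment
FRAME = 1024       # assumed FFT frame size
HOP = 512          # assumed hop size

def split_into_segments(samples: np.ndarray, sample_rate: int):
    """Yield consecutive 20-second blocks of microphone samples."""
    step = SEGMENT_SEC * sample_rate
    for start in range(0, len(samples) - step + 1, step):
        yield samples[start:start + step]

def estimate_filter_characteristic(segment: np.ndarray) -> np.ndarray:
    """Return a rough stationary-noise magnitude profile for one segment."""
    frames = []
    for start in range(0, len(segment) - FRAME, HOP):
        windowed = np.hanning(FRAME) * segment[start:start + FRAME]
        frames.append(np.abs(np.fft.rfft(windowed)))
    # A low percentile across frames approximates the noise floor even when
    # speech is present in parts of the segment.
    return np.percentile(np.stack(frames), 20, axis=0)
```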
  • the sensor unit 40 includes at least a camera that captures an image of the inside of the vehicle and a sensor that detects the state of the vehicle, and transmits acquired sensor information to the sound pickup environment detection unit 50 described later.
  • The image transmitted as sensor information from the sensor unit 40 only needs to capture the inside of the vehicle, so an image captured by a drive recorder installed in the vehicle may be transmitted to the sound collection environment detection unit 50. Examples of sensor information for detecting the state of the vehicle include vehicle speed pulses, an acceleration sensor, GPS signals, and the various sensors connected to the vehicle's ECU (Electronic Control Unit).
  • the sound collection environment detection unit 50 detects the sound collection environment based on sensor information from the sensor unit 40 . Specifically, the sound-collecting environment detection unit 50 analyzes an image captured inside the vehicle, and detects, for example, the boarding position of the passenger, the gender of the passenger, the open/close state of the windows of the vehicle, etc. as the sound-collecting environment. Further, the sound collection environment detection unit 50 detects the running speed of the vehicle, the engine speed, the operating state of the air conditioner, etc. as the sound collection environment based on the sensor information indicating the state of the vehicle. The sound collection environment detection unit 50 transmits the detected sound collection environment to the filter control unit 60 .
  • the sound collection environment detection unit 50 also detects the sound collection environment when the filter characteristic calculation unit 30 calculates the filter characteristics, and transmits the detected sound collection environment to the filter control unit 60 .
  • For example, the sound collection environment detection unit 50 calculates the average value of the sound collection environment (the average engine speed, traveling speed, and so on) over the period during which the filter characteristic calculation unit 30 is calculating the filter characteristics, and transmits that sound collection environment to the filter control unit 60.
  • The filter control unit 60 acquires the sound collection environment at startup, and sets filter characteristics in the filter unit 20 based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit 30 with the sound collection environment at the time those characteristics were calculated.
  • That is, when the information processing apparatus 1 is started, the sound for determining the filter characteristics has not yet been collected, so the filter characteristic calculation unit 30 cannot calculate the filter characteristics. Therefore, the filter control unit 60 determines the filter characteristics to be set in the filter unit 20 based on the sound collection environment acquired at startup and the filter table shown in FIG. 2.
  • In the filter table, the sound collection environment at the time the filter characteristic calculation unit 30 calculated the filter characteristics and the calculated filter characteristics are stored in association with each other.
  • Specifically, the sound collection environment detected from the camera images of the sensor unit 40 and from the vehicle sensor information (boarding position, gender, window open/closed state, traveling speed, engine speed, air conditioner operation status) is stored in the filter table in association with the filter characteristics calculated by the filter characteristic calculation unit 30.
  • In the example of FIG. 2, sound collection environments K1 to K5 and filter characteristics F1 to F5 are associated with each other and stored in the filter table.
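  • As an illustration, the filter table can be thought of as a list of environment/characteristic pairs; the following sketch shows one possible in-memory representation. The field names, types, and the use of Python dataclasses are assumptions, not part of the patent.

```python
# Hypothetical representation of the filter table (sound collection environments
# K1, K2, ... paired with filter characteristics F1, F2, ...).
from dataclasses import dataclass
import numpy as np

@dataclass
class SoundEnvironment:
    seating: str        # e.g. "driver only", "driver + front passenger"
    genders: str        # e.g. "male", "male/female"
    windows_open: bool  # window open/closed state
    aircon_on: bool     # air conditioner operation status
    speed_kmh: float    # traveling speed
    engine_rpm: float   # engine speed

@dataclass
class FilterTableEntry:
    environment: SoundEnvironment
    filter_characteristic: np.ndarray  # e.g. the noise profile estimated per segment

filter_table: list[FilterTableEntry] = []  # grows as new environments are encountered
```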
  • If the sound collection environment acquired at startup is in the filter table, the filter control unit 60 acquires the filter characteristics associated with that sound collection environment from the filter table and sets those filter characteristics in the filter unit 20.
  • If it is not in the filter table, the filter characteristics last set in the filter unit 20 are set in the filter unit 20.
  • The filter characteristics last set in the filter unit 20 are the filter characteristics that were set in the filter unit 20 when the operation of the information processing apparatus 1 was stopped, for example by turning off the power. When the apparatus is next started and the sound collection environment at startup is not in the filter table, those filter characteristics are set in the filter unit 20. For this purpose, the filter control unit 60 stores the value of the filter characteristics last set in the filter unit 20 in a memory (not shown).
  • the filter control unit 60 sets the filter characteristic calculated by the filter characteristic calculation unit 30 to the filter unit 20 during a period other than the startup time.
  • In addition, the filter control unit 60 acquires the sound collection environment from the sound collection environment detection unit 50 at the time the filter characteristic calculation unit 30 calculates the filter characteristics, and if that sound collection environment is not in the filter table, associates the environment with the filter characteristics and adds them to the filter table. Details of the processing of the filter control unit 60 will be described below.
  • step S100 it is determined whether or not the ACC power supply (accessory power supply) of the vehicle is on. If it is determined that the ACC power supply of the vehicle is not on ("NO” in step S100), the process returns to step S100 and shifts to the standby state. On the other hand, if it is determined that the ACC power supply of the vehicle is on ("YES" in step S100), the process proceeds to step S110.
  • When it is determined that the ACC power supply of the vehicle is on ("YES" in step S100), the filter control unit 60 obtains the sound collection environment from the sound collection environment detection unit 50 (step S110). That is, the filter control unit 60 acquires the current sound collection environment from the sound collection environment detection unit 50 immediately after the ACC power supply is turned on (immediately after the information processing apparatus 1 is powered on).
  • the filter control unit 60 determines whether or not the sound pickup environment acquired in step S110 is in the filter table (step S120). If it is determined that the acquired sound pickup environment is in the filter table ("YES" in step S120), the process proceeds to step S130. On the other hand, if it is determined that the acquired sound pickup environment is not in the filter table ("NO" in step S120), the process proceeds to step S140.
  • the degree of similarity will be described with reference to FIG. 4 by exemplifying a case where the value of the sound collection environment KA acquired immediately after startup is compared with the sound collection environments K1 to K3 in the filter table.
  • For the traveling speed, the filter control unit 60 calculates the difference between the traveling speed value of the sound collection environment KA and each of the traveling speed values (K11, K21, K31) in the filter table, and uses that difference as the degree of similarity. If the degree of similarity is smaller than a predetermined value (for example, within ±10 km/h), the filter control unit 60 determines that the two traveling speed values are the same.
  • For the engine speed, if the similarly calculated degree of similarity is smaller than a predetermined value (for example, within ±200 rpm), it is determined that the two engine speeds are the same.
  • When every item is determined to be the same in this way, the filter control unit 60 determines that the sound collection environments are the same (for example, that the sound collection environment KA obtained from the sound collection environment detection unit 50 and the sound collection environment K3 are the same sound collection environment). Note that if a plurality of sound collection environments are determined to be the same by the above similarity determination, for example, the sound collection environment with the smallest similarity value is determined to be the same sound collection environment.
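  • A minimal sketch of this similarity check is shown below, reusing the hypothetical SoundEnvironment and FilterTableEntry types from the earlier sketch. The thresholds follow the example values given above (±10 km/h, ±200 rpm); how non-numeric items are compared is an assumption.

```python
# Sketch of the environment-matching rule described in the text.
SPEED_TOL_KMH = 10.0   # example threshold for traveling speed
RPM_TOL = 200.0        # example threshold for engine speed

def environments_match(a: SoundEnvironment, b: SoundEnvironment) -> bool:
    """True when every item of the two sound collection environments is judged 'the same'."""
    return (a.seating == b.seating
            and a.genders == b.genders
            and a.windows_open == b.windows_open
            and a.aircon_on == b.aircon_on
            and abs(a.speed_kmh - b.speed_kmh) < SPEED_TOL_KMH
            and abs(a.engine_rpm - b.engine_rpm) < RPM_TOL)

def find_entry(table: list, env: SoundEnvironment):
    """Return the matching entry, preferring the smallest similarity value if several match."""
    matches = [e for e in table if environments_match(e.environment, env)]
    if not matches:
        return None
    return min(matches,
               key=lambda e: abs(e.environment.speed_kmh - env.speed_kmh)
                             + abs(e.environment.engine_rpm - env.engine_rpm))
```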
  • If it is determined in step S120 that the acquired sound collection environment is in the filter table, the filter control unit 60 acquires the filter characteristics associated with that sound collection environment from the filter table and sets those filter characteristics in the filter unit 20 (step S130).
  • On the other hand, if it is determined that the acquired sound collection environment is not in the filter table, the filter characteristics last set in the filter unit 20 are set in the filter unit 20 (step S140).
  • Next, the filter control unit 60 acquires the filter characteristics calculated by the filter characteristic calculation unit 30 and sets the acquired filter characteristics in the filter unit 20 (step S150). That is, since the filter characteristic calculation unit 30 can calculate the filter characteristics once a predetermined time has elapsed from startup, the filter control unit 60 sets the filter characteristics calculated by the filter characteristic calculation unit 30 in the filter unit 20 during periods other than startup.
  • the filter control unit 60 acquires the sound collection environment from the sound collection environment detection unit 50 when the filter characteristics set in the filter unit 20 were calculated in step S150 (step S160).
  • the filter control unit 60 stores the filter characteristics set in the filter unit 20 in a memory (not shown) (step S170). That is, in step S170, a process of storing the value of the filter characteristic set in the filter unit 20 last in the memory is executed.
  • Next, the filter control unit 60 determines whether or not the same sound collection environment as the one acquired in step S160 exists in the filter table (step S180). If it is determined that the same sound collection environment exists in the filter table ("YES" in step S180), the process proceeds to step S200. On the other hand, if it is determined that the same sound collection environment is not in the filter table ("NO" in step S180), the filter characteristics set in the filter unit 20 in step S150 and the sound collection environment acquired in step S160 are linked and added to the filter table (step S190).
  • In other words, the filter control unit 60 acquires the sound collection environment from the sound collection environment detection unit 50 at the time the filter characteristic calculation unit 30 calculated the filter characteristics, associates that sound collection environment with the filter characteristics, and adds them to the filter table. Specifically, each item of the sound collection environment acquired from the sound collection environment detection unit 50 (for example, the boarding position of the passengers, the gender of each passenger, the open/closed state of the vehicle windows, the traveling speed, the engine speed, and the operating status of the air conditioner) is compared with the sound collection environments in the filter table to determine whether the same sound collection environment already exists in the filter table.
  • More specifically, as shown in FIG. 5, it is determined whether a sound collection environment identical to the one acquired from the sound collection environment detection unit 50 exists among the sound collection environments K1 to K5 registered in the filter table. If no identical sound collection environment exists, the acquired environment is added to the filter table as a new sound collection environment K6 together with the filter characteristic F6 linked to it.
  • the method for determining whether or not the same sound pickup environment exists in the filter table is the same as the determination method in step S120 described above.
  • the filter control unit 60 determines whether or not the ACC power supply (accessory power supply) of the vehicle is on (step S200). If it is determined that the ACC power supply of the vehicle is on ("YES” in step S200), the process proceeds to step S150 to continue the process. On the other hand, if it is determined that the ACC power supply of the vehicle is not on ("NO" in step S200), the process is terminated.
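  • The overall control flow of steps S100 to S200 can be summarized by the following sketch. The callables passed in, and the idea of returning the last-set characteristic for persistence, are hypothetical stand-ins for the units described above; only the ordering of the steps is taken from the processing flow.

```python
# Sketch of the filter control flow (steps S100-S200). find_entry and
# FilterTableEntry reuse the hypothetical definitions from the earlier sketches.
import time
from typing import Callable, Optional

def filter_control_loop(
    table: list,                                   # the filter table (K/F pairs)
    last_filter: Optional[object],                 # characteristic stored at last shutdown
    acc_power_on: Callable[[], bool],
    current_environment: Callable[[], "SoundEnvironment"],
    latest_calculated_filter: Callable[[], object],
    set_filter: Callable[[object], None],
) -> object:
    while not acc_power_on():                      # S100: wait until ACC power turns on
        time.sleep(1.0)
    env = current_environment()                    # S110: environment right after startup
    entry = find_entry(table, env)                 # S120: is it in the filter table?
    if entry is not None:
        set_filter(entry.filter_characteristic)    # S130: use the table's characteristic
    else:
        set_filter(last_filter)                    # S140: reuse the last-set characteristic
    while acc_power_on():                          # S200: repeat until ACC power turns off
        fc = latest_calculated_filter()            # S150: characteristic from unit 30
        set_filter(fc)
        env = current_environment()                # S160: environment at calculation time
        last_filter = fc                           # S170: remember the last-set value
        if find_entry(table, env) is None:         # S180: already in the table?
            table.append(FilterTableEntry(env, fc))  # S190: add a new environment/characteristic pair
    return last_filter                             # persist for the next startup
```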
  • As described above, the information processing apparatus 1 includes: a filter characteristic calculation unit 30 that calculates, based on collected sound, filter characteristics for removing noise from the sound; a filter unit 20 that removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit 30; a sound collection environment detection unit 50 that detects the sound collection environment of the sound based on the sensor unit 40; and a filter control unit 60 that acquires the sound collection environment at startup and sets filter characteristics in the filter unit 20 based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit 30 with the sound collection environment.
  • the filter section 20 removes noise from the picked-up sound based on the filter characteristics calculated by the filter characteristic calculation section 30 .
  • the filter characteristic calculation unit 30 calculates filter characteristics for removing noise from the sound.
  • the sound-collecting environment detection unit 50 detects the sound-collecting environment in which the sound is being collected, based on information from the sensor unit 40 such as camera images and vehicle sensors.
  • The filter control unit 60 acquires the sound collection environment when the information processing apparatus 1 is started, determines the filter characteristics to be set in the filter unit 20 based on the sound collection environment at startup and the filter table, and sets those filter characteristics in the filter unit 20. If the sound collection environment at startup of the information processing device 1 is in the filter table, the filter control unit 60 sets the filter characteristics associated with that environment in the filter unit 20; if it is not in the filter table, the filter control unit 60 sets the filter characteristics that were last set in the filter unit 20.
  • In this way, the filter control unit 60 determines the filter characteristics to be set in the filter unit 20 based on the sound collection environment at startup and the filter table. As a result, even at startup, when no sound for determining the filter characteristics has been collected, filter characteristics based on the sound collection environment at startup can be set, so noise can be removed from the sound appropriately. Also, even if the sound collection environment at startup is not in the filter table, the filter characteristics last set in the filter unit 20 are set. In other words, since proven filter characteristics calculated from the sound of the space in which the user speaks are set, more appropriate filter characteristics can be set for that space.
  • In addition, the filter control unit 60 acquires the sound collection environment when the filter characteristics are calculated by the filter characteristic calculation unit 30, and if that sound collection environment is not in the filter table, associates it with the filter characteristics calculated by the filter characteristic calculation unit 30 and adds them to the filter table.
  • the value of the filter characteristic calculated by the filter characteristic calculator 30 varies greatly depending on the sound pickup environment. For example, the engine sound included in the collected sound changes in volume and frequency depending on the engine speed. In addition, road noise included in the collected sound varies in volume and frequency depending on the running speed. In addition, the utterance voice included in the collected voice changes in volume and frequency depending on the gender of the utterer.
  • The filter control unit 60 sets the filter characteristics calculated by the filter characteristic calculation unit 30 in the filter unit 20 during periods other than when the information processing apparatus 1 is started. That is, except at startup, the filter characteristics that the filter characteristic calculation unit 30 calculated from the sound collected by the sound collection unit 10 are set in the filter unit 20. Accordingly, except at startup, the filter characteristics are calculated from the sound of the space in which the user speaks, so optimal filter characteristics can be set.
  • the sensor information of the sensor unit 40 includes at least an image of the interior of the vehicle and information indicating the running state of the vehicle.
  • Specifically, the filter table is created by linking the filter characteristics with sound collection environment information such as the boarding position, gender, window open/closed status, traveling speed, and engine speed, and the filter characteristics to be set in the filter unit 20 at startup are determined based on this filter table and the sound collection environment at startup.
  • In the present example, the sound collected by the sound collection unit 10 includes noise and the user's uttered voice, and the sound from which the noise has been removed by the filter unit 20 is transmitted to the speech recognition engine. That is, under the control of the filter control unit 60, optimal filter characteristics are set in the filter unit 20 even immediately after the information processing apparatus 1 is started, so noise can be removed from the sound collected by the sound collection unit 10. As a result, the speech recognition rate of the speech recognition engine can be improved.
  • The above-described sound collection environment detection unit 50 detects the passengers' boarding positions, the gender of each passenger, the open/closed state of the vehicle windows, the traveling speed, the engine speed, the air conditioner operation status, and the like as the sound collection environment, but it may also detect further information such as that shown in FIG. 6. Specifically, the current weather conditions and the driving conditions of surrounding vehicles may be detected as the sound collection environment from images of the inside and outside of the vehicle captured by a camera. Since running noise is likely to be larger in rain than in fine weather, adding the current weather as a condition of the sound collection environment makes it possible to set more appropriate filter characteristics. Likewise, detecting whether trucks, motorcycles, or similar vehicles are traveling nearby allows the noise they generate to be taken into account.
  • The position where the vehicle is currently traveling may also be detected as the sound collection environment from the GPS (Global Positioning System) information of the own vehicle. For example, since noise peculiar to the traveling location, such as highways, residential areas, or urban areas, is generated, grasping the traveling position makes it possible to set more appropriate filter characteristics.
  • The above-described sound collection environment detection unit 50 may also detect the position of the speaker as the sound collection environment based on the collected sound rather than on an image captured inside the vehicle. As a result, the passengers' boarding positions can be ascertained even if a camera for capturing the inside of the vehicle cannot be installed.
  • In the example described above, the filter table is generated in the filter control unit 60; however, the sound collection environment received from the sound collection environment detection unit 50 may instead be transmitted to a server over an Internet connection, and the filter table may be created on the server.
  • As a result, the load of the filter table creation processing in the filter control unit 60 can be eliminated, so power consumption can be reduced.
  • In addition, since the memory for creating and storing the filter table can be reduced or eliminated, the cost of the information processing apparatus 1 can be reduced.
  • Furthermore, the filter table can be shared with other users. Specifically, for example, the server aggregates and analyzes the filter table data for each vehicle model and generates a filter table that can be shared for each vehicle model. As a result, since the filter characteristics can be set by referring to the shared filter table, optimal filter characteristics can be set even when the information processing apparatus 1A is used for the first time or immediately after startup.
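  • The patent does not describe a concrete protocol for this server-based variant; the following is a minimal sketch, assuming a hypothetical JSON-over-HTTPS endpoint and reusing the SoundEnvironment type from the earlier sketch. The URL, payload fields, and response format are all illustrative assumptions.

```python
# Sketch only: send the detected sound collection environment to a server and
# receive a filter table shared per vehicle model. Endpoint and schema are invented.
import json
import urllib.request

def fetch_shared_filter_table(env: "SoundEnvironment", vehicle_model: str) -> list:
    payload = json.dumps({
        "vehicle_model": vehicle_model,
        "environment": env.__dict__,              # dataclass fields as plain JSON
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://example.invalid/filter-table",   # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # Expected (assumed) response: a list of environment/characteristic pairs.
        return json.loads(response.read())
```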
  • In the example described above, when the sound collection environment at startup is not in the filter table, the filter characteristics last set in the filter unit 20 are set in the filter unit 20; instead, however, the filter characteristics associated with the sound collection environment in the filter table that is closest to the sound collection environment acquired at startup may be set in the filter unit 20.
  • Specifically, the degree of similarity described above is calculated for each sound collection environment in the filter table, the sound collection environment with the smallest similarity value is determined to be the closest, and the filter characteristics associated with that sound collection environment are set in the filter unit 20.
  • In this way, the filter characteristics associated with the sound collection environment closest to the one acquired at startup can be set in the filter unit 20, so filter characteristics better suited to the space where the user speaks can be set.
  • 1: information processing device; 10: sound collection unit; 20: filter unit; 30: filter characteristic calculation unit; 40: sensor unit; 50: sound collection environment detection unit; 60: filter control unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The present invention provides an information processing device which, even immediately after the information processing device is turned on, can determine a filter property for removing noise and can remove said noise. The information processing device comprises: a filter property calculation unit 30 for calculating, on the basis of sound that is picked up, a filter property for removing noise from the sound; a filter unit 20 for removing noise from the picked up sound on the basis of the filter property calculated by the filter property calculation unit 30; a sound pickup environment detection unit 50 for detecting the sound pickup environment of the sound on the basis of a sensor unit 40; and a filter control unit 60 for acquiring the sound pickup environment at the time of startup and for setting a filter property for the filter unit 20 on the basis of the sound pickup environment at the time of start up and a filter table in which a sound pickup environment and the filter property calculated by the filter property calculation unit 30 are associated.

Description

Information processing device, information processing method, program, and recording medium
The present invention relates to an information processing device, an information processing method, a program, and a recording medium.
In recent years, devices that activate a voice assistant when the user utters a wake word (also called a wake-up word or hot word) and that carry out operation instructions, information searches, and the like based on the user's utterances, such as smartphones and smart speakers, have become widespread.
In general, when this type of device is operated in an environment containing noise, it may fail to correctly detect the words uttered by the user.
When this type of device is installed in a vehicle, the magnitude of the noise changes greatly depending on the driving state (traveling speed, engine speed, etc.) and the vehicle state (air conditioner operation, window open/closed state, etc.). A technique has therefore been disclosed that removes noise from the collected sound to make it easier to detect the words uttered by the user (see, for example, Patent Document 1).
[Patent Document 1] JP 2009-210647 A
In the prior art described above, sound containing the user's uttered voice and surrounding noise is collected, and filter characteristics for removing the noise are determined based on the collected sound.
However, with the prior art described above, immediately after the device is powered on, the sound for determining the filter characteristics has not yet been collected, so appropriate filter characteristics for removing noise cannot be determined and the words uttered by the user cannot be detected correctly; this is one example of the problem.
The present invention has been made in view of the problem cited above as an example, and a main object of the present invention is to provide an information processing device, an information processing method, a program, and a recording medium that determine optimal filter characteristics for removing noise, and remove that noise, even immediately after the device is powered on.
In order to solve the above problem, the invention according to claim 1 is an information processing device comprising: a filter characteristic calculation unit that calculates, based on collected sound, filter characteristics for removing noise from the sound; a filter unit that removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a sound collection environment detection unit that detects the sound collection environment of the sound based on sensor information; and a filter control unit that acquires the sound collection environment at startup and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
The invention according to claim 7 is an information processing method in an information processing apparatus including a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising: a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound; a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and a fourth step in which the filter control unit acquires the sound collection environment at startup and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
The invention according to claim 8 is a program for causing a computer to execute an information processing method in an information processing apparatus including a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising: a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound; a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and a fourth step in which the filter control unit acquires the sound collection environment at startup and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
The invention according to claim 9 is a computer-readable non-transitory recording medium recording a program for causing a computer to execute an information processing method in an information processing apparatus including a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising: a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound; a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and a fourth step in which the filter control unit acquires the sound collection environment at startup and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
FIG. 1 is a diagram showing the configuration of an information processing apparatus according to an example of the present invention. FIG. 2 is a diagram illustrating an example of the filter table generated and referred to by the filter control unit of the information processing apparatus according to the example of the present invention. FIG. 3 is a diagram showing the processing flow of the filter control unit of the information processing apparatus according to the example of the present invention. FIG. 4 is a diagram illustrating the processing performed when the filter control unit of the information processing apparatus according to the example of the present invention refers to the filter table. FIG. 5 is a diagram illustrating the processing performed when the filter control unit of the information processing apparatus according to the example of the present invention adds a sound collection environment and filter characteristics to the filter table. FIG. 6 is a diagram illustrating sensor information acquired by the sensor unit of an information processing apparatus according to another example of the present invention.
The information processing apparatus according to the present embodiment includes: a filter characteristic calculation unit that calculates, based on collected sound, filter characteristics for removing noise from the sound; a filter unit that removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit; a sound collection environment detection unit that detects the sound collection environment of the sound based on sensor information; and a filter control unit that acquires the sound collection environment at startup and sets filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
The filter characteristic calculation unit calculates, based on the collected sound, filter characteristics for removing noise from the sound.
The filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit.
The sound collection environment detection unit detects the environment in which the sound is being collected, based on sensor information such as camera images and vehicle sensors.
The filter control unit acquires the sound collection environment when the information processing apparatus is started, determines the filter characteristics to be set in the filter unit based on the sound collection environment at startup and the filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment, and sets those filter characteristics in the filter unit.
If the sound collection environment at startup is in the filter table, the filter control unit sets the filter characteristics associated with that environment in the filter unit; if it is not in the filter table, the filter control unit sets the filter characteristics that were last set in the filter unit.
When the information processing apparatus is started, the sound for determining the filter characteristics has not yet been collected, so appropriate filter characteristics for removing noise cannot be determined.
Therefore, the filter control unit determines the filter characteristics to be set in the filter unit based on the sound collection environment at startup and the filter table.
As a result, even at startup, when no sound for determining the filter characteristics has been collected, filter characteristics based on the sound collection environment at startup can be set, so noise can be removed from the sound appropriately.
During periods other than startup, the filter control unit sets the filter characteristics calculated by the filter characteristic calculation unit in the filter unit.
That is, except at startup, the filter characteristics calculated by the filter characteristic calculation unit from the sound collected by the sound collection unit are set in the filter unit.
As a result, during periods other than startup, the filter characteristics are calculated from the sound of the space in which the user speaks, so optimal filter characteristics can be set.
Even when the sound collection environment at startup is not in the filter table, the filter characteristics last set in the filter unit are set, so filter characteristics better suited to the space in which the user speaks can be set.
The filter characteristics last set in the filter unit are the filter characteristics that were set in the filter unit when the information processing apparatus stopped operating, for example because the power was turned off; when the apparatus is next started and the sound collection environment at startup is not in the filter table, those filter characteristics are set in the filter unit.
In other words, even at startup, filter characteristics with a track record, calculated from the sound of the space in which the user speaks, are set, so filter characteristics more appropriate for that space can be set.
The filter control unit also acquires the sound collection environment at the time the filter characteristic calculation unit calculates filter characteristics, and if that environment is not in the filter table, adds it to the filter table in association with the calculated filter characteristics.
As a result, optimal filter characteristics for each sound collection environment accumulate in the filter table simply by operating the information processing apparatus, so the filter control unit can set optimal filter characteristics in the filter unit at startup by referring to the filter table.
<Example>
An information processing apparatus 1 according to the present example will be described with reference to FIGS. 1 to 5.
<Configuration of the information processing apparatus 1>
The configuration of the information processing apparatus 1 according to the present example will be described with reference to FIG. 1.
The information processing apparatus 1 includes at least a sound collection unit 10, a filter unit 20, a filter characteristic calculation unit 30, a sensor unit 40, a sound collection environment detection unit 50, and a filter control unit 60.
The sound collection unit 10 is configured by, for example, a microphone; it collects sound inside the vehicle cabin and transmits the collected sound to the filter unit 20 and the filter characteristic calculation unit 30.
The sound collected by the sound collection unit 10 includes the user's uttered voice and the noise generated around the microphone.
Specifically, the sound collected by a microphone installed in the vehicle cabin includes engine sound during driving, wind noise, road noise, the operating sound of the air conditioner, music output from the speakers, and the like.
Note that the microphone only needs to be able to collect the sound inside the vehicle cabin described above, so a microphone already installed in the vehicle, for example a hands-free calling microphone, may be used.
The filter unit 20 removes noise from the sound collected by the sound collection unit 10 based on the filter characteristics received from the filter control unit 60, which will be described later.
The noise-removed sound is input to a speech recognition engine (not shown), which detects the words uttered by the user.
Based on the sound collected by the sound collection unit 10, the filter characteristic calculation unit 30 calculates filter characteristics for removing noise from that sound.
Specifically, the filter characteristic calculation unit 30 divides the sound collected by the sound collection unit 10 into segments of, for example, 20 seconds each, and calculates filter characteristics for removing noise for each segment of sound data.
The filter characteristics calculated by the filter characteristic calculation unit 30 are transmitted to the filter control unit 60, which will be described later.
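The publication does not disclose how the filter characteristics themselves are derived from a 20-second segment. Purely as a non-authoritative sketch, the Python below assumes the characteristic is a per-frequency noise-floor estimate of the kind that could later drive Wiener- or spectral-subtraction-style suppression; the function name, FFT size, and percentile are illustrative assumptions, not the patent's method.

import numpy as np

def filter_characteristic_for_segment(segment, n_fft=512):
    # Illustrative assumption: the "filter characteristic" is a per-frequency
    # noise-floor estimate computed from one ~20 s segment of collected sound.
    hop = n_fft // 2
    frames = [segment[i:i + n_fft] for i in range(0, len(segment) - n_fft + 1, hop)]
    window = np.hanning(n_fft)
    spectra = np.abs(np.fft.rfft(np.array(frames) * window, axis=1))
    # A low percentile over time approximates the stationary noise floor per bin.
    noise_floor = np.percentile(spectra, 10, axis=0)
    return noise_floor

# Example use (hypothetical): segment = np.random.randn(16000 * 20)  # 20 s at 16 kHz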
The sensor unit 40 includes at least a camera that captures images of the vehicle interior and sensors that detect the state of the vehicle, and transmits the acquired sensor information to the sound collection environment detection unit 50 described later.
Any image of the vehicle interior suffices as the image transmitted from the sensor unit 40 as sensor information, so images captured by a drive recorder installed in the vehicle may be transmitted to the sound collection environment detection unit 50.
Examples of sensor information for detecting the state of the vehicle include vehicle speed pulses, acceleration sensor output, GPS signals, and various sensor information connected to the vehicle's ECU (Electronic Control Unit).
The sound collection environment detection unit 50 detects the sound collection environment based on the sensor information from the sensor unit 40.
Specifically, the sound collection environment detection unit 50 analyzes images captured inside the vehicle and detects, for example, the seating positions of the occupants, the gender of each occupant, and the open/closed state of the vehicle windows as the sound collection environment.
The sound collection environment detection unit 50 also detects the vehicle's travel speed, engine speed, air-conditioner operating state, and the like as the sound collection environment based on the sensor information indicating the state of the vehicle.
The sound collection environment detection unit 50 transmits the detected sound collection environment to the filter control unit 60.
The sound collection environment detection unit 50 also detects the sound collection environment at the time the filter characteristic calculation unit 30 calculates the filter characteristics, and transmits the detected sound collection environment to the filter control unit 60.
For example, the sound collection environment detection unit 50 calculates average values of the sound collection environment (average engine speed, average travel speed, and so on) over the period during which the filter characteristic calculation unit 30 calculates the filter characteristics, and transmits that sound collection environment to the filter control unit 60.
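As a small illustrative sketch of this averaging (the dictionary field names such as speed_kmh and engine_rpm are assumptions chosen here, not terms from the publication), the numeric items could be averaged over the calculation window while categorical items are taken from the latest observation:

def average_environment(samples):
    # samples: list of environment dicts observed while the filter
    # characteristic was being calculated; numeric items are averaged,
    # categorical items are taken from the most recent sample.
    averaged = dict(samples[-1])
    for key in ("speed_kmh", "engine_rpm"):
        averaged[key] = sum(s[key] for s in samples) / len(samples)
    return averaged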
The filter control unit 60 acquires the sound collection environment at startup, and sets filter characteristics in the filter unit 20 based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit 30 with the sound collection environments in which they were calculated.
That is, when the information processing apparatus 1 is started, the sound for determining the filter characteristics has not yet been collected, so the filter characteristic calculation unit 30 cannot calculate the filter characteristics.
Therefore, the filter control unit 60 determines the filter characteristics to be set in the filter unit 20 based on the sound collection environment acquired at startup and the filter table shown in FIG. 2.
The filter table stores the sound collection environment at the time the filter characteristics were calculated by the filter characteristic calculation unit 30 in association with the calculated filter characteristics.
Specifically, the sound collection environment detected from the camera images and vehicle sensor information of the sensor unit 40 (seating position, gender, window open/closed state, travel speed, engine speed, air-conditioner operating state) is stored in the filter table in association with the filter characteristics calculated by the filter characteristic calculation unit 30.
More specifically, as shown in FIG. 2, sound collection environments K1 to K5 and filter characteristics F1 to F5 are associated with each other and stored in the filter table.
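As an informal illustration of how such a table could be held in memory, the Python structure below is a minimal sketch; the field names and example values are hypothetical and are not taken from FIG. 2.

filter_table = [
    {
        "environment": {            # sound collection environment K1 (values hypothetical)
            "seating": ("driver",),
            "gender": ("male",),
            "windows_open": False,
            "speed_kmh": 60.0,
            "engine_rpm": 1800.0,
            "ac_on": True,
        },
        "filter": [0.9, 0.7, 0.5, 0.3],   # placeholder for filter characteristic F1
    },
    # ... entries for K2 to K5 paired with F2 to F5 follow the same shape
]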
If the filter table contains the same sound collection environment as the one acquired at startup, the filter control unit 60 acquires the filter characteristics associated with that sound collection environment from the filter table and sets them in the filter unit 20.
On the other hand, if the filter table does not contain the same sound collection environment as the one acquired at startup, the filter characteristics last set in the filter unit 20 are set in the filter unit 20.
The filter characteristics last set in the filter unit 20 are the filter characteristics that were set in the filter unit 20 when operation of the information processing apparatus 1 was stopped, for example by turning off the power; when the information processing apparatus 1 is next started and the sound collection environment acquired at startup is not in the filter table, those filter characteristics are set in the filter unit 20.
The filter control unit 60 stores the value of the filter characteristics last set in the filter unit 20 in a memory (not shown).
During periods other than startup, the filter control unit 60 sets the filter characteristics calculated by the filter characteristic calculation unit 30 in the filter unit 20.
The filter control unit 60 acquires from the sound collection environment detection unit 50 the sound collection environment at the time the filter characteristics were calculated by the filter characteristic calculation unit 30, and if that sound collection environment is not in the filter table, associates it with the filter characteristics and adds them to the filter table.
Details of the processing of the filter control unit 60 are described below.
<Processing of Filter Control Unit 60>
Details of the processing of the filter control unit 60 will be described with reference to FIGS. 3 to 5.
As shown in FIG. 3, it is determined whether or not the ACC power supply (accessory power supply) of the vehicle is on (step S100).
If it is determined that the vehicle's ACC power supply is not on ("NO" in step S100), the process returns to step S100 and enters a standby state.
On the other hand, if it is determined that the vehicle's ACC power supply is on ("YES" in step S100), the process proceeds to step S110.
When it is determined that the vehicle's ACC power supply is on ("YES" in step S100), the sound collection environment is acquired from the sound collection environment detection unit 50 (step S110).
That is, immediately after the ACC power supply is turned on (the information processing apparatus 1 is turned on), the filter control unit 60 acquires the current sound collection environment from the sound collection environment detection unit 50.
The filter control unit 60 determines whether or not the sound collection environment acquired in step S110 is in the filter table (step S120).
If it is determined that the acquired sound collection environment is in the filter table ("YES" in step S120), the process proceeds to step S130.
On the other hand, if it is determined that the acquired sound collection environment is not in the filter table ("NO" in step S120), the process proceeds to step S140.
A method of determining whether or not the filter table contains the same sound collection environment as the one acquired from the sound collection environment detection unit 50 will now be described.
Among the items indicating the sound collection environment acquired from the sound collection environment detection unit 50, some, such as the travel speed and the engine speed, vary greatly while the vehicle is moving.
Therefore, when determining whether the filter table contains the same sound collection environment as the one received from the sound collection environment detection unit 50, a similarity for judging sameness is calculated.
The similarity will be described with reference to FIG. 4, using as an example a comparison between the values of the sound collection environment KA acquired immediately after startup and the sound collection environments K1 to K3 in the filter table.
When the travel speed value of the startup sound collection environment acquired from the sound collection environment detection unit 50 is KA1, the filter control unit 60 calculates the absolute value of its difference from each travel speed value in the filter table (K11, K21, K31) and uses that value as the similarity.
When the calculated similarity is smaller than a predetermined value (for example, similarity < 10 km/h), the filter control unit 60 determines that the two travel speed values are the same.
Likewise, for the engine speed, when the similarity calculated in the same way is smaller than a predetermined value (for example, similarity < 200 rpm), the two engine speed values are determined to be the same.
When both the travel speed and the engine speed are determined to be the same in the above similarity judgment, and the other items of the sound collection environment (seating position, gender, window open/closed state, air-conditioner operating state) are also determined to be the same, the filter control unit 60 determines that the sound collection environments are the same (that is, the sound collection environment KA acquired from the sound collection environment detection unit 50 and the sound collection environment K3 are determined to be the same sound collection environment).
If the above similarity judgment finds that more than one stored sound collection environment is the same, the sound collection environment with the smallest similarity value, for example, is judged to be the same sound collection environment.
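A minimal sketch of this judgment is given below, reusing the dictionary layout assumed in the earlier table sketch and the example thresholds of 10 km/h and 200 rpm; the function and field names are illustrative, and breaking ties by the combined numeric difference is only one possible reading of "smallest similarity value".

SPEED_THRESHOLD_KMH = 10.0
RPM_THRESHOLD = 200.0

def is_same_environment(current, stored):
    # Numeric items: similarity is the absolute difference, judged against a threshold.
    # Categorical items must match exactly.
    speed_diff = abs(current["speed_kmh"] - stored["speed_kmh"])
    rpm_diff = abs(current["engine_rpm"] - stored["engine_rpm"])
    categorical_same = all(
        current[k] == stored[k]
        for k in ("seating", "gender", "windows_open", "ac_on")
    )
    same = speed_diff < SPEED_THRESHOLD_KMH and rpm_diff < RPM_THRESHOLD and categorical_same
    return same, speed_diff + rpm_diff

def find_matching_entry(current, filter_table):
    # When several stored environments qualify, keep the one whose
    # similarity value (numeric difference) is smallest.
    best, best_score = None, None
    for entry in filter_table:
        same, score = is_same_environment(current, entry["environment"])
        if same and (best_score is None or score < best_score):
            best, best_score = entry, score
    return best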
If it is determined that the sound collection environment acquired at startup (the sound collection environment acquired in step S110) is in the filter table ("YES" in step S120), the filter characteristics associated with that sound collection environment are acquired from the filter table and set in the filter unit 20 (step S130).
On the other hand, if it is determined that the sound collection environment acquired at startup (the sound collection environment acquired in step S110) is not in the filter table ("NO" in step S120), the filter characteristics last set in the filter unit 20, which are stored in a memory (not shown), are set in the filter unit 20 (step S140).
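A compact sketch of steps S110 to S140 follows; the names are chosen here for illustration, and match_fn stands for whatever environment-equality test is used, for example the threshold-based check sketched above.

def filter_at_startup(current_env, filter_table, last_filter_from_memory, match_fn):
    # S120: is the startup sound collection environment in the filter table?
    for entry in filter_table:
        if match_fn(current_env, entry["environment"]):
            return entry["filter"]          # S130: use the associated characteristic
    return last_filter_from_memory          # S140: fall back to the last-set characteristic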
The filter control unit 60 acquires the filter characteristics calculated by the filter characteristic calculation unit 30 and sets the acquired filter characteristics in the filter unit 20 (step S150).
That is, once a predetermined time has elapsed since startup, the filter characteristic calculation unit 30 is able to calculate the filter characteristics, so during periods other than startup the filter control unit 60 sets the filter characteristics calculated by the filter characteristic calculation unit 30 in the filter unit 20.
The filter control unit 60 acquires from the sound collection environment detection unit 50 the sound collection environment at the time the filter characteristics set in the filter unit 20 in step S150 were calculated (step S160).
The filter control unit 60 stores the filter characteristics set in the filter unit 20 in a memory (not shown) (step S170).
That is, in step S170, a process of storing in the memory the value of the filter characteristics last set in the filter unit 20 is executed.
The filter control unit 60 determines whether or not the same sound collection environment as the one acquired in step S160 is in the filter table (step S180).
If it is determined that the same sound collection environment is in the filter table ("YES" in step S180), the process proceeds to step S200.
On the other hand, if it is determined that the same sound collection environment is not in the filter table ("NO" in step S180), the filter characteristics set in the filter unit 20 in step S150 and the sound collection environment acquired in step S160 are associated with each other and added to the filter table (step S190).
That is, the filter control unit 60 acquires from the sound collection environment detection unit 50 the sound collection environment at the time the filter characteristics were calculated by the filter characteristic calculation unit 30, associates that sound collection environment with the filter characteristics, and adds them to the filter table.
Specifically, each item of the sound collection environment acquired from the sound collection environment detection unit 50, for example the seating positions of the occupants, the gender of each occupant, the open/closed state of the vehicle windows, the travel speed, the engine speed, and the air-conditioner operating state, is compared with the sound collection environments in the filter table to determine whether the same sound collection environment is already registered; if not, that sound collection environment and the filter characteristics are associated with each other and added to the filter table.
More specifically, as shown in FIG. 5, for example, it is determined whether the sound collection environment acquired from the sound collection environment detection unit 50 is the same as any of the sound collection environments K1 to K5 registered in the filter table; if there is no matching sound collection environment, that sound collection environment and the filter characteristic F6 associated with it are added to the filter table as a new sound collection environment K6.
The method of determining whether the same sound collection environment is in the filter table is the same as the determination method in step S120 described above.
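A sketch of the table-update side of steps S160 to S190 is given below; the names are illustrative, and match_fn is the same kind of environment-equality test as in step S120.

def update_filter_table(filter_table, new_filter, env_at_calculation, match_fn):
    # S180: is the environment in which the characteristic was calculated
    # already registered in the table?
    for entry in filter_table:
        if match_fn(env_at_calculation, entry["environment"]):
            return                          # S180 "YES": nothing to add
    # S190: register a new (environment, filter characteristic) pair,
    # e.g. K6/F6 in FIG. 5.
    filter_table.append({"environment": env_at_calculation, "filter": new_filter})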
The filter control unit 60 determines whether or not the ACC power supply (accessory power supply) of the vehicle is on (step S200).
If it is determined that the vehicle's ACC power supply is on ("YES" in step S200), the process returns to step S150 and processing continues.
On the other hand, if it is determined that the vehicle's ACC power supply is not on ("NO" in step S200), the processing ends.
The information processing apparatus 1 according to the present embodiment includes: the filter characteristic calculation unit 30, which calculates, based on collected sound, filter characteristics for removing noise from that sound; the filter unit 20, which removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit 30; the sound collection environment detection unit 50, which detects the sound collection environment of the sound based on the sensor unit 40; and the filter control unit 60, which acquires the sound collection environment at startup and sets filter characteristics in the filter unit 20 based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit 30 with the sound collection environment.
The filter unit 20 removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit 30.
Based on the sound collected by the sound collection unit 10, the filter characteristic calculation unit 30 calculates filter characteristics for removing noise from that sound.
The sound collection environment detection unit 50 detects the sound collection environment in which the sound is being collected, based on information from the sensor unit 40 such as camera images and vehicle sensors.
The filter control unit 60 acquires the sound collection environment when the information processing apparatus 1 is started, determines the filter characteristics to be set in the filter unit 20 based on the sound collection environment at startup and the filter table, and sets those filter characteristics in the filter unit 20.
If the sound collection environment at startup of the information processing apparatus 1 is in the filter table, the filter control unit 60 sets in the filter unit 20 the filter characteristics associated with that startup sound collection environment; if it is not in the filter table, the filter control unit 60 sets the filter characteristics last set in the filter unit 20.
Immediately after startup, the sound for determining the filter characteristics has not yet been collected, so appropriate filter characteristics cannot be determined.
Therefore, the filter control unit 60 determines the filter characteristics to be set in the filter unit 20 based on the sound collection environment at startup and the filter table.
As a result, even at startup, when no sound for determining the filter characteristics has been collected, filter characteristics based on the startup sound collection environment can be set, so noise can be appropriately removed from the sound.
Furthermore, even when the filter table does not contain the startup sound collection environment, the filter characteristics last set in the filter unit 20 are set.
In other words, proven filter characteristics calculated based on the sound of the space where the user speaks are set, so filter characteristics that are more appropriate for that space can be set.
Further, the filter control unit 60 acquires the sound collection environment at the time the filter characteristics were calculated by the filter characteristic calculation unit 30, and if that sound collection environment is not in the filter table, associates it with the filter characteristics calculated by the filter characteristic calculation unit 30 and adds them to the filter table.
The value of the filter characteristics calculated by the filter characteristic calculation unit 30 varies greatly depending on the sound collection environment.
For example, the volume and frequency of the engine sound included in the collected sound change with the engine speed.
The volume and frequency of the road noise included in the collected sound change with the travel speed.
The volume and frequency of the uttered voice included in the collected sound change with the gender of the speaker.
Therefore, by associating the sound collection environment at the time the filter characteristics were calculated with the calculated filter characteristics and accumulating them in the filter table, the optimum filter characteristics for that vehicle and its occupants can be accumulated.
That is, simply by operating the information processing apparatus 1, the optimum filter characteristics for each sound collection environment of that space are accumulated in the filter table, so the filter control unit 60 can set the optimum filter characteristics by referring to the filter table.
Further, during periods other than startup of the information processing apparatus 1, the filter control unit 60 sets the filter characteristics calculated by the filter characteristic calculation unit 30 in the filter unit 20.
That is, except at startup of the information processing apparatus 1, the filter characteristics that the filter characteristic calculation unit 30 calculates based on the sound collected by the sound collection unit 10 are set in the filter unit 20.
As a result, except at startup of the information processing apparatus 1, the filter characteristics are calculated based on the sound of the space where the user speaks, so the optimum filter characteristics can be set.
The sensor information of the sensor unit 40 includes at least an image of the vehicle interior and information indicating the running state of the vehicle.
That is, the filter table is created in association with information such as the seating position, gender, window open/closed state, travel speed, and engine speed, which are factors that influence the filter characteristics for removing noise contained in the collected sound, and the filter value to be set in the filter unit 20 is determined from the filter table and the sound collection environment at startup.
As a result, the filter characteristics optimal for the space where the user speaks can be determined from the sound collection environment at startup, so noise can be removed from the collected sound even at startup.
The sound collected by the sound collection unit 10 includes the noise and the user's uttered voice, and the sound from which the noise has been removed by the filter unit 20 is transmitted to the speech recognition engine.
That is, under the control of the filter control unit 60, optimum filter characteristics are set in the filter unit 20 even immediately after the information processing apparatus 1 is started, so noise can be removed from the sound collected by the sound collection unit 10.
As a result, the recognition rate of uttered speech in the speech recognition engine can be improved.
<Other Examples>
The sound collection environment detection unit 50 described above detects the seating positions of the occupants, the gender of each occupant, the open/closed state of the vehicle windows, the travel speed, the engine speed, the air-conditioner operating state, and so on as the sound collection environment, but it may additionally detect information such as that shown in FIG. 6.
Specifically, the current weather conditions and the running state of surrounding vehicles may be detected as the sound collection environment from camera images of the inside and outside of the vehicle.
Since running noise may increase in rainy weather compared with fine weather, adding the current weather as a condition of the sound collection environment makes it possible to set more appropriate filter characteristics.
Also, when trucks, motorcycles, or the like are running around the vehicle, running noise may increase, so detecting the state of vehicles running around the host vehicle as the sound collection environment makes it possible to set more appropriate filter characteristics.
The position where the vehicle is currently running may also be detected as the sound collection environment from the GPS (Global Positioning System) information of the host vehicle.
For example, noise specific to the travel location, such as a highway, residential area, or urban area, occurs, so grasping the travel location makes it possible to set more appropriate filter characteristics.
In addition, the sound collection environment detection unit 50 described above detects the position of the speaker and so on based on images captured inside the vehicle, but the seating positions of the occupants may instead be detected as the sound collection environment from sound collected using a microphone array.
As a result, the seating positions of the occupants can be grasped even when a camera for capturing images of the vehicle interior cannot be installed.
In the information processing apparatus 1 described above, the filter table is generated in the filter control unit 60, but the sound collection environment received from the sound collection environment detection unit 50 may instead be transmitted to a server via an Internet connection, and the filter table may be created on the server.
This eliminates the load of the filter-table creation processing in the filter control unit 60, so power consumption can be reduced.
In addition, since the memory capacity for storing and creating the filter table can be reduced or eliminated, the cost of the information processing apparatus 1 can be lowered.
Storing the filter table on the server also allows the filter table to be shared with other users.
Specifically, for example, the server aggregates and analyzes the filter-table data for each vehicle model and generates a filter table that can be shared for each vehicle model.
As a result, the filter characteristics can be set by referring to the shared filter table, so the optimum filter characteristics can be set even immediately after startup when the information processing apparatus 1A is used for the first time.
In the information processing apparatus 1 described above, when the sound collection environment acquired at startup is not in the filter table, the filter characteristics last set in the filter unit 20 are set in the filter unit 20; alternatively, the filter characteristics associated with the sound collection environment closest to the one acquired at startup may be set in the filter unit 20.
Specifically, for example, even when the similarity described above is equal to or greater than the predetermined value, the sound collection environment showing the smallest similarity value is judged to be the same sound collection environment, and the filter characteristics associated with that sound collection environment are set in the filter unit 20.
As a result, the filter characteristics associated with the sound collection environment closest to the one acquired at startup can be set in the filter unit 20, so the optimum filter characteristics can be set for the space where the user speaks.
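A sketch of this variant under the same assumed dictionary layout is shown below; combining the travel-speed and engine-speed differences into one distance is an assumption made here, since the publication does not specify how the items would be weighted.

def nearest_environment_filter(current_env, filter_table, last_filter_from_memory):
    # If the table is empty there is nothing to fall back on but the
    # last-set characteristic.
    if not filter_table:
        return last_filter_from_memory
    def distance(stored_env):
        return (abs(current_env["speed_kmh"] - stored_env["speed_kmh"])
                + abs(current_env["engine_rpm"] - stored_env["engine_rpm"]))
    nearest = min(filter_table, key=lambda entry: distance(entry["environment"]))
    return nearest["filter"]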
Although the embodiment of the present invention has been described in detail above with reference to the drawings, the specific configuration is not limited to this embodiment, and designs and the like within the scope not departing from the gist of the present invention are also included.
1; information processing device
10; sound collection unit
20; filter unit
30; filter characteristic calculation unit
40; sensor unit
50; sound collection environment detection unit
60; filter control unit

Claims (9)

1.  An information processing device comprising:
     a filter characteristic calculation unit that calculates, based on collected sound, filter characteristics for removing noise from the sound;
     a filter unit that removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit;
     a sound collection environment detection unit that detects the sound collection environment of the sound based on sensor information; and
     a filter control unit that acquires the sound collection environment at startup, and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
2.  The information processing device according to claim 1, wherein, when the sound collection environment at startup is in the filter table, the filter control unit sets in the filter unit the filter characteristics associated with the sound collection environment at startup, and when it is not in the filter table, sets in the filter unit the filter characteristics last set in the filter unit.
3.  The information processing device according to claim 1 or 2, wherein, during periods other than startup, the filter control unit sets in the filter unit the filter characteristics calculated by the filter characteristic calculation unit.
4.  The information processing device according to any one of claims 1 to 3, wherein the filter control unit acquires the sound collection environment at the time the filter characteristics are calculated by the filter characteristic calculation unit, and when that sound collection environment is not in the filter table, associates it with the filter characteristics calculated by the filter characteristic calculation unit and adds them to the filter table.
5.  The information processing device according to any one of claims 1 to 4, wherein the sensor information includes at least an image captured of the vehicle interior and information indicating the running state of the vehicle.
6.  The information processing device according to any one of claims 1 to 5, wherein the collected sound includes the noise and a user's uttered voice, and the sound from which the noise has been removed by the filter unit is transmitted to a speech recognition engine.
7.  An information processing method in an information processing device comprising a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising:
     a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound;
     a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit;
     a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and
     a fourth step in which the filter control unit acquires the sound collection environment at startup, and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
8.  A program for causing a computer to execute an information processing method in an information processing device comprising a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising:
     a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound;
     a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit;
     a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and
     a fourth step in which the filter control unit acquires the sound collection environment at startup, and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
9.  A non-transitory computer-readable recording medium recording a program for causing a computer to execute an information processing method in an information processing device comprising a filter characteristic calculation unit, a filter unit, a sound collection environment detection unit, and a filter control unit, the method comprising:
     a first step in which the filter characteristic calculation unit calculates, based on collected sound, filter characteristics for removing noise from the sound;
     a second step in which the filter unit removes noise from the collected sound based on the filter characteristics calculated by the filter characteristic calculation unit;
     a third step in which the sound collection environment detection unit detects the sound collection environment of the sound based on sensor information; and
     a fourth step in which the filter control unit acquires the sound collection environment at startup, and sets the filter characteristics in the filter unit based on the sound collection environment at startup and a filter table that associates the filter characteristics calculated by the filter characteristic calculation unit with the sound collection environment.
PCT/JP2022/039616 2021-10-27 2022-10-25 Information processing device, information processing method, program, and recording medium WO2023074654A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023556447A JPWO2023074654A1 (en) 2021-10-27 2022-10-25

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-175852 2021-10-27
JP2021175852 2021-10-27

Publications (1)

Publication Number Publication Date
WO2023074654A1 true WO2023074654A1 (en) 2023-05-04

Family

ID=86157804

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/039616 WO2023074654A1 (en) 2021-10-27 2022-10-25 Information processing device, information processing method, program, and recording medium

Country Status (2)

Country Link
JP (1) JPWO2023074654A1 (en)
WO (1) WO2023074654A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006039267A (en) * 2004-07-28 2006-02-09 Nissan Motor Co Ltd Voice input device
JP2006039447A (en) * 2004-07-30 2006-02-09 Nissan Motor Co Ltd Voice input device
WO2016002358A1 (en) * 2014-06-30 2016-01-07 ソニー株式会社 Information-processing device, information processing method, and program
JP2016042132A (en) * 2014-08-18 2016-03-31 ソニー株式会社 Voice processing device, voice processing method, and program
JP2017138416A (en) * 2016-02-02 2017-08-10 キヤノン株式会社 Voice processing device and voice processing method
JP2018191145A (en) * 2017-05-08 2018-11-29 オリンパス株式会社 Voice collection device, voice collection method, voice collection program, and dictation method

Also Published As

Publication number Publication date
JPWO2023074654A1 (en) 2023-05-04

Similar Documents

Publication Publication Date Title
US20100204987A1 (en) In-vehicle speech recognition device
JP4134989B2 (en) Automotive audio equipment
US9230538B2 (en) Voice recognition device and navigation device
WO2017081960A1 (en) Voice recognition control system
CN108630221A (en) Audio signal quality based on quantization SNR analyses and adaptive wiener filter enhances
RU2015129116A (en) ADAPTIVE DECREASE OF NOISE LEVEL IN THE PHONE WITH THE SPEAKER MODE BASED ON THE VEHICLE CONDITION WITH THE ABILITY TO TRAIN
US20180096699A1 (en) Information-providing device
JP2012025270A (en) Apparatus for controlling sound volume for vehicle, and program for the same
US10654468B2 (en) Method and device for operating a hybrid vehicle comprising an electric energy store, and electric motor and an internal combustion engine
JP2002314637A (en) Device for reducing noise
WO2023074654A1 (en) Information processing device, information processing method, program, and recording medium
JP2013086754A (en) Acoustic device
JP4016529B2 (en) Noise suppression device, voice recognition device, and vehicle navigation device
US11557275B2 (en) Voice system and voice output method of moving machine
JP2002351488A (en) Noise canceller and on-vehicle system
JP5029433B2 (en) Vehicle drunk driving prevention device
WO2023074655A1 (en) Information processing device, information processing method, program, and recording medium
JP2019092077A (en) Recording control device, recording control method, and program
JPH11352987A (en) Voice recognition device
JP2007065122A (en) Noise suppressing device of on-vehicle voice recognition device
JP2008070877A (en) Voice signal pre-processing device, voice signal processing device, voice signal pre-processing method and program for voice signal pre-processing
WO2013098983A1 (en) Sound control device, sound control method, sound control program, and recording medium on which sound control program is recorded
WO2023157783A1 (en) Information processing device, information processing method, program, and recording medium
JP2008026464A (en) Voice recognition apparatus for vehicle
JP2009181025A (en) On-vehicle speech recognition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22886981

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023556447

Country of ref document: JP