WO2015098079A1 - Voice recognition processing device, voice recognition processing method, and display device
- Publication number
- WO2015098079A1 (application PCT/JP2014/006367)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- voice
- unit
- command
- search
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/083—Recognition networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/32—Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42203—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/088—Word spotting
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Definitions
- the present disclosure relates to a speech recognition processing device, a speech recognition processing method, and a display device that operate by recognizing speech uttered by a user.
- Patent Document 1 discloses a voice input device having a voice recognition function.
- The voice input device is configured to receive voice uttered by the user, analyze the received voice, recognize a command indicated by the voice (voice recognition), and control the device according to the recognized command. That is, the voice input device of Patent Document 1 can recognize speech freely uttered by a user and control the device according to the command obtained as the result of the voice recognition.
- Hypertext displayed on a browser can be selected using the voice recognition function of the voice input device.
- The user can also perform a search on a website (search site) that provides a search service by using this voice recognition function.
- This disclosure provides a voice recognition processing device and a voice recognition processing method that improve user operability.
- the speech recognition processing device includes a speech acquisition unit, a first speech recognition unit, a second speech recognition unit, a selection unit, a storage unit, and a processing unit.
- the voice acquisition unit is configured to acquire voice uttered by the user and output voice information.
- the first voice recognition unit is configured to convert voice information into first information.
- the second voice recognition unit is configured to convert voice information into second information.
- The selection unit is configured to select the third information and the fourth information from the second information.
- the storage unit is configured to store the first information, the third information, and the fourth information.
- The processing unit is configured to execute processing based on the first information, the third information, and the fourth information. If one or two of the first information, the third information, and the fourth information are missing, the processing unit is configured to complement the missing information using the information stored in the storage unit and then execute the processing.
- A speech recognition processing method includes: acquiring speech uttered by a user and converting it into voice information; converting the voice information into first information; converting the voice information into second information; selecting third information and fourth information from the second information; storing the first information, the third information, and the fourth information in a storage unit; executing processing based on the first information, the third information, and the fourth information; and complementing one or two missing pieces of the first information, the third information, and the fourth information using the information stored in the storage unit.
- the display device includes a voice acquisition unit, a first voice recognition unit, a second voice recognition unit, a selection unit, a storage unit, a processing unit, and a display unit.
- the voice acquisition unit is configured to acquire voice uttered by the user and output voice information.
- the first voice recognition unit is configured to convert voice information into first information.
- the second voice recognition unit is configured to convert voice information into second information.
- The selection unit is configured to select the third information and the fourth information from the second information.
- the storage unit is configured to store the first information, the third information, and the fourth information.
- the processing unit is configured to execute processing based on the first information, the third information, and the fourth information.
- The display unit is configured to display the processing result of the processing unit. If one or two of the first information, the third information, and the fourth information are missing, the processing unit is configured to complement the missing information using the information stored in the storage unit and then execute the processing.
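- The complementing behavior described above can be sketched as follows. This is a minimal illustration in Python; the class and function names (WordStore, execute_with_complement) are hypothetical, since the patent does not specify any implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WordStore:
    """Models the storage unit holding the most recently used first
    information (command), third information (reserved word), and
    fourth information (free word)."""
    command: Optional[str] = None
    reserved_word: Optional[str] = None
    free_word: Optional[str] = None

def execute_with_complement(store, command=None, reserved_word=None,
                            free_word=None):
    """Complement one or two missing pieces from the storage unit,
    then return the complete triple used to execute the processing."""
    command = command or store.command
    reserved_word = reserved_word or store.reserved_word
    free_word = free_word or store.free_word
    # Remember the latest complete set for the next utterance.
    store.command, store.reserved_word, store.free_word = (
        command, reserved_word, free_word)
    return command, reserved_word, free_word
```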
- the voice recognition processing device can improve operability when the user performs voice operation.
- FIG. 1 is a diagram schematically showing a speech recognition processing system according to the first embodiment.
- FIG. 2 is a block diagram illustrating a configuration example of the speech recognition processing system according to the first embodiment.
- FIG. 3 is a diagram showing an outline of dictation performed by the speech recognition processing system according to the first embodiment.
- FIG. 4 is a flowchart showing an operation example of the keyword single search process performed by the speech recognition processing apparatus according to the first embodiment.
- FIG. 5 is a flowchart showing an operation example of the keyword associative search process performed by the speech recognition processing apparatus according to the first embodiment.
- FIG. 6 is a flowchart illustrating an operation example of the speech recognition interpretation process performed by the speech recognition processing apparatus according to the first embodiment.
- FIG. 7 is a diagram schematically illustrating an example of a reserved word table of the speech recognition processing device according to the first embodiment.
- In the embodiment below, a television receiver (television) 10 is cited as an example of a display device including a voice recognition processing device; however, the display device is not limited to the television 10. For example, a PC or a tablet terminal may be used instead.
- FIG. 1 schematically shows a speech recognition processing system 11 according to the first embodiment.
- a speech recognition processing device is built in the television 10 which is an example of a display device.
- the voice recognition processing system 11 in the present embodiment includes a television 10 and a voice recognition unit 50.
- the voice recognition processing system 11 may include at least one of a remote controller (hereinafter also referred to as “remote controller”) 20 and a portable terminal 30.
- The display unit 140 of the television 10 displays, along with the video based on an input video signal, a received broadcast signal, and the like, a voice recognition icon 201 and an indicator 202 indicating the volume of the collected sound. This indicates to the user 700 that operation of the television 10 based on the voice of the user 700 (hereinafter, "voice operation") is possible, and prompts the user 700 to speak.
- When the user 700 utters a sound, the sound is collected by the microphone incorporated in the remote controller 20 or the portable terminal 30 used by the user 700 and transferred to the television 10. The voice uttered by the user 700 is then recognized by the voice recognition processing device built into the television 10, and the television 10 is controlled according to the result of the voice recognition.
- the television 10 may include a built-in microphone 130.
- the voice recognition processing system 11 can be configured not to include the remote controller 20 and the portable terminal 30.
- The television 10 is connected to the voice recognition unit 50 via the network 40 and can communicate with the voice recognition unit 50.
- FIG. 2 is a block diagram showing a configuration example of the speech recognition processing system 11 according to the first embodiment.
- the television 10 includes a voice recognition processing device 100, a display unit 140, a transmission / reception unit 150, a tuner 160, a storage unit 171, a built-in microphone 130, and a wireless communication unit 180.
- The built-in microphone 130 is a microphone configured to collect sound mainly coming from the direction facing the display surface of the display unit 140. That is, its sound collection direction is set so that it can collect the speech of a user 700 facing the display unit 140 of the television 10.
- the built-in microphone 130 may be provided in the casing of the television 10 or may be installed outside the casing of the television 10 as shown as an example in FIG.
- the remote controller 20 is a controller for the user 700 to remotely operate the television 10.
- the remote controller 20 includes a microphone 21 and an input unit 22 in addition to a general configuration necessary for remote operation of the television 10.
- the microphone 21 is configured to collect a sound uttered by the user 700 and output a sound signal.
- the input unit 22 is configured to receive an input operation manually performed by the user 700 and output an input signal corresponding to the input operation.
- the input unit 22 is, for example, a touch pad, but may be a keyboard, a button, or the like.
- An audio signal generated by the sound collected by the microphone 21 or an input signal generated when the user 700 performs an input operation on the input unit 22 is wirelessly transmitted to the television 10 by, for example, infrared rays or radio waves.
- the display unit 140 is, for example, a liquid crystal display, but may be a plasma display, an organic EL (ElectroLuminescence) display, or the like.
- the display unit 140 is controlled by the display control unit 108 and displays an image based on an externally input video signal, a broadcast signal received by the tuner 160, or the like.
- the transmission / reception unit 150 is connected to the network 40 and is configured to communicate with an external device (for example, the voice recognition unit 50) connected to the network 40 through the network 40.
- the tuner 160 is configured to receive a terrestrial broadcast or satellite broadcast television broadcast signal via an antenna (not shown).
- the tuner 160 may be configured to receive a television broadcast signal transmitted via a dedicated cable.
- the storage unit 171 is, for example, a nonvolatile semiconductor memory, but may be a volatile semiconductor memory, a hard disk, or the like.
- the storage unit 171 stores information (data), a program, and the like used for controlling each unit of the television 10.
- the mobile terminal 30 is, for example, a smartphone, and can operate software for remotely operating the television 10. Therefore, in the speech recognition processing system 11 in the present embodiment, the mobile terminal 30 on which the software is operating can be used for remote operation of the television 10.
- the portable terminal 30 has a microphone 31 and an input unit 32.
- the microphone 31 is a microphone built in the mobile terminal 30, and is configured to collect the sound emitted by the user 700 and output the sound signal, similarly to the microphone 21 provided in the remote controller 20.
- the input unit 32 is configured to receive an input operation manually performed by the user 700 and output an input signal corresponding to the input operation.
- the input unit 32 is, for example, a touch panel, but may be a keyboard, a button, or the like.
- The mobile terminal 30 on which the software is running wirelessly transmits an audio signal based on the sound collected by the microphone 31, or an input signal generated when the user 700 performs an input operation on the input unit 32, to the television 10 by, for example, infrared rays or radio waves.
- the television 10 and the remote controller 20 or the portable terminal 30 are connected by wireless communication such as a wireless LAN (Local Area Network) or Bluetooth (registered trademark).
- the network 40 is, for example, the Internet, but may be another network.
- the voice recognition unit 50 is a server (a server on the cloud) connected to the television 10 via the network 40.
- the voice recognition unit 50 receives voice information transmitted from the television 10 and converts the received voice information into a character string.
- the character string may be a plurality of characters or a single character. Then, the voice recognition unit 50 transmits character string information indicating the converted character string to the television 10 via the network 40 as a result of the voice recognition.
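- The round trip to the voice recognition unit 50 could be sketched as below. This is only a rough sketch: the endpoint URL, transport, and JSON response format are assumptions, not details given in the patent:

```python
import json
import urllib.request

def remote_dictation(voice_info: bytes, url: str) -> list:
    """Send voice information to the cloud voice recognition unit and
    receive the dictated character strings (hypothetical endpoint and
    response format)."""
    req = urllib.request.Request(
        url, data=voice_info,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        # Assumed response shape: {"strings": ["ABC", "no", ...]}
        return json.loads(resp.read())["strings"]
```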
- the voice recognition processing device 100 includes a voice acquisition unit 101, a voice processing unit 102, a recognition result acquisition unit 103, an intention interpretation processing unit 104, a word storage processing unit 105, a command processing unit 106, and a search processing unit 107.
- the storage unit 170 is, for example, a nonvolatile semiconductor memory, but may be a volatile semiconductor memory, a hard disk, or the like.
- the storage unit 170 is controlled by the word storage processing unit 105 and can arbitrarily write and read data.
- the storage unit 170 also stores information (for example, “voice-command” correspondence information described later) referred to by the voice processing unit 102 and the like.
- the “voice-command” correspondence information is information in which voice information is associated with a command. Note that the storage unit 170 and the storage unit 171 may be configured integrally.
- the voice acquisition unit 101 acquires a voice signal based on a voice uttered by the user 700.
- The voice acquisition unit 101 may acquire the voice signal of the user 700 from the built-in microphone 130 of the television 10, or from the microphone 21 built into the remote controller 20 or the microphone 31 built into the portable terminal 30 via the wireless communication unit 180.
- the voice acquisition unit 101 converts the voice signal into voice information that can be used for various processes in the subsequent stage, and outputs the voice information to the voice processing unit 102. Note that if the audio signal is a digital signal, the audio acquisition unit 101 may use the audio signal as it is as audio information.
- the voice processing unit 102 is an example of a “first voice recognition unit”.
- the voice processing unit 102 is configured to convert voice information into command information that is an example of “first information”.
- the voice processing unit 102 performs “command recognition processing”.
- the “command recognition process” is a process for determining whether or not a preset command is included in the voice information acquired from the voice acquisition unit 101 and specifying the command if included.
- the voice processing unit 102 refers to the “voice-command” correspondence information stored in advance in the storage unit 170 based on the voice information acquired from the voice acquisition unit 101.
- the “voice-command” correspondence information is a correspondence table in which voice information and a command that is instruction information for the television 10 are associated with each other.
- If the voice processing unit 102 can identify a command included in the voice information acquired from the voice acquisition unit 101 by referring to the "voice-command" correspondence information, it outputs information representing that command (command information) to the recognition result acquisition unit 103 as the result of the voice recognition.
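- As a rough sketch of this command recognition processing, assuming a simple in-memory table (the actual format of the "voice-command" correspondence information is not specified in the patent, and the command names here are invented):

```python
# Hypothetical "voice-command" correspondence information: recognized
# utterance fragments mapped to commands for the television 10.
VOICE_COMMAND_TABLE = {
    "search": "CMD_SEARCH",
    "channel up": "CMD_CHANNEL_UP",
    "volume up": "CMD_VOLUME_UP",
    "play": "CMD_PLAY",
    "stop": "CMD_STOP",
}

def recognize_command(voice_text: str):
    """Return command information if the utterance contains a word
    registered in the correspondence table, else None."""
    for phrase, command in VOICE_COMMAND_TABLE.items():
        if phrase in voice_text:
            return command
    return None
```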
- the voice processing unit 102 transmits the voice information acquired from the voice acquisition unit 101 from the transmission / reception unit 150 to the voice recognition unit 50 via the network 40.
- the voice recognition unit 50 is an example of a “second voice recognition unit”.
- the voice recognition unit 50 is configured to convert voice information into character string information that is an example of “second information”, and performs “keyword recognition processing”.
- the voice recognition unit 50 receives the voice information transmitted from the television 10
- The voice recognition unit 50 divides the voice information into phrases in order to distinguish keywords from non-keywords (for example, particles), and converts each phrase into a character string (hereinafter, "dictation").
- the voice recognition unit 50 transmits the dictated character string information (character string information) to the television 10 as a result of the voice recognition.
- Note that the voice recognition unit 50 may extract only the portion of the received voice information other than the command, convert that portion into a character string, and send it back. Alternatively, voice information excluding the command may be transmitted from the television 10 to the voice recognition unit 50.
- the recognition result acquisition unit 103 acquires command information as a result of voice recognition from the voice processing unit 102. Further, the recognition result acquisition unit 103 acquires character string information as a result of voice recognition from the voice recognition unit 50 via the network 40 and the transmission / reception unit 150.
- the intention interpretation processing unit 104 is an example of a “selection unit”.
- the intention interpretation processing unit 104 is configured to select, from the character string information, reserved word information that is an example of “third information” and free word information that is an example of “fourth information”.
- When the intention interpretation processing unit 104 acquires the command information and the character string information from the recognition result acquisition unit 103, it selects a "free word" and a "reserved word" from the character string information. Then, based on the selected free word, the reserved word, and the command information, it performs intention interpretation to identify the intent of the voice operation uttered by the user 700. Details of this operation will be described later.
- The intention interpretation processing unit 104 outputs the command information resulting from the intention interpretation to the command processing unit 106, and outputs free word information representing the free word, reserved word information representing the reserved word, and the command information to the word storage processing unit 105.
- the intention interpretation processing unit 104 may output free word information and reserved word information to the command processing unit 106.
- the word storage processing unit 105 stores the command information, free word information, and reserved word information output from the intention interpretation processing unit 104 in the storage unit 170.
- the command processing unit 106 is an example of a “processing unit”.
- the command processing unit 106 is configured to execute processing based on command information, reserved word information, and free word information.
- The command processing unit 106 performs command processing corresponding to the command information interpreted by the intention interpretation processing unit 104. It also performs command processing corresponding to user operations received by the operation receiving unit 110.
- The command processing unit 106 may also perform new command processing based on one or two of the command information, free word information, and reserved word information stored in the storage unit 170 by the word storage processing unit 105. That is, if one or two of the command information, reserved word information, and free word information are missing, the command processing unit 106 is configured to supplement the missing information using the information stored in the storage unit 170 and execute the command processing. Details will be described later.
- the search processing unit 107 is an example of a “processing unit”. If the command information is a search command, the search processing unit 107 is configured to execute a search process based on reserved word information and free word information. If the command information corresponds to a search command associated with a preset application, the search processing unit 107 performs a search based on free word information and reserved word information with the application.
- For example, if the command information corresponds to a search with an Internet search application, the search processing unit 107 performs a search using the Internet search application, based on the free word information and reserved word information.
- Likewise, if the command information corresponds to a search with a program guide application, the search processing unit 107 performs a search using the program guide application, based on the free word information and reserved word information.
- Otherwise, the search processing unit 107 performs a search, based on the free word information and reserved word information, across all applications capable of performing such a search (searchable applications).
- Similarly, if information is missing, the search processing unit 107 is configured to complement the missing information using the information stored in the storage unit 170 and execute the search processing. If the missing information is the command information and the immediately preceding command processing was a search in the search processing unit 107, the search processing is executed again.
- the display control unit 108 displays the search result in the search processing unit 107 on the display unit 140.
- the display control unit 108 displays the keyword search result in the Internet search application, the keyword search result in the program guide application, or the keyword search result in the searchable application on the display unit 140.
- The operation receiving unit 110 receives, from the remote controller 20 or from the mobile terminal 30 via the wireless communication unit 180, an input signal generated by an input operation performed by the user 700 on the input unit 22 of the remote controller 20 or on the input unit 32 of the portable terminal 30. In this way, the operation receiving unit 110 accepts operations (user operations) performed by the user 700.
- the first starting method is as follows.
- the user 700 presses a microphone button (not shown) that is one of the input units 22 provided in the remote controller 20 in order to start the voice recognition process.
- the operation reception unit 110 receives that the microphone button of the remote controller 20 has been pressed.
- the television 10 changes the volume of a speaker (not shown) of the television 10 to a preset volume.
- This volume is a sufficiently small volume that does not hinder voice recognition by the microphone 21.
- the voice recognition processing device 100 starts the voice recognition process.
- Note that if the television 10 does not need to perform the above volume adjustment, the volume remains as it is.
- When using the mobile terminal 30 (for example, a smartphone having a touch panel), the user 700 activates the software provided in the mobile terminal 30 (software for operating the television 10 by voice) and presses a microphone button displayed on the touch panel while the software is running. This user operation corresponds to pressing the microphone button of the remote controller 20.
- the speech recognition processing apparatus 100 starts the speech recognition process.
- the second starting method is as follows.
- the user 700 utters a voice (for example, “speech operation start”, etc.) representing a command (start command) for starting a preset voice recognition process to the built-in microphone 130 of the television 10.
- When the voice recognition processing device 100 recognizes that the voice collected by the built-in microphone 130 is the preset start command, the television 10 changes the speaker volume to the preset volume as described above, and the voice recognition processing by the voice recognition processing device 100 is started.
- Note that these controls may be performed by a control unit (not shown) that controls each block of the television 10.
- When the voice recognition processing is started, the display control unit 108 displays, on the image display surface of the display unit 140, the voice recognition icon 201 and the indicator 202 indicating the volume of the collected sound, in order to indicate that voice operation is possible and to prompt the user 700 to speak.
- the display control unit 108 may display a message indicating that the voice recognition processing has started on the display unit 140 instead of the voice recognition icon 201.
- a message indicating that the voice recognition process has been started may be output by voice from a speaker.
- Note that the voice recognition icon 201 and the indicator 202 are not limited to the designs shown in the figure; any design may be used as long as the desired effect is obtained.
- the speech recognition processing apparatus 100 performs two types of speech recognition processing.
- One is speech recognition processing (command recognition processing) for recognizing speech corresponding to a preset command.
- the other is voice recognition processing (keyword recognition processing) for recognizing keywords other than preset commands.
- the command recognition process is performed by the voice processing unit 102 as described above.
- the voice processing unit 102 compares the voice information based on the voice uttered by the user 700 to the television 10 with the “voice-command” correspondence information stored in the storage unit 170 in advance. If the voice information includes a command registered in the “voice-command” correspondence information, the command is specified.
- In the "voice-command" correspondence information, various commands for operating the television 10 are registered, including, for example, commands for free word search operations.
- the keyword recognition process is performed using the voice recognition unit 50 connected to the television 10 via the network 40 as described above.
- the voice recognition unit 50 acquires voice information from the television 10 via the network 40. Then, the voice recognition unit 50 divides the acquired voice information into phrases and divides the information into keywords and other than keywords (for example, particles, etc.). Thus, the voice recognition unit 50 performs dictation.
- the voice recognition unit 50 uses a database in which voice information and a character string (including one character) are associated when performing dictation.
- the voice recognition unit 50 separates the acquired voice information into a keyword and other than the keyword by comparing with the database, and converts each into a character string.
- In the present embodiment, the voice recognition unit 50 receives from the television 10 all of the voice (voice information) acquired by the voice acquisition unit 101, performs dictation on all of the voice information, and transmits the results to the television 10.
- the voice processing unit 102 of the television 10 may be configured to transmit voice information other than the command recognized by the “voice-command” correspondence information to the voice recognition unit 50.
- FIG. 3 is a diagram showing an outline of dictation performed by the speech recognition processing system 11 according to the first embodiment.
- FIG. 3 shows a state in which the web browser is displayed on the display unit 140 of the television 10.
- Here, the user 700 performs a search by keyword (keyword search) using the Internet search application of the web browser.
- When the voice recognition processing is started in the voice recognition processing device 100, an image such as the example shown in FIG. 3 is displayed on the display unit 140.
- the input field 203 is an area for inputting a keyword used for a search on a web browser. If the cursor is displayed in the input field 203, the user 700 can input a keyword in the input field 203.
- When the user 700 speaks, a voice signal based on the voice is input to the voice acquisition unit 101 and converted into voice information. The voice information is then transmitted from the television 10 to the voice recognition unit 50 via the network 40. For example, if the user 700 says "ABC", voice information based on that voice is transmitted from the television 10 to the voice recognition unit 50.
- The voice recognition unit 50 converts the voice information received from the television 10 into a character string by comparing it with a database, and transmits the resulting character string information to the television 10 via the network 40 as the result of the voice recognition. If the received voice information is based on the voice "ABC", the voice recognition unit 50 converts it to the character string "ABC" by comparison with the database and transmits that character string information to the television 10.
- Upon receiving the character string information from the voice recognition unit 50, the television 10 operates the recognition result acquisition unit 103, the intention interpretation processing unit 104, the command processing unit 106, and the display control unit 108 based on the character string information, and displays a character string corresponding to the character string information in the input field 203. For example, upon receiving character string information corresponding to the character string "ABC" from the voice recognition unit 50, the television 10 displays the character string "ABC" in the input field 203.
- the web browser displayed on the display unit 140 of the television 10 performs a keyword search using the character string displayed in the input field 203.
- FIG. 4 is a flowchart illustrating an operation example of the keyword single search process performed by the speech recognition processing apparatus 100 according to the first embodiment.
- FIG. 5 is a flowchart illustrating an operation example of the keyword association search process performed by the speech recognition processing apparatus 100 according to the first embodiment.
- FIG. 6 is a flowchart showing an operation example of the speech recognition interpretation process performed by the speech recognition processing apparatus 100 according to the first embodiment.
- The flowchart shown in FIG. 6 shows the details of the speech recognition interpretation step in each of the search processes shown in FIGS. 4 and 5.
- FIG. 7 is a diagram schematically illustrating an example of a reserved word table of the speech recognition processing apparatus 100 according to the first embodiment.
- The speech recognition interpretation process (step S101) of the keyword single search process shown in FIG. 4 and the speech recognition interpretation process (step S201) of the keyword associative search process shown in FIG. 5 are substantially the same. First, the speech recognition interpretation process will be described with reference to FIG. 6.
- the voice recognition processing of the voice recognition processing device 100 is started.
- The voice of the user 700 is converted into a voice signal by the built-in microphone 130, the microphone 21 of the remote controller 20, or the microphone 31 of the portable terminal 30, and the voice signal is input to the voice acquisition unit 101.
- the voice acquisition unit 101 acquires the voice signal of the user 700 (step S301).
- the voice acquisition unit 101 converts the acquired voice signal of the user 700 into voice information that can be used for various processes in the subsequent stage. If the user 700 speaks, for example, “Search ABC image”, the voice acquisition unit 101 outputs voice information based on the voice.
- the voice processing unit 102 compares the voice information output from the voice acquisition unit 101 with the “voice-command” correspondence information stored in the storage unit 170 in advance. Then, it is checked whether or not the voice information output from the voice acquisition unit 101 corresponds to the command registered in the “voice-command” correspondence information (step S302).
- If the voice information output from the voice acquisition unit 101 includes voice information based on the word "search" uttered by the user 700, and "search" is registered as command information in the "voice-command" correspondence information, the voice processing unit 102 determines that the "search" command is included in the voice information.
- The command information includes, for example, commands corresponding to voice information such as "search", "channel up", "volume up", "play", "stop", "word conversion", and "character display".
- the “voice-command” correspondence information can be updated by adding or deleting command information.
- the user 700 can add new command information to the “voice-command” correspondence information.
- new command information can be added to the “voice-command” correspondence information via the network 40.
- the speech recognition processing apparatus 100 can perform speech recognition processing based on the latest “voice-command” correspondence information.
- In addition, the voice processing unit 102 transmits the voice information output from the voice acquisition unit 101 from the transmission / reception unit 150 to the voice recognition unit 50 via the network 40.
- The voice recognition unit 50 converts the received voice information into character strings delimited into keywords and non-keywords (for example, particles). That is, the voice recognition unit 50 performs dictation based on the received voice information.
- The voice recognition unit 50 compares the received voice information with a database in which keywords are associated with character strings. If a keyword registered in the database is included in the received voice information, the character string (including words) corresponding to that keyword is selected. In this way, the voice recognition unit 50 performs dictation and converts the received voice information into character strings. For example, if the voice recognition unit 50 receives voice information based on the voice "Search ABC image" uttered by the user 700, it converts the voice information by dictation into the character strings "ABC", "no", "image", "wo", and "search". The voice recognition unit 50 transmits character string information representing each converted character string to the television 10 via the network 40.
- This database is provided in the voice recognition unit 50, but may be in another location on the network 40.
- the database may be configured such that keyword information is updated regularly or irregularly.
- The recognition result acquisition unit 103 of the television 10 acquires the command information output as the result of speech recognition from the speech processing unit 102 and the character string information transmitted as the result of speech recognition from the speech recognition unit 50, and outputs them to the intention interpretation processing unit 104.
- the intention interpretation processing unit 104 performs intention interpretation for specifying the intention of the voice operation uttered by the user 700 based on the command information and the character string information acquired from the recognition result acquisition unit 103 (step S303).
- the intention interpretation processing unit 104 performs selection of character string information for intention interpretation.
- The selection types are free words, reserved words, and commands. If any character string information overlaps with the command information, the intention interpretation processing unit 104 determines that it is a command and selects it as such. Reserved words are selected from the character string information based on the reserved word table shown as an example in FIG. 7. Free words are then selected by removing, from the remaining character string information, character strings such as particles that do not correspond to keywords.
- For example, if the intention interpretation processing unit 104 acquires character string information such as "ABC", "no", "image", "wo", and "search", together with command information indicating "search", it selects "ABC" as a free word, "image" as a reserved word, and "search" as a command.
- the speech recognition processing apparatus 100 can perform an operation based on the intention of the user 700 (the intention of the voice operation spoken by the user 700). For example, the speech recognition processing apparatus 100 can execute the command “search” using the free word “ABC” for the reserved word “image”.
- The intention interpretation processing unit 104 compares the character string information with the reserved word table shown as an example in FIG. 7; if the character string information includes a term registered in the reserved word table, that term is selected from the character string information as a reserved word.
- A reserved word is a predetermined term such as "image", "moving image", "program", or "Web", as shown in the example in FIG. 7.
- the reserved words are not limited to these terms.
- the intention interpretation processing unit 104 may perform intention interpretation using a character string such as a particle included in the character string information.
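- A sketch of this selection step, reusing the reserved-word examples of FIG. 7 ("video" here stands in for "moving image"; the particle list and function name are assumptions, not from the patent):

```python
RESERVED_WORDS = {"image", "video", "program", "web"}   # from FIG. 7 (example)
PARTICLES = {"no", "wo", "ga", "ni"}                    # assumed non-keywords

def interpret(tokens, command_table):
    """Sort dictated character strings into a command, a reserved
    word, and a free word (intention interpretation, step S303)."""
    command = reserved = free = None
    for t in tokens:
        if t in command_table:          # overlaps command information
            command = command_table[t]
        elif t in RESERVED_WORDS:       # found in the reserved word table
            reserved = t
        elif t not in PARTICLES:        # remaining keyword -> free word
            free = t
    return command, reserved, free

# interpret(["ABC", "no", "image", "wo", "search"], {"search": "CMD_SEARCH"})
# -> ("CMD_SEARCH", "image", "ABC")
```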
- the intention interpretation processing unit 104 executes the speech recognition interpretation process (step S101 shown in FIG. 4 and step S201 shown in FIG. 5).
- First, the intention interpretation processing unit 104 executes the speech recognition interpretation process shown in FIG. 6 based on the voice uttered by the user 700 (step S101). Since this duplicates the description above, the details of step S101 are omitted.
- the intention interpretation processing unit 104 determines whether or not the reserved word information is included in the character string information based on the processing result in step S101 (step S102).
- If it is determined in step S102 that no reserved word information is included (No), the process proceeds to step S104.
- If it is determined in step S102 that reserved word information is included (Yes), the word storage processing unit 105 stores the reserved word information in the storage unit 170 (step S103). In the example described above, the reserved word information "image" is stored in the storage unit 170.
- the voice recognition processing device 100 determines whether or not the character string information includes free word information based on the processing result in step S101 (step S104).
- If it is determined in step S104 that free word information is not included (No), the process proceeds to step S106.
- If it is determined in step S104 that free word information is included (Yes), the word storage processing unit 105 stores the free word information in the storage unit 170 (step S105). In the example described above, the free word information "ABC" is stored in the storage unit 170.
- the word storage processing unit 105 stores command information in the storage unit 170.
- the command processing unit 106 executes command processing based on the free word information, reserved word information, and command information (step S106).
- When the command processing unit 106 receives command information from the intention interpretation processing unit 104 and free word information and / or reserved word information from the word storage processing unit 105, it executes the instruction (command) based on the command information on the free word information, the reserved word information, or both. Note that the command processing unit 106 may instead receive the free word information and reserved word information from the intention interpretation processing unit 104, and the command information from the word storage processing unit 105.
- the command processing unit 106 mainly performs command processing other than search.
- This command processing includes, for example, channel change and volume change of the television 10.
- the search processing unit 107 executes the search process (step S107).
- the search processing unit 107 sets the search target content to “image” based on the “image” of the reserved word information, and performs an image search using “ABC” of the free word information.
- The search result in step S107 is displayed on the display unit 140 by the display control unit 108.
- the keyword single search process ends.
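- Putting the flow of FIG. 4 together with the earlier sketches (WordStore, interpret), the keyword single search could be modeled roughly as follows; run_search is a hypothetical stand-in for the search processing unit 107:

```python
def run_search(target: str, keyword: str) -> None:
    """Hypothetical stand-in for the search processing unit 107."""
    print(f"searching {target!r} for {keyword!r}")

def keyword_single_search(store: WordStore, tokens, command_table) -> None:
    # S101: speech recognition interpretation (see interpret() above).
    command, reserved, free = interpret(tokens, command_table)
    # S102-S105: store whichever pieces were actually uttered; pieces
    # that were not uttered keep their stored values, which is what
    # makes the complementing in later searches possible.
    if reserved is not None:
        store.reserved_word = reserved
    if free is not None:
        store.free_word = free
    if command is not None:
        store.command = command
    # S106/S107: a search command is dispatched to the search
    # processing unit with the stored keyword pair.
    if store.command == "CMD_SEARCH":
        run_search(target=store.reserved_word, keyword=store.free_word)
```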
- The keyword associative search process is a process in which, when the user 700 executes search processing successively, a new search is executed based on the previously input content and the newly input content, without the content input in the previous search having to be input again.
- Here, an example in which the input operation is performed by voice uttered by the user 700 will be described, although other input means (for example, the input unit 22 or the input unit 32) may be used.
- The keyword associative search process will be described below with specific examples.
- Assume that the user 700 first said "Search ABC image" and a search of "image" with the free word "ABC" has already been performed.
- Suppose that the user 700 now wants to search "moving image" using the same free word "ABC" that was used for the previous image search.
- the user 700 can omit the utterance of the free word “ABC” that overlaps with the previous search. That is, the user 700 may say “Search for a moving image”.
- First, the intention interpretation processing unit 104 executes the speech recognition interpretation process shown in FIG. 6 based on the voice uttered by the user 700 (step S201). Since this duplicates the description above, the details of step S201 are omitted.
- Voice information based on the voice uttered by the user is transmitted from the voice recognition processing device 100 to the voice recognition unit 50 via the network 40.
- the voice recognition unit 50 returns character string information based on the received voice information.
- This character string information includes reserved word information (for example, “moving image”) and command information (for example, “search”), but does not include free word information.
- the returned character string information is received by the recognition result acquisition unit 103 and output to the intention interpretation processing unit 104.
- the voice processing unit 102 of the voice recognition processing apparatus 100 determines that the command “search” is included in the voice information based on the voice uttered by the user 700. Then, the voice processing unit 102 outputs command information corresponding to the command “search” to the recognition result acquisition unit 103. Further, the recognition result acquisition unit 103 receives character string information including the character string “moving image” from the voice recognition unit 50. Then, the intention interpretation processing unit 104 determines that “moving image” included in the character string information acquired from the recognition result acquisition unit 103 is a reserved word. Further, since the free word information is not included in the character string information, the free word information is not output from the intention interpretation processing unit 104.
- the intention interpretation processing unit 104 determines whether or not the reserved word information is included in the character string information based on the processing result in step S201 (step S202).
- If it is determined in step S202 that the reserved word information is not included (No), the process proceeds to step S205. The operations from step S205 onward will be described later.
- When it is determined in step S202 that reserved word information is included (Yes), the word storage processing unit 105 stores the reserved word information (for example, "moving image") in the storage unit 170 as the new search target content (step S203).
- the new reserved word information is stored in the storage unit 170, so that the reserved word information is updated.
- the previous reserved word information “image” is switched to the new reserved word information “moving image” (step S204).
- In this case, free word information is not output from the intention interpretation processing unit 104, so the word storage processing unit 105 reads the free word information (for example, "ABC") stored in the storage unit 170 and outputs it to the command processing unit 106.
- the command processing unit 106 receives command information from the intention interpretation processing unit 104, and receives the read free word information and new reserved word information from the word storage processing unit 105. Then, command processing corresponding to the command information is performed on the read free word information and new reserved word information (step S208). As described above, the command processing unit 106 mainly performs command processing other than search.
- search processing unit 107 executes search processing (step S209).
- In this example, the search processing unit 107 sets the search target content to "moving image" based on the new reserved word information, and performs a moving image search using the free word information "ABC" read from the storage unit 170.
- The search result in step S209 is displayed on the display unit 140 by the display control unit 108.
- the keyword associative search process ends.
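- Continuing the sketch above, this "Search ABC image" then "Search video" sequence would run as:

```python
store = WordStore()
table = {"search": "CMD_SEARCH"}
# "Search ABC image" -> target 'image', keyword 'ABC'.
keyword_single_search(store, ["ABC", "no", "image", "wo", "search"], table)
# "Search video" -> the free word is omitted; 'ABC' is read back from
# the storage unit, so the result is target 'video', keyword 'ABC'.
keyword_single_search(store, ["video", "wo", "search"], table)
```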
- Next, the keyword associative search process when it is determined in step S202 that reserved word information is not included (No) will be described.
- Assume that the user 700 now searches "image" using a free word "XYZ" that differs from the free word used in the previous image search.
- In this case, the user 700 can omit the utterance of the reserved word "image" and the command "search", which overlap with the previous search. That is, the user 700 may simply say "XYZ".
- Since they duplicate the description above, the details of steps S201 and S202 are omitted.
- Voice information (for example, “XYZ”) based on the voice uttered by the user is transmitted from the voice recognition processing device 100 to the voice recognition unit 50 via the network 40.
- the voice recognition unit 50 returns character string information based on the received voice information.
- This character string information includes free word information (for example, “XYZ”), but does not include reserved word information and command information.
- the returned character string information is received by the recognition result acquisition unit 103 and output to the intention interpretation processing unit 104.
- the reserved word information is not included in the character string information, and the command information is not output from the voice processing unit 102. Accordingly, reserved word information and command information are not output from the intention interpretation processing unit 104.
- Accordingly, it is determined in step S202 that reserved word information is not included (No).
- the intention interpretation processing unit 104 determines whether or not the character string information includes free word information based on the processing result in step S201 (step S205).
- If it is determined in step S205 that free word information is not included (No), the process proceeds to step S208.
- If it is determined in step S205 that free word information is included (Yes), the word storage processing unit 105 stores the free word information (for example, "XYZ") in the storage unit 170 as new free word information (step S206).
- the new free word information is stored in the storage unit 170, whereby the free word information is updated.
- the previous free word information “ABC” is switched to the new free word information “XYZ” (step S207).
- In this case, reserved word information and command information are not output from the intention interpretation processing unit 104, so the word storage processing unit 105 reads the reserved word information (for example, "image") and the command information (for example, "search") stored in the storage unit 170 and outputs them to the command processing unit 106.
- the command processing unit 106 receives the reserved word information and command information read from the storage unit 170 by the word storage processing unit 105 and new free word information (for example, “XYZ”). Then, command processing corresponding to the read command information is performed on the read reserved word information and new free word information (step S208).
- the search processing unit 107 executes the search process (step S209).
- In this example, the search processing unit 107 sets the search target content to "image" based on the reserved word information "image" read from the storage unit 170, and performs an image search using the new free word information "XYZ".
- The search result in step S209 is displayed on the display unit 140 by the display control unit 108.
- the keyword associative search process ends.
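- Likewise, the branch just described, where only the free word changes, would run as follows (continuing the same sketch):

```python
store = WordStore()
table = {"search": "CMD_SEARCH"}
# "Search ABC image" -> target 'image', keyword 'ABC'.
keyword_single_search(store, ["ABC", "no", "image", "wo", "search"], table)
# "XYZ" alone -> reserved word and command are omitted; both are read
# back from the storage unit: target 'image', keyword 'XYZ' (S205-S209).
keyword_single_search(store, ["XYZ"], table)
```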
- When it is determined in step S205 that free word information is not included (No), the process proceeds to step S208, and normal command processing or search processing is performed.
- As described above, the speech recognition processing apparatus 100 includes the voice acquisition unit 101, the voice processing unit 102 (an example of the first voice recognition unit), the voice recognition unit 50 (an example of the second voice recognition unit), the intention interpretation processing unit 104 (an example of the selection unit), the storage unit 170, and the command processing unit 106 and search processing unit 107 (examples of the processing unit).
- the voice acquisition unit 101 is configured to acquire voice uttered by the user and output voice information.
- the voice processing unit 102 is configured to convert voice information into command information which is an example of first information.
- the voice recognition unit 50 is configured to convert voice information into character string information, which is an example of second information.
- The intention interpretation processing unit 104 is configured to select reserved word information, which is an example of the third information, and free word information, which is an example of the fourth information, from the character string information.
- the storage unit 170 is configured to store command information, reserved word information, and free word information.
- The command processing unit 106 is configured to execute processing based on the command information, reserved word information, and free word information. If one or two of the command information, reserved word information, and free word information are missing, the command processing unit 106 and the search processing unit 107 are configured to complement the missing information using the information stored in the storage unit 170 and execute the processing.
- the search processing unit 107 is configured to execute a search process based on the search command, reserved word information, and free word information.
- the voice recognition unit 50 may be installed on the network 40, and the voice recognition processing apparatus 100 may include a transmission / reception unit 150 configured to communicate with the voice recognition unit 50 via the network 40.
- the voice processing unit 102 may be configured to convert voice information into command information using “voice-command” correspondence information, in which a plurality of pieces of preset command information are associated with corresponding voice information.
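A “voice-command” correspondence lookup could be as simple as the dictionary below; the table entries mirror the examples given later in this description, while the exact matching rule is an assumption.

```python
from typing import Optional

# Preset "voice-command" correspondence information (illustrative entries).
VOICE_COMMAND_TABLE = {
    "channel up": "CHANNEL_UP",
    "voice up": "VOLUME_UP",
    "play": "PLAY",
    "stop": "STOP",
    "change language": "CHANGE_LANGUAGE",
    "character display": "CHARACTER_DISPLAY",
    "search": "SEARCH",
}

def to_command_info(utterance_text: str) -> Optional[str]:
    """Return preset command information when the utterance matches the table."""
    return VOICE_COMMAND_TABLE.get(utterance_text.strip().lower())
```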
- when the user 700 using the speech recognition processing device 100 configured as described above performs voice operations in succession, a new operation can be performed based on the previous utterance content and the newly uttered content, without re-uttering what was spoken in the previous voice operation. For example, when the user 700 performs search processing in succession, a new search based on the previous utterance content and the newly uttered content can be performed without uttering again the content that was input by voice in the previous search.
- for example, suppose the user 700 utters “Search ABC image”, an “image” search with the free word “ABC” is performed, and the user subsequently wants to search for “ABC video”.
- in this case, the utterance of the free word “ABC”, which overlaps with the previous search, may be omitted, and only the reserved word “video” and the command “search” need be spoken. This makes it possible to execute the same search process as when “Search ABC video” is uttered.
- likewise, suppose the user 700 utters “Search ABC image”, an “image” search with the free word “ABC” is performed, and the user subsequently wants to search for “XYZ image”, which overlaps with the previous search.
- in this case, the reserved word “image” and the command “search” may be omitted and only “XYZ” may be spoken. This makes it possible to execute the same search process as when “Search XYZ image” is uttered.
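With the hypothetical sketch from the flow section above, these two consecutive searches would play out as follows:

```python
store = WordStore()

# First utterance: "Search ABC image" - all three pieces are present.
print(process(Utterance("search", "image", "ABC"), store))
# -> search: 'ABC' (target: image)

# Second utterance: only "XYZ" - the command "search" and the reserved word
# "image" are complemented from the storage unit, so this behaves exactly
# like uttering "Search XYZ image".
print(process(Utterance(free_word="XYZ"), store))
# -> search: 'XYZ' (target: image)
```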
- in this way, the speech recognition processing device 100 can reduce the complexity of voice operation for the user 700 and improve operability.
- the first embodiment has been described as an example of the technique disclosed in the present application.
- the technology in the present disclosure is not limited to this, and can also be applied to embodiments in which changes, replacements, additions, omissions, and the like are performed.
- in the “voice-command” correspondence information, command information corresponding to voice information such as “channel up”, “voice up”, “play”, “stop”, “change language”, and “character display” may be registered, for example.
- for example, when the user 700 utters an instruction such as “play the optical disc”, the speech recognition processing apparatus 100 recognizes the free word “optical disc” and the command information “playback”. As a result, the video recorded on the optical disc is played back on the optical disc playback device on which the speech recognition processing device 100 is mounted.
- following this, when the user 700 utters “stop”, the command information “stop” is recognized by the speech recognition processing device 100, and the optical disc playback device stops the playback of the optical disc.
- this is because the free word “optical disc” has been stored in the storage unit 170 by the word storage processing unit 105, so the command processing unit 106 executes the processing of the newly input command information “stop” on the free word “optical disc” read from the storage unit 170.
- that is, the user 700 can control the operation of the optical disc playback device simply by speaking “stop”, without speaking “stop optical disc”.
- as another example, suppose the user 700 utters “Japanese character display”. The speech recognition processing apparatus 100 recognizes the free word information “Japanese” and the command information “character display”. Thereby, on the television 10 on which the speech recognition processing device 100 is mounted, the command “character display” for displaying Japanese subtitles on the display unit 140 is executed. Following this, when the user 700 utters “English”, the free word information “English” is recognized by the voice recognition processing device 100. The television 10 then reads the command information “character display” from the storage unit 170, continues the “character display” operation as it is, and changes the characters displayed on the display unit 140 from “Japanese” to “English”. That is, the user 700 can change the display characters of the television 10 from “Japanese” to “English” simply by speaking “English”, without speaking “English character display”.
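The subtitle example follows the same pattern; again using the hypothetical sketch above:

```python
store = WordStore()

# "Japanese character display": command + free word are both uttered.
print(process(Utterance(command="character display", free_word="Japanese"), store))
# -> character display: 'Japanese'

# Saying just "English" reuses the stored "character display" command, so the
# displayed subtitles switch from Japanese to English without repeating it.
print(process(Utterance(free_word="English"), store))
# -> character display: 'English'
```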
- in this way, the voice recognition processing apparatus 100 reads the information from the storage unit 170, complements the missing information, and executes the command processing. Therefore, the user 700 does not need to repeatedly speak words that overlap with the previous voice operation, which reduces the complexity of voice operation and improves operability.
- in the above example, a reserved word is not included in the utterance of the user 700, but the command processing unit 106 can still execute the command processing.
- this is because the intention interpretation processing unit 104 transmits information indicating that a reserved word or a free word is not included in the utterance to the word storage processing unit 105 and the command processing unit 106 (search processing unit 107).
- based on the information transmitted from the intention interpretation processing unit 104, the command processing unit 106 (search processing unit 107) determines whether the command processing should be performed with a combination of free word information, reserved word information, and command information, with a combination of free word information and command information, or with a combination of reserved word information and command information, and executes the command processing accordingly. This also prevents the word storage processing unit 105 from reading unnecessary information from the storage unit 170. In the above example, reserved word information is not included in the voice information, but since reserved word information is unnecessary there, the word storage processing unit 105 does not read reserved word information from the storage unit 170.
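One way to realize this combination decision is sketched below; the rule that only a “search” command needs a reserved word is an illustrative assumption standing in for whatever per-command requirements the command processing unit actually applies.

```python
def decide_and_complement(utt: Utterance, store: WordStore) -> Utterance:
    # The intention interpretation unit reports absent pieces as None fields.
    command = utt.command or store.command
    # Read from storage only what the command actually needs: a bare "stop",
    # for instance, needs no reserved word, so none is read for it.
    needs_reserved = (command == "search")
    reserved = utt.reserved or (store.reserved if needs_reserved else None)
    free_word = utt.free_word or store.free_word
    return Utterance(command, reserved, free_word)
```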
- when the voice information contains information other than a command, the voice processing unit 102 may operate so as to output that information together with the command information to the subsequent stage.
- the search target is not limited to “image” or “video”; other types of content may also be set as the search target.
- if the voice uttered by the user 700 includes the command information “search” and a keyword, and the type of the “search” is an Internet search, the speech recognition processing apparatus 100 performs a search using the keyword with the Internet search application. For example, if the user 700 says “Search ABC on the Internet”, the speech recognition processing apparatus 100 recognizes the voice “search on the Internet” as a “search” by the Internet search application. Therefore, the user 700 can cause the television 10 to perform an Internet search using the keyword simply by uttering the voice.
- similarly, if the voice uttered by the user 700 includes the command information “search” and a keyword, and the type of the “search” is a search by the program guide application, the speech recognition processing apparatus 100 performs a search using the keyword in the program guide application. For example, if the user 700 utters “Search ABC in the program guide”, the speech recognition processing apparatus 100 recognizes the voice “search in the program guide” as a “search” by the program guide application. For this reason, the user 700 can cause the television 10 to perform a program guide search using the keyword simply by uttering the voice.
- alternatively, the speech recognition processing apparatus 100 may perform the “search” with the free word in all applications capable of handling it, and the search results of all the searched applications may be displayed on the display unit 140.
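The search-type dispatch described in the last few paragraphs might look like this; the application names and the phrase-matching rule are assumptions, and the fallback branch searches every application as described above.

```python
# Hypothetical mapping from a search-type phrase to the application handling it.
SEARCH_APPS = {
    "on the internet": "internet_search_app",
    "in the program guide": "program_guide_app",
}

def dispatch_search(utterance: str) -> list[str]:
    text = utterance.lower().removeprefix("search").strip()
    for phrase, app in SEARCH_APPS.items():
        if text.endswith(phrase):
            keyword = text.removesuffix(phrase).strip()
            return [f"{app}: search for '{keyword}'"]
    # No type given: run the free-word search in every application and
    # let the display unit show all of the results.
    return [f"{app}: search for '{text}'" for app in SEARCH_APPS.values()]

# dispatch_search("Search ABC on the Internet")
# -> ["internet_search_app: search for 'abc'"]
```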
- the voice recognition process can be started by the method described above, so once it is started, the user 700 can perform the above-described searches even while watching a program on the television 10.
- the voice recognition unit 50 may be provided in the voice recognition processing device 100.
- besides free word information, reserved word information alone may be read from the storage unit 170 to complement the command processing, or command information alone may be read from the storage unit 170 to complement the command processing.
- likewise, reserved word information and free word information may be read from the storage unit 170 to complement the command processing, or free word information and command information may be read from the storage unit 170 to complement the command processing.
- Each block shown in FIG. 2 may be configured as an independent circuit block, or may be configured such that software programmed to realize the operation of each block is executed by a processor.
- This disclosure is applicable to a device that executes a processing operation instructed by a user.
- the present disclosure is applicable to portable terminal devices, television receivers, personal computers, set-top boxes, video recorders, game machines, smartphones, tablet terminals, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
(Embodiment 1)
The first embodiment will be described below with reference to FIGS. 1 to 7. In the present embodiment, a television receiver (television) 10 is cited as an example of a display device including a voice recognition processing device, but the display device is not limited to the television 10; it may be, for example, a PC or a tablet terminal.
[1-1. Configuration]
FIG. 1 schematically shows the speech recognition processing system 11 according to the first embodiment. In the present embodiment, the voice recognition processing device is built into the television 10, which is an example of a display device.
[1-2. Operation]
Next, the operation of the speech recognition processing apparatus 100 of the television 10 in the present embodiment will be described.
[1-3. Effect]
As described above, in the present embodiment, the speech recognition processing device 100 includes the voice acquisition unit 101, the voice processing unit 102 (an example of the first voice recognition unit), the voice recognition unit 50 (an example of the second voice recognition unit), the intention interpretation processing unit 104 (an example of the selection unit), the storage unit 170, and the command processing unit 106 and search processing unit 107 (examples of the processing unit). The voice acquisition unit 101 is configured to acquire the voice uttered by the user and output voice information. The voice processing unit 102 is configured to convert the voice information into command information, an example of the first information. The voice recognition unit 50 is configured to convert the voice information into character string information, an example of the second information. The intention interpretation processing unit 104 is configured to sort reserved word information, an example of the third information, and free word information, an example of the fourth information, from the character string information. The storage unit 170 is configured to store the command information, the reserved word information, and the free word information. The command processing unit 106 is configured to execute processing based on the command information, the reserved word information, and the free word information. If one or two of the command information, reserved word information, and free word information are missing, the command processing unit 106 and the search processing unit 107 are configured to complement the missing information with the information stored in the storage unit 170 and execute the processing.
(Other embodiments)
As described above, the first embodiment has been described as an example of the technique disclosed in the present application. However, the technique of the present disclosure is not limited to this, and can also be applied to embodiments in which changes, replacements, additions, omissions, and the like have been made. It is also possible to combine the components described in the first embodiment to form a new embodiment.
DESCRIPTION OF SYMBOLS
11 voice recognition processing system
20 remote controller
21, 31 microphone
22, 32 input unit
30 portable terminal
40 network
50 voice recognition unit
100 voice recognition processing device
101 voice acquisition unit
102 voice processing unit
103 recognition result acquisition unit
104 intention interpretation processing unit
105 word storage processing unit
106 command processing unit
107 search processing unit
108 display control unit
110 operation reception unit
130 built-in microphone
140 display unit
150 transmission / reception unit
160 tuner
170, 171 storage unit
180 wireless communication unit
201 voice recognition icon
202 indicator
700 user
Claims (6)
- 1. A voice recognition processing device comprising:
a voice acquisition unit configured to acquire voice uttered by a user and output voice information;
a first voice recognition unit configured to convert the voice information into first information;
a second voice recognition unit configured to convert the voice information into second information;
a sorting unit configured to sort third information and fourth information from the second information;
a storage unit configured to store the first information, the third information, and the fourth information; and
a processing unit configured to execute processing based on the first information, the third information, and the fourth information,
wherein, if one or two pieces of the first information, the third information, and the fourth information are missing, the processing unit is configured to complement the missing information using information stored in the storage unit and execute the processing.
- 2. The voice recognition processing device according to claim 1, wherein, when the first information is a search command, the processing unit is configured to execute a search process based on the search command.
- 3. The voice recognition processing device according to claim 1, wherein the second voice recognition unit is installed on a network, and the voice recognition processing device further comprises a transmission / reception unit configured to communicate with the second voice recognition unit via the network.
- 4. The voice recognition processing device according to claim 1, wherein the first voice recognition unit is configured to convert the voice information into the first information using correspondence information in which a plurality of pieces of preset first information are associated with the voice information.
- 5. A voice recognition processing method comprising:
acquiring voice uttered by a user and converting it into voice information;
converting the voice information into first information;
converting the voice information into second information;
sorting third information and fourth information from the second information;
storing the first information, the third information, and the fourth information in a storage unit;
executing processing based on the first information, the third information, and the fourth information; and
if one or two pieces of the first information, the third information, and the fourth information are missing, complementing the missing information using the information stored in the storage unit.
- 6. A display device comprising:
a voice acquisition unit configured to acquire voice uttered by a user and output voice information;
a first voice recognition unit configured to convert the voice information into first information;
a second voice recognition unit configured to convert the voice information into second information;
a sorting unit configured to sort third information and fourth information from the second information;
a storage unit configured to store the first information, the third information, and the fourth information;
a processing unit configured to execute processing based on the first information, the third information, and the fourth information; and
a display unit configured to display a processing result of the processing unit,
wherein, if one or two pieces of the first information, the third information, and the fourth information are missing, the processing unit is configured to complement the missing information using information stored in the storage unit and execute the processing.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14874773.6A EP3089157B1 (en) | 2013-12-26 | 2014-12-22 | Voice recognition processing device, voice recognition processing method, and display device |
JP2015554558A JP6244560B2 (en) | 2013-12-26 | 2014-12-22 | Speech recognition processing device, speech recognition processing method, and display device |
US15/023,385 US9905225B2 (en) | 2013-12-26 | 2014-12-22 | Voice recognition processing device, voice recognition processing method, and display device |
CN201480057905.0A CN105659318B (en) | 2013-12-26 | 2014-12-22 | Voice recognition processing unit, voice recognition processing method and display device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-268669 | 2013-12-26 | ||
JP2013268669 | 2013-12-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015098079A1 true WO2015098079A1 (en) | 2015-07-02 |
Family
ID=53477977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/006367 WO2015098079A1 (en) | 2013-12-26 | 2014-12-22 | Voice recognition processing device, voice recognition processing method, and display device |
Country Status (5)
Country | Link |
---|---|
US (1) | US9905225B2 (en) |
EP (1) | EP3089157B1 (en) |
JP (1) | JP6244560B2 (en) |
CN (1) | CN105659318B (en) |
WO (1) | WO2015098079A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109147784A (en) * | 2018-09-10 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | Voice interactive method, equipment and storage medium |
JP2019049985A (en) * | 2016-03-04 | 2019-03-28 | 株式会社リコー | Voice control of interactive whiteboard appliance |
JP2022009571A (en) * | 2017-06-13 | 2022-01-14 | グーグル エルエルシー | Establishment of audio-based network session with unregistered resource |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014103568A1 (en) * | 2012-12-28 | 2014-07-03 | ソニー株式会社 | Information processing device, information processing method and program |
KR20160090584A (en) * | 2015-01-22 | 2016-08-01 | 엘지전자 주식회사 | Display device and method for controlling the same |
US9898250B1 (en) * | 2016-02-12 | 2018-02-20 | Amazon Technologies, Inc. | Controlling distributed audio outputs to enable voice output |
US9858927B2 (en) * | 2016-02-12 | 2018-01-02 | Amazon Technologies, Inc | Processing spoken commands to control distributed audio outputs |
US10409552B1 (en) * | 2016-09-19 | 2019-09-10 | Amazon Technologies, Inc. | Speech-based audio indicators |
JP7044633B2 (en) * | 2017-12-28 | 2022-03-30 | シャープ株式会社 | Operation support device, operation support system, and operation support method |
JP7227093B2 (en) * | 2019-07-05 | 2023-02-21 | Tvs Regza株式会社 | How to select electronic devices, programs and search services |
US10972802B1 (en) * | 2019-09-26 | 2021-04-06 | Dish Network L.L.C. | Methods and systems for implementing an elastic cloud based voice search using a third-party search provider |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001249685A (en) * | 2000-03-03 | 2001-09-14 | Alpine Electronics Inc | Speech dialog device |
JP2005059185A (en) * | 2003-08-19 | 2005-03-10 | Sony Corp | Robot device and method of controlling the same |
JP2007226642A (en) * | 2006-02-24 | 2007-09-06 | Honda Motor Co Ltd | Voice recognition equipment controller |
JP4812941B2 (en) | 1999-01-06 | 2011-11-09 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Voice input device having a period of interest |
JP2012501480A (en) * | 2008-08-29 | 2012-01-19 | マルチモーダル・テクノロジーズ・インク | Hybrid speech recognition |
JP2013205523A (en) * | 2012-03-27 | 2013-10-07 | Yahoo Japan Corp | Response generation apparatus, response generation method and response generation program |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000356999A (en) | 1999-06-16 | 2000-12-26 | Ishikawajima Harima Heavy Ind Co Ltd | Device and method for inputting command by voice |
CN1320499C (en) * | 2001-07-05 | 2007-06-06 | 皇家菲利浦电子有限公司 | Method of providing an account information and method of and device for transcribing of dictations |
FR2833103B1 (en) * | 2001-12-05 | 2004-07-09 | France Telecom | NOISE SPEECH DETECTION SYSTEM |
JP4849662B2 (en) * | 2005-10-21 | 2012-01-11 | 株式会社ユニバーサルエンターテインメント | Conversation control device |
JP2008076811A (en) * | 2006-09-22 | 2008-04-03 | Honda Motor Co Ltd | Voice recognition device, voice recognition method and voice recognition program |
US20090144056A1 (en) * | 2007-11-29 | 2009-06-04 | Netta Aizenbud-Reshef | Method and computer program product for generating recognition error correction information |
US8099289B2 (en) * | 2008-02-13 | 2012-01-17 | Sensory, Inc. | Voice interface and search for electronic devices including bluetooth headsets and remote systems |
US8751229B2 (en) * | 2008-11-21 | 2014-06-10 | At&T Intellectual Property I, L.P. | System and method for handling missing speech data |
KR20130125067A (en) * | 2012-05-08 | 2013-11-18 | 삼성전자주식회사 | Electronic apparatus and method for controlling electronic apparatus thereof |
KR101914708B1 (en) * | 2012-06-15 | 2019-01-14 | 삼성전자주식회사 | Server and method for controlling the same |
CN102833633B (en) * | 2012-09-04 | 2016-01-20 | 深圳创维-Rgb电子有限公司 | A kind of television voice control system and method |
- 2014
- 2014-12-22 EP EP14874773.6A patent/EP3089157B1/en active Active
- 2014-12-22 US US15/023,385 patent/US9905225B2/en active Active
- 2014-12-22 WO PCT/JP2014/006367 patent/WO2015098079A1/en active Application Filing
- 2014-12-22 CN CN201480057905.0A patent/CN105659318B/en active Active
- 2014-12-22 JP JP2015554558A patent/JP6244560B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4812941B2 (en) | 1999-01-06 | 2011-11-09 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Voice input device having a period of interest |
JP2001249685A (en) * | 2000-03-03 | 2001-09-14 | Alpine Electronics Inc | Speech dialog device |
JP2005059185A (en) * | 2003-08-19 | 2005-03-10 | Sony Corp | Robot device and method of controlling the same |
JP2007226642A (en) * | 2006-02-24 | 2007-09-06 | Honda Motor Co Ltd | Voice recognition equipment controller |
JP2012501480A (en) * | 2008-08-29 | 2012-01-19 | マルチモーダル・テクノロジーズ・インク | Hybrid speech recognition |
JP2013205523A (en) * | 2012-03-27 | 2013-10-07 | Yahoo Japan Corp | Response generation apparatus, response generation method and response generation program |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019049985A (en) * | 2016-03-04 | 2019-03-28 | 株式会社リコー | Voice control of interactive whiteboard appliance |
JP2022009571A (en) * | 2017-06-13 | 2022-01-14 | グーグル エルエルシー | Establishment of audio-based network session with unregistered resource |
JP7339310B2 (en) | 2017-06-13 | 2023-09-05 | グーグル エルエルシー | Establishing audio-based network sessions with unregistered resources |
CN109147784A (en) * | 2018-09-10 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | Voice interactive method, equipment and storage medium |
JP2019185062A (en) * | 2018-09-10 | 2019-10-24 | 百度在線網絡技術(北京)有限公司 | Voice interaction method, terminal apparatus, and computer readable recording medium |
CN109147784B (en) * | 2018-09-10 | 2021-06-08 | 百度在线网络技术(北京)有限公司 | Voice interaction method, device and storage medium |
US11176938B2 (en) | 2018-09-10 | 2021-11-16 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, device and storage medium for controlling game execution using voice intelligent interactive system |
JP7433000B2 (en) | 2018-09-10 | 2024-02-19 | バイドゥ オンライン ネットワーク テクノロジー(ペキン) カンパニー リミテッド | Voice interaction methods, terminal equipment and computer readable storage media |
Also Published As
Publication number | Publication date |
---|---|
CN105659318A (en) | 2016-06-08 |
US20160210966A1 (en) | 2016-07-21 |
JP6244560B2 (en) | 2017-12-13 |
US9905225B2 (en) | 2018-02-27 |
EP3089157B1 (en) | 2020-09-16 |
EP3089157A1 (en) | 2016-11-02 |
CN105659318B (en) | 2019-08-30 |
EP3089157A4 (en) | 2017-01-18 |
JPWO2015098079A1 (en) | 2017-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6244560B2 (en) | Speech recognition processing device, speech recognition processing method, and display device | |
USRE49493E1 (en) | Display apparatus, electronic device, interactive system, and controlling methods thereof | |
JP6375521B2 (en) | Voice search device, voice search method, and display device | |
US10586536B2 (en) | Display device and operating method therefor | |
JP5746111B2 (en) | Electronic device and control method thereof | |
JP5819269B2 (en) | Electronic device and control method thereof | |
JP6111030B2 (en) | Electronic device and control method thereof | |
JP6603754B2 (en) | Information processing device | |
US9880808B2 (en) | Display apparatus and method of controlling a display apparatus in a voice recognition system | |
WO2015098109A1 (en) | Speech recognition processing device, speech recognition processing method and display device | |
JP2013037689A (en) | Electronic equipment and control method thereof | |
KR20130018464A (en) | Electronic apparatus and method for controlling electronic apparatus thereof | |
JP2014532933A (en) | Electronic device and control method thereof | |
JP2014138421A (en) | Video processing apparatus, control method for the same, and video processing system | |
KR20150089145A (en) | display apparatus for performing a voice control and method therefor | |
KR20140089836A (en) | Interactive server, display apparatus and controlling method thereof | |
CN108111922B (en) | Electronic device and method for updating channel map thereof | |
KR102089593B1 (en) | Display apparatus, Method for controlling display apparatus and Method for controlling display apparatus in Voice recognition system thereof | |
KR20190099676A (en) | The system and an appratus for providig contents based on a user utterance | |
KR102124396B1 (en) | Display apparatus, Method for controlling display apparatus and Method for controlling display apparatus in Voice recognition system thereof | |
KR102051480B1 (en) | Display apparatus, Method for controlling display apparatus and Method for controlling display apparatus in Voice recognition system thereof | |
KR102045539B1 (en) | Display apparatus, Method for controlling display apparatus and Method for controlling display apparatus in Voice recognition system thereof | |
JP2008096577A (en) | Voice operation system for av device | |
KR20190048334A (en) | Electronic apparatus, voice recognition method and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14874773 Country of ref document: EP Kind code of ref document: A1 |
|
REEP | Request for entry into the european phase |
Ref document number: 2014874773 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014874773 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2015554558 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15023385 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |