WO2009150592A1 - System and method for generation of an atmosphere - Google Patents

System and method for generation of an atmosphere

Info

Publication number
WO2009150592A1
Authority
WO
WIPO (PCT)
Prior art keywords
atmosphere
parameters
keywords
speech input
predetermined
Prior art date
Application number
PCT/IB2009/052387
Other languages
French (fr)
Inventor
Jonathan D. Mason
Berent W. Meerbeek
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2009150592A1 publication Critical patent/WO2009150592A1/en

Links

Classifications

    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/105Controlling the light source in response to determined parameters
    • H05B47/115Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings
    • H05B47/12Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings by detecting audible sound
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/155Coordinated control of two or more light sources
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

A device (PC, ACD) for control of generation of an atmosphere, e.g. in the form of light and sound effects, the device (PC, ACD) including a user interface (AI) for receiving a speech input (SI), a voice recognition unit (VRU) arranged to recognize one or more keywords (KW) in the speech input (SI), and a parameter generator (PG) for selecting a set of atmosphere parameters (AP), e.g. parameters indicating the setting of light, sound etc., linked to the one or more keywords (KW). These parameters (AP) can be sent to an atmosphere generating system that will generate the atmosphere accordingly by means of transducers such as lamps, loudspeakers, vibration actuators, ventilation fans etc. The device allows keywords of a story being read aloud to concurrently execute predetermined scripts that serve to generate atmospheres following and underlining the content of the story being read aloud, e.g. to give children a more active impression when being read aloud to. In some embodiments, the device is arranged to maintain one set of atmosphere parameters until a special change criterion is met, e.g. lapse of a certain period of time, when the reader emphasizes a keyword, or when a button is pressed.

Description

System and method for generation of an atmosphere
FIELD OF THE INVENTION
The present invention relates to a system and method for generating an atmosphere. The atmosphere may include generation of ambient lighting and/or sound effects, or, in general, an atmosphere that can be described as artificially generated multi-modal sensory inputs to users.
BACKGROUND OF THE INVENTION
Reading a book is a beneficial activity, especially when parents read to their child or when a teacher reads to their class. This activity has been proven to help develop children's imagination and improve their literacy. Reading to a child can also help to strengthen the relationship and bond between parent and child. However, in the digital age, with dynamic sources of entertainment available such as DVDs, computers, the Internet, games consoles etc., books are facing tough competition. Today, parents typically spend only 10 to 15 minutes a day reading aloud to their child. A method for enhancing the reading experience would be to generate
"atmospheres" around the reader to enhance the experience and underline the atmosphere in a written story. Atmosphere generation systems are known, e.g. the AmBX system developed by Philips, where control devices in a room can alter the atmosphere. E.g. the lighting (color, intensity, flashing effects etc.) sound, airflow and vibration. These systems can be used to enhance the users' experience when watching television, DVDs, playing games, listening to music, or it can be used to set a static room ambience, since such atmosphere generation systems can be controlled automatically by the entertainment device, e.g. a PC can control sound and light effects to underline actions when playing a game. However, such system is not suitable for atmosphere control when reading a book aloud. US 6,655,586 Bl discloses a system and method for creating dynamic content such as lighting and sound effects. Identifier tags are embedded in pages of a document. Each of these identifier tags identifies the particular page that a reader is viewing. By correlating the currently read page to information stored memory, dynamic content corresponding to the currently read page can be activated. Such system therefore requires pre-prepared books with identifier tags embedded therein, and therefore the system will not work e.g. when reading an old book aloud.
BRIEF DESCRIPTION OF THE INVENTION
Following the above explanation, it may be seen as an object of the present invention to provide a device and method arranged to control generation of an atmosphere when reading a book aloud, such that the atmosphere changes along with the content in the book.
According to a first aspect, the invention provides a device arranged to control generation of an atmosphere, the device including
- a user interface arranged to receive a speech input,
- a voice recognition unit arranged to analyze the speech input, and to recognize one or more keywords in the speech input,
- a parameter generator arranged to receive the one or more keywords recognized by the voice recognition unit, and to select a set of atmosphere parameters linked to the one or more keywords.
Such a device is advantageous e.g. for control of an atmosphere generation system in response to reading aloud. By recognizing keywords in the speech when a person reads aloud, the device can generate atmosphere parameters suited to underline an atmosphere associated with the keyword, e.g. parameters for setting up green light and playback of a sound sequence of rustling leaves and whistling birds when the keyword "wood" is recognized. The actual physical generation of the atmosphere based on the atmosphere parameters is then performed by associated atmosphere generating means, e.g. in the form of lamps, loudspeakers etc. and drivers installed in a room. The device is advantageous since the voice recognition unit and parameter generator can be implemented in software running either on a dedicated hardware platform, e.g. in the form of a stand-alone device, or on a device with suitable processing power, e.g. a Personal Computer (PC). The user interface can be implemented in the form of a sound card or USB port of a PC, or a simple microphone or line input.
Apart from the use for changing an atmosphere along with the content of a story being read aloud, the device can advantageously be used in general for voice-controlled setting of an atmosphere, e.g. in a living room, a children's playroom, a theatre, a school classroom, a conference room etc. Thus, with a number of predefined keywords, the device can set up parameters for generating an atmosphere according to spoken commands, such as "Cosy red" to set up dimmed red light and quiet, pleasant music in a living room, or "Wild jungle" for setting up general green light and wild animal sounds in a children's room. To avoid undesired changes in the atmosphere, the device may be programmed to only allow changes of the atmosphere parameters when recognizing commands from one or a limited number of persons' voices, after recognizing a password, or after a signal input (pressing a button etc.) from a user to indicate the desire for a change in the atmosphere.
In the following, atmosphere control device embodiments will be described. In one embodiment, the parameter generator selects a predetermined atmosphere script according to the one or more keywords, wherein the predetermined atmosphere script defines the set of atmosphere parameters. Thus, for each script, one or more parameters can be set to determine the settings of one or more of: light, sound, vibration, airflow, temperature, humidity, smell etc. to match the script. A predetermined atmosphere script can be a definition of a meaningful set of parameters to give the user a special impression, such as "Water, river", "Winter, snow storm", "Hawaii beach" etc. In some embodiments, a plurality of scripts can be run at the same time, e.g. "Wood",
"Animals", while the device is preferably arranged not to allow some combination of scripts, such as "Hawaii beach" and "Winter, snow storm".
The set of atmosphere parameters may include a parameter related to setting of lighting, such as parameters related to setting of light color and light intensity. The device may be able to generate a set of atmosphere parameters including a parameter related to setting of at least one of: sound, vibration, airflow, temperature, and smell. Thus, such a device may, in advanced embodiments, support generation of multi-modal atmospheres if many or all of the mentioned modalities are included. However, in simpler embodiments, the atmosphere parameters may be limited to light and sound effects. The device may be arranged to receive a second user input, wherein the parameter generator selects the set of atmosphere parameters according to the second user input. In such embodiments, the second user input can be used to overrule or refine the atmosphere setting provided by the speech input. E.g. the second user input can be in the form of a manual device operated by a child being told a story, so as to allow the child to overrule or control which script or set of atmosphere parameters is active, thereby enabling the child to take an active part in the story. E.g. the second user input can allow the user to control light intensity and sound volume, or to switch off the atmosphere setting.
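The following is a minimal, illustrative Python sketch (not part of the application) of how a second user input could refine or overrule a speech-derived parameter set; the parameter names and the structure of the inputs are assumptions made purely for illustration.

    # Illustrative sketch: a second user input (e.g. a handheld control operated
    # by the child being read to) refines or overrules the parameters selected
    # from the speech input. The parameter names are assumed, not specified here.
    def apply_second_user_input(params, user_input):
        adjusted = dict(params)
        if user_input.get("atmosphere_off"):           # switch the atmosphere off
            return {"light_intensity": 0.0, "sound_volume": 0.0}
        if "light_intensity" in user_input:            # e.g. a dimmer wheel
            adjusted["light_intensity"] = user_input["light_intensity"]
        if "sound_volume" in user_input:               # e.g. a volume wheel
            adjusted["sound_volume"] = user_input["sound_volume"]
        if "forced_script" in user_input:              # the child picks another script
            adjusted["script"] = user_input["forced_script"]
        return adjusted

    # Example: the listener lowers the volume but keeps the light setting.
    print(apply_second_user_input(
        {"script": "wood-general", "light_intensity": 0.8, "sound_volume": 0.6},
        {"sound_volume": 0.2}))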
The parameter generator may be arranged to maintain one set of atmosphere parameters until a predetermined change condition is met. Thus, the device preferably has a suitable strategy for changing between different atmosphere parameters so as to avoid too rapid and confusing changes. This can be obtained by a predetermined change condition including a predetermined minimum time, e.g. a minimum time of one minute between changes of atmosphere parameters. Alternatively or additionally, the voice recognition unit is arranged to recognize a predetermined voice parameter in the speech input, such as an emphasized word, and the predetermined change condition then includes recognition of such a predetermined voice parameter. Thus, in such an embodiment the reader of a story can e.g. raise his/her voice to indicate a keyword to be taken into account in the parameter change, and the device can then maintain a parameter setting until such a special voice parameter is detected. Further alternatively or additionally, the predetermined change condition may include receipt of a second user input, such as a press on a button. Thus, the device can await such a second user input before the last recognized keyword is translated into a new set of atmosphere parameters.
In one embodiment the user interface, the voice recognition unit, and the parameter generator are implemented as a single stand-alone unit, whereas in another embodiment, the voice recognition unit and the parameter generator are implemented as software executable on a computer, such as a PC.
The functionality of the voice recognition unit can be implemented as known to the skilled person. Voice/speech recognition systems running on a PC can now perform large-vocabulary searches and convert speech to text with over 95% accuracy. Once a word has been recognized and converted, e.g. to text format, it can then be used as a 'normal' input to a keyword search engine. A specific example of a voice recognition system is the industrial-grade voice recognition system SpeechMagic retailed by Philips; see e.g. the SpeechMagic brochure at http://www.speechrecognition.philips.com/index.asp?id=506. The device according to the first aspect may especially be used: 1) for home entertainment, 2) in a theatre, 3) in an educational room. However, the device is also advantageous for use in many other applications.
In a second aspect, the invention provides a system including - a device according to the first aspect, and - an atmosphere generator system arranged to receive the set of atmosphere parameters and including generating means arranged to generate an atmosphere according thereto.
The generating means may include one or more transducers: light sources (glow lamps, Light Emitting Diodes, picture projectors etc.), sound sources (loudspeakers, headphones etc.), ventilation fans, heating or cooling devices, vibration actuators, smell generators etc.
The system may be split into separate components, e.g. PC as control device and an atmosphere driver unit for driving transducers for each modality. Alternatively, the control device and driver unit may be integrated into one stand-alone unit capable of driving one or more transducers.
The same embodiments and advantages as mentioned for the first aspect apply as well to the second aspect.
In a third aspect, the invention provides a method for generating an atmosphere including
- receiving a speech input,
- analyzing the speech input and recognizing one or more keywords in the speech input,
- generating a set of atmosphere parameters linked to the one or more keywords.
The same advantages and equivalent embodiments mentioned for the first aspect apply as well to the third aspect.
In a fourth aspect, the invention provides a computer executable program code arranged to perform the method according to the third aspect. Such program code can be designed for execution on a dedicated processor or be designed for execution on a general purpose processor, e.g. a processor in a PC.
In a fifth aspect, the invention provides a computer readable data carrier including a computer executable program code according to the fourth aspect. The data carrier may be, for example: a hard disk, a CD, a DVD, a memory card, a memory stick etc. The same advantages and equivalent embodiments mentioned for the first aspect apply as well to the fourth and fifth aspects.
The aspects of the present invention described above may each be combined with any of the other aspects or with sub aspects/embodiments thereof. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will be described, by way of example only, with reference to the drawings, in which:
Fig. 1 illustrates a block diagram of a device and system embodiment,
Fig. 2 illustrates a stand-alone embodiment device,
Fig. 3 illustrates a block diagram of a method embodiment, and
Fig. 4 illustrates generation of a set of atmosphere parameters based on predetermined scripts linked to keywords.
DESCRIPTION OF EMBODIMENTS
Fig. 1 shows a block diagram of a device and system embodiment according to the invention. In the illustrated embodiment, the atmosphere control device is implemented by a Personal Computer PC with an audio interface card AI serving to receive a speech input SI, e.g. from a person reading aloud or saying a command to the atmosphere control device. The speech input may be captured by a connected microphone, or via a line audio input. A voice or speech recognition unit VRU, running in software on the PC, receives the speech input SI from the audio interface card AI and analyzes the speech input SI to recognize words therein. The voice recognition unit compares recognized words with a list of predetermined keywords KW, and when one of the listed keywords KW is recognized, this keyword KW is communicated to a parameter generator PG that links the keyword KW to a set of atmosphere parameters AP.
The total system includes the atmosphere control device PC and the atmosphere generating system AGS. These can be separate apparatuses or partially or fully integrated into one apparatus. The atmosphere generating system AGS includes an atmosphere control unit ACU arranged to drive transducers according to the atmosphere parameters AP received from the control device PC.
The atmosphere parameters AP provided by the atmosphere control device PC, i.e. the interface between the PC and the AGS, can be in the form of a short code to be translated by the atmosphere control unit ACU to drive transducers, e.g. lamps AL, accordingly. In particular, the atmosphere parameters AP can be in the form of a code defining a predetermined script to be recognized by the atmosphere control unit ACU, and the atmosphere generating system AGS is then able to select a set of settings for driving the one or more ambient generating transducers connected thereto, here illustrated in the form of an ambient lighting device AL.
The transfer of atmosphere parameters AP can be performed by a wired connection between the atmosphere control device PC and the atmosphere generating system AGS. Alternatively, this connection can be wireless, e.g. by means of a Radio Frequency link, so as to allow the components of the atmosphere generating system AGS to be positioned remotely from the atmosphere control device PC.
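As an illustration only (the application leaves the link and message format open), the following Python sketch shows one way a control device could hand a script code and its parameter values to the AGS; the host name, port and JSON payload are assumptions made purely for illustration.

    # Illustrative sketch: sending a set of atmosphere parameters from the
    # control device (PC) to the atmosphere generating system (AGS). A TCP
    # socket and a JSON payload are assumed here; a serial or RF link with a
    # different encoding would serve equally well.
    import json
    import socket

    def send_atmosphere_parameters(params, host="ags.local", port=5000):
        message = json.dumps(params).encode("utf-8")
        with socket.create_connection((host, port)) as conn:
            conn.sendall(message)

    # Example: a short code naming a predetermined script, plus settings the
    # atmosphere control unit ACU translates into transducer drive levels.
    send_atmosphere_parameters({"script": "wood-general",
                                "light": {"color": "green", "intensity": 0.7},
                                "audio": {"sequence": "forest-birds", "volume": 0.5}})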
Fig. 2 illustrates an atmosphere control device ACD as a stand-alone device capable of driving transducer devices placed in the room. The illustrated transducer devices are: a loudspeaker (or a set of loudspeakers), a lighting device (such as a color lighting device by means of LEDs or glow lamps), and a ventilator fan arranged to generate an airflow. The device ACD may either include drivers for the transducers, or the device ACD may merely provide control signals to each transducer, wherein transducer drivers (e.g. a power amplifier for the loudspeaker(s)) are built into or at least placed near the respective transducers. A microphone is connected to the atmosphere control device ACD to capture the voice of a person reading aloud, and thus the electric output from the microphone provides a speech input to the device ACD. The function of the device ACD may be as described in connection with Fig. 1.
With the transducers illustrated in Fig. 2, the story told by the father to his child, e.g. the sentence "Red Riding Hood embarked on her journey into the wood", may trigger the device ACD to recognize the predetermined keyword "wood" and thus run a script, such as "wood - general" (in contrast to "wood - river", which could be chosen if the word "river" was additionally recognized in the same sentence) accordingly. Such a script "wood - general" may include generating an atmosphere with: 1) green light from the lighting transducer, e.g. dappled with light from the sun passing through the leaves, 2) an audio sequence of birds twittering and rustling of leaves playing from the loudspeaker(s), and 3) a moderate airflow indicating a gentle breeze rustling the leaves. In this way, the child being read to will experience a more intense impression of being a part of the story being told. In advanced systems, a plurality of scripts can be run together, e.g. the "wood - general" script may run simultaneously with a "footsteps" script, which may merely be an additional audio sequence of footsteps, in case the keyword "footsteps" or the like was recognized temporally close to the keyword "wood".
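A minimal, illustrative Python sketch of running several scripts concurrently while refusing combinations that do not fit together (cf. "Hawaii beach" and "Winter, snow storm" above) could look as follows; the script names and the single incompatible pair are assumptions made purely for illustration.

    # Illustrative sketch: maintaining a set of concurrently active scripts,
    # e.g. "wood-general" plus an additional "footsteps" audio layer, while
    # rejecting combinations that are preferably not allowed together.
    INCOMPATIBLE = {frozenset({"hawaii-beach", "winter-snow-storm"})}

    def combine_scripts(active, new_script):
        candidate = set(active) | {new_script}
        for bad in INCOMPATIBLE:
            if bad <= candidate:          # a disallowed pair would become active
                return list(active)       # keep the current combination instead
        return sorted(candidate)

    print(combine_scripts(["wood-general"], "footsteps"))           # both may run
    print(combine_scripts(["hawaii-beach"], "winter-snow-storm"))   # rejected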
Fig. 3 illustrates in more detail an example of the function of the voice recognition unit, where the dashed box indicates what can be implemented in software. A speech input SI is input to a voice recognition algorithm VR. This algorithm VR analyzes the speech input SI and outputs recognized words to a keyword search algorithm KWS. This keyword search algorithm KWS compares the recognized words with a predetermined list of keywords, and if a match is found, the recognized keyword is output to an algorithm LTS that selects a script from a predetermined set of links between keywords and scripts. If there is no such script NS, the algorithm returns to the keyword search algorithm KWS. However, if a keyword-to-script link is found, the corresponding script is provided to a send script SS algorithm that translates the script to a predetermined set of atmosphere parameters associated with the script, and sends a signal or a code representing this set of atmosphere parameters to an atmosphere generation system AGS that drives one or more transducers according to the atmosphere parameters; thus, by means of the physical output from the transducers, an atmosphere A according to the set of atmosphere parameters is generated. It is appreciated that the diagram of Fig. 3 illustrates functional parts of a software example with the purpose of explaining the principles, rather than being an implementation proposal for an actual division of parts of a computer executable program code. The core of word recognition based on a speech input can be implemented in many different ways, as known to the skilled person.
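A minimal, illustrative Python sketch of the VR, KWS, LTS and SS flow of Fig. 3 could look as follows; the keyword list, script names and parameter values are assumptions made purely for illustration, and the word source and AGS interface are left as stubs.

    # Illustrative sketch of the flow of Fig. 3: recognized words are matched
    # against a predetermined keyword list (KWS), a linked script is looked up
    # (LTS) and, if found, its parameter set is sent to the atmosphere
    # generation system (SS).
    KEYWORD_TO_SCRIPT = {                 # predetermined keyword-to-script links
        "wood": "wood-general",
        "forest": "wood-general",
        "river": "wood-river",
        "storm": "storm",
    }

    SCRIPT_PARAMETERS = {                 # predetermined parameter set per script
        "wood-general": {"audio": "birds-and-leaves", "light": "green-dappled"},
        "wood-river":   {"audio": "running-water",    "light": "green-blue"},
        "storm":        {"audio": "thunder",          "light": "white-flashes"},
    }

    def process_recognized_words(words, send_to_ags):
        for word in words:                                # output of the VR algorithm
            script = KEYWORD_TO_SCRIPT.get(word.lower())  # KWS + LTS
            if script is None:                            # no such script: keep searching
                continue
            send_to_ags(SCRIPT_PARAMETERS[script])        # SS: send parameters to the AGS

    process_recognized_words(
        ["Red", "Riding", "Hood", "went", "into", "the", "wood"],
        send_to_ags=print)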
Fig. 4 illustrates an example of possible data structures in the software illustrated in Fig. 3. In the first box, a recognized word W is compared with a list of predetermined links between keywords KW1-KW6 and scripts S1, S2 and S3 by scanning through the keywords KW1-KW6. As illustrated, some keywords KW1, KW6 can lead to the same script S1. If a match between the recognized word W and a keyword KW1-KW6 is found, then the corresponding script is selected and sent to the next box. Here, a set of atmosphere parameters is associated with each script S1, S2 and S3. E.g. parameters 'a' can indicate the number of an audio sequence, 'b' can be a number related to general lighting intensity and color, while 'c' can be a number related to a dynamic sequence of lighting (e.g. to indicate explosions or a sunset etc.). As seen, not all scripts need to include all types of parameters. S3 is seen not to have any 'a' parameters, and thus no audio sequence is associated with this script, only lighting. The atmosphere control device preferably changes from one set of atmosphere parameters to another with careful consideration, e.g. to avoid too frequent changes, which could be disturbing and unpleasant. Further, changes may be performed by means of fading to another parameter set rather than changing the parameters abruptly. A number of control methods are possible: 1) A time delay could be incorporated between changes of atmosphere parameters, e.g. such that changes are allowed only once a certain time (e.g. 1 minute minimum) has lapsed.
2) The person providing the speech input to the system can indicate to the system when he/she would like the atmosphere to change: a) The person speaking may use a predetermined parameter or character in his/her voice when saying the keywords, e.g. emphasize the keywords as they read ("Red Riding Hood embarked on her journey into the WOOD"). The voice recognition unit is then arranged to recognize words spoken with this predetermined parameter or character being different from the rest of the spoken words, and only when these words are recognized are changes in atmosphere parameters allowed. b) A second user input, apart from the speech input, may be provided to the atmosphere control device, such as pressing a button, when the user, either the reader or the person being read to, indicates that it is time for an atmosphere change. E.g. a child being read to can in this way keep a pleasant atmosphere even though a keyword has been recognized that could possibly lead to an atmosphere change, e.g. a scary one which the child would like to avoid.
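A minimal, illustrative Python sketch of gating parameter changes by the conditions listed above (minimum time lapse, emphasized keyword, or button press) could look as follows; the one-minute interval mirrors the example given above, while the function and variable names are assumptions made purely for illustration.

    # Illustrative sketch: a new set of atmosphere parameters is only applied
    # when a minimum time has lapsed since the last change, or the keyword was
    # spoken with emphasis, or a button was pressed by the user.
    import time

    MIN_INTERVAL_S = 60.0                  # e.g. 1 minute minimum between changes
    _last_change = 0.0

    def change_allowed(keyword_emphasized=False, button_pressed=False):
        global _last_change
        now = time.monotonic()
        if (now - _last_change >= MIN_INTERVAL_S) or keyword_emphasized or button_pressed:
            _last_change = now
            return True
        return False                       # keep the current atmosphere parameters

    # Example: an emphasized keyword ("... into the WOOD") forces a change.
    print(change_allowed(keyword_emphasized=True))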
As indicated, the devices and systems according to the invention can be rather different in nature, ranging from a simple implementation in software on a PC controlling the intensity of the living room lamps to highly advanced systems generating multi-modal atmospheres including light, sound, airflow, vibrations, smell etc.
To sum up, the invention provides a device for control of generation of an atmosphere, e.g. in the form of light and sound effects, the device including a user interface for receiving a speech input, a voice recognition unit arranged to recognize one or more keywords in the speech input, and a parameter generator for selecting a set of atmosphere parameters, e.g. parameters indicating the setting of light, sound etc., linked to the one or more keywords. These parameters can be sent to an atmosphere generating system that will generate the atmosphere accordingly by means of transducers such as lamps, loudspeakers, vibration actuators, ventilation fans etc. The device allows keywords of a story being read aloud to concurrently execute predetermined scripts that serve to generate atmospheres following and underlining the content of the story being read aloud, e.g. to give children a more active impression when being read aloud to. In some embodiments, the device is arranged to maintain one set of atmosphere parameters until a special change criterion is met, e.g. lapse of a certain period of time, when the reader emphasizes a keyword, or when a button is pressed.
Certain specific details of the disclosed embodiment are set forth for purposes of explanation rather than limitation, so as to provide a clear and thorough understanding of the present invention. However, it should be understood by those skilled in this art, that the present invention might be practiced in other embodiments that do not conform exactly to the details set forth herein, without departing significantly from the spirit and scope of this disclosure. Further, in this context, and for the purposes of brevity and clarity, detailed descriptions of well-known apparatuses, circuits and methodologies have been omitted so as to avoid unnecessary detail and possible confusion. The terms "includes" and "including" do not exclude the presence of other elements or steps. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. In addition, singular references do not exclude a plurality. Thus, references to "a", "an", "first", "second" etc. do not preclude a plurality. Furthermore, reference signs in the claims shall not be construed as limiting the scope.

Claims

CLAIMS:
1. Device (PC, ACD) arranged to control generation of an atmosphere, the device
(PC, ACD) including
- a user interface (AI) arranged to receive a speech input (SI),
- a voice recognition unit (VRU) arranged to analyze the speech input (SI), and to recognize one or more keywords (KW) in the speech input (SI),
- a parameter generator (PG) arranged to receive the one or more keywords (KW) recognized by the voice recognition unit (VRU), and to select a set of atmosphere parameters (AP) linked to the one or more keywords (KW).
2. Device according to claim 1, wherein the parameter generator selects a predetermined atmosphere script (S) according to the one or more keywords (KW), wherein the predetermined atmosphere script (S) defines the set of atmosphere parameters (AP).
3. Device according to claim 1, wherein the set of atmosphere parameters (AP) includes a parameter related to setting of lighting.
4. Device according to claim 3, wherein the set of atmosphere parameters (AP) includes parameters related to setting of light color and light intensity.
5. Device according to claim 1, wherein the set of atmosphere parameters (AP) includes a parameter related to setting of at least one of: sound, vibration, airflow, temperature, humidity, and smell.
6. Device according to claim 1, wherein the device is arranged to receive a second user input, and wherein the parameter generator selects the set of atmosphere parameters according to the second user input.
7. Device according to claim 1, wherein the parameter generator is arranged to maintain one set of atmosphere parameters until a predetermined change condition is met.
8. Device according to claim 7, wherein the predetermined change condition includes a predetermined minimum time.
9. Device according to claim 7, wherein the voice recognition unit (VRU) is arranged to recognize a predetermined voice parameter in the speech input (SI), such as an emphasized word, and wherein the predetermined change condition includes recognition of such predetermined voice parameter.
10. Device according to claim 7, wherein the predetermined change condition includes receipt of a second user input, such as a press on a button.
11. Device according to claim 1, wherein the user interface (AI), the voice recognition unit (VRU), and the parameter generator (PG) are implemented as a single stand-alone unit (ACD).
12. System including
- a device according to claim 1, and
- an atmosphere generator system (AGS) arranged to receive the set of atmosphere parameters (AP) and including generating means (AL) arranged to generate an atmosphere according thereto.
13. Method for controlling generation of an atmosphere, the method including
- receiving a speech input,
- analyzing the speech input and recognizing one or more keywords in the speech input,
- generating a set of atmosphere parameters linked to the one or more keywords.
14. Computer executable program code arranged to perform the method according to claim 13.
PCT/IB2009/052387 2008-06-12 2009-06-05 System and method for generation of an atmosphere WO2009150592A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP08158134 2008-06-12
EP08158134.0 2008-06-12

Publications (1)

Publication Number Publication Date
WO2009150592A1 true WO2009150592A1 (en) 2009-12-17

Family

ID=41061279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/052387 WO2009150592A1 (en) 2008-06-12 2009-06-05 System and method for generation of an atmosphere

Country Status (1)

Country Link
WO (1) WO2009150592A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012011008A1 (en) * 2010-07-21 2012-01-26 Koninklijke Philips Electronics N.V. Dynamic lighting system with a daily rhythm
CN102691937A (en) * 2012-05-30 2012-09-26 苏州昆仑工业设计有限公司 Bedside lamp capable of recognizing voice commands
WO2016005848A1 (en) * 2014-07-07 2016-01-14 PISANI, Lucia Remote audiovisual communication system between two or more users, lamp with lights with luminous characteristics which can vary according to external information sources, specifically of audio type, and associated communication method
US9609724B2 (en) 2013-03-26 2017-03-28 Philips Lighting Holding B.V. Environment control system
US9642221B2 (en) 2011-11-07 2017-05-02 Philips Lighting Holding B.V. User interface using sounds to control a lighting system
WO2017160498A1 (en) * 2016-03-14 2017-09-21 Amazon Technologies, Inc. Audio scripts for various content
WO2018013752A1 (en) 2016-07-13 2018-01-18 The Marketing Store Worldwide, LP System, apparatus and method for interactive reading
WO2020069979A1 (en) 2018-10-02 2020-04-09 Signify Holding B.V. Determining one or more light effects by looking ahead in a book
US11310891B2 (en) 2016-08-26 2022-04-19 Signify Holding B.V. Controller for controlling a lighting device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001099475A1 (en) * 2000-06-21 2001-12-27 Color Kinetics Incorporated Method and apparatus for controlling a lighting system in response to an audio input
WO2005084339A2 (en) * 2004-03-02 2005-09-15 Color Kinetics Incorporated Entertainment lighting system
WO2007069143A2 (en) * 2005-12-15 2007-06-21 Koninklijke Philips Electronics N. V. System and method for creating artificial atmosphere

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001099475A1 (en) * 2000-06-21 2001-12-27 Color Kinetics Incorporated Method and apparatus for controlling a lighting system in response to an audio input
WO2005084339A2 (en) * 2004-03-02 2005-09-15 Color Kinetics Incorporated Entertainment lighting system
WO2007069143A2 (en) * 2005-12-15 2007-06-21 Koninklijke Philips Electronics N. V. System and method for creating artificial atmosphere

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9635732B2 (en) 2010-07-21 2017-04-25 Philips Lighting Holding B.V. Dynamic lighting system with a daily rhythm
WO2012011008A1 (en) * 2010-07-21 2012-01-26 Koninklijke Philips Electronics N.V. Dynamic lighting system with a daily rhythm
US9642221B2 (en) 2011-11-07 2017-05-02 Philips Lighting Holding B.V. User interface using sounds to control a lighting system
CN102691937A (en) * 2012-05-30 2012-09-26 苏州昆仑工业设计有限公司 Bedside lamp capable of recognizing voice commands
US9609724B2 (en) 2013-03-26 2017-03-28 Philips Lighting Holding B.V. Environment control system
US10021764B2 (en) 2014-07-07 2018-07-10 Patrizio PISANI Remote audiovisual communication system between two or more users, lamp with lights with luminous characteristics which can vary according to external information sources, specifically of audio type, and associated communication method
WO2016005848A1 (en) * 2014-07-07 2016-01-14 PISANI, Lucia Remote audiovisual communication system between two or more users, lamp with lights with luminous characteristics which can vary according to external information sources, specifically of audio type, and associated communication method
WO2017160498A1 (en) * 2016-03-14 2017-09-21 Amazon Technologies, Inc. Audio scripts for various content
WO2018013752A1 (en) 2016-07-13 2018-01-18 The Marketing Store Worldwide, LP System, apparatus and method for interactive reading
CN109661646A (en) * 2016-07-13 2019-04-19 万杰成礼品有限合伙公司 The systems, devices and methods read for interactive mode
EP3485368A4 (en) * 2016-07-13 2020-01-08 The Marketing Store Worldwide, LP System, apparatus and method for interactive reading
US11310891B2 (en) 2016-08-26 2022-04-19 Signify Holding B.V. Controller for controlling a lighting device
WO2020069979A1 (en) 2018-10-02 2020-04-09 Signify Holding B.V. Determining one or more light effects by looking ahead in a book

Similar Documents

Publication Publication Date Title
WO2009150592A1 (en) System and method for generation of an atmosphere
US10249205B2 (en) System and method for integrating special effects with a text source
US20190189019A1 (en) System and Method for Integrating Special Effects with a Text Source
Kaye et al. Sound and music for the theatre: The art & technique of design
Fryer Audio description as audio drama–a practitioner's point of view
JP6725006B2 (en) Control device and equipment control system
JP2020056996A (en) Tone color selectable voice reproduction system, its reproduction method, and computer readable storage medium
TW201434600A (en) Robot for generating body motion corresponding to sound signal
JP2017021125A (en) Voice interactive apparatus
EP2074604A2 (en) Interactive storyteller system
KR20160131505A (en) Method and server for conveting voice
CN107463626A (en) A kind of voice-control educational method, mobile terminal, system and storage medium
US20120173008A1 (en) Method and device for processing audio data
KR101926328B1 (en) Service Appratus and System for training Communication based voice recogintion
WO2019168920A1 (en) System and method for integrating special effects with a text source
CN114120943B (en) Virtual concert processing method, device, equipment and storage medium
CN109835284A (en) A kind of method and apparatus reflecting running state of the vehicle with the sound played
US9037467B2 (en) Speech effects
CN114694635A (en) Sleep scene setting method and device
JP2018180472A (en) Control device, control method, and control program
JP6921311B2 (en) Equipment control system, equipment, equipment control method and program
CN204926792U (en) Steerable audio playback machine that turns over page or leaf
Stahl et al. Auditory Masquing: Wearable Sound Systems for Diegetic Character Voices.
CN110136719A (en) A kind of method, apparatus and system for realizing Intelligent voice dialog
JP6698198B2 (en) Foreign language learning support system, foreign language learning support method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09762119

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09762119

Country of ref document: EP

Kind code of ref document: A1