WO2021192072A1 - Indoor sound environment generation apparatus, sound source apparatus, indoor sound environment generation method, and sound source apparatus control method - Google Patents

Indoor sound environment generation apparatus, sound source apparatus, indoor sound environment generation method, and sound source apparatus control method Download PDF

Info

Publication number
WO2021192072A1
WO2021192072A1 PCT/JP2020/013201 JP2020013201W WO2021192072A1 WO 2021192072 A1 WO2021192072 A1 WO 2021192072A1 JP 2020013201 W JP2020013201 W JP 2020013201W WO 2021192072 A1 WO2021192072 A1 WO 2021192072A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
user
sound source
control
indoor
Prior art date
Application number
PCT/JP2020/013201
Other languages
French (fr)
Japanese (ja)
Inventor
尚志 永野
Original Assignee
ヤマハ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ヤマハ株式会社 filed Critical ヤマハ株式会社
Priority to PCT/JP2020/013201 priority Critical patent/WO2021192072A1/en
Publication of WO2021192072A1 publication Critical patent/WO2021192072A1/en

Links

Images

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems

Definitions

  • One embodiment of the present invention relates to an indoor sound environment generation device, a sound source device, an indoor sound environment generation method, and a control method of the sound source device.
  • Patent Documents 1 and 2 disclose a device that outputs a sound that induces sleep to a user who wants to go to bed.
  • Patent Document 1 discloses an acoustic processing device that generates an audio signal to a speaker so as to recognize a sound image three-dimensionally.
  • Patent Document 2 discloses a sound source device that outputs different sounds depending on the state of the user.
  • the device of Patent Document 1 does not output sound according to the state of the user. For example, a user who takes a sleeping posture may want to go to bed or wake up.
  • one of the objects of the embodiment of the present invention is an indoor sound environment generator, a sound source device, an indoor sound environment generation method, and a sound source device that output a sound that can be guided to a desired state according to a user's state. Is to provide a control method for.
  • the indoor sound environment generator acquires a sound signal acquisition unit that acquires a sound signal, a sound signal output unit that outputs a sound signal, a biometric information acquisition unit that acquires the biometric information of the user, and a user's position information. It includes a position information acquisition unit to be acquired and a control unit that controls the sound image localization of the sound signal based on the biometric information and the position information.
  • the sound source device determines a biometric information acquisition unit that acquires the biometric information of the user, a plurality of sound source units, and a control table for controlling the plurality of sound source units based on the biometric information, and the determined control table.
  • a control unit that controls the plurality of sound source units based on the above, and a reading unit that reads a sound source from the plurality of sound source units according to the control of the control unit.
  • FIG. 1 is a schematic view showing a configuration of a sound reproduction system 1 including a sound source device 20 according to the first embodiment.
  • the sound reproduction system 1 includes a sensor 11, a sensor 12, a sensor 13, a sound source device 20, a speaker 51, and a speaker 52.
  • the sound reproduction system 1 causes the user E, who is lying on his back on the bed 5, to hear the sound emitted from the speaker 51 and the speaker 52.
  • the sounds emitted by the speaker 51 and the speaker 52 include, for example, a sound for introducing sleep, a sound for awakening, and the like.
  • the speaker 51 and the speaker 52 are arranged at predetermined positions away from the bed 5. In the example of FIG. 1, the sound is emitted toward the user E in the direction toward the feet of the user E.
  • the speaker 51 amplifies the stereo left (L) channel sound signal output from the sound source device 20 by the built-in amplifier and emits the sound.
  • the speaker 52 amplifies the sound signal of the stereo right (L) channel output from the sound source device 20 by the built-in amplifier and emits the sound.
  • the sensor 11 is attached to the wrist of the user E.
  • the sensor 12 is laid under the pillow.
  • the sensor 13 is laid on the bed.
  • the sensor 11, the sensor 12, and the sensor 13 detect biological information such as the pulse, respiration, body movement, brain wave, and blood pressure of the user E.
  • the sensor 11, the sensor 12, and the sensor 13 have a wireless communication function.
  • the sensor 11, the sensor 12, and the sensor 13 transmit the detected biological information to the sound source device 20.
  • the number of sensors that detect biological information is not limited to three as in the present embodiment.
  • the number of sensors may be one, two, or four or more.
  • the biological information is not limited to the example shown in this embodiment. Further, at least one biological information may be detected.
  • the wireless communication function is not essential.
  • the sensor 11, the sensor 12, and the sensor 13 may be connected to the sound source device 20 by wire.
  • the sensor may be in any form as long as it can acquire biometric information.
  • FIG. 2 is a block diagram showing the configuration of the sound source device 20.
  • the sound source device 20 includes a communication unit 21, a processor 22, a RAM 23, a flash memory 24, a display 25, a user I / F26, and an audio I / F27.
  • the sound source device 20 includes, for example, a personal computer, a smartphone, a tablet computer, or the like.
  • An audio device such as an audio receiver is also an example of a sound source device.
  • the communication unit 21 receives biometric information from the sensor 11, the sensor 12, and the sensor 13 via the wireless communication function.
  • the communication unit 21 may have a wired communication function such as USB or LAN.
  • the display 25 is made of an LCD or the like.
  • User I / F26 is an example of an operation unit.
  • the user I / F 26 includes a mouse, a keyboard, a touch panel, and the like.
  • the user I / F 26 accepts the user's operation.
  • the touch panel may be stacked on the display 25.
  • the user E inputs information such as a bedtime or a wake-up time via the user I / F26. Further, the user E selects the type of sound to be output from the speaker via the user I / F16. For example, the user selects one of the following types: "relax”, “sleep onset”, "good sleep", “wake up”, or "MUTE”.
  • the audio I / F27 is composed of an analog audio terminal, a digital audio terminal, or the like.
  • the audio I / F 27 is connected to the speaker 51 and the speaker 52.
  • the audio I / F 27 outputs a sound signal to the speaker 51 and the speaker 52.
  • the sound source device 20 may transmit a sound signal to the speaker 51 and the speaker 52 by the wireless communication function.
  • the processor 22 is composed of a CPU, a DSP, a SoC (System on a Chip), or the like.
  • the processor 22 performs various operations by reading a program from the flash memory 24, which is a storage medium, and temporarily storing the program in the RAM 23. The program does not need to be stored in the flash memory 24.
  • the processor 22 may temporarily download the program from the server or the like as needed.
  • FIG. 3 is a block diagram showing a functional configuration of the processor 22.
  • FIG. 4 is a flowchart showing the operation of the processor 22.
  • the processor 22 constitutes a biological information acquisition unit 30, a sound source unit 40, a control unit 140, and a reading unit 145 by a program read from the flash memory 24.
  • the sound source unit 40 includes a plurality of sound source units 410, 420, 430, 440 (four in the example of FIG. 3).
  • the biometric information acquisition unit 30 acquires biometric information via the communication unit 21 (S11).
  • the control unit 140 determines the contents of the control table 70 according to the biometric information acquired by the biometric information acquisition unit 30 (S12).
  • the reading unit 145 reads a sound source from the sound source unit 40 based on the control table 70 (S13).
  • FIG. 5 is a diagram showing an example of the control table 70.
  • the control table 70 defines five control modes as an example.
  • the five control modes correspond to any of "relax”, “sleep onset”, “good sleep”, “wake up”, and “MUTE”, respectively.
  • control mode 1 corresponds to "relax”
  • control mode 2 corresponds to "sleep onset”
  • control mode 3 corresponds to "good sleep”
  • control mode 4 corresponds to "wake up”
  • control mode 5 corresponds to "MUTE”.
  • the control unit 140 estimates the physical and mental state of the user based on the biometric information acquired by the biometric information acquisition unit 30, and "relaxes", “sleeps”, “good sleep”, “wakes up”, or “MUTE”. You may automatically select one of them. For example, when the biological information such as pulse, respiration, body movement, brain wave, or blood pressure satisfies a predetermined condition (for example, when it is a high value (a predetermined threshold value or more)), the control unit 140 is a user.
  • a predetermined condition for example, when it is a high value (a predetermined threshold value or more)
  • control unit 140 determines that the user is sleeping, the control unit 140 selects "good sleep”. Alternatively, the control unit 140 may select “MUTE” when it is determined that the user is sleeping. Further, the control unit 140 selects “sleep onset” when the current time reaches the bedtime input by the user. Further, the control unit 140 selects "wake up” when the current time reaches the wake-up time input by the user.
  • the control table 70 includes parameters of four sound sources of hypersonic, binaural beat, natural sound, and music as an example.
  • the sound source unit 410 corresponds to a hypersonic sound source
  • the sound source unit 420 corresponds to a binaural beat sound source
  • the sound source unit 430 corresponds to a natural sound source
  • the sound source unit 440 corresponds to a music sound source. ..
  • Hypersonic is, for example, an inaudible sound of about 20 to 100 Hz. Hypersonic creates a relaxing effect by giving inaudible vibrations to the surface of the human body.
  • the binaural beat is a sound having a frequency difference between the L channel and the R channel.
  • the binaural beat includes a sound of 100 Hz on the L channel and a sound of 110 Hz on the R channel. Binaural beats lead brain waves to a low state by making the user feel such a low frequency difference of about 10 Hz. This causes the binaural beats to have a relaxing effect. Natural sounds include, for example, the sound of wind, the sound of waves, the chirping of birds, or the babbling of rivers.
  • Natural sounds are a random combination of these multiple types of sounds.
  • the natural sound is reproduced with a random length while these multiple types of sounds repeatedly fade in and fade out. This allows natural sounds to introduce sleep and maintain a comfortable sleep.
  • Music for example, is a chord of synthesizer timbres. The lower frequency of the synthesizer timbre chords produces a relaxing effect. The higher frequency of the synthesizer timbre chords causes an arousal effect.
  • user E may select arbitrary sounds in advance.
  • the reading unit 145 reads sound sources from the four sound source units 410, 420, 430, and 440 according to the control of the control unit 140, mixes them, and outputs them to the audio I / F17. As a result, the speaker 51 and the speaker 52 output the mixed sound.
  • the mixed sound is classified as either a relaxing sound, a sleep-inducing sound, a good sleep sound, or an awakening sound.
  • the sounds mixed in control mode 1 include hypersonic, binaural beats, and low frequency synthesizer chord music. The sound obtained by mixing these sounds becomes a relaxing sound that produces an action of relaxing the user E.
  • Sounds mixed in control mode 2 include hypersonic, binaural beats, natural sounds, and low frequency synthesizer chord music. The sound obtained by mixing these sounds becomes a sleep-introducing sound that gives the user E the effect of introducing sleep.
  • the sounds mixed in control mode 3 include hypersonic and natural sounds. The sound of mixing these sounds is a good sleep sound that does not interfere with sleep and also has a relaxing effect.
  • the sounds mixed in control mode 4 include high frequency synthesizer chord music. In addition, music is a sound with a high tempo and a loud volume in two beats. Such music becomes an awakening sound that gives the awakening effect to the user E.
  • control table 70 Note that "-" among the parameters shown in the control table 70 means that natural sounds or music are not used.
  • the tempo of natural sounds and music is the difference from the heart rate or respiratory rate (times / minute) of user E.
  • the tempo of the control table 70 is "-3"
  • the sound source device 20 can relax the user E so that he / she can easily fall asleep by playing natural sounds or music at the same tempo as or lower than the heart rate of the user E.
  • the tempo is "2”
  • the sound source device 20 can shift to the awake state by playing natural sounds or music at a tempo higher than the heart rate of the user E.
  • “1 / f” means that the amplitude, tempo, frequency, etc. are fluctuated by 1 / f.
  • the sound source device 20 can further relax the user E by giving fluctuations.
  • the control unit 140 determines the control table 70 according to the biological information.
  • the sound source device 20 of the present embodiment may repeat the operation of the flowchart shown in FIG. 4 periodically (for example, every few seconds).
  • the contents of the control table 70 change based on the change in the biological information.
  • the tempo of the natural sound in the control mode 2 is set to “-4”, which is even lower.
  • the control table 70 of FIG. 6 is an example in which the effect of sleep induction is enhanced as compared with the example of FIG.
  • the control unit 140 changes the control table 70 to the contents shown in FIG. 6, for example, when the pulse, respiration, and body movement decrease.
  • the control unit 140 may change the control table 70 to the content shown in FIG. 6, for example, when there is no change in pulse, respiration, and body movement.
  • control unit 140 may change any of the other "relaxation", "good sleep", or “wake up” contents based on the biometric information acquired by the biometric information acquisition unit 30.
  • the tempo of the music in the control mode 4 is set to “3”, and the volume is set to “8”.
  • the control unit 140 changes the control table 70 to the contents shown in FIG. 7, for example, when the pulse, respiration, and body movement increase.
  • the control table 70 of FIG. 7 is an example in which the effect of awakening is enhanced as compared with the example of FIG.
  • the control unit 140 may change the control table 70 to the content shown in FIG. 6, for example, when there is no change in pulse, respiration, and body movement.
  • control unit 140 may record in the flash memory 24 the time from the output of the sleep introduction sound or the awakening sound to the transition to the sleep state or the awakening state.
  • the control unit 140 records the time for each sound source and each parameter. Then, the control unit 140 may learn a sound source and a parameter having a high effect of sleep induction by using a predetermined algorithm.
  • control unit 140 changed the contents of the control table 70.
  • the control unit 140 may select one control table from a plurality of control tables based on biological information.
  • the flash memory 24 stores a plurality of control tables corresponding to biometric information.
  • a plurality of control tables corresponding to biometric information may be stored in the server.
  • the control unit 140 transmits the biometric information to the server and acquires the corresponding control table.
  • control unit 140 may determine the control table 70 based on information such as the age, gender, nationality, etc. of the user in addition to the biometric information.
  • the contents of the control table 70 with respect to information such as the age, gender, and nationality of the user are recorded in, for example, a server (not shown).
  • the server records the contents of the control table 70 for information such as the age, gender, or nationality of the user from a large number of devices.
  • the server may learn these and learn the optimum control table 70 for information such as the age, gender, or nationality of the user.
  • the control unit 140 transmits information such as the age, gender, or nationality of the user to the server via the communication unit 21, and receives the corresponding control table 70.
  • control unit 140 may specify the user and determine the control table 70 based on the user's specific result in addition to the biometric information.
  • the user E edits the contents of the control table 70 via the user I / F26.
  • User E changes various parameters such as the type of natural sound, the type of music, the tempo, or the volume.
  • the control unit 140 records the edited contents of the user E in the flash memory 24.
  • the control unit 140 may learn the user's favorite parameters by using a predetermined algorithm according to the edited contents of the control table 70 of the user E. As a result, the control unit 140 determines the control table 70 according to the user's preference.
  • the sound source device 20 may output a relaxing sound to the user E in the living room or the like when the user E is detected to be in an excited state.
  • the sound source device 20 may output an awakening sound to the user E in the office when the user E is detected to be in a sleep state.
  • FIG. 8 is a schematic view showing the configuration of the sound reproduction system 1A including the sound source device 20A according to the second embodiment.
  • the same configurations as those shown in FIG. 1 are designated by the same reference numerals, and the description thereof will be omitted.
  • the sound reproduction system 1A includes an array speaker 50. Similar to the speaker 51 and the speaker 52 of FIG. 1, the array speaker 50 outputs various sounds including a relaxing sound, a sleep introducing sound, a good sleep sound, an awakening sound, and the like.
  • the array speaker 50 includes a plurality of speakers.
  • the array speaker 50 can control the directivity by controlling the volume and the sound emission timing of the sound signals supplied to the plurality of arranged speakers.
  • the array speaker 50 is arranged at a predetermined position away from the bed 5.
  • the array speaker 50 arranges a plurality of speakers in a direction parallel to the minor axis direction of the bed 5.
  • the sound source device 20A and the array speaker 50 are connected by a wireless communication function or a wired communication function.
  • the array speaker 50 amplifies the stereo left (L) channel sound signal and the stereo right (L) channel sound signal output from the sound source device 20 by the built-in amplifier and emits the sound.
  • FIG. 9 is a block diagram showing the configuration of the sound source device 20A.
  • the sound source device 20A includes the same hardware as the sound source device 20. Therefore, each hardware configuration is designated by the same reference numeral, and the description thereof will be omitted.
  • the sound source device 20A is an example of an indoor sound environment generation device.
  • the flash memory 24 of the sound source device 20A further stores a program for configuring the position information acquisition unit 75 and the estimation unit 80.
  • the processor 22 of the sound source device 20A further constitutes the position information acquisition unit 75 and the estimation unit 80 by the program read from the flash memory 24.
  • the position information acquisition unit 75 acquires information regarding the position of the user (for example, coordinates in the room).
  • the estimation unit 80 estimates the physical and mental state of the user based on the biological information. For example, the estimation unit 80 determines that the user is in an excited state when the biological information such as pulse, respiration, body movement, brain wave, or blood pressure is a high value (greater than or equal to a predetermined threshold value).
  • the estimation unit 80 is in a state of falling asleep when biological information such as pulse, respiration, body movement, electroencephalogram, or blood pressure is low (above a predetermined threshold value) and these values further decrease with the passage of time. Judge that. Further, the estimation unit 80 determines that it is in a sleeping state when it detects an electroencephalogram corresponding to REM sleep or non-REM sleep. Further, when the biological information such as pulse, respiration, body movement, brain wave, or blood pressure is low (above a predetermined threshold value) and these values increase with the passage of time, the estimation unit 80 is in a wake-up state. Judge that there is.
  • FIG. 10 is a flowchart showing the operation of the sound source device 20A.
  • the biometric information acquisition unit 30 acquires biometric information via the communication unit 21, and the position information acquisition unit 75 acquires the position information of the user E (S21).
  • the position information acquisition unit 75 acquires position information via, for example, the sensor 12 or the sensor 13.
  • the sensor 12 is laid under the pillow and the sensor 13 is laid on the bed. Therefore, when the position information acquisition unit 75 acquires the biological information from the sensor 12 or the sensor 13, the position information acquisition unit 75 determines that the user E is in the bed 5 and the head position exists at the position of the pillow.
  • the position information is represented by, for example, the coordinates when the room is viewed in a plane.
  • the user E inputs the coordinates of the sensor 12, the sensor 13, and the array speaker 50 in advance via the user I / F16.
  • the position information acquisition unit 75 acquires the coordinates of the sensor 12, the sensor 13, and the array speaker 50 in advance.
  • the position information acquisition unit 75 may acquire the position information via the sensor 11 worn by the user.
  • the sensor 11 transmits, for example, a Bluetooth® beacon signal.
  • the position information acquisition unit 75 measures the distance to the sensor 11 based on the received radio wave intensity of the beacon signal. Since the radio wave intensity is inversely proportional to the square of the distance, it can be converted into information regarding the distance between the sensor 11 and the sound source device 20A.
  • the position information acquisition unit 75 can uniquely identify the position of the sensor 11 by acquiring three or more pieces of information on the distance.
  • the array speaker 50 may receive the beacon signal of the sensor 11, and the position information acquisition unit 75 may receive information regarding the received radio wave intensity of the beacon signal from the array speaker 50.
  • the user E may set a plurality of terminals for receiving the beacon signal in the room. In this case, the position information acquisition unit 75 receives information on the received radio field strength of the beacon signal from a plurality of terminals.
  • the position information acquisition unit 75 may specify the position of the user E by using the temperature sensor. For example, when the position information acquisition unit 75 detects an object of about 36 degrees Celsius via a temperature sensor, it determines that the object is user E and acquires the coordinates of the object. The position information may be acquired.
  • the sound source device 20A acquires a sound signal (S22).
  • the processor 22 acquires a sound signal by the biological information acquisition unit 30, the sound source unit 40, the control unit 140, and the reading unit 145, as in the embodiment shown in the functional block diagram of FIG.
  • the sound source device 20A according to the second embodiment does not need to acquire the sound signal in the mode shown in the first embodiment.
  • the sound source device 20A may acquire a sound signal by reading out a specific content stored in the flash memory 24.
  • the sound source device 20A may acquire a sound signal by receiving a specific content from an information processing terminal such as a smartphone owned by the user or another device such as a server.
  • the control unit 140 performs sound image localization processing based on the biological information and the position information (S23).
  • the sound image localization process is, for example, a process of controlling the directivity by controlling the volume and the sound emission timing of the sound signals supplied to the plurality of speakers of the array speaker 50.
  • control unit 140 directs the directivity of the sound output from the array speaker 50 to the position where the user E is, as shown in FIG. 11, based on the position information. Further, the control unit 140 controls the directivity based on the biological information. For example, the control unit 140 outputs the sound of content such as music to the entire room when the estimation unit 80 determines that the estimation unit 80 is in the awake state even when the user E is at the position of the bed 5. Alternatively, the control unit 140 may output the sound of content such as music to the entire room when the user E is in a place other than the bed 5. Further, the control unit 140 may output an awakening sound when the estimation unit 80 determines that the estimation unit 80 is in a sleep state even when the user E is in a place other than the bed 5.
  • the control unit 140 performs the sound image localization process in which the directivity is controlled. After that, the control unit 140 outputs a sound signal to the array speaker 50 via the audio I / F17 (S24).
  • the control unit 140 may output a sound signal after adjusting the volume and sound emission timing of the array speaker 50, but information indicating the sound signal and volume and sound emission timing of each channel (sound image localization information). May be output to the array speaker 50. In this case, the array speaker 50 adjusts the volume and the sound emission timing.
  • the control unit 140 may output a sound signal and information for controlling the directivity (for example, coordinates indicating the direction of the sound) to the array speaker 50. In this case, the array speaker 50 calculates the volume adjustment amount and the sound emission timing adjustment amount.
  • FIG. 12 is a flowchart showing the operation of the sound source device 20A according to the modified example of the second embodiment.
  • the same reference numerals are given to the operations common to those in FIG. 10, and the description thereof will be omitted.
  • the control unit 140 of the sound source device 20A further identifies the user (S200).
  • the control unit 140 functions as a specific unit that identifies the user.
  • the control unit 140 performs sound image localization processing based on the user's specific result in addition to the biological information and the position information. For example, the control unit 140 controls the directivity so that various sounds reach only a specific user.
  • the control unit 140 directs the directivity of the sound output by the array speaker 50 to the position where the specific user E2 is. As a result, it is difficult for the user E1 to hear the sound output by the array speaker 50. For example, when the user E1 sets the wake-up time at 8:00 am and the user E2 sets the wake-up time at 9:00 am, the control unit 140 outputs an awakening sound to the user E2 at 8:00 am. In this case, only the user E2 can hear the awakening sound without disturbing the user E1 to go to bed.
  • the sound source device 20A may perform a process of acquiring the cancel sound and localizing the cancel sound to a person other than a specific user.
  • the control unit 140 outputs the sound beam B1 related to the awakening sound to the specific user E2.
  • the control unit 140 outputs the sound beam B2 related to the cancel sound to the other user E1.
  • the cancel sound is a sound having the opposite phase of the sound beam B1 related to the awakening sound.
  • the sound beam B2 related to the cancel sound can cancel the awakening sound leaking from the sound beam B1 related to the awakening sound. Therefore, the awakening sound can be heard only by the user E2 without further hindering the user E1 from going to bed.
  • an example of controlling the directivity of the array speaker is shown as an example of sound image localization processing.
  • the sound image localization process can be performed by physically changing the sound emission direction of the directional speaker by a motor or the like.
  • the sound image localization process can also be performed by arranging a plurality of speakers in the room and outputting a sound such as an awakening sound only to the speaker at the position closest to the specific user.
  • the sound image localization process may be, for example, a process of convolving a head-related transfer function into a sound signal.
  • E, E1, E2 ... Users 1, 1A ... Sound reproduction system 1A ... Sound reproduction system 5 ... Beds 11, 12, 13 ... Sensors 20, 20A ... Sound source device 21 ... Communication unit 22 ... Processor 23 ... RAM 24 ... Flash memory 25 ... Display 26 ... User I / F 27 ... Audio I / F 30 ... Biological information acquisition unit 40 ... Sound source unit 50 ... Array speakers 51, 52 ... Speaker 70 ... Control table 75 ... Position information acquisition unit 80 ... Estimating unit 140 ... Control unit 145 ... Reading unit 410, 420, 430, 440 ... Sound source Department

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

An indoor sound environment generation apparatus including: a sound-signal acquisition unit that acquires sound signals; a sound-signal output unit that outputs the sound signals; a biological-information acquisition unit that acquires biological information of a user; a position-information acquisition unit that acquires the position information of the user; and a control unit that controls sound image localization of the sound signals on the basis of the biological information and the position information.

Description

室内用音環境生成装置、音源装置、室内用音環境生成方法および音源装置の制御方法Indoor sound environment generation device, sound source device, indoor sound environment generation method and sound source device control method
 本発明の一実施形態は、室内用音環境生成装置、音源装置、室内用音環境生成方法および音源装置の制御方法に関する。 One embodiment of the present invention relates to an indoor sound environment generation device, a sound source device, an indoor sound environment generation method, and a control method of the sound source device.
 特許文献1、2には、これから就寝したい利用者に睡眠を誘導する音を出力する装置が開示されている。 Patent Documents 1 and 2 disclose a device that outputs a sound that induces sleep to a user who wants to go to bed.
 特許文献1には、音像を三次元的に認識させるようにスピーカへの音声信号を生成する音響処理装置が開示されている。 Patent Document 1 discloses an acoustic processing device that generates an audio signal to a speaker so as to recognize a sound image three-dimensionally.
 特許文献2には、利用者の状態に応じて異なる音を出力する音源装置が開示されている。 Patent Document 2 discloses a sound source device that outputs different sounds depending on the state of the user.
特開2010-172392号公報Japanese Unexamined Patent Publication No. 2010-172392 特開2014-226361号公報Japanese Unexamined Patent Publication No. 2014-226361
 特許文献1の装置は、利用者の状態に応じた音を出力していない。例えば、就寝姿勢を取る利用者は、これから就寝したい場合と、起床したい場合がある。 The device of Patent Document 1 does not output sound according to the state of the user. For example, a user who takes a sleeping posture may want to go to bed or wake up.
 あるいは、例えばこれから就寝したい場合の利用者においても、当該利用者の状態が変わらない場合には、さらに強く睡眠を誘導することが望まれる。 Alternatively, for example, even for a user who wants to go to bed from now on, if the state of the user does not change, it is desired to induce sleep more strongly.
 そこで、本発明の一実施形態の目的の一つは、利用者の状態に応じて望ましい状態まで誘導できる音を出力する室内用音環境生成装置、音源装置、室内用音環境生成方法および音源装置の制御方法を提供することにある。 Therefore, one of the objects of the embodiment of the present invention is an indoor sound environment generator, a sound source device, an indoor sound environment generation method, and a sound source device that output a sound that can be guided to a desired state according to a user's state. Is to provide a control method for.
 室内用音環境生成装置は、音信号を取得する音信号取得部と、音信号を出力する音信号出力部と、利用者の生体情報を取得する生体情報取得部と、利用者の位置情報を取得する位置情報取得部と、前記生体情報および前記位置情報に基づいて前記音信号の音像定位を制御する制御部と、を備える。 The indoor sound environment generator acquires a sound signal acquisition unit that acquires a sound signal, a sound signal output unit that outputs a sound signal, a biometric information acquisition unit that acquires the biometric information of the user, and a user's position information. It includes a position information acquisition unit to be acquired and a control unit that controls the sound image localization of the sound signal based on the biometric information and the position information.
 音源装置は、利用者の生体情報を取得する生体情報取得部と、複数の音源部と、前記生体情報に基づいて前記複数の音源部を制御するための制御テーブルを決定し、決定した制御テーブルに基づいて前記複数の音源部を制御する制御部と、前記制御部の制御に応じて前記複数の音源部から音源を読み出す読出部と、を備える。 The sound source device determines a biometric information acquisition unit that acquires the biometric information of the user, a plurality of sound source units, and a control table for controlling the plurality of sound source units based on the biometric information, and the determined control table. A control unit that controls the plurality of sound source units based on the above, and a reading unit that reads a sound source from the plurality of sound source units according to the control of the control unit.
 本発明の一実施形態によれば、利用者の状態に応じて望ましい状態まで誘導できる音を出力することができる。 According to one embodiment of the present invention, it is possible to output a sound that can guide the user to a desired state according to the state of the user.
本実施形態に係る音源装置20を含む音再生システム1の構成を示す概略図である。It is a schematic diagram which shows the structure of the sound reproduction system 1 including the sound source apparatus 20 which concerns on this embodiment. 音源装置20の構成を示すブロック図である。It is a block diagram which shows the structure of the sound source apparatus 20. プロセッサ22の機能的構成を示すブロック図である。It is a block diagram which shows the functional structure of the processor 22. プロセッサ22の動作を示すフローチャートである。It is a flowchart which shows the operation of the processor 22. 制御テーブル70の一例を示す図である。It is a figure which shows an example of the control table 70. 制御テーブル70の一例を示す図である。It is a figure which shows an example of the control table 70. 制御テーブル70の一例を示す図である。It is a figure which shows an example of the control table 70. 第2実施形態に係る音源装置20Aを含む音再生システム1Aの構成を示す概略図である。It is the schematic which shows the structure of the sound reproduction system 1A including the sound source apparatus 20A which concerns on 2nd Embodiment. 音源装置20Aの構成を示すブロック図である。It is a block diagram which shows the structure of the sound source apparatus 20A. 音源装置20Aの動作を示すフローチャートである。It is a flowchart which shows the operation of the sound source apparatus 20A. 音源装置20Aを含む音再生システム1Aの構成を示す概略図である。It is the schematic which shows the structure of the sound reproduction system 1A including the sound source apparatus 20A. 音源装置20Aの動作を示すフローチャートである。It is a flowchart which shows the operation of the sound source apparatus 20A. 音源装置20Aを含む音再生システム1Aの構成を示す概略図である。It is the schematic which shows the structure of the sound reproduction system 1A including the sound source apparatus 20A. 音源装置20Aを含む音再生システム1Aの構成を示す概略図である。It is the schematic which shows the structure of the sound reproduction system 1A including the sound source apparatus 20A.
 (第1実施形態) 
 図1は、第1実施形態に係る音源装置20を含む音再生システム1の構成を示す概略図である。音再生システム1は、センサ11、センサ12、センサ13、音源装置20、スピーカ51、およびスピーカ52を備えている。音再生システム1は、ベッド5の上で仰向けの就寝姿勢をとっている利用者Eに対して、スピーカ51およびスピーカ52から発せられる音を聴かせる。スピーカ51およびスピーカ52の発する音は、例えば、睡眠を導入するための音、または覚醒させる音等を含む。
(First Embodiment)
FIG. 1 is a schematic view showing a configuration of a sound reproduction system 1 including a sound source device 20 according to the first embodiment. The sound reproduction system 1 includes a sensor 11, a sensor 12, a sensor 13, a sound source device 20, a speaker 51, and a speaker 52. The sound reproduction system 1 causes the user E, who is lying on his back on the bed 5, to hear the sound emitted from the speaker 51 and the speaker 52. The sounds emitted by the speaker 51 and the speaker 52 include, for example, a sound for introducing sleep, a sound for awakening, and the like.
 スピーカ51およびスピーカ52は、ベッド5から離れた所定の位置に配置されている。図1の例では、利用者Eの足元に向かう方向において、利用者Eに放音方向を向けて配置されている。スピーカ51は、音源装置20から出力されるステレオの左(L)チャンネルの音信号を、内蔵アンプで増幅して放音する。スピーカ52は、音源装置20から出力されるステレオの右(L)チャンネルの音信号を内蔵アンプで増幅して放音する。 The speaker 51 and the speaker 52 are arranged at predetermined positions away from the bed 5. In the example of FIG. 1, the sound is emitted toward the user E in the direction toward the feet of the user E. The speaker 51 amplifies the stereo left (L) channel sound signal output from the sound source device 20 by the built-in amplifier and emits the sound. The speaker 52 amplifies the sound signal of the stereo right (L) channel output from the sound source device 20 by the built-in amplifier and emits the sound.
 センサ11は、利用者Eの手首に取り付けられる。センサ12は、枕の下に敷かれる。センサ13は、ベッドの上に敷かれる。センサ11、センサ12、およびセンサ13は、利用者Eの脈拍、呼吸、体動、脳波、あるいは血圧等の生体情報を検出する。センサ11、センサ12、およびセンサ13は、無線通信機能を備える。センサ11、センサ12、およびセンサ13は、検出した生体情報を音源装置20に送信する。なお、生体情報を検出するセンサは、本実施形態の様に3つに限らない。センサは、1つでも2つでも4つ以上でもよい。また、生体情報は、本実施形態に示した例に限らない。さらに生体情報は、少なくとも1つを検出すればよい。 The sensor 11 is attached to the wrist of the user E. The sensor 12 is laid under the pillow. The sensor 13 is laid on the bed. The sensor 11, the sensor 12, and the sensor 13 detect biological information such as the pulse, respiration, body movement, brain wave, and blood pressure of the user E. The sensor 11, the sensor 12, and the sensor 13 have a wireless communication function. The sensor 11, the sensor 12, and the sensor 13 transmit the detected biological information to the sound source device 20. The number of sensors that detect biological information is not limited to three as in the present embodiment. The number of sensors may be one, two, or four or more. Moreover, the biological information is not limited to the example shown in this embodiment. Further, at least one biological information may be detected.
 なお、無線通信機能は必須ではない。例えばセンサ11、センサ12、およびセンサ13は、音源装置20に有線で接続されてもよい。なお、センサは、生体情報を取得できるものであればどの様な態様であってもよい。 The wireless communication function is not essential. For example, the sensor 11, the sensor 12, and the sensor 13 may be connected to the sound source device 20 by wire. The sensor may be in any form as long as it can acquire biometric information.
 図2は、音源装置20の構成を示すブロック図である。音源装置20は、通信部21、プロセッサ22、RAM23、フラッシュメモリ24、表示器25、ユーザI/F26、およびオーディオI/F27を備えている。 FIG. 2 is a block diagram showing the configuration of the sound source device 20. The sound source device 20 includes a communication unit 21, a processor 22, a RAM 23, a flash memory 24, a display 25, a user I / F26, and an audio I / F27.
 音源装置20は、例えばパーソナルコンピュータ、スマートフォン、あるいはタブレット型コンピュータ等からなる。また、オーディオレシーバ等の音響機器も、音源装置の一例である。 The sound source device 20 includes, for example, a personal computer, a smartphone, a tablet computer, or the like. An audio device such as an audio receiver is also an example of a sound source device.
 通信部21は、無線通信機能を介して、センサ11、センサ12、およびセンサ13から生体情報を受信する。通信部21は、USBまたはLAN等の有線通信機能を備えていてもよい。 The communication unit 21 receives biometric information from the sensor 11, the sensor 12, and the sensor 13 via the wireless communication function. The communication unit 21 may have a wired communication function such as USB or LAN.
 表示器25は、LCD等からなる。ユーザI/F26は、操作部の一例である。ユーザI/F26は、マウス、キーボード、あるいはタッチパネル等からなる。ユーザI/F26は、利用者の操作を受け付ける。なお、タッチパネルは、表示器25に積層されていてもよい。利用者Eは、ユーザI/F26を介して、例えば就寝時刻または起床時刻等の情報を入力する。また、利用者Eは、ユーザI/F16を介して、スピーカから出力させる音の種類を選択する。例えば、利用者は、「リラックス」、「入眠」、「快眠」、「起床」、または「MUTE」のいずれかの種類を選択する。 The display 25 is made of an LCD or the like. User I / F26 is an example of an operation unit. The user I / F 26 includes a mouse, a keyboard, a touch panel, and the like. The user I / F 26 accepts the user's operation. The touch panel may be stacked on the display 25. The user E inputs information such as a bedtime or a wake-up time via the user I / F26. Further, the user E selects the type of sound to be output from the speaker via the user I / F16. For example, the user selects one of the following types: "relax", "sleep onset", "good sleep", "wake up", or "MUTE".
 オーディオI/F27は、アナログオーディオ端子またはデジタルオーディオ端子等からなる。オーディオI/F27は、スピーカ51およびスピーカ52に接続されている。なお、オーディオI/F27は、スピーカ51およびスピーカ52に音信号を出力する。なお、音源装置20は、無線通信機能でスピーカ51およびスピーカ52に音信号を送信してもよい。 The audio I / F27 is composed of an analog audio terminal, a digital audio terminal, or the like. The audio I / F 27 is connected to the speaker 51 and the speaker 52. The audio I / F 27 outputs a sound signal to the speaker 51 and the speaker 52. The sound source device 20 may transmit a sound signal to the speaker 51 and the speaker 52 by the wireless communication function.
 プロセッサ22は、CPU、DSP、またはSoC(System on a Chip)等からなる。プロセッサ22は、記憶媒体であるフラッシュメモリ24からプログラムを読み出し、RAM23に一時記憶することで、種々の動作を行う。なお、プログラムは、フラッシュメモリ24に記憶しておく必要はない。プロセッサ22は、必要に応じてサーバ等からプログラムを一時的にダウンロードしてもよい。 The processor 22 is composed of a CPU, a DSP, a SoC (System on a Chip), or the like. The processor 22 performs various operations by reading a program from the flash memory 24, which is a storage medium, and temporarily storing the program in the RAM 23. The program does not need to be stored in the flash memory 24. The processor 22 may temporarily download the program from the server or the like as needed.
 図3は、プロセッサ22の機能的構成を示すブロック図である。図4は、プロセッサ22の動作を示すフローチャートである。プロセッサ22は、フラッシュメモリ24から読み出したプログラムにより、生体情報取得部30、音源部40、制御部140、および読出部145を構成する。音源部40は、複数(図3の例では4つ)の音源部410,420,430,440を含む。 FIG. 3 is a block diagram showing a functional configuration of the processor 22. FIG. 4 is a flowchart showing the operation of the processor 22. The processor 22 constitutes a biological information acquisition unit 30, a sound source unit 40, a control unit 140, and a reading unit 145 by a program read from the flash memory 24. The sound source unit 40 includes a plurality of sound source units 410, 420, 430, 440 (four in the example of FIG. 3).
 生体情報取得部30は、通信部21を介して生体情報を取得する(S11)。制御部140は、生体情報取得部30が取得した生体情報に応じて制御テーブル70の内容を決定する(S12)。読出部145は、制御テーブル70に基づいて、音源部40から音源を読み出す(S13)。 The biometric information acquisition unit 30 acquires biometric information via the communication unit 21 (S11). The control unit 140 determines the contents of the control table 70 according to the biometric information acquired by the biometric information acquisition unit 30 (S12). The reading unit 145 reads a sound source from the sound source unit 40 based on the control table 70 (S13).
 図5は、制御テーブル70の一例を示す図である。制御テーブル70は、一例として、5つの制御モードを規定している。5つの制御モードは、それぞれ、「リラックス」、「入眠」、「快眠」、「起床」、および「MUTE」のいずれかに対応する。例えば、制御モード1は「リラックス」、制御モード2は「入眠」、制御モード3は「快眠」、制御モード4は「起床」、制御モード5は「MUTE」に対応する。 FIG. 5 is a diagram showing an example of the control table 70. The control table 70 defines five control modes as an example. The five control modes correspond to any of "relax", "sleep onset", "good sleep", "wake up", and "MUTE", respectively. For example, control mode 1 corresponds to "relax", control mode 2 corresponds to "sleep onset", control mode 3 corresponds to "good sleep", control mode 4 corresponds to "wake up", and control mode 5 corresponds to "MUTE".
 上述の様に、利用者は、ユーザI/F16を介して、「リラックス」、「入眠」、「快眠」、「起床」、または「MUTE」のいずれかの種類を選択する。あるいは、制御部140は、生体情報取得部30で取得した生体情報に基づいて、利用者の心身状態を推定し、「リラックス」、「入眠」、「快眠」、「起床」、または「MUTE」のいずれかを自動的に選択してもよい。例えば、制御部140は、脈拍、呼吸、体動、脳波、あるいは血圧等の生体情報が所定の条件を満たした場合(例えば、高い値(所定の閾値以上)である場合)には、利用者が興奮状態であると判定し「リラックス」を選択する。また、制御部140は、利用者が睡眠中であると判定した場合には、「快眠」を選択する。あるいは、制御部140は、利用者が睡眠中であると判定した場合には、「MUTE」を選択してもよい。また、制御部140は、現在の時刻が利用者の入力した就寝時刻に達した場合に、「入眠」を選択する。また、制御部140は、現在の時刻が利用者の入力した起床時刻に達した場合に、「起床」を選択する。 As described above, the user selects one of "relax", "sleep onset", "good sleep", "wake up", or "MUTE" via the user I / F16. Alternatively, the control unit 140 estimates the physical and mental state of the user based on the biometric information acquired by the biometric information acquisition unit 30, and "relaxes", "sleeps", "good sleep", "wakes up", or "MUTE". You may automatically select one of them. For example, when the biological information such as pulse, respiration, body movement, brain wave, or blood pressure satisfies a predetermined condition (for example, when it is a high value (a predetermined threshold value or more)), the control unit 140 is a user. Judges that he is in an excited state and selects "Relax". Further, when the control unit 140 determines that the user is sleeping, the control unit 140 selects "good sleep". Alternatively, the control unit 140 may select "MUTE" when it is determined that the user is sleeping. Further, the control unit 140 selects "sleep onset" when the current time reaches the bedtime input by the user. Further, the control unit 140 selects "wake up" when the current time reaches the wake-up time input by the user.
 制御テーブル70は、一例として、ハイパーソニック、バイノーラルビート、自然音、および音楽の4つの音源のパラメータを含む。音源部410は、ハイパーソニックの音源に対応し、音源部420は、バイノーラルビートの音源に対応し、音源部430は、自然音の音源に対応し、音源部440は、音楽の音源に対応する。 The control table 70 includes parameters of four sound sources of hypersonic, binaural beat, natural sound, and music as an example. The sound source unit 410 corresponds to a hypersonic sound source, the sound source unit 420 corresponds to a binaural beat sound source, the sound source unit 430 corresponds to a natural sound source, and the sound source unit 440 corresponds to a music sound source. ..
 ハイパーソニックは、例えば、20~100Hz程度の非可聴音である。ハイパーソニックは、人の体表面に非可聴音の振動を与えることで、リラックス作用を生じさせる。バイノーラルビートは、LチャンネルおよびRチャンネルで周波数差を持たせた音である。例えば、バイノーラルビートは、Lチャンネルに100Hzの音、Rチャンネルに110Hzの音を含む。バイノーラルビートは、この様な10Hz程度の低い周波数差を利用者に感じさせることで、脳波を低い状態に導く。これにより、バイノーラルビートは、リラックス作用を生じさせる。自然音は、例えば風の音、波の音、鳥のさえずり、あるいは川のせせらぎ等の音を含む。自然音は、これらの複数種類の音がランダムに組み合わされる。また、自然音は、これらの複数種類の音がフェードイン、フェードアウトを繰り返しながら、ランダムな長さで再生される。これにより、自然音は、睡眠を導入したり、快適な睡眠を維持することができる。音楽は、一例として、シンセサイザの音色の和音である。シンセサイザの音色の和音のうち低い周波数の音は、リラックス作用を生じさせる。シンセサイザの音色の和音のうち高い周波数の音は、覚醒作用を生じさせる。なお、自然音および音楽は、利用者Eが予め任意の音を選択してもよい。 Hypersonic is, for example, an inaudible sound of about 20 to 100 Hz. Hypersonic creates a relaxing effect by giving inaudible vibrations to the surface of the human body. The binaural beat is a sound having a frequency difference between the L channel and the R channel. For example, the binaural beat includes a sound of 100 Hz on the L channel and a sound of 110 Hz on the R channel. Binaural beats lead brain waves to a low state by making the user feel such a low frequency difference of about 10 Hz. This causes the binaural beats to have a relaxing effect. Natural sounds include, for example, the sound of wind, the sound of waves, the chirping of birds, or the babbling of rivers. Natural sounds are a random combination of these multiple types of sounds. In addition, the natural sound is reproduced with a random length while these multiple types of sounds repeatedly fade in and fade out. This allows natural sounds to introduce sleep and maintain a comfortable sleep. Music, for example, is a chord of synthesizer timbres. The lower frequency of the synthesizer timbre chords produces a relaxing effect. The higher frequency of the synthesizer timbre chords causes an arousal effect. As for natural sounds and music, user E may select arbitrary sounds in advance.
 読出部145は、制御部140の制御に応じて4つの音源部410、420、430、440から音源を読み出し、ミキシングしてオーディオI/F17に出力する。これにより、スピーカ51およびスピーカ52は、ミキシングした音を出力する。 The reading unit 145 reads sound sources from the four sound source units 410, 420, 430, and 440 according to the control of the control unit 140, mixes them, and outputs them to the audio I / F17. As a result, the speaker 51 and the speaker 52 output the mixed sound.
 ミキシングした音は、リラックス音、睡眠導入音、快眠音、または覚醒音のいずれかに分類される。例えば、制御モード1でミキシングした音は、ハイパーソニック、バイノーラルビート、および低周波数のシンセサイザの和音の音楽を含む。これらの音をミキシングした音は、利用者Eをリラックスさせる作用を生む、リラックス音となる。制御モード2でミキシングした音は、ハイパーソニック、バイノーラルビート、自然音、および低周波数のシンセサイザの和音の音楽を含む。これらの音をミキシングした音は、利用者Eに睡眠導入の効果を与える、睡眠導入音となる。制御モード3でミキシングした音は、ハイパーソニック、および自然音を含む。これらの音をミキシングした音は、睡眠を阻害せず、かつリラックス作用も生じる快眠音となる。制御モード4でミキシングした音は、高周波数のシンセサイザの和音の音楽を含む。また、音楽は、2拍子でテンポが高く、音量の大きい音である。この様な音楽は、利用者Eに覚醒効果を与える、覚醒音となる。 The mixed sound is classified as either a relaxing sound, a sleep-inducing sound, a good sleep sound, or an awakening sound. For example, the sounds mixed in control mode 1 include hypersonic, binaural beats, and low frequency synthesizer chord music. The sound obtained by mixing these sounds becomes a relaxing sound that produces an action of relaxing the user E. Sounds mixed in control mode 2 include hypersonic, binaural beats, natural sounds, and low frequency synthesizer chord music. The sound obtained by mixing these sounds becomes a sleep-introducing sound that gives the user E the effect of introducing sleep. The sounds mixed in control mode 3 include hypersonic and natural sounds. The sound of mixing these sounds is a good sleep sound that does not interfere with sleep and also has a relaxing effect. The sounds mixed in control mode 4 include high frequency synthesizer chord music. In addition, music is a sound with a high tempo and a loud volume in two beats. Such music becomes an awakening sound that gives the awakening effect to the user E.
 なお、制御テーブル70に示すパラメータのうち「-」は、自然音または音楽を用いないことを意味する。 Note that "-" among the parameters shown in the control table 70 means that natural sounds or music are not used.
 自然音および音楽のテンポは、利用者Eの心拍数あるいは呼吸数(回/分)との差分である。例えば制御テーブル70のテンポが「-3」であれば、利用者Eの心拍数よりも3低いテンポで再生することを意味する。音源装置20は、利用者Eの心拍数と同じテンポか、あるいは低いテンポで自然音または音楽を再生することで、睡眠に入りやすいようにリラックスさせることができる。テンポが「2」であれば、利用者の心拍数よりも2高いテンポで再生することを意味する。音源装置20は、利用者Eの心拍数よりも高いテンポで自然音または音楽を再生することで、覚醒状態に移行させることができる。「1/f」は、振幅、テンポ、または周波数等を1/fで揺らぎを与えることを意味する。音源装置20は、揺らぎを与えることで、利用者Eをさらにリラックスさせることができる。 The tempo of natural sounds and music is the difference from the heart rate or respiratory rate (times / minute) of user E. For example, if the tempo of the control table 70 is "-3", it means that the game is played at a tempo 3 lower than the heart rate of the user E. The sound source device 20 can relax the user E so that he / she can easily fall asleep by playing natural sounds or music at the same tempo as or lower than the heart rate of the user E. If the tempo is "2", it means that the game is played at a tempo that is 2 higher than the user's heart rate. The sound source device 20 can shift to the awake state by playing natural sounds or music at a tempo higher than the heart rate of the user E. “1 / f” means that the amplitude, tempo, frequency, etc. are fluctuated by 1 / f. The sound source device 20 can further relax the user E by giving fluctuations.
 制御部140は、生体情報に応じて制御テーブル70を決定する。本実施形態の音源装置20は、図4に示したフローチャートの動作を、定期的(例えば数秒経過毎)に繰り返してもよい。この場合、制御テーブル70の内容は、生体情報の変化に基づいて変化する。例えば、図6に示す制御テーブル70は、制御モード2における自然音のテンポがさらに低い「-4」になっている。図6の制御テーブル70は、図5の例よりも睡眠導入の効果を高める一例である。制御部140は、例えば脈拍、呼吸、および体動が低下した場合に、制御テーブル70を図6に示す内容に変更する。あるいは、制御部140は、例えば脈拍、呼吸、および体動に変化がない場合に、制御テーブル70を図6に示す内容に変更してもよい。 The control unit 140 determines the control table 70 according to the biological information. The sound source device 20 of the present embodiment may repeat the operation of the flowchart shown in FIG. 4 periodically (for example, every few seconds). In this case, the contents of the control table 70 change based on the change in the biological information. For example, in the control table 70 shown in FIG. 6, the tempo of the natural sound in the control mode 2 is set to “-4”, which is even lower. The control table 70 of FIG. 6 is an example in which the effect of sleep induction is enhanced as compared with the example of FIG. The control unit 140 changes the control table 70 to the contents shown in FIG. 6, for example, when the pulse, respiration, and body movement decrease. Alternatively, the control unit 140 may change the control table 70 to the content shown in FIG. 6, for example, when there is no change in pulse, respiration, and body movement.
 無論、制御部140は、生体情報取得部30で取得した生体情報に基づいて、他の「リラックス」、「快眠」、または「起床」のいずれかの内容を変更してもよい。例えば、図7に示す制御テーブル70は、制御モード4における音楽のテンポがさらに高い「3」になり、音量がさらに大きい「8」になっている。制御部140は、例えば脈拍、呼吸、および体動が上昇した場合に、制御テーブル70を図7に示す内容に変更する。図7の制御テーブル70は、図5の例よりも覚醒の効果を高める一例である。あるいは、制御部140は、例えば脈拍、呼吸、および体動に変化がない場合に、制御テーブル70を図6に示す内容に変更してもよい。 Of course, the control unit 140 may change any of the other "relaxation", "good sleep", or "wake up" contents based on the biometric information acquired by the biometric information acquisition unit 30. For example, in the control table 70 shown in FIG. 7, the tempo of the music in the control mode 4 is set to “3”, and the volume is set to “8”. The control unit 140 changes the control table 70 to the contents shown in FIG. 7, for example, when the pulse, respiration, and body movement increase. The control table 70 of FIG. 7 is an example in which the effect of awakening is enhanced as compared with the example of FIG. Alternatively, the control unit 140 may change the control table 70 to the content shown in FIG. 6, for example, when there is no change in pulse, respiration, and body movement.
 また、制御部140は、睡眠導入音または覚醒音を出力してから睡眠状態または覚醒状態に移行するまでの時間をフラッシュメモリ24に記録しておいてもよい。制御部140は、当該時間を音源毎、およびパラメータ毎に記録しておく。そして、制御部140は、所定のアルゴリズムを用いて、睡眠導入の効果の高い音源、およびパラメータを学習してもよい。 Further, the control unit 140 may record in the flash memory 24 the time from the output of the sleep introduction sound or the awakening sound to the transition to the sleep state or the awakening state. The control unit 140 records the time for each sound source and each parameter. Then, the control unit 140 may learn a sound source and a parameter having a high effect of sleep induction by using a predetermined algorithm.
 上記の例では、制御部140は、制御テーブル70の内容を変更した。しかし、制御部140は、複数の制御テーブルから、生体情報に基づいて、1つの制御テーブルを選択してもよい。この場合、フラッシュメモリ24は、生体情報に対応する制御テーブルを複数記憶している。また、生体情報に対応する複数の制御テーブルは、サーバに記憶しておいてもよい。この場合、制御部140は、生体情報をサーバに送信し、対応する制御テーブルを取得する。 In the above example, the control unit 140 changed the contents of the control table 70. However, the control unit 140 may select one control table from a plurality of control tables based on biological information. In this case, the flash memory 24 stores a plurality of control tables corresponding to biometric information. Further, a plurality of control tables corresponding to biometric information may be stored in the server. In this case, the control unit 140 transmits the biometric information to the server and acquires the corresponding control table.
The control unit 140 may also determine the control table 70 based on information such as the user's age, gender, or nationality in addition to the biological information. The contents of the control table 70 associated with such information are recorded, for example, in a server (not shown). The server collects, from a large number of devices, the contents of the control table 70 associated with information such as user age, gender, or nationality. The server may learn from these records and derive the optimum control table 70 for each combination of age, gender, or nationality. The control unit 140 transmits information such as the user's age, gender, or nationality to the server via the communication unit 21, and receives the corresponding control table 70.
The control unit 140 may also identify the user and determine the control table 70 based on the identification result in addition to the biological information. The user E edits the contents of the control table 70 via the user I/F 26. The user E changes various parameters such as the type of natural sound, the type of music, the tempo, or the volume. The control unit 140 records the contents edited by the user E in the flash memory 24. The control unit 140 may learn the user's preferred parameters from the editing history of the control table 70 by using a predetermined algorithm. The control unit 140 can thereby determine a control table 70 that matches the user's preferences.
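One simple reading of this preference learning, assuming the learned preference is the most frequently chosen value per parameter (an illustrative assumption only):

```python
# Sketch: derive a user's preferred parameters from their edit history.
# Treating "most frequently chosen value" as the learned preference is an
# assumption; the disclosure leaves the algorithm unspecified.

from collections import Counter

edits = [
    {"natural_sound": "rain", "tempo": -2, "volume": 3},
    {"natural_sound": "rain", "tempo": -3, "volume": 3},
    {"natural_sound": "stream", "tempo": -3, "volume": 3},
]

def learned_preferences(history: list[dict]) -> dict:
    prefs = {}
    for key in history[0]:
        prefs[key] = Counter(e[key] for e in history).most_common(1)[0][0]
    return prefs

print(learned_preferences(edits))
# {'natural_sound': 'rain', 'tempo': -3, 'volume': 3}
```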
The above embodiment showed an example of outputting various sounds to the user E in the bed 5 of the bedroom. However, the sound source device 20 may also output a relaxing sound to a user E in, for example, the living room when it detects that the user E is in an excited state. Alternatively, the sound source device 20 may output an awakening sound to a user E in the office when it detects that the user E is falling asleep.
(Second Embodiment)
Next, FIG. 8 is a schematic view showing the configuration of a sound reproduction system 1A including a sound source device 20A according to the second embodiment. Configurations that are the same as those shown in FIG. 1 are given the same reference numerals, and their description is omitted. The sound reproduction system 1A includes an array speaker 50. Like the speaker 51 and the speaker 52 of FIG. 1, the array speaker 50 outputs various sounds including a relaxing sound, a sleep-inducing sound, a good-sleep sound, or an awakening sound.
The array speaker 50 includes a plurality of speakers. The array speaker 50 can control directivity by controlling the volume and the emission timing of the sound signals supplied to the arranged speakers. In the example of FIG. 8, the array speaker 50 is placed at a predetermined position away from the bed 5, with its speakers arranged in a direction parallel to the short side of the bed 5. The sound source device 20A and the array speaker 50 are connected by a wireless or wired communication function. The array speaker 50 amplifies the stereo left (L) channel sound signal and the stereo right (R) channel sound signal output from the sound source device 20A with a built-in amplifier and emits the sound.
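Controlling per-driver volume and emission timing in this way is classic delay-and-sum beamforming. A minimal sketch, with illustrative geometry, spacing, and sample rate (none taken from this disclosure):

```python
# Sketch: delay-and-sum steering for a line array. Each speaker's signal is
# delayed so that the wavefronts arrive at the target position in phase.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
FS = 48_000             # sample rate, Hz

def steering_delays(speaker_x: np.ndarray, target: tuple[float, float]) -> np.ndarray:
    """Per-speaker delays (in samples) toward a 2-D target position."""
    tx, ty = target
    dist = np.hypot(speaker_x - tx, ty)        # speakers lie on the y=0 line
    delay_s = (dist.max() - dist) / SPEED_OF_SOUND
    return np.round(delay_s * FS).astype(int)

def steer(signal: np.ndarray, delays: np.ndarray) -> np.ndarray:
    """One delayed, gain-scaled copy of the mono signal per speaker."""
    out = np.zeros((len(delays), len(signal) + delays.max()))
    for ch, d in enumerate(delays):
        out[ch, d:d + len(signal)] = signal / len(delays)
    return out

speakers = np.linspace(-0.5, 0.5, 8)           # 8 drivers, 1 m aperture
delays = steering_delays(speakers, target=(1.0, 2.0))
beams = steer(np.random.randn(FS), delays)     # 1 s of test noise
```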
FIG. 9 is a block diagram showing the configuration of the sound source device 20A. The sound source device 20A includes the same hardware as the sound source device 20, so each hardware component is given the same reference numeral and its description is omitted. In this embodiment, the sound source device 20A is an example of the indoor sound environment generation apparatus.
The flash memory 24 of the sound source device 20A further stores a program for configuring a position information acquisition unit 75 and an estimation unit 80. The processor 22 of the sound source device 20A configures the position information acquisition unit 75 and the estimation unit 80 with the program read from the flash memory 24. The position information acquisition unit 75 acquires information on the user's position (for example, coordinates in the room). The estimation unit 80 estimates the user's physical and mental state based on the biological information. For example, the estimation unit 80 determines that the user is in an excited state when biological information such as the pulse, respiration, body movement, brain waves, or blood pressure shows high values (at or above a predetermined threshold). The estimation unit 80 determines that the user is falling asleep when such biological information shows low values (at or below a predetermined threshold) and these values decrease further over time. The estimation unit 80 determines that the user is in a sleep state when it detects brain waves corresponding to REM sleep or non-REM sleep. The estimation unit 80 determines that the user is waking up when the biological information shows low values (at or below a predetermined threshold) and these values rise over time.
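A minimal sketch of this threshold-and-trend classification, assuming normalized biometric values and arbitrary thresholds (real thresholds would be calibrated per signal and per user):

```python
# Sketch: rule-based state estimation from a window of normalized (0..1)
# biometric samples. Labels and thresholds are illustrative assumptions.

def estimate_state(samples: list[float], high_threshold: float = 0.7,
                   low_threshold: float = 0.3) -> str:
    latest, first = samples[-1], samples[0]
    if latest >= high_threshold:
        return "excited"
    if latest <= low_threshold and latest < first:
        return "falling_asleep"   # low and still decreasing
    if latest <= low_threshold and latest > first:
        return "waking_up"        # low but rising again
    return "neutral"

print(estimate_state([0.4, 0.35, 0.28]))  # falling_asleep
print(estimate_state([0.15, 0.2, 0.27]))  # waking_up
```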
FIG. 10 is a flowchart showing the operation of the sound source device 20A. First, the biological information acquisition unit 30 acquires the biological information via the communication unit 21, and the position information acquisition unit 75 acquires the position information of the user E (S21).
The position information acquisition unit 75 acquires the position information via, for example, the sensor 12 or the sensor 13. The sensor 12 is laid under the pillow, and the sensor 13 is laid on the bed. Therefore, when the position information acquisition unit 75 acquires biological information from the sensor 12 or the sensor 13, it determines that the user E is in the bed 5 and that the head is at the position of the pillow. The position information is expressed, for example, as coordinates in a plan view of the room. For example, the user E inputs the coordinates of the sensor 12, the sensor 13, and the array speaker 50 in advance via the user I/F 26. The position information acquisition unit 75 thus holds the coordinates of the sensor 12, the sensor 13, and the array speaker 50 in advance.
The position information acquisition unit 75 may also acquire the position information via the sensor 11 worn by the user. The sensor 11 transmits, for example, a Bluetooth (registered trademark) beacon signal. The position information acquisition unit 75 measures the distance to the sensor 11 based on the received signal strength of the beacon signal. Since the received signal strength is inversely proportional to the square of the distance, it can be converted into information on the distance between the sensor 11 and the sound source device 20A. If the position information acquisition unit 75 obtains three or more such distances, it can uniquely identify the position of the sensor 11. For example, the array speaker 50 may receive the beacon signal of the sensor 11, and the position information acquisition unit 75 may receive information on the received signal strength of the beacon signal from the array speaker 50. The user E may also place a plurality of terminals in the room for receiving the beacon signal. In this case, the position information acquisition unit 75 receives information on the received signal strength of the beacon signal from the plurality of terminals.
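Under a free-space path-loss assumption, the RSSI-to-distance conversion and the three-distance position fix can be sketched as follows; the reference power and path-loss exponent are illustrative, not values from this disclosure:

```python
# Sketch: RSSI-based ranging plus linearized least-squares trilateration
# from three fixed receivers in a 2-D room plan.

import numpy as np

def rssi_to_distance(rssi_dbm: float, tx_power_at_1m: float = -59.0,
                     path_loss_exp: float = 2.0) -> float:
    return 10 ** ((tx_power_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Position fix from >= 3 anchors by subtracting the first circle
    equation from the others, which linearizes the system."""
    x0, y0 = anchors[0]
    d0 = dists[0]
    A, b = [], []
    for (x, y), d in zip(anchors[1:], dists[1:]):
        A.append([2 * (x - x0), 2 * (y - y0)])
        b.append(d0**2 - d**2 + x**2 + y**2 - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # receiver positions
dists = np.array([rssi_to_distance(r) for r in (-65.0, -72.0, -70.0)])
print(trilaterate(anchors, dists))  # estimated (x, y) in the room
```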
The position information acquisition unit 75 may also identify the position of the user E by using a temperature sensor. For example, when the position information acquisition unit 75 detects an object at about 36 degrees Celsius via the temperature sensor, it may determine that the object is the user E and acquire the coordinates of the object as the position information.
Next, the sound source device 20A acquires a sound signal (S22). The processor 22 acquires the sound signal through the biological information acquisition unit 30, the sound source unit 40, the control unit 140, and the reading unit 145, as in the functional block diagram of FIG. 3. However, the sound source device 20A according to the second embodiment does not need to acquire the sound signal in the manner shown in the first embodiment. For example, the sound source device 20A may acquire a sound signal by reading specific content stored in the flash memory 24. The sound source device 20A may also acquire a sound signal by receiving specific content from an information processing terminal such as a smartphone owned by the user, or from another device such as a server.
Then, the control unit 140 performs sound image localization processing based on the biological information and the position information (S23). The sound image localization processing is, for example, processing that controls directivity by controlling the volume and the emission timing of the sound signals supplied to the plurality of speakers of the array speaker 50.
For example, the control unit 140 directs the sound output by the array speaker 50 toward the position of the user E, as shown in FIG. 11, based on the position information. The control unit 140 further controls the directivity based on the biological information. For example, even when the user E is at the position of the bed 5, the control unit 140 outputs the sound of content such as music to the entire room if the estimation unit 80 determines that the user E is awake. Alternatively, when the user E is somewhere other than the bed 5, the control unit 140 may output the sound of content such as music to the entire room. Further, even when the user E is somewhere other than the bed 5, the control unit 140 may output an awakening sound if the estimation unit 80 determines that the user E is falling asleep.
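A minimal sketch of this decision logic, with hypothetical state labels and a "room_wide" sentinel standing in for unfocused output (both assumptions):

```python
# Sketch: choose what to play and where to aim it from the estimated
# state and the position fix, mirroring the behavior described above.

def choose_output(state: str, in_bed: bool, user_pos):
    if in_bed and state == "awake":
        return ("music", "room_wide")          # play content to the room
    if not in_bed and state == "falling_asleep":
        return ("awakening_sound", user_pos)   # wake the user where they are
    if in_bed:
        return ("sleep_inducing_sound", user_pos)
    return ("music", "room_wide")

print(choose_output("falling_asleep", in_bed=False, user_pos=(1.0, 2.0)))
# ('awakening_sound', (1.0, 2.0))
```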
In this way, the control unit 140 performs sound image localization processing with controlled directivity. The control unit 140 then outputs the sound signal to the array speaker 50 via the audio I/F 27 (S24). The control unit 140 may output the sound signal after adjusting the volume and the emission timing for the array speaker 50, or it may output the sound signal of each channel together with information indicating the volume and emission timing (sound image localization information) to the array speaker 50. In the latter case, the array speaker 50 adjusts the volume and the emission timing itself. The control unit 140 may also output the sound signal together with information for controlling the directivity (for example, coordinates indicating the direction of the sound) to the array speaker 50. In that case, the array speaker 50 calculates the volume adjustment amounts and the emission timing adjustment amounts.
Next, FIG. 12 is a flowchart showing the operation of a sound source device 20A according to a modified example of the second embodiment. Operations common to FIG. 10 are given the same reference numerals, and their description is omitted.
The control unit 140 of the sound source device 20A according to the modified example further identifies the user (S200). In this case, the control unit 140 functions as an identification unit that identifies the user. The control unit 140 performs the sound image localization processing based on the identification result in addition to the biological information and the position information. For example, the control unit 140 controls the directivity so that the various sounds reach only a specific user.
For example, as shown in FIG. 13, the control unit 140 directs the sound output by the array speaker 50 toward the position of a specific user E2. As a result, the user E1 can hardly hear the sound output by the array speaker 50. For example, when the user E1 sets a wake-up time of 9:00 a.m. and the user E2 sets a wake-up time of 8:00 a.m., the control unit 140 outputs an awakening sound to the user E2 at 8:00 a.m. In this case, only the user E2 hears the awakening sound, without disturbing the sleep of the user E1.
The sound source device 20A may also acquire a cancel sound and perform processing that localizes the cancel sound to persons other than the specific user. For example, as shown in FIG. 14, the control unit 140 outputs a sound beam B1 carrying the awakening sound to the specific user E2, and outputs a sound beam B2 carrying the cancel sound to the other user E1. The cancel sound is a sound in antiphase to the sound beam B1 of the awakening sound. The sound beam B2 of the cancel sound can therefore cancel the awakening sound leaking from the sound beam B1. Accordingly, the awakening sound can be heard only by the user E2, with even less disturbance to the sleep of the user E1.
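A minimal sketch of pairing the awakening beam with an antiphase cancel beam, reusing the steering_delays and steer helpers from the earlier array sketch; note that effective cancellation at the listener also requires matched level and propagation delay, which this sketch does not model:

```python
# Sketch: awakening-sound beam B1 toward E2 plus an antiphase cancel
# beam B2 toward E1. Reuses steer() from the delay-and-sum sketch above.

import numpy as np

def make_beams(awakening: np.ndarray, delays_to_e2: np.ndarray,
               delays_to_e1: np.ndarray) -> np.ndarray:
    beam_b1 = steer(awakening, delays_to_e2)     # awakening sound toward E2
    beam_b2 = steer(-awakening, delays_to_e1)    # phase-inverted copy toward E1
    n = max(beam_b1.shape[1], beam_b2.shape[1])
    pad = lambda b: np.pad(b, ((0, 0), (0, n - b.shape[1])))
    return pad(beam_b1) + pad(beam_b2)           # summed per-driver feeds
```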
The description of the present embodiments is illustrative in all respects and should not be considered restrictive. The scope of the present invention is defined not by the above embodiments but by the claims. Furthermore, the scope of the present invention includes the scope of equivalents of the claims.
For example, the above embodiment showed control of the directivity of an array speaker as an example of the sound image localization processing. However, the sound image localization processing can also be performed, for example, by physically changing the emission direction of a directional speaker with a motor or the like. It can also be performed by placing a plurality of speakers in the room and outputting sounds such as the awakening sound only from the speaker closest to the specific user. The sound image localization processing may also be, for example, processing that convolves a head-related transfer function with the sound signal.
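For the head-related transfer function variant, a minimal sketch of binaural rendering by convolution, using placeholder two-tap impulse responses in place of measured HRIRs:

```python
# Sketch: binaural localization by convolving a mono signal with a pair
# of head-related impulse responses. The two-tap IRs are placeholders;
# real HRIRs are measured, direction-dependent filters.

import numpy as np

def localize_binaural(mono: np.ndarray, hrir_l: np.ndarray,
                      hrir_r: np.ndarray) -> np.ndarray:
    left = np.convolve(mono, hrir_l)
    right = np.convolve(mono, hrir_r)
    return np.stack([left, right])  # 2 x N stereo signal

sig = np.random.randn(48_000)
out = localize_binaural(sig, hrir_l=np.array([1.0, 0.3]),
                        hrir_r=np.array([0.6, 0.2]))
```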
E, E1, E2 … user
1, 1A … sound reproduction system
5 … bed
11, 12, 13 … sensor
20, 20A … sound source device
21 … communication unit
22 … processor
23 … RAM
24 … flash memory
25 … display
26 … user I/F
27 … audio I/F
30 … biological information acquisition unit
40 … sound source unit
50 … array speaker
51, 52 … speaker
70 … control table
75 … position information acquisition unit
80 … estimation unit
140 … control unit
145 … reading unit
410, 420, 430, 440 … sound source unit

Claims (20)

  1.  An indoor sound environment generation apparatus comprising:
      a sound signal acquisition unit that acquires a sound signal;
      a sound signal output unit that outputs the sound signal;
      a biological information acquisition unit that acquires biological information of a user;
      a position information acquisition unit that acquires position information of the user; and
      a control unit that controls sound image localization of the sound signal based on the biological information and the position information.
  2.  The indoor sound environment generation apparatus according to claim 1, further comprising an estimation unit that estimates a physical and mental state of the user based on the biological information,
      wherein the control unit controls the sound image localization based on an estimation result of the estimation unit and the position information.
  3.  The indoor sound environment generation apparatus according to claim 1 or 2, wherein the position information acquisition unit acquires the position information via a sensor worn by the user.
  4.  The indoor sound environment generation apparatus according to any one of claims 1 to 3, wherein the control unit controls the sound image localization by controlling directivity of the sound signal.
  5.  The indoor sound environment generation apparatus according to any one of claims 1 to 4, further comprising an identification unit that identifies the user,
      wherein the control unit further controls the sound image localization based on an identification result of the identification unit.
  6.  The indoor sound environment generation apparatus according to claim 5, wherein the sound signal acquisition unit acquires a cancel sound, and the control unit localizes the cancel sound to a person other than the user.
  7.  A sound source apparatus comprising:
      a biological information acquisition unit that acquires biological information of a user;
      a plurality of sound source units;
      a control unit that determines a control table for controlling the plurality of sound source units based on the biological information, and controls the plurality of sound source units based on the determined control table; and
      a reading unit that reads sound sources from the plurality of sound source units according to the control of the control unit.
  8.  The sound source apparatus according to claim 7, wherein the biological information acquisition unit acquires the biological information a plurality of times, and the control unit repeats the determination of the control table each time the biological information acquisition unit acquires the biological information.
  9.  The sound source apparatus according to claim 7 or 8, wherein the control unit selects one control table from a plurality of control tables based on the biological information.
  10. The sound source apparatus according to any one of claims 7 to 9, wherein the sound sources include a sleep-inducing sound or an awakening sound.
  11. An indoor sound environment generation method comprising:
      acquiring a sound signal;
      outputting the sound signal;
      acquiring biological information of a user;
      acquiring position information of the user; and
      controlling sound image localization of the sound signal based on the biological information and the position information.
  12. The indoor sound environment generation method according to claim 11, further comprising estimating a physical and mental state of the user based on the biological information,
      wherein the sound image localization is controlled based on a result of the estimation and the position information.
  13. The indoor sound environment generation method according to claim 11 or 12, wherein the position information is acquired via a sensor worn by the user.
  14. The indoor sound environment generation method according to any one of claims 11 to 13, wherein the sound image localization is controlled by controlling directivity of the sound signal.
  15. The indoor sound environment generation method according to any one of claims 11 to 14, further comprising identifying the user,
      wherein the sound image localization is further controlled based on a result of the identification.
  16. The indoor sound environment generation method according to claim 15, further comprising acquiring a cancel sound and localizing the cancel sound to a person other than the user.
  17. A method of controlling a sound source apparatus, comprising:
      acquiring biological information of a user;
      determining a control table for controlling a plurality of sound source units based on the biological information, and controlling the plurality of sound source units based on the determined control table; and
      reading sound sources from the plurality of sound source units according to the control.
  18. The method of controlling a sound source apparatus according to claim 17, wherein the biological information is acquired a plurality of times, and the determination of the control table is repeated each time the biological information is acquired.
  19. The method of controlling a sound source apparatus according to claim 17 or 18, wherein one control table is selected from a plurality of control tables based on the biological information.
  20. The method of controlling a sound source apparatus according to any one of claims 17 to 19, wherein the sound sources include a sleep-inducing sound or an awakening sound.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/013201 WO2021192072A1 (en) 2020-03-25 2020-03-25 Indoor sound environment generation apparatus, sound source apparatus, indoor sound environment generation method, and sound source apparatus control method

Publications (1)

Publication Number Publication Date
WO2021192072A1 true WO2021192072A1 (en) 2021-09-30

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024009677A1 * 2022-07-04 2024-01-11 Yamaha Corporation Sound processing method, sound processing device, and program
WO2024053123A1 * 2022-09-05 2024-03-14 Panasonic Intellectual Property Management Co., Ltd. Playback system, playback method, and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009510534A * 2005-10-03 2009-03-12 Mysound ApS System for reducing the perception of audible noise for human users
WO2018074224A1 * 2016-10-21 2018-04-26 Daisy Co., Ltd. Atmosphere generating system, atmosphere generating method, atmosphere generating program, and atmosphere estimating system
WO2018079846A1 * 2016-10-31 2018-05-03 Yamaha Corporation Signal processing device, signal processing method and program

Legal Events

121  Ep: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 20926833; Country of ref document: EP; Kind code of ref document: A1.
NENP  Non-entry into the national phase. Ref country code: DE.
122  Ep: PCT application non-entry in European phase. Ref document number: 20926833; Country of ref document: EP; Kind code of ref document: A1.
NENP  Non-entry into the national phase. Ref country code: JP.