WO2008015733A1 - Sound control device, sound control method, and sound control program (Dispositif, procédé et programme de commande sonore) - Google Patents

Sound control device, sound control method, and sound control program (Dispositif, procédé et programme de commande sonore)

Info

Publication number
WO2008015733A1
WO2008015733A1 (PCT/JP2006/315166; JP2006315166W)
Authority
WO
WIPO (PCT)
Prior art keywords
sound
channel
information
volume
data
Prior art date
Application number
PCT/JP2006/315166
Other languages
English (en)
Japanese (ja)
Inventor
Junichi Yoshio
Original Assignee
Pioneer Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Priority to PCT/JP2006/315166 priority Critical patent/WO2008015733A1/fr
Publication of WO2008015733A1 publication Critical patent/WO2008015733A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S2400/00: Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/13: Aspects of volume control, not necessarily automatic, in stereophonic sound systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S3/00: Systems employing more than two channels, e.g. quadraphonic

Definitions

  • The present application belongs to the technical field of sound control devices, sound control methods, and sound control programs, and more specifically to such devices, methods, and programs for controlling the sound emission states of a plurality of channels.
  • The producer creates an acoustic effect that maximizes the value of the content and sets the volume levels and the like so that the created acoustic effect is obtained; the audio and the like are then recorded or distributed as acoustic data.
  • The acoustic characteristics of the space where the content is reproduced include, for example, ambient noise conditions, the size of the space itself, and the sound-absorption characteristics of the walls that form the space.
  • the acoustic characteristics assumed by the content producer at the time of production are usually different from the acoustic characteristics in the space where the content is actually played by the viewer. Furthermore, it is practically difficult for the producer to grasp the degree of the difference in all cases.
  • To address this, the producer models a plurality of types of playback environments and prepares, for each modeled environment, volume-level control information (so-called loudness control information). For example, control information for low-volume playback, to be used when the entire content is played back at a volume lower than the volume intended by the producer; control information to be used when a hearing-impaired person views the content; and control information for specific sounds, to be used when only the volume level of a specific sound (for example, an announcement) within the content is played back above the volume level of the other sounds, are recorded or distributed together with the acoustic data in advance. A player that reproduces acoustic data to which a plurality of types of such control information has been added then selects the optimum control information according to the acoustic characteristics of the playback environment in which the acoustic data is reproduced, and starts playback. As a result, the amplification factor and the like of the amplifier used by the player are controlled automatically based on the content of the selected control information, so that the acoustic data can be reproduced as that control information specifies.
  • Patent Document 1: Japanese Patent Laid-Open No. 2006-42027
  • Patent Document 2: Japanese Translation of PCT Publication No. 2003-524906
  • An object of the present invention is to provide a sound control device, a sound control method, and a sound control program capable of reproducing sound data by the player with good acoustic characteristics.
  • The invention according to claim 1 comprises: acquisition means, such as the first processing unit, for acquiring from the outside sound information comprising channel sound information for each of a plurality of channels, together with channel sound volume information for each channel indicating the volume itself of the sound corresponding to that channel sound information; first control means, such as the second processing unit, for controlling, in common for all the channels and using each item of channel sound volume information, the volume of the sound corresponding to each item of channel sound information; and second control means, such as the second processing unit, for controlling the volume of the sound corresponding to the channel sound information of some of the channels, using the channel sound volume information corresponding to those channels. Each item of channel sound volume information is configured to change the volume of the sound to be played based on a preset human auditory correction characteristic.
  • The invention according to claim 3 comprises: acquisition means, such as the first processing unit, for acquiring from the outside sound information comprising channel sound information for each of a plurality of channels, together with channel sound volume information for each channel indicating the volume itself of the corresponding sound; first control means, such as the second processing unit, for controlling the volume of the corresponding sounds in common for all the channels using the channel sound volume information; second control means, such as the second processing unit, for controlling the volume of the sound corresponding to the channel sound information of some of the channels, using the channel sound volume information corresponding to those channels; and third control means, such as the second processing unit, for further controlling each volume based on the listening environment in which each sound is listened to.
  • The invention according to claim 7 comprises: an acquisition step of acquiring from the outside sound information comprising channel sound information for each of a plurality of channels, together with channel sound volume information for each channel indicating the volume itself of the corresponding sound; a first control step of controlling, in common for all the channels and using each item of channel sound volume information, the volume of the sound corresponding to each item of channel sound information; and a second control step of controlling the volume of the sound corresponding to the channel sound information of some of the channels, using the channel sound volume information corresponding to those channels. Each item of channel sound volume information is configured to change the volume of the corresponding sound based on a preset human auditory correction characteristic.
  • FIG. 1 is a block diagram showing a schematic configuration of a playback system according to an embodiment.
  • FIG. 2 is a diagram illustrating the content of loudness data according to the embodiment.
  • FIG. 3 is a block diagram showing a schematic configuration of a playback apparatus according to the embodiment.
  • FIG. 4 is a flowchart showing processing in the playback apparatus according to the embodiment.
  • In the embodiment described below, content data of content such as a movie is transmitted from a broadcasting station to a playback device via a network such as the Internet or via broadcast radio waves, and the present application is applied to a playback system that plays back the content data in the playback device (that is, presents the movie or the like corresponding to the content data to the viewer).
  • FIG. 1 is a block diagram showing a schematic configuration of the playback system according to the embodiment
  • FIG. 2 is a diagram illustrating the contents of loudness data according to the embodiment.
  • The playback system S includes a broadcasting station B and a playback device P to which a display unit (not shown), such as a liquid crystal display, and a speaker 5 are connected.
  • The content data D created and digitized at the broadcasting station B is transmitted to the playback device P via the network or the broadcast radio waves. The content data D includes image data and acoustic data corresponding to the content itself, such as the movie, and is divided in advance at the broadcasting station B into a plurality of unit content data DD based on the amount of information, the content, and the like.
  • The acoustic data in one unit content data DD includes, for example, a header H, control data C, and audio data AD, which is the substance of the acoustic data, as illustrated in the figure.
  • The header H includes, for example, information identifying the image data to be reproduced simultaneously with the audio data AD included in the same unit content data DD, and time information indicating the playback timing of the audio data AD measured from the start of playback of the content.
  • The control data C includes, in addition to the loudness data LD described in detail later, information indicating attributes such as the number of channels of the audio data AD included in the same unit content data DD.
  • Next, the loudness data LD according to the embodiment, which is included in the control data C, will be described with reference to FIG. 2.
  • The loudness data LD is data indicating, for the audio data AD included in the same unit content data DD, the volume level itself for each preset time division unit (a time division unit set in consideration of perceptual factors and the like; about 10 milliseconds is preferable) and, within each time division unit, for each frequency division unit (a division of the frequency range of the audio data AD into relative bands, for example per decade, rather than bands delimited at absolute frequency values).
  • More specifically, for each channel of the audio data AD included in the same unit content data DD, the variation in the time direction of the volume level (that is, the amplitude) within one time division unit is averaged by a standardized, or arbitrarily preset, method, and the resulting level value is taken as the loudness data LD for that time division unit. This averaging is performed for each preset frequency division unit over the entire frequency band of the audio data AD, and the loudness data LD for each frequency division unit is included in the control data C.
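The time/frequency averaging described above might be sketched as follows. This is an illustrative interpretation, not the patent's actual algorithm: the band edges, the 48 kHz sample rate, and the naive DFT used to estimate per-band magnitude are all assumptions made purely for demonstration.

```python
import math

def loudness_data(samples, sample_rate=48000, frame_ms=10,
                  bands=((0, 500), (500, 4000), (4000, 24000))):
    """Average the signal magnitude within each time division unit
    (~10 ms) and each frequency division band, yielding one level
    value per (time unit, band) pair, as the description suggests."""
    frame_len = sample_rate * frame_ms // 1000
    ld = []  # one list of per-band level values per time division unit
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        levels = []
        for lo, hi in bands:
            mags = []
            for k in range(frame_len // 2):
                if lo <= k * sample_rate / frame_len < hi:
                    # naive DFT magnitude of bin k (illustration only)
                    re = sum(s * math.cos(2 * math.pi * k * n / frame_len)
                             for n, s in enumerate(frame))
                    im = sum(s * math.sin(2 * math.pi * k * n / frame_len)
                             for n, s in enumerate(frame))
                    mags.append(math.hypot(re, im) / frame_len)
            # averaged level value for this (time unit, frequency band)
            levels.append(sum(mags) / len(mags) if mags else 0.0)
        ld.append(levels)
    return ld
```

For a 10 ms frame of a 1 kHz tone, the middle band dominates the resulting level values, which is the behaviour the loudness data LD is meant to capture.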
  • Here, rather than using the level values obtained by the averaging described above directly as the loudness data LD transmitted to the playback device P, they are weighted using a preset correction characteristic representing human auditory correction, for example as shown in FIG. 2. In this way, loudness data LD corrected according to human audibility for each frequency division unit can be transmitted together with the corresponding audio data AD.
  • FIG. 2 will now be described in more detail. FIG. 2 shows the so-called NR (Noise Rating) curves standardized by the ISO (International Organization for Standardization). Each NR curve shows, as a function of frequency, the sound pressure of the audio data AD that is considered optimal when sound is emitted for the application corresponding to that curve's number.
  • For example, the correction characteristic indicated by the number NR30 shows the sound-pressure correction characteristic for each frequency that should be applied when sound corresponding to the audio data AD is emitted in an environment such as a private house or a hospital, while the correction characteristic indicated by the number NR40 shows the characteristic that should be applied in an environment such as a large building, a hall, or a hallway.
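A frequency-dependent weighting of this kind could be applied as below. The dB allowance table is an illustrative stand-in, not the actual ISO NR table; the function name and nearest-band lookup are likewise assumptions for demonstration.

```python
# Hypothetical NR-style weighting: NR curves tolerate more sound pressure
# at low frequencies, so low bands get a larger allowance (dB).
# These numbers are illustrative stand-ins, not the ISO NR30 values.
NR30_ALLOWANCE_DB = {63: 29, 125: 21, 250: 15, 500: 10,
                     1000: 7, 2000: 5, 4000: 3, 8000: 2}

def weighted_level(level_db, band_center_hz, allowance=NR30_ALLOWANCE_DB):
    """Map a raw band level onto a perceptually common scale by
    subtracting the allowance of the nearest tabulated band centre."""
    centre = min(allowance, key=lambda f: abs(f - band_center_hz))
    return level_db - allowance[centre]
```

With such a weighting, equal weighted levels across bands correspond to roughly equal perceived loudness, which is what correcting the loudness data LD "according to human audibility" aims at.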
  • FIG. 3 is a block diagram showing a schematic configuration of the playback device P according to the embodiment
  • FIG. 4 is a flowchart showing processing in the playback device P according to the embodiment.
  • The playback device P described below is assumed to constitute a so-called 5.1-channel surround system.
  • The playback device P, to which the content data D including the loudness data LD is transmitted from the broadcasting station B, comprises a first processing unit 1 as the acquisition means, to which the content data D is input; a control unit 2 and second processing units 3A to 3E, which together function as the first, second, and third control means; amplifiers 4A to 4E; and speakers 5A to 5E constituting the speaker 5 shown in FIG. 1.
  • The second processing units 3A to 3E, the amplifiers 4A to 4E, and the speakers 5A to 5E are each assigned to emit the sound of one channel. More specifically, for example, the second processing unit 3A, the amplifier 4A, and the speaker 5A emit the subwoofer sound of the surround system; the second processing unit 3B, the amplifier 4B, and the speaker 5B emit the front-left sound; the second processing unit 3C, the amplifier 4C, and the speaker 5C emit the front-right sound; and likewise for the second processing unit 3D and the subsequent units.
  • The first processing unit 1, based on a control signal from the control unit 2, separates the audio data AD in the received content data D for each channel, applies preset preprocessing such as amplification (that is, preprocessing common to the audio data AD of all channels), and outputs the result to the corresponding second processing units 3A to 3E as per-channel processing signals. The first processing unit 1 also separates the loudness data LD for each channel from the content data D and outputs it to the control unit 2 as a loudness data signal.
  • Each of the second processing units 3A to 3E, based on a control signal from the control unit 2, applies processing corresponding to its channel to the per-channel processing signal, the loudness data LD of that channel being reflected in the control signal, and outputs the result as an acoustic signal to the corresponding amplifier 4A to 4E while controlling the amplification factor and the like of that amplifier. Each of the amplifiers 4A to 4E amplifies the acoustic signal with the amplification factor indicated by the control signal input to it and outputs the result to the corresponding speaker 5A to 5E.
  • The control unit 2 controls the first processing unit 1 and the second processing units 3A to 3E based on operation signals and the like input from an operation unit (not shown) and on the separated loudness data LD; the control signals it generates include the loudness data LD for each channel.
  • Specifically, so-called normalization processing (step S1), so-called emphasis/de-emphasis processing (step S2), and so-called multi-channel balance processing (step S3) are performed.
  • each processing is executed independently for each channel.
  • For example, when the amplifier 4A is the target of all of the normalization processing, the emphasis/de-emphasis processing, and the multi-channel balance processing, the control signal output to the amplifier 4A has a value obtained by superimposing the results of the three processes; the control signal output to another amplifier, such as the amplifier 4C, is generated likewise from the processes that apply to it.
  • It is then checked whether processing has finished for all time division units; if not (step S4: NO), the process returns to step S1 and the processing of steps S1 to S4 is executed for the next time division unit. When playback of the content data D is finished, the process shown in FIG. 4 is terminated.
  • In the normalization processing of step S1, the same processing is applied to all channels. Specifically, using the loudness data LD of one preset channel in the time division unit concerned, a control signal is generated that controls the input levels to the amplifiers 4A to 4E of the audio data AD of the other channels in that time division unit, or the gains of the amplifiers 4A to 4E themselves.
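The embodiment later suggests using the reciprocal of a loudness value as the common amplification factor, so the normalization step might be sketched as below. The function name and the data layout (one list of per-time-unit levels per channel) are assumptions for illustration.

```python
def normalization_gains(loudness_per_channel, reference=0):
    """Sketch of step S1: for each time division unit, derive one gain
    from a preset reference channel's loudness value (here its
    reciprocal) and apply it to every channel in common."""
    gains = []
    # loudness_per_channel: one list of per-time-unit level values per channel
    for unit_levels in zip(*loudness_per_channel):
        ref = unit_levels[reference]
        gains.append(1.0 / ref if ref > 0 else 1.0)  # common gain per unit
    return gains
```

Because every channel shares the same gain in a given time division unit, the balance between channels is preserved while the reference channel is brought to a nominal level.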
  • The emphasis/de-emphasis processing, unlike the normalization processing described above, is executed only for the channels for which the loudness data LD indicates that it is needed; for those channels, a control signal that sets the gain of the corresponding amplifier 4 is generated using the values described in the loudness data LD.
  • At this time, either a control signal that corrects the gain of the corresponding amplifier 4 linearly according to the content of the loudness data LD is generated, or a control signal that corrects the gain nonlinearly along an NR curve illustrated in FIG. 2 (that is, using human auditory characteristics) is generated.
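The linear/nonlinear gain derivation might look as follows. The soft-knee curve in the nonlinear branch is a stand-in for the NR-curve-shaped correction, whose exact shape the text leaves to FIG. 2; all constants and names here are illustrative assumptions.

```python
def emphasis_gain(loudness, nonlinear=False, knee=0.5, ratio=0.5):
    """Sketch of the per-channel gain correction in step S2.  The linear
    branch offsets the gain directly by the loudness value; the
    nonlinear branch cuts more gently above a knee, standing in for an
    auditory-curve-shaped correction."""
    if not nonlinear or loudness <= knee:
        return 1.0 + (knee - loudness)        # boost quiet, cut loud, linearly
    return 1.0 - (loudness - knee) * ratio    # gentler cut above the knee
```

The nonlinear branch reduces loud passages less aggressively than the linear one, which is the kind of perceptually motivated shaping the NR-curve correction implies.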
  • In the multi-channel balance processing, the installation positions of the speakers 5A to 5E as actually installed are taken into account using a conventional method, such as actually emitting a white noise signal from each of the speakers 5A to 5E, and the control signal that corrects the gain of each amplifier 4 at the time of actual playback is generated using the volume correction coefficient obtained in advance for each channel. Various methods can be used to obtain the volume correction coefficients in advance: for example, the volume balance between the channels may be adjusted manually, or, as described above, the coefficients may be obtained automatically by emitting a test signal such as a white noise signal from the speakers 5A to 5E of the respective channels and detecting it with a microphone.
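One way such per-channel correction coefficients could be derived from a measurement pass is sketched below. The function name and the choice of the loudest channel as the common target are assumptions for illustration; the patent only says the coefficients are obtained in advance.

```python
def balance_coefficients(measured_levels, target=None):
    """Sketch of deriving volume correction coefficients for step S3:
    emit a test signal (e.g. white noise) from each speaker, measure
    its level at the listening position with a microphone, then scale
    every channel toward a common target level."""
    if target is None:
        target = max(measured_levels)      # loudest channel keeps unity gain
    return [target / m if m > 0 else 1.0 for m in measured_levels]
```

Channels whose speakers are farther away (and thus measure quieter) receive coefficients above 1.0, compensating for the non-ideal installation positions.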
  • As described above, according to the embodiment, the volume of the sound corresponding to the audio data AD of each channel is controlled using the loudness data LD for each channel acquired from the broadcasting station B, and is further controlled based on the installation positions of the speakers 5A to 5E. Therefore, even if the listener's listening environment, including the installation positions of the speakers 5A to 5E, is not the arrangement intended by the producer of the audio data AD, the listener can listen to each sound with acoustic characteristics that are corrected well and precisely for each time division unit and each frequency division unit, while the intention of the producer of the sound is still reflected.
  • Moreover, since each item of loudness data LD changes the volume of the corresponding sound based on a preset human auditory correction characteristic, the sounds can be heard by the listener with acoustic characteristics better suited to the listening environment.
  • Furthermore, since the loudness data LD indicates the magnitude itself of each sound for each channel over the entire frequency band, the volume can be controlled accurately over the entire band.
  • In addition, when the volume is controlled in common using the reciprocal of the value of each item of loudness data LD as the amplification factor (see step S1 in FIG. 4), the normalization processing can be executed more effectively and easily.
  • Incidentally, in an ordinary home the arrangement of the speakers 5A to 5E often cannot be the ideal arrangement intended by the content producer. Moreover, although the multi-channel balance of movie content can be modified from the movie-theater mix for home use, the balance produced for movie theaters is often used as it is, which can lead, for example, to the volume of the actors' dialogue being too low relative to the volume of the background sound.
  • In such cases, the first processing unit 1 in the playback device P can also be configured to separate and extract the loudness data corresponding to the audio data from the content data transmitted to the playback device P and to use it for the volume-level normalization processing as in the embodiment described above.
  • In the embodiment described above, the loudness data LD for each channel acquired from the broadcasting station B is used to control the volume of the sound corresponding to the audio data AD of each channel; however, only the normalization processing (see step S1 in FIG. 4) and the emphasis/de-emphasis processing (see step S2 in FIG. 4) may be executed, using loudness data LD that changes the corresponding sound volume based on the human auditory correction characteristic. Even in this case, substantially the same functions and effects as in the embodiment can be obtained.
  • Moreover, instead of transmitting the content data D from the broadcasting station B to the playback device P via a network or broadcast radio waves as in the embodiment, the content data D may be recorded on a recording medium such as a DVD (Digital Versatile Disc), another optical disc, or a hard disc; the playback device P of a person who has acquired the medium can then reproduce the corresponding loudness data and use it for the volume-level correction processing.
  • Furthermore, although the embodiment has been described as transmitting the audio data AD together with the control data C (including the loudness data LD) corresponding to it, the loudness data LD may instead be inserted at a predetermined position at the beginning of the content data D and transmitted. The playback device that receives it then extracts and stores the time table of the content data D, reads out the corresponding loudness data LD as playback of the audio data AD proceeds, and uses it to control the volume level and the like.
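A lookup structure for this variant might be sketched as follows; the class shape and method names are assumptions, since the patent only describes the behaviour (store the head-of-stream loudness data with its timing table, consult it as playback proceeds).

```python
import bisect

class LoudnessTable:
    """Hypothetical store for loudness data LD received at the head of
    the content data D, keyed by playback time."""
    def __init__(self, entries):
        entries = sorted(entries)            # [(time_ms, level_value), ...]
        self.times = [t for t, _ in entries]
        self.levels = [lv for _, lv in entries]

    def at(self, time_ms):
        """Return the loudness entry in effect at the given playback time."""
        i = bisect.bisect_right(self.times, time_ms) - 1
        return self.levels[max(i, 0)]
```

As playback of the audio data AD proceeds, the player would call `at()` with the current playback position to fetch the loudness value governing the volume-level control at that instant.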
  • Further, a program corresponding to the flowchart shown in FIG. 4 may be recorded on an information recording medium such as a flexible disk or a hard disk, or acquired and recorded via the Internet or the like, and read and executed by a general-purpose computer, whereby the computer can function as the control unit 2 and the second processing units 3A to 3E according to the embodiment.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Tone Control, Compression And Expansion, Limiting Amplitude (AREA)
  • Stereophonic System (AREA)

Abstract

The invention concerns a sound control device designed to reproduce the acoustic data of a content for a user with preferable acoustic characteristics even when the user's playback environment is not the one assumed by the content producer. The sound control device comprises: a first processing unit (1) that acquires content data (D) including audio data (AD) for a plurality of channels and loudness data (LD) indicating the sound loudness of each channel; and second processing units (3A to 3E) that perform a normalization process for controlling the sound loudness of each channel in common using each item of loudness data (LD), an emphasis/de-emphasis process for controlling the loudness of the sound corresponding to some of the channels using the loudness data (LD) corresponding to those channels, and a multi-channel balance process for further controlling the loudness of each of the controlled sounds according to a listening environment including the installation positions of the respective speakers (5A to 5E).
PCT/JP2006/315166 2006-07-31 2006-07-31 Sound control device, sound control method, and sound control program WO2008015733A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2006/315166 WO2008015733A1 (fr) 2006-07-31 2006-07-31 Sound control device, sound control method, and sound control program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2006/315166 WO2008015733A1 (fr) 2006-07-31 2006-07-31 Sound control device, sound control method, and sound control program

Publications (1)

Publication Number Publication Date
WO2008015733A1 true WO2008015733A1 (fr) 2008-02-07

Family

ID=38996921

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/315166 WO2008015733A1 (fr) 2006-07-31 2006-07-31 Sound control device, sound control method, and sound control program

Country Status (1)

Country Link
WO (1) WO2008015733A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011521511A (ja) * 2008-04-18 2011-07-21 Sony Ericsson Mobile Communications AB Audio with enhanced augmented reality
US8295504B2 (en) * 2008-05-06 2012-10-23 Motorola Mobility Llc Methods and devices for fan control of an electronic device based on loudness data
JP2016534402A (ja) * 2013-08-28 2016-11-04 Landr Audio Inc. System and method for automatic audio generation using semantic data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03205659A (ja) * 1989-10-27 1991-09-09 Pioneer Electron Corp Digital information signal recording medium and playback apparatus therefor
JP2004056527A (ja) * 2002-07-19 2004-02-19 Pioneer Electronic Corp Frequency characteristic adjustment device and frequency characteristic adjustment method


Similar Documents

Publication Publication Date Title
EP1540988B1 (fr) Intelligent loudspeakers
US6442278B1 (en) Voice-to-remaining audio (VRA) interactive center channel downmix
CN1213556C (zh) Cinema system for processing audio signals in consumer applications and related method
JP2013501969A (ja) Method, system and apparatus
CN101800926A (zh) Audio system and method for controlling the output of the audio system
KR20110069112A (ko) Method of rendering binaural stereo in a hearing aid system, and hearing aid system
US9111523B2 (en) Device for and a method of processing a signal
US20210257983A1 (en) System and method for digital signal processing
JP2015126460A (ja) Source device
US20220345845A1 (en) Method, Systems and Apparatus for Hybrid Near/Far Virtualization for Enhanced Consumer Surround Sound
JP4150749B2 (ja) Stereophonic sound reproduction system and stereophonic sound reproduction device
JP5909100B2 (ja) Loudness range control system, transmission device, reception device, transmission program, and reception program
CN117882394A (zh) Apparatus and method for generating a first control signal and a second control signal by using linearization and/or bandwidth extension
JP2009159312A (ja) Audio playback device, audio playback method, audio playback system, control program, and computer-readable recording medium
US20050047619A1 (en) Apparatus, method, and program for creating all-around acoustic field
JP2007104046A (ja) Acoustic adjustment device
WO2008015733A1 (fr) Sound control device, sound control method, and sound control program
US9485578B2 (en) Audio format
JP4534844B2 (ja) Digital surround system, server device, and amplifier device
JPH10257600A (ja) Multi-channel signal processing method
JP2006279247A (ja) Sound reproduction device
Robjohns Surround sound explained: Part 2
KR102676074B1 (ko) Method and audio device for providing a transparent mode using mixing metadata
CN101212830A (zh) Sound-image expansion device for an audio system
KR20240080841A (ko) Method and audio device for providing a transparent mode using mixing metadata

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06782044

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 06782044

Country of ref document: EP

Kind code of ref document: A1