EP1784049A1 - A method and system for sound reproduction, and a program product - Google Patents

Info

Publication number
EP1784049A1
EP1784049A1
Authority
EP
European Patent Office
Prior art keywords
sound reproduction
mobile terminal
audio content
mapping
leading mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05024347A
Other languages
German (de)
French (fr)
Inventor
Michael Hoeyer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BenQ Corp
Original Assignee
BenQ Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BenQ Corp filed Critical BenQ Corp
Priority to EP05024347A priority Critical patent/EP1784049A1/en
Priority to PCT/EP2006/010704 priority patent/WO2007054285A1/en
Publication of EP1784049A1 publication Critical patent/EP1784049A1/en
Withdrawn legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2205/00 Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
    • H04R2205/024 Positioning of loudspeaker enclosures for spatial sound reproduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/05 Detection of connection of loudspeakers or headphones to amplifiers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07 Applications of wireless loudspeakers or wireless microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/301 Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/308 Electronic adaptation dependent on speaker or headphone connection

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

A method for sound reproduction comprises the steps of:
- a) receiving or generating a mapping (30) from at least two instruments, from at least two frequency ranges, from at least two directional channels (L-FRONT, L-REAR, R-REAR, R-FRONT), or from any combination thereof, to at least two sound reproduction devices (12,13,14,25,11) or to at least one sound reproduction device (12,13,14,25) and a leading mobile terminal (11);
- b) receiving audio content;
- c) using said mapping (30) on the audio content to pass sound information describing an instrument, a frequency range, or a directional channel (L-FRONT,L-REAR, R-REAR, R-FRONT, BASS) from said audio content to a corresponding sound reproduction device (12,13,14,25,11); and
- d) at a leading mobile terminal (11), synchronizing playback on at least one of said sound reproduction devices (12,13,14,25,11) over a wireless local connection.

Description

    Field of the invention
  • The invention relates to methods, program products and systems for sound reproduction.
  • Background art
  • Not only home stereo systems and stereo televisions can be used to reproduce sound - especially music - but also mobile terminals, such as mobile telephones, portable computers, or portable music players that can receive audio content can be used for this purpose.
  • Because the acoustic properties of most mobile terminals have been modest at best, interest in using a mobile terminal for sound reproduction has mostly been limited to the reproduction of human speech. Thanks to recent engineering efforts, it can be expected that coming mobile terminal models will have improved audio properties.
  • Summary of the invention
  • It is an objective of the invention to provide a method, a program product and a system for sound reproduction, with which the user experience of sound reproduction may be further improved.
  • This objective can be met with a method as set out in claim 1, with a program product as set out in claim 9, or with a system as set out in claim 10.
  • The dependent claims describe various advantageous embodiments of the invention.
  • Advantages of the invention
  • If a method for sound reproduction comprises the steps of: a) receiving or generating a mapping from at least two instruments, from at least two frequency ranges, from at least two directional channels, or any combination thereof, to at least two sound reproduction devices or to at least one sound reproduction device and a leading mobile terminal; b) receiving audio content; c) using said mapping on the audio content to pass sound information describing an instrument, a frequency range, or a directional channel from said audio content to a corresponding sound reproduction device; and d) at a leading mobile terminal, synchronizing playback on at least one of the sound reproduction devices over a wireless local connection, then the audio content can be output via the sound reproduction devices, giving the impression that it is being reproduced by a surround speaker set or by a small orchestra.
  • If the method step d) is performed at the leading mobile terminal by transmitting a synchronization signal to at least one of the sound reproduction devices, the user control over the sound reproduction may be improved.
  • If said synchronization signal is transmitted as an optical signal, a gimmick in the form of a light show may be obtained, at least if the optical signal is at least partly within the visible light spectrum. Furthermore, the proper functioning of the sound reproduction devices may be checked in a less complex manner, even if one or more of the sound reproduction devices or the mobile terminal has been muted.
  • When the mapping is automatically adapted responsive to a change in the relative position of the sound reproduction devices from each other or from the leading mobile terminal, or responsive to a change of availability of a sound reproduction device, the user experience may be improved further; if one of the sound reproduction devices is taken away, the missing instrument, frequency range, or directional channel can instead be output by at least one of the other sound reproduction devices.
  • If also the method steps a) to c) are performed at the leading mobile terminal, delays due to communication to and from the network can be minimized.
  • If the method steps a) to c) are performed at a network server, the required computing power at the leading mobile terminal can be reduced.
  • The user experience may also be improved with a system for sound reproduction that comprises not only a leading mobile terminal with a program product adapted to carry out the method according to the invention when executed in a processing unit, but also at least one sound reproduction device with means adapted to receive, from the leading mobile terminal, sound information describing at least one instrument, at least one frequency range, or at least one directional channel, and to receive synchronization information from the leading mobile terminal over wireless local connection means. Especially if at least one of the further sound reproduction devices is a mobile terminal too, the versatility of the mobile terminals can be improved, possibly with a gimmick effect.
  • List of drawings
  • In the following, the invention is described in more detail with reference to embodiments shown in the accompanying drawings in Figures 1 to 4, of which:
    • Figure 1 illustrates the idea of a mobile orchestra;
    • Figure 2 shows the conductor together with sound reproduction devices that together form a mobile orchestra;
    • Figure 3 illustrates how the mapping may be used on audio content; and
    • Figure 4 shows system architecture with a network server for using the mapping on audio content.
  • Same reference numerals refer to similar structural elements throughout the Figures.
  • Detailed description
  • Figure 1 illustrates the idea of a mobile orchestra. Sound reproduction devices 12, 13, 14, of which there may be arbitrarily many, form the mobile orchestra. The leading mobile terminal that conducts the orchestra may only direct the performance of the mobile orchestra or may also be part of it. In the following, the leading mobile terminal is referred to as conductor 11.
  • If the sound reproduction devices 12, 13, 14 are mobile terminals too, the display of each mobile terminal may show graphics illustrating the face of a musician, the face of a musician robot, or an instrument. The same applies to the conductor 11, but preferably a picture of a conductor is shown instead of a musician.
  • Figure 1 shows the conductor 11 from behind, illustrating the principle that if the leading mobile terminal has a display on both sides of its housing, the graphics on the display on the back side of the housing may be different from that on the display on the front side, preferably showing the same object as in the other display but from behind. The same principle can be used to implement graphics on the display or displays of the sound reproduction devices, especially if they are mobile terminals.
  • The looks of the musicians may be changed automatically responsive to the music style. Jazz or Blues musicians may have some characteristics, such as clothing, showing different style or ethnic background than that of musicians playing some other class of music, for example. Sets of images may be mapped to a given music style or performer which may be recognized automatically through a genre or artist or record identifier stored with the audio content.
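The genre-to-image selection could be sketched as a lookup keyed on a genre identifier stored with the audio content. The genre tags, file names, and default choice below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical image sets per music style; names are placeholders.
IMAGE_SETS = {
    "jazz": ["jazz_musician_1.png", "jazz_musician_2.png"],
    "blues": ["blues_musician_1.png", "blues_musician_2.png"],
    "classical": ["classical_musician_1.png", "classical_musician_2.png"],
}

def select_image_set(metadata: dict) -> list:
    """Pick an image set from a genre identifier stored with the audio content."""
    genre = metadata.get("genre", "").lower()
    return IMAGE_SETS.get(genre, IMAGE_SETS["classical"])
```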
  • Figure 2 shows the conductor 11 together with sound reproduction devices 12, 13, 14, 25 that together form the mobile orchestra.
  • The conductor 11 receives or generates a mapping 30 from at least two instruments, from at least two frequency ranges, or from at least two directional channels to at least two sound reproduction devices 12, 13, 14, 25. The mapping 30 may further comprise information to map at least one instrument, at least one frequency range, or at least one directional channel to the conductor 11.
  • The conductor 11 receives audio content 31. Then the conductor 11 uses the mapping 30 on the audio content 31 to pass sound information describing an instrument, a frequency range, or a directional channel from the audio content 31 to a corresponding sound reproduction device 12, 13, 14, 25 or to itself 11. Then the conductor 11 preferably synchronizes playback on the sound reproduction devices 12, 13, 14, 25 over a wireless local connection, possibly with the playback locally on the conductor 11.
  • The individual tracks can be transferred via data cable, IrDA, or Bluetooth from the conductor 11 to each of the sound reproduction devices 12, 13, 14, 25.
  • To keep all instruments, frequency ranges, or directional channels synchronized, the audio content preferably comprises control signals for synchronization. If the conductor 11 detects a control signal (Fig. 3, a synchronization mark), it signals this to the sound reproduction devices 12, 13, 14, 25, preferably by lighting up or flashing a light source, such as an LED flash. The light emitted by the light source is preferably at least partially within the visible light spectrum in order to have a visual effect.
  • The control signals, such as synchronization marks, can be placed in the audio content at constant time intervals, e.g. every 20 milliseconds. If the audio content is constant-bitrate audio content, the use of synchronization marks may not be necessary, since the number of buffers reproduced can then be synchronized with an internal timer at the respective sound reproduction device or in the conductor 11.
  • The sound reproduction devices 12, 13, 14, 25 detect the signaling, e.g. via their light sensors LS, and responsive to the detecting they may discard the rest of the stream buffer and start playing the next buffer which begins with the synchronization signal. By using this kind of synchronization method, the extent of quality degradation, such as jitter, as observed by human listeners can be minimized.
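A minimal sketch of this buffer-discard resynchronization, assuming buffers are queued so that each new buffer begins at a synchronization mark; the `on_sync_signal` trigger stands in for the light-sensor (LS) detection of the conductor's flash, and the class and method names are illustrative, not from the patent:

```python
class DevicePlayer:
    """Minimal sketch of a sound reproduction device's buffer handling
    (illustrative assumption; not the patented implementation)."""

    def __init__(self, buffers):
        self.buffers = list(buffers)  # queued audio buffers, each starting at a sync mark
        self.index = 0                # buffer currently being played
        self.position = 0             # playback position within that buffer

    def play_some(self, n):
        # Advance playback by n samples within the current buffer.
        self.position = min(self.position + n, len(self.buffers[self.index]))

    def on_sync_signal(self):
        # Discard the remainder of the current buffer and start playing the
        # next buffer, which begins with the synchronization signal.
        if self.index + 1 < len(self.buffers):
            self.index += 1
        self.position = 0
```

Because every device jumps to the same mark on the same flash, accumulated drift is bounded by one synchronization interval (e.g. 20 milliseconds).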
  • The mobile orchestra may give better stereo or surround sound than a single sound reproduction device, since the distances between the sound reproduction devices 12, 13, 14, 25 and the conductor 11 can be larger than those of normal wired speakers. Cables between the handset and speakers are not necessary; they can be replaced by wireless communication.
  • Figure 3 illustrates how the mapping may be used on audio content. The mapping 30 is used on the audio content, such as that of an audio file 31, to pass sound information describing at least two instruments, frequency ranges, or directional channels to the corresponding sound reproduction devices.
  • In the example of Figure 2 the mapping 30 is a mapping from four directional channels to four sound reproduction devices 12, 13, 14, 25 and from one frequency range to the conductor 11. In the mapping 30, the directional channels and the sound reproduction devices assigned are L-FRONT (sound reproduction device 12), L-REAR (sound reproduction device 13), R-REAR (sound reproduction device 14), R-FRONT (sound reproduction device 25). The frequency range BASS is assigned to the conductor 11.
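The mapping 30 of this example can be sketched as a simple lookup table. The device identifiers below are illustrative names derived from the reference numerals in the text, and the routing function is a minimal sketch under that assumption:

```python
# Channel-to-device assignment from the Figure 2 example; names are
# illustrative stand-ins for the actual devices.
MAPPING_30 = {
    "L-FRONT": "device_12",
    "L-REAR": "device_13",
    "R-REAR": "device_14",
    "R-FRONT": "device_25",
    "BASS": "conductor_11",
}

def route(sound_info: dict) -> dict:
    """Group per-channel sound information by the target device it maps to."""
    routed = {}
    for channel, samples in sound_info.items():
        routed.setdefault(MAPPING_30[channel], []).append((channel, samples))
    return routed
```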
  • The mapping 30 may be automatically adapted responsive to a change in the relative position of the sound reproduction devices 12, 13, 14, 25 from each other or from the leading mobile terminal 11.
  • Alternatively or in addition, the mapping 30 may be automatically adapted responsive to a change of availability of a sound reproduction device 12, 13, 14, 25. Then if one sound reproduction device disappears, because of an empty battery or because the user of the sound reproduction device takes the sound reproduction device with him or her, the mapping 30 may be modified by mapping the part of audio content to another sound reproduction device or to the conductor instead of the disappeared (or disappearing) sound reproduction device.
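The availability-driven adaptation can be sketched as a remapping step. The fallback-to-conductor policy and the device names below are illustrative assumptions; the patent only requires that the part be mapped to another device or to the conductor:

```python
def remap_on_loss(mapping: dict, lost_device: str, fallback: str = "conductor_11") -> dict:
    """Reassign every part previously mapped to a disappeared device
    (empty battery, device taken away) to a fallback device."""
    return {part: (fallback if device == lost_device else device)
            for part, device in mapping.items()}
```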
  • Figure 4 shows a system architecture with a network server 400 for using the mapping on audio content, such as an audio file 31. The mapping 30 resides in the network server 400, which uses it on the audio file and passes the resulting processed audio file 33, preferably through the Internet, to the conductor 11. The conductor 11 passes the processed audio file 33 as a whole or only partially to the sound reproduction devices 12, 13, 14, 25.
  • As already explained with reference to Figure 2, the audio file 31 may alternatively be converted directly at the conductor 11 from a music file to a desired number of partial audio files to be passed to the desired number of sound reproduction devices.
  • It is also possible that the audio content already provides partial audio files, i.e. tracks. In this case, the mapping 30 preferably comprises a mapping from each of the tracks to at least one sound reproduction device 12, 13, 14, 25 or to the conductor 11.
  • Alternatively or in addition, the audio content, especially the audio file 31, may be in the form of a MIDI file containing information on the sound to be reproduced by different instruments. In this case, the mapping 30 preferably comprises a mapping from each instrument to at least one sound reproduction device (or to the conductor).
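The per-instrument mapping can be sketched as follows. The (instrument, event) tuple format is an illustrative assumption rather than actual MIDI parsing, and the default of routing unmapped instruments to the conductor is likewise an assumption:

```python
def split_by_instrument(events, instrument_mapping: dict) -> dict:
    """Partition instrument-tagged sound events (e.g. parsed from a MIDI
    file's tracks) into one event list per target device."""
    per_device = {}
    for instrument, event in events:
        device = instrument_mapping.get(instrument, "conductor_11")
        per_device.setdefault(device, []).append(event)
    return per_device
```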

Claims (11)

  1. A method for sound reproduction, comprising the steps of:
    - a) receiving or generating a mapping (30) from at least two instruments (11, 12, 13, 14), from at least two frequency ranges, from at least two directional channels (L-FRONT, L-REAR, R-REAR, R-FRONT, BASS), or from any combination thereof, to at least two sound reproduction devices (12, 13, 14, 25, 11) or to at least one sound reproduction device (12, 13, 14, 25) and a leading mobile terminal (11);
    - b) receiving audio content (31);
    - c) using said mapping (30) on the audio content (31) to pass sound information describing an instrument (11, 12, 13, 14), a frequency range, or a directional channel (L-FRONT, L-REAR, R-REAR, R-FRONT, BASS) from said audio content to a corresponding sound reproduction device (12, 13, 14, 25, 11); and
    - d) at a leading mobile terminal (11), synchronizing playback on at least one of said sound reproduction devices (12, 13, 14, 25, 11) over a wireless local connection.
  2. A method according to claim 1, wherein: the method step d) is performed at the leading mobile terminal (11), by transmitting a synchronization signal to at least one of said sound reproduction devices (12, 13, 14, 25).
  3. A method according to claim 2, wherein: said synchronization signal is transmitted as optical signal.
  4. A method according to claim 2 or 3, wherein: the mapping (30) is automatically adapted responsive to a change in the relative position of the sound reproduction devices (12, 13, 14, 25) from each other or from the leading mobile terminal (11), or responsive to a change of availability of a sound reproduction device (12, 13, 14, 25).
  5. A method according to claim 2, 3, or 4, wherein: also the method steps a) to c) are performed at the leading mobile terminal (11).
  6. A method according to claim 2, 3, or 4, wherein: the method steps a) to c) are performed at a network server (400).
  7. A method according to any one of the preceding claims, wherein: said audio content (31) is an audio file.
  8. A method according to claim 7, wherein: said audio content is transmitted as streaming to the leading mobile terminal (11) from a network server (400).
  9. A program product, comprising: software means adapted, when executed in a processing unit, to carry out at least the method steps c) and d) according to any one of the preceding claims.
  10. A system for sound reproduction, comprising:
    - a leading mobile terminal (11) comprising a program product according to claim 9; and
    - at least one sound reproduction device (12, 13, 14, 25) comprising means (LS) adapted to:
    i) receive sound information describing at least one instrument (11, 12, 13, 14), at least one frequency range, or at least one directional channel (L-FRONT, L-REAR, R-REAR, R-FRONT, BASS) from said leading mobile terminal (11); and
    ii) receive synchronization information from said leading mobile terminal (11) over wireless local connection means (LS).
  11. A system according to claim 10, wherein: at least one of the sound reproduction devices (12, 13, 14, 25) is a further mobile terminal.
EP05024347A 2005-11-08 2005-11-08 A method and system for sound reproduction, and a program product Withdrawn EP1784049A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP05024347A EP1784049A1 (en) 2005-11-08 2005-11-08 A method and system for sound reproduction, and a program product
PCT/EP2006/010704 WO2007054285A1 (en) 2005-11-08 2006-11-08 A method and system for sound reproduction, and a program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP05024347A EP1784049A1 (en) 2005-11-08 2005-11-08 A method and system for sound reproduction, and a program product

Publications (1)

Publication Number Publication Date
EP1784049A1 true EP1784049A1 (en) 2007-05-09

Family

ID=36659810

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05024347A Withdrawn EP1784049A1 (en) 2005-11-08 2005-11-08 A method and system for sound reproduction, and a program product

Country Status (2)

Country Link
EP (1) EP1784049A1 (en)
WO (1) WO2007054285A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102009031995A1 (en) * 2009-07-06 2011-01-13 Neutrik Aktiengesellschaft Method for the wireless real-time transmission of at least one audio signal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000076272A1 (en) * 1998-12-03 2000-12-14 Audiologic, Incorporated Digital wireless loudspeaker system
WO2004023841A1 (en) * 2002-09-09 2004-03-18 Koninklijke Philips Electronics N.V. Smart speakers
US20040159219A1 (en) * 2003-02-07 2004-08-19 Nokia Corporation Method and apparatus for combining processing power of MIDI-enabled mobile stations to increase polyphony

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009144537A1 (en) * 2008-05-27 2009-12-03 Sony Ericsson Mobile Communications Ab Apparatus and methods for time synchronization of wireless audio data streams
WO2012098191A1 (en) * 2011-01-19 2012-07-26 Devialet Audio processing device
CN103329570A (en) * 2011-01-19 2013-09-25 帝瓦雷公司 Audio processing device
US10187723B2 (en) 2011-01-19 2019-01-22 Devialet Audio processing device
EP2747441A1 (en) * 2012-12-18 2014-06-25 Huawei Technologies Co., Ltd. Multi-terminal synchronous play control method and apparatus
US9705944B2 (en) 2012-12-18 2017-07-11 Huawei Technologies Co., Ltd. Multi-terminal synchronous play control method and apparatus
EP2804397A1 (en) * 2013-05-15 2014-11-19 Giga-Byte Technology Co., Ltd. Multiple sound channels speaker system

Also Published As

Publication number Publication date
WO2007054285A1 (en) 2007-05-18

Similar Documents

Publication Publication Date Title
CN110692252B (en) Audio-visual collaboration method with delay management for wide area broadcast
US7096080B2 (en) Method and apparatus for producing and distributing live performance
US9602388B2 (en) Session terminal apparatus and network session system
US11399249B2 (en) Reproduction system and reproduction method
Carôt et al. Network music performance-problems, approaches and perspectives
EP2743917B1 (en) Information system, information reproducing apparatus, information generating method, and storage medium
US20220386062A1 (en) Stereophonic audio rearrangement based on decomposed tracks
EP1784049A1 (en) A method and system for sound reproduction, and a program product
US20240129669A1 (en) Distribution system, sound outputting method, and non-transitory computer-readable recording medium
JP2002044778A (en) Microphone, microphone adapter, display device, mixer, public address system and method
WO2018095022A1 (en) Microphone system
US6525253B1 (en) Transmission of musical tone information
Konstantas et al. The distributed musical rehearsal environment
US10863259B2 (en) Headphone set
JP4422656B2 (en) Remote multi-point concert system using network
JP5256682B2 (en) Information processing apparatus, information processing method, and program
KR101657110B1 (en) portable set-top box of music accompaniment
JP2003085068A (en) Live information providing server, information communication terminal, live information providing system and live information providing method
JP7434083B2 (en) karaoke equipment
WO2024100920A1 (en) Information processing device, information processing method, and program for information processing
JP6819236B2 (en) Sound processing equipment, sound processing methods, and programs
JP6834398B2 (en) Sound processing equipment, sound processing methods, and programs
KR20050083389A (en) Apparatus of karaoke based on multi channel and method thereof
WO2018092286A1 (en) Sound processing device, sound processing method and program
CN113068056A (en) Audio playing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

AKX Designation fees paid
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20071110

REG Reference to a national code

Ref country code: DE

Ref legal event code: 8566