CN111565352A - Method performed by computing device, playback device, calibration system and method thereof


Info

Publication number
CN111565352A
Authority
CN
China
Prior art keywords
playback
playback device
zone
audio
audio signal
Prior art date
Legal status
Granted
Application number
CN202010187024.8A
Other languages
Chinese (zh)
Other versions
CN111565352B (en)
Inventor
Timothy Sheen
Simon Jarvis
Current Assignee
Sonos Inc
Original Assignee
Sonos Inc
Priority date
Filing date
Publication date
Priority claimed from US 14/481,505 (granted as US 9,952,825 B2)
Priority claimed from US 14/481,514 (granted as US 9,891,881 B2)
Application filed by Sonos Inc filed Critical Sonos Inc
Publication of CN111565352A
Application granted
Publication of CN111565352B
Legal status: Active


Classifications

    • H04S 7/301: Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H04R 27/00: Public address systems
    • H04R 2227/005: Audio distribution systems for home, i.e. multi-room use
    • H04S 2400/09: Electronic reduction of distortion of stereophonic sound systems
    • H04S 2400/13: Aspects of volume control, not necessarily automatic, in stereophonic sound systems
    • H04S 2400/15: Aspects of sound capture and related signal processing for recording or reproduction
    • H04S 7/305: Electronic adaptation of stereophonic audio signals to reverberation of the listening space

Landscapes

  • Physics & Mathematics
  • Engineering & Computer Science
  • Acoustics & Sound
  • Signal Processing
  • Circuit For Audible Band Transducer
  • Stereophonic System

Abstract

Disclosed are a method performed by a computing device, a playback device, and a calibration system and related methods. The method, performed by a computing device in communication with a media playback system, includes: receiving, from a playback device in a playback zone, data indicative of a second audio signal detected by a microphone while the playback device was playing a first audio signal, wherein the microphone is in the playback device; determining, based on the received data, acoustic characteristics of the playback device from a first database that associates playback device acoustic characteristics with particular playback device models; determining acoustic characteristics of the playback zone by removing the determined acoustic characteristics of the playback device from the second audio signal; and determining an audio processing algorithm based on the determined acoustic characteristics of the playback zone, according to a second database comprising audio processing algorithms associated with respective playback zone acoustic characteristics.

Description

Method performed by computing device, playback device, calibration system and method thereof
This patent application is a divisional of the patent application with international filing date of September 8, 2015 and Chinese national application number 201580047998.3, entitled "Audio Processing Algorithm and Database".
Cross Reference to Related Applications
This application claims priority to U.S. application No. 14/481,505, filed on September 9, 2014, and U.S. application No. 14/481,514, filed on September 9, 2014, both of which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates to consumer products, and more particularly to methods, systems, products, features, services, and other elements related to media playback or some aspect thereof.
Background
Options for accessing and listening to digital audio in an out-loud setting were limited until 2003, when SONOS, Inc. filed one of its first patent applications, entitled "Method for Synchronizing Audio Playback between Multiple Networked Devices," and began offering a media playback system for sale in 2005. The Sonos Wireless HiFi System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a smartphone, tablet, or computer, a person can play the music he or she wants in any room that has a networked playback device. Additionally, using a controller, for example, different songs can be streamed to each room that has a playback device, rooms can be grouped together for synchronous playback, or the same song can be heard in all rooms simultaneously.
Given the growing interest in digital media, there remains a need to develop consumer accessible technologies to further enhance the listening experience.
Drawings
The features, aspects, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings where:
FIG. 1 illustrates an example media playback system configuration in which certain embodiments may be implemented;
FIG. 2 shows a functional block diagram of an example playback device;
FIG. 3 shows a functional block diagram of an example control device;
FIG. 4 illustrates an example controller interface;
FIG. 5 illustrates an example flow diagram of a first method for maintaining a database of audio processing algorithms;
FIG. 6A shows an exemplary portion of a first database of audio processing algorithms;
FIG. 6B shows an exemplary portion of a second database of audio processing algorithms;
FIG. 7 illustrates an example flow chart of a second method for maintaining a database of audio processing algorithms;
FIG. 8 illustrates an example playback zone in which a playback device may be calibrated;
FIG. 9 illustrates an example flow diagram of a first method for determining an audio processing algorithm based on one or more playback zone characteristics;
FIG. 10 illustrates an example flow diagram of a second method for determining an audio processing algorithm based on one or more playback zone characteristics; and
FIG. 11 illustrates an example flow diagram for identifying an audio processing algorithm from a database of audio processing algorithms.
The drawings are for purposes of illustrating example embodiments, and it is to be understood that the invention is not limited to the arrangements and instrumentality shown in the drawings.
Detailed Description
Summary of the invention
When a playback device plays audio content in a playback zone, the quality of playback may depend on the acoustic characteristics of the playback zone. In the discussion herein, a playback zone may include one or more playback devices or groups of playback devices. The acoustic characteristics of a playback zone may depend on factors such as the size of the playback zone, the types of furniture in the playback zone, and the arrangement of the furniture in the playback zone. As such, different playback zones may have different acoustic characteristics. Because a given model of playback device may be used in a variety of playback zones with different acoustic characteristics, a single audio processing algorithm may not provide a consistent quality of audio playback by the playback device in each of those playback zones.
Examples discussed herein relate to determining an audio processing algorithm to be applied by a playback device based on acoustic characteristics of a playback zone in which the playback device is located. The playback device applying the determined audio processing algorithm when playing the audio content in the playback zone may cause the audio content rendered in the playback zone by the playback device to exhibit, at least to some extent, predetermined audio characteristics. In one case, application of the audio processing algorithm may change the audio amplification at one or more audio frequencies of the audio content. Other examples are also possible.
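By way of illustration only, the sketch below applies a frequency-dependent gain of this kind to a block of audio samples in the frequency domain (Python with NumPy). It is a minimal, hypothetical example rather than the disclosed implementation; the function name, band edges, and gain values are invented.

```python
import numpy as np

def apply_band_gains(samples, sample_rate, band_gains_db):
    """Apply per-band gain adjustments (a simple frequency-domain EQ).

    band_gains_db: list of ((low_hz, high_hz), gain_db) tuples.
    """
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    for (low_hz, high_hz), gain_db in band_gains_db:
        band = (freqs >= low_hz) & (freqs < high_hz)
        spectrum[band] *= 10.0 ** (gain_db / 20.0)  # convert dB to a linear factor
    return np.fft.irfft(spectrum, n=len(samples))

# Illustrative values only: boost 60-250 Hz by 3 dB, cut 2-6 kHz by 2 dB.
audio = np.random.randn(48000)  # one second of placeholder audio at 48 kHz
equalized = apply_band_gains(audio, 48000, [((60, 250), 3.0), ((2000, 6000), -2.0)])
```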
In one example, a database of audio processing algorithms may be maintained, and the audio processing algorithms may be identified in the database based on one or more characteristics of the playback zone. The one or more characteristics of the playback zone may include acoustic characteristics of the playback zone and/or one or more of: the size of the playback zone, the flooring and/or wall material of the playback zone, and the number and/or type of furniture in the playback zone, among others.
Maintaining a database of audio processing algorithms may involve determining at least one audio processing algorithm corresponding to one or more characteristics of a playback zone, and adding the determined audio processing algorithm to the database. In one example, the database may be stored on the one or more devices that maintain the database, or on one or more other devices. In the discussion herein, unless otherwise noted, the functions for maintaining the database may be performed by one or more computing devices (e.g., servers), one or more playback devices, or one or more controller devices, among others. For brevity, however, the one or more devices performing these functions are referred to collectively as the computing device.
In one example, determining such an audio processing algorithm may involve the computing device determining an acoustic characteristic of the playback zone. In one case, the playback zone may be a model room that simulates a playback zone in which the playback device is likely to play audio content. In this case, one or more physical characteristics of the model room (e.g., dimensions, flooring and wall materials, etc.) may be predetermined. In another case, the playback zone may be a room in the home of a user of the playback device. In this case, the physical characteristics of the playback zone may be provided by the user or may be unknown.
In one example, the computing device may cause a playback device in the playback zone to play an audio signal. In one case, the played audio signal may include audio content having frequencies that cover substantially the entire frequency range that the playback device is capable of presenting. The playback device may then detect the audio signal using a microphone of the playback device. The microphone of the playback device may be a built-in microphone of the playback device. In one case, the detected audio signal may include a portion corresponding to the played audio signal. For example, the detected audio signal may include a component of the played audio signal that is reflected within the playback zone. The computing device may receive the detected audio signal from the playback device and determine an acoustic response of the playback zone based on the detected audio signal.
The computing device may then determine the acoustic characteristics of the playback zone by removing the acoustic characteristics of the playback device from the acoustic response of the playback zone. The acoustic characteristic of the playback device may be an acoustic characteristic corresponding to a model of the playback device. In one case, the acoustic characteristics corresponding to a model of the playback device may be determined based on the audio signal played and detected in the anechoic chamber by a representative playback device of the model.
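One plausible way to carry out this removal is to treat the zone and device characteristics as frequency-domain transfer functions and divide the measured response by the device's known response, as in the sketch below. The disclosure does not prescribe this particular math; the spectral-division approach and the function names here are assumptions.

```python
import numpy as np

def estimate_zone_response(detected, played, device_response, eps=1e-12):
    """Estimate the playback zone's frequency response from a calibration pass.

    detected:        samples recorded by the playback device's microphone
    played:          the test signal the playback device emitted
    device_response: frequency response of the playback device model (length
                     len(detected)//2 + 1), e.g. measured in an anechoic chamber
    """
    n = len(detected)
    # Overall response = zone response * device response (in the frequency domain).
    overall = np.fft.rfft(detected) / (np.fft.rfft(played, n) + eps)
    # Divide out the device's own coloration, leaving the zone's contribution.
    return overall / (device_response + eps)
```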
The computing device may then determine a corresponding audio processing algorithm based on the determined acoustic characteristics of the playback zone and a predetermined audio characteristic. The predetermined audio characteristic may include a particular frequency equalization that is considered good-sounding. The corresponding audio processing algorithm may be determined such that, when the playback device applies it while playing audio content in the playback zone, the audio content rendered by the playback device in the playback zone exhibits, at least to some extent, the predetermined audio characteristic. For example, if the acoustic characteristics of the playback zone are such that a particular audio frequency is attenuated more than other frequencies, the corresponding audio processing algorithm may include increased amplification at that particular audio frequency. Other examples are also possible.
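A minimal sketch of such a determination, under the assumption that the "algorithm" amounts to per-frequency correction gains, might look as follows. The cap on boosts and cuts is a common practical safeguard, not something the text prescribes.

```python
import numpy as np

def compute_correction_db(zone_response, target_response,
                          max_boost_db=6.0, max_cut_db=12.0):
    """Per-frequency gains (in dB) so that zone * correction approximates the target."""
    error_db = 20.0 * np.log10(
        np.abs(target_response) / (np.abs(zone_response) + 1e-12)
    )
    # Cap boosts/cuts; unbounded inversion of deep room nulls is rarely desirable.
    return np.clip(error_db, -max_cut_db, max_boost_db)
```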
The determined association between the audio processing algorithm and the acoustic characteristics of the playback zone may then be stored as an entry in a database. In some cases, additionally or alternatively, associations between audio processing algorithms and one or more other characteristics of the playback zone may be stored in a database. For example, if the playback zone is a particular size, an association between the audio processing algorithm and the particular room size may be stored in a database. Other examples are also possible.
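As an illustration of what such an entry might look like, the following hypothetical sketch stores associations in a JSON file. The disclosure only requires that the association be stored in a database; the schema, field names, and storage medium here are invented.

```python
import json

def store_association(db_path, zone_acoustics, algorithm, room_size=None):
    """Append one (playback zone characteristics -> algorithm) entry to a JSON file."""
    try:
        with open(db_path) as f:
            db = json.load(f)
    except FileNotFoundError:
        db = []
    entry = {
        "zone_acoustics": zone_acoustics,  # e.g., coarse per-band response in dB
        "algorithm": algorithm,            # e.g., per-band correction gains in dB
    }
    if room_size is not None:
        entry["room_size"] = room_size     # optional additional characteristic
    db.append(entry)
    with open(db_path, "w") as f:
        json.dump(db, f, indent=2)
```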
In one example, the database may be accessible by the computing device to identify audio processing algorithms to be applied by the playback device in the playback zone. In one example, the computing device accessing the database and identifying the audio processing algorithm may be the same computing device that maintains the database as described above. In another example, the computing devices may be different computing devices.
In some cases, accessing the database to identify an audio processing algorithm to be applied by a playback device in a playback zone may be part of a calibration of the playback device. Such a calibration may be initiated by the playback device itself, by a server in communication with the playback device, or by a controller device. In one case, the calibration may be initiated because the playback device is new and the calibration is part of the playback device's initial setup. In another case, the playback device may have been relocated, either within the same playback zone or from one playback zone to another. In yet another case, the calibration may be initiated by a user of the playback device, for example, via a controller device.
In one example, calibration of the playback device may include the computing device prompting a user of the playback device to indicate one or more characteristics of the playback zone, such as an approximate size of the playback zone, flooring or wall material, and a number of pieces of furniture, among others. The computing device may prompt the user via a user interface on the controller device. Based on one or more characteristics of the playback zone provided by the user, audio processing algorithms corresponding to the one or more characteristics of the playback zone may be identified in the database, and thus, the playback device may apply the identified audio processing algorithms when playing audio content in the playback zone.
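A hypothetical lookup along these lines might score stored entries against the user's answers and pick the closest match, as in the sketch below, assuming each stored entry also records the room properties it was derived from. The matching criteria and weights are invented; the text does not specify how a best match is chosen.

```python
def lookup_algorithm(db, size_m2, floor_material, furniture_count):
    """Return the algorithm of the stored entry best matching the user's answers.

    db: list of entries as stored above. The scoring weights below are
    arbitrary placeholders.
    """
    def mismatch(entry):
        score = abs(entry.get("room_size", size_m2) - size_m2)
        score += 0 if entry.get("floor_material") == floor_material else 10
        score += abs(entry.get("furniture_count", furniture_count) - furniture_count)
        return score

    return min(db, key=mismatch)["algorithm"]
```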
In another example, calibration of the playback device may involve determining the acoustic characteristics of the playback zone and identifying a corresponding audio processing algorithm based on those acoustic characteristics. The determination of the acoustic characteristics of the playback zone may be similar to that described above. For instance, the playback device may play a first audio signal in the playback zone in which it is being calibrated, and then detect a second audio signal using its microphone. The acoustic characteristics of the playback zone may then be determined based on the second audio signal. A corresponding audio processing algorithm may be identified in the database based on the determined acoustic characteristics, and the playback device may accordingly apply the identified audio processing algorithm when playing audio content in the playback zone. As indicated above, application of the corresponding audio processing algorithm by the playback device when playing audio content in the playback zone may cause the audio content rendered by the playback device in the playback zone to exhibit, at least to some extent, the predetermined audio characteristic.
Although the discussion of calibration of a playback device discussed above generally includes a database of audio processing algorithms, one of ordinary skill in the art will appreciate that a computing device may determine the audio processing algorithms for a playback zone without accessing the database. For example, instead of identifying the corresponding audio processing algorithm in the database, the computing device may determine the audio processing algorithm by computing the audio processing algorithm based on the acoustic characteristics of the playback zone (from the detected audio signal) and predetermined audio characteristics, similar to that described above with respect to the maintenance and generation of the audio processing algorithm entries of the database. Other examples are also possible.
In one case, the playback device to be calibrated may be one of a plurality of playback devices configured to play audio content synchronously in the playback zone. In this case, determining the acoustic characteristics of the playback zone may also involve the other playback devices playing audio signals in the playback zone. In one example, during determination of the audio processing algorithm, each of the plurality of playback devices in the playback zone may play an audio signal simultaneously, such that the audio signal detected by the microphone of the playback device being calibrated includes portions corresponding to the audio signal it played as well as portions corresponding to the audio signals played by the other playback devices in the playback zone. An acoustic response of the playback zone may be determined based on the detected audio signal, and the acoustic characteristics of the playback zone that includes the other playback devices may be determined by removing the acoustic characteristics of the playback device being calibrated from the acoustic response. The audio processing algorithm may then be calculated based on the acoustic characteristics of the playback zone, or identified in the database, and applied by the playback device.
In another case, two or more of the plurality of playback devices in the playback zone may each have a respective built-in microphone and may each be individually calibrated according to the above description. In one example, the acoustic characteristics of the playback zone may be determined based on a set of audio signals detected by a microphone of each of the two or more playback devices, and an audio processing algorithm corresponding to the acoustic characteristics may be identified for each of the two or more playback devices. Other examples are also possible.
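One simple way to combine the per-device measurements, sketched below, is to average the magnitude responses estimated from each microphone. The combination method is left open by the text, so the averaging here is an assumption.

```python
import numpy as np

def combine_zone_responses(responses):
    """Combine zone-response estimates from several device microphones.

    responses: list of complex frequency-response arrays, one per microphone,
               all of the same length.
    """
    # Average the magnitude responses; phase is discarded in this simple sketch.
    return np.mean([np.abs(r) for r in responses], axis=0)
```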
As described above, the present discussion includes determining the audio processing algorithms to be applied by a playback device based on the acoustic characteristics of the particular playback zone in which the playback device is located. In one aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include: causing a playback device to play a first audio signal in a playback zone; and receiving, from the playback device, data indicative of a second audio signal detected by a microphone of the playback device. The second audio signal includes a portion corresponding to the first audio signal. The functions further include: determining an audio processing algorithm based on the second audio signal and the acoustic characteristics of the playback device; and transmitting data indicative of the determined audio processing algorithm to the playback device.
In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include: causing the first playback device to play the first audio signal in the playback zone and causing the second playback device to play the second audio signal in the playback zone; and receiving data from the first playback device indicative of the third audio signal detected by the microphone of the first playback device. The third audio signal includes: (i) a portion corresponding to a first audio signal; and (ii) a portion corresponding to a second audio signal played by a second playback device. The functions further include: determining an audio processing algorithm based on the third audio signal and the acoustic characteristics of the first playback device; and transmitting data indicative of the determined audio processing algorithm to the first playback device.
In another aspect, a playback device is provided. The playback device includes a processor, a microphone, and a memory having stored thereon instructions executable by the processor to cause the playback device to perform functions. The functions include: playing the first audio signal while in the playback zone; and detecting the second audio signal by the microphone. The second audio signal includes a portion corresponding to the first audio signal. The functions further include: determining an audio processing algorithm based on the second audio signal and the acoustic characteristics of the playback device; and applying the determined audio processing algorithm to audio data corresponding to the media item when the media item is played in the playback zone.
In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include: causing a playback device to play a first audio signal in a playback zone; and receiving data indicative of a second audio signal detected by a microphone of the playback device. The second audio signal includes a portion corresponding to the first audio signal played by the playback device. The functions further include: determining an acoustic characteristic of the playback zone based on the second audio signal and the characteristic of the playback device; determining an audio processing algorithm based on the acoustic characteristics of the playback zone; and causing an association between the audio processing algorithm and the acoustic characteristics of the playback zone to be stored in a database.
In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include: causing a playback device to play a first audio signal in a playback zone; and receiving (i) data indicative of one or more characteristics of the playback zone and (ii) data indicative of a second audio signal detected by a microphone of the playback device. The second audio signal includes a portion corresponding to an audio signal played by the playback device. The functions further include: determining an audio processing algorithm based on the second audio signal and the characteristics of the playback device; and causing an association between the determined audio processing algorithm and at least one of the characteristics of one or more of the playback zones to be stored in a database.
In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include maintaining a database of (i) a plurality of audio processing algorithms and (ii) a plurality of playback zone characteristics. Each of the plurality of audio processing algorithms corresponds to at least one of the plurality of playback zone characteristics. The functions further include: receiving data indicative of one or more characteristics of a playback zone; identifying an audio processing algorithm in the database based on the data; and transmitting data indicative of the identified audio processing algorithm.
While some examples described herein may refer to functions performed by a particular actor, such as a "user," and/or other entity, it should be understood that this is for illustration purposes only. The claims should not be construed as requiring the action of any such example actor unless the claim's own language expressly requires such action. One of ordinary skill in the art will appreciate that the present disclosure includes many other embodiments.
Example work Environment
Fig. 1 illustrates an example configuration of a media playback system 100 in which one or more embodiments disclosed herein may be practiced or implemented. The media playback system 100 as shown is associated with an example home environment having several rooms and spaces, such as, for example, a master bedroom, an office, a dining room, and a living room. As shown in the example of fig. 1, the media playback system 100 includes playback devices 102-124, control devices 126 and 128, and a wired or wireless network router 130.
Further discussions relating to the different components of the example media playback system 100 and how the different components may interact to provide a user with a media experience may be found in the following sections. While discussions herein may generally refer to the example media playback system 100, the technologies described herein are not limited to applications within, among other things, the home environment shown in fig. 1. For example, the technologies described herein may be useful in environments where multi-zone audio may be desired, such as, for example, a commercial setting like a restaurant, mall, or airport, a vehicle like a sports utility vehicle (SUV), bus, or car, a ship or boat, an airplane, and so on.
a.Example playback device
Fig. 2 shows a functional block diagram of an example playback device 200, which example playback device 200 may be configured as one or more of the playback devices 102-124 of the media playback system 100 of fig. 1. The playback device 200 may include a processor 202, software components 204, a memory 206, an audio processing component 208, an audio amplifier 210, a speaker 212, a microphone 220, and a network interface 214 including a wireless interface 216 and a wired interface 218. In one case, the playback device 200 may not include the speaker 212, but may include a speaker interface for connecting the playback device 200 with an external speaker. In another case, the playback device 200 may include neither the speaker 212 nor the audio amplifier 210, but may include only an audio interface for connecting the playback device 200 with an external audio amplifier or audiovisual receiver.
In one example, the processor 202 may be a clock driven computational component configured to process input data according to instructions stored in the memory 206. The memory 206 may be a tangible computer-readable medium configured to store instructions executable by the processor 202. For example, the memory 206 may be a data memory capable of loading one or more of the software components 204 executable by the processor 202 to implement certain functions. In one example, the functions may include the playback device 200 retrieving audio data from an audio source or another playback device. In another example, the functionality may include the playback device 200 sending audio data to another device or playback device on the network. In yet another example, the functionality may include pairing of the playback device 200 with one or more playback devices to create a multi-channel audio environment.
Certain functions may involve the playback device 200 synchronizing playback of audio content with one or more other playback devices. During synchronous playback, a listener preferably will not perceive time-delay differences between playback of the audio content by the playback device 200 and by the one or more other playback devices. U.S. Patent No. 8,234,395, entitled "System and method for synchronizing operations among a plurality of independently clocked digital data processing devices," which is hereby incorporated by reference, provides in more detail some examples of audio playback synchronization among playback devices.
The memory 206 may also be configured to store data associated with the playback device 200, such as one or more zones and/or zone groups of which the playback device 200 is a part, audio sources accessible by the playback device 200, or playback queues with which the playback device 200 (or some other playback device) may be associated. The data may be stored as one or more state variables that are periodically updated and used to describe the state of the playback device 200. The memory 206 may also include data associated with the state of other devices of the media system and sometimes shared between the devices such that one or more of the devices has up-to-date data associated with the system. Other embodiments are also possible.
The audio processing component 208 may include one or more of the following: digital-to-analog converter (DAC), analog-to-digital converter (ADC), audio preprocessing component, audio enhancement component, Digital Signal Processor (DSP), etc. In one implementation, one or more of the audio processing components 208 may be a sub-component of the processor 202. In one example, the audio processing component 208 may process and/or intentionally alter audio content to produce an audio signal. The resulting audio signal may then be provided to an audio amplifier 210 for amplification and playback through a speaker 212. In particular, the audio amplifier 210 may include a device configured to amplify an audio signal to a level for driving one or more of the speakers 212. The speaker 212 may include a separate transducer (e.g., a "driver") or a complete speaker system including a housing with one or more drivers. The particular drivers of the speaker 212 may include, for example, a subwoofer (e.g., for low frequencies), a midrange driver (e.g., for mid-range frequencies), and/or a tweeter (e.g., for high frequencies). In some cases, each transducer of the one or more speakers 212 may be driven by a separate corresponding audio amplifier of the audio amplifier 210. In addition to generating analog signals for playback by the playback device 200, the audio processing component 208 may be configured to process audio content to be sent to one or more other playback devices for playback.
The audio content to be processed and/or played back by the playback device 200 may be received from an external source, such as via an audio line in connection (e.g., an auto-detect 3.5mm audio line in connection) or the network interface 214.
Microphone 220 may include an audio sensor configured to convert detected sound into an electrical signal. The electrical signals may be processed by the audio processing component 208 and/or the processor 202. The microphone 220 may be positioned in one or more orientations at one or more locations on the playback device 200. Microphone 220 may be configured to detect sound in one or more frequency ranges. In one case, one or more of the microphones 220 may be configured to detect sound within a frequency range of audio that the playback device 200 is capable of presenting. In another case, one or more of the microphones 220 may be configured to detect sound within a frequency range that is audible to a person. Other examples are also possible.
The network interface 214 may be configured to facilitate data flow between the playback device 200 and one or more other devices on a data network. Likewise, the playback device 200 may be configured to receive audio content over a data network from one or more other playback devices in communication with the playback device 200, a network device within a local area network, or a source of audio content over a wide area network such as the internet. In one example, audio content and other signals transmitted and received by the playback device 200 may be transmitted in the form of digital packet data that includes an Internet Protocol (IP) based source address and an IP based destination address. In this case, the network interface 214 may be configured to parse the digital packet data so that the playback device 200 properly receives and processes the data destined for the playback device 200.
As shown, the network interface 214 may include a wireless interface 216 and a wired interface 218. The wireless interface 216 may provide network interface functions for the playback device 200 to wirelessly communicate with other devices (e.g., other playback devices, speakers, receivers, network devices, and control devices associated with the playback device 200 within a data network) in accordance with a communication protocol (e.g., any wireless standard, including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication standards, and so on). The wired interface 218 may provide network interface functions for the playback device 200 to communicate over a wired connection with other devices in accordance with a communication protocol (e.g., IEEE 802.3). While the network interface 214 shown in fig. 2 includes both a wireless interface 216 and a wired interface 218, the network interface 214 may in some embodiments include only a wireless interface or only a wired interface.
In one example, the playback device 200 may be paired with one other playback device to play two separate audio components of audio content. For instance, the playback device 200 may be configured to play a left-channel audio component, while the other playback device may be configured to play a right-channel audio component, thereby producing or enhancing a stereo effect of the audio content. Paired playback devices (also referred to as "bonded playback devices") may also play audio content in synchrony with other playback devices.
In another example, the playback device 200 may be sonically consolidated with one or more other playback devices to form a single, consolidated playback device. A consolidated playback device may be configured to process and reproduce sound differently than an unconsolidated playback device or paired playback devices, because a consolidated playback device may have additional speaker drivers through which audio content may be rendered. For instance, if the playback device 200 is a playback device designed to render low-frequency audio content (i.e., a subwoofer), the playback device 200 may be consolidated with a playback device designed to render full-frequency audio content. In this case, the full-frequency playback device, when consolidated with the low-frequency playback device 200, may be configured to render only the mid- and high-frequency components of audio content, while the low-frequency playback device 200 renders the low-frequency component. The consolidated playback device may further be paired with a single playback device or with yet another consolidated playback device.
By way of illustration, SONOS, Inc. presently offers (or has offered) for sale certain playback devices including a "PLAY:1," "PLAY:3," "PLAY:5," "PLAYBAR," "CONNECT:AMP," "CONNECT," and "SUB." Any other past, present, and/or future playback devices may additionally or alternatively be used to implement the playback devices of the example embodiments disclosed herein. Additionally, it is understood that a playback device is not limited to the example shown in fig. 2 or to the SONOS product offerings. For example, a playback device may include a wired or wireless headphone. In another example, a playback device may include or interact with a docking station for personal mobile media playback devices. In yet another example, a playback device may be integral to another device or component, such as a television, a lighting fixture, or some other device for indoor or outdoor use.
b.Example playback zone configuration
Referring again to the media playback system 100 of fig. 1, the environment may have one or more playback zones, each having one or more playback devices. The media playback system 100 may be created with one or more playback zones, after which one or more zones may be added or removed to achieve the example configuration shown in fig. 1. Each area may be named according to different rooms or spaces, such as an office, a bathroom, a master bedroom, a kitchen, a dining room, a living room, and/or a balcony. In one case, the individual playback zones may include multiple rooms or spaces. In another case, a single room or space may include multiple playback zones.
As shown in fig. 1, the balcony, dining room, kitchen, bathroom, office, and bedroom zones each have one playback device, while the living room and master bedroom zones each have multiple playback devices. In the living room zone, the playback devices 104, 106, 108, and 110 may be configured to play audio content in synchrony as individual playback devices, as one or more bonded playback devices, as one or more consolidated playback devices, or any combination thereof. Similarly, in the master bedroom zone, the playback devices 122 and 124 may be configured to play audio content in synchrony as individual playback devices, as a bonded playback device, or as a consolidated playback device.
In one example, each of the one or more playback zones in the environment of fig. 1 may be playing different audio content. For instance, a user may be grilling in the balcony zone and listening to hip-hop music played by the playback device 102, while another user may be preparing food in the kitchen zone and listening to classical music played by the playback device 114. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, the user may be in the office zone, where the playback device 118 is playing the same rock music that is being played by the playback device 102 in the balcony zone. In such a case, the playback devices 102 and 118 may play the rock music in synchrony, so that the user may seamlessly (or at least substantially seamlessly) enjoy the audio content being played out loud while moving between different playback zones. As described in the previously cited U.S. Patent No. 8,234,395, synchronization among playback zones may be achieved in a manner similar to synchronization among playback devices.
As set forth above, the zone configurations of the media playback system 100 may be dynamically modified, and in some embodiments the media playback system 100 supports numerous configurations. For instance, if a user physically moves one or more playback devices to or from a zone, the media playback system 100 may be reconfigured to accommodate the change(s). For example, if the user physically moves the playback device 102 from the balcony zone to the office zone, the office zone may now include both the playback device 118 and the playback device 102. The playback device 102 may be paired or grouped with the office zone and/or renamed, if so desired, via a control device such as the control devices 126 and 128. On the other hand, if one or more playback devices are moved to a particular area in the home environment that is not already a playback zone, a new playback zone may be created for that particular area.
Further, different playback zones of the media playback system 100 may be dynamically combined into zone groups or split up into individual playback zones. For example, the dining room zone and the kitchen zone 114 may be combined into a zone group for a dinner party, such that the playback devices 112 and 114 may render audio content in synchrony. On the other hand, if one user wants to listen to music in the living room space while another user wants to watch television, the living room zone may be split into a television zone including the playback device 104 and a listening zone including the playback devices 106, 108, and 110.
c.Example control device
Fig. 3 shows a functional block diagram of an example control device 300, which may be configured as one or both of the control devices 126 and 128 of the media playback system 100. As shown, the control device 300 may include a processor 302, a memory 304, a network interface 306, a user interface 308, and a microphone 310. In one example, the control device 300 may be a dedicated controller for the media playback system 100. In another example, the control device 300 may be a network device on which media playback system controller application software may be installed, such as an iPhone™, iPad™, or any other smartphone, tablet, or network device (e.g., a networked computer such as a PC or Mac™).
The processor 302 may be configured to perform functions related to facilitating user access, control, and configuration of the media playback system 100. The memory 304 may be configured to store instructions that can be executed by the processor 302 to perform those functions. The memory 304 may also be configured to store media playback system controller application software and other data associated with the media playback system 100 and the user.
Microphone 310 may include an audio sensor configured to convert detected sound into an electrical signal. The electrical signals may be processed by a processor 302. In one case, if control device 300 is a device that may also be used as a means of voice communication or voice recording, one or more of microphones 310 may be microphones to facilitate these functions. For example, one or more of the microphones 310 may be configured to detect sound within a frequency range that can be produced by a person and/or a frequency range that can be heard by a person. Other examples are also possible.
In one example, the network interface 306 may be based on an industry standard (e.g., infrared, radio, wired standards including IEEE 802.3, or wireless standards including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication standards, and so on). The network interface 306 may provide a means for the control device 300 to communicate with other devices in the media playback system 100. In one example, data and information (e.g., state variables) may be communicated between the control device 300 and other devices via the network interface 306. For instance, playback zone and zone group configurations in the media playback system 100 may be received by the control device 300 from a playback device or another network device via the network interface 306, or transmitted by the control device 300 to another playback device or network device via the network interface 306. In some cases, the other network device may be another control device.
Playback device control commands, such as volume control and audio playback control, may also be communicated from the control device 300 to a playback device via the network interface 306. As set forth above, changes to the configuration of the media playback system 100 may also be performed by a user using the control device 300. The configuration changes may include, among others: adding or removing one or more playback devices to or from a zone; adding or removing one or more zones to or from a zone group; forming a bonded or consolidated player; and separating one or more playback devices from a bonded or consolidated player. Accordingly, the control device 300 may sometimes be referred to as a controller, whether the control device 300 is a dedicated controller or a network device on which media playback system controller application software is installed.
The user interface 308 of the control device 300 may be configured to facilitate user access to and control of the media playback system 100 by providing a controller interface, such as the controller interface 400 shown in fig. 4. The controller interface 400 includes a playback control region 410, a playback zone region 420, a playback status region 430, a playback queue region 440, and an audio content sources region 450. The user interface 400 as shown is just one example of a user interface that may be provided on a network device, such as the control device 300 of fig. 3 (and/or the control devices 126 and 128 of fig. 1), and accessed by users to control a media playback system, such as the media playback system 100. Other user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system.
The playback control region 410 may include selectable (e.g., by way of touch or by using a cursor) icons to cause playback devices in a selected playback zone or zone group to play or pause, fast forward, rewind, skip to the next track, skip to the previous track, enter/exit shuffle mode, enter/exit repeat mode, and enter/exit cross-fade mode. The playback control region 410 may also include selectable icons for modifying equalization settings and playback volume, among other possibilities.
The playback zone region 420 may include representations of playback zones within the media playback system 100. In some embodiments, the graphical representations of playback zones may be selectable to bring up additional selectable icons for managing or configuring the playback zones in the media playback system, such as creation of bonded zones, creation of zone groups, separation of zone groups, and renaming of zone groups, among others.
For example, as shown, a "grouping" icon may be arranged in each of the graphical representations of the playback zones. A "grouping" icon provided in the graphical representation of a particular region may optionally bring up an option to select one or more other regions of the media playback system to be grouped with the particular region. Once grouped, the playback devices in a zone that have been grouped with a particular zone will be configured to play audio content in synchronization with one or more playback devices in the particular zone. Similarly, a "grouping" icon may be provided in the graphical representation of the zone group. In this case, the "group" icon may optionally bring up the option of deselecting one or more regions of the regional group to be removed from the regional group. Other interactions and implementations for grouping and ungrouping regions via a user interface, such as user interface 400, are also possible. The representation of the playback zones in playback zone 420 may be dynamically updated as the playback zone or zone group configuration is modified.
The playback status region 430 may include graphical representations of audio content that is presently being played, was previously played, or is scheduled to play next in the selected playback zone or zone group. The selected playback zone or zone group may be visually distinguished on the user interface, such as within the playback zone region 420 and/or the playback status region 430. The graphical representations may include the track name, artist name, album year, track length, and other relevant information useful for the user to know when controlling the media playback system via the user interface 400.
The playback queue region 440 may include graphical representations of audio content in a playback queue associated with the selected playback zone or zone group. In some embodiments, each playback zone or zone group may be associated with a playback queue containing information corresponding to zero or more audio items for playback by the playback zone or zone group. For instance, each audio item in the playback queue may comprise a Uniform Resource Identifier (URI), a Uniform Resource Locator (URL), or some other identifier that may be used by a playback device in the playback zone or zone group to find and/or retrieve the audio item from a local audio content source or a networked audio content source for playback by the playback device.
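As a concrete (and purely hypothetical) illustration of this data model, a queue entry might be as simple as a title plus a URI, as in the following sketch; the field names and example URIs are invented.

```python
from dataclasses import dataclass

@dataclass
class QueueItem:
    """One entry in a zone's playback queue."""
    title: str
    uri: str  # identifier the playback device resolves to fetch the audio

queue = [
    QueueItem("Track A", "file:///shared/music/track-a.flac"),   # local source
    QueueItem("Track B", "https://stream.example.com/track-b"),  # networked source
]
```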
In one example, a playlist may be added to a playback queue, in which case information corresponding to each audio item in the playlist may be added to the playback queue. In another example, audio items in a playback queue may be saved as a playlist. In a further example, a playback queue may be empty, or populated but "not in use," when the playback zone or zone group is playing continuously streaming audio content, such as internet radio that may continue to play until otherwise stopped, rather than discrete audio items that have playback durations. In an alternative embodiment, a playback queue can include internet radio and/or other streaming audio content items and be "in use" when the playback zone or zone group is playing those items. Other examples are also possible.
When playback zones or zone groups are "grouped" or "ungrouped," playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the established zone group may have an associated playback queue that is initially empty, that contains audio items from the first playback queue (such as if the second playback zone was added to the first playback zone), that contains audio items from the second playback queue (such as if the first playback zone was added to the second playback zone), or that contains a combination of audio items from both the first and second playback queues. Subsequently, if the established zone group is ungrouped, the resulting first playback zone may be re-associated with the previous first playback queue, or be associated with a new playback queue that is empty or that contains audio items from the playback queue associated with the established zone group before the zone group was ungrouped. Similarly, the resulting second playback zone may be re-associated with the previous second playback queue, or be associated with a new playback queue that is empty or that contains audio items from the playback queue associated with the established zone group before it was ungrouped. Other examples are also possible.
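The grouping cases described above can be summarized in a small helper, sketched below. The `mode` argument is an invented stand-in for however a real system decides which of the described cases applies (e.g., which zone was added to which).

```python
def group_queues(first_queue, second_queue, mode):
    """Form a zone group's playback queue when two zones are grouped."""
    if mode == "empty":        # group starts with a fresh, empty queue
        return []
    if mode == "first":        # second zone was added to the first
        return list(first_queue)
    if mode == "second":       # first zone was added to the second
        return list(second_queue)
    if mode == "combined":     # audio items from both queues
        return list(first_queue) + list(second_queue)
    raise ValueError(f"unknown mode: {mode}")
```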
Referring again to the user interface 400 of fig. 4, the graphical representation of the audio content in the playback queue region 440 may include a track name, an artist name, a track length, and other relevant information associated with the audio content in the playback queue. In one example, the graphical representation of the audio content may optionally bring up further selectable icons for managing and/or manipulating the playback queue and/or the audio content represented in the playback queue. For example, the represented audio content may be removed from the playback queue, may be moved to a different location in the playback queue, or may be selected to be played immediately, or may be selected to be played after any audio content currently being played, and so forth. The playback queue associated with a playback zone or zone group may be stored in memory on one or more playback devices in the playback zone or zone group, or may be stored in memory on playback devices not in the playback zone or zone group, and/or may be stored in memory on some other designated device.
The audio content source region 450 may include a graphical representation of an alternative audio content source from which audio content may be retrieved and played by a selected playback zone or group of zones. A discussion of audio content sources may be found in the following sections.
d.Example Audio content Source
As previously described, one or more playback devices in a zone or group of zones may be configured to retrieve audio content for playback from various available audio content sources (e.g., according to a corresponding URI or URL of the audio content). In one example, the playback device may retrieve audio content directly from a corresponding audio content source (e.g., a line-in connection). In another example, audio content may be provided to a playback device over a network via one or more other playback devices or network devices.
Example audio content sources may include: a memory of one or more playback devices in a media playback system, such as the media playback system 100 of fig. 1; local music libraries on one or more network devices (e.g., a control device, a network-enabled personal computer, or network-attached storage (NAS)); streaming audio services providing audio content via the internet (e.g., the cloud); or audio sources connected to the media playback system via a line-in connection on a playback device or network device, among other possibilities.
In some embodiments, audio content sources may be added to or removed from a media playback system, such as the media playback system 100 of fig. 1, on a regular basis. In one example, indexing of audio items may be performed whenever one or more audio content sources are added, removed, or updated. Indexing of audio items may involve scanning for identifiable audio items in all folders and directories shared over a network accessible by playback devices in the media playback system, and generating or updating an audio content database containing metadata (e.g., title, artist, album, track length, among others) and other associated information, such as a URI or URL for each identifiable audio item found. Other examples for managing and maintaining audio content sources are also possible.
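A bare-bones sketch of such an indexing pass appears below. It records only file paths and names; reading real tag metadata (title, artist, album, track length) is stubbed out, and the extension list is an assumption for illustration.

```python
import os

AUDIO_EXTENSIONS = {".mp3", ".flac", ".wav", ".m4a"}  # assumed set, for illustration

def index_shared_folders(roots):
    """Scan shared folders for audio items and build a simple metadata index."""
    index = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[1].lower() in AUDIO_EXTENSIONS:
                    path = os.path.join(dirpath, name)
                    # A real indexer would read tags (title, artist, album,
                    # track length) from the file; the file name stands in here.
                    index.append({"uri": "file://" + path, "title": name})
    return index
```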
The above discussion of playback devices, controller devices, playback zone configurations, and media content sources provides but a few examples of operating environments in which the functions and methods described below may be implemented. Other operating environments and configurations of media playback systems, playback devices, and network devices not explicitly described herein are also applicable and suitable for implementation of the described functions and methods.
Maintaining a database of audio processing algorithms
As noted above, some examples discussed herein relate to maintaining a database of audio processing algorithms. In some cases, the maintenance of the database may also include generating and/or updating entries for the audio processing algorithms of the database. Each audio processing algorithm in the database may correspond to one or more characteristics of the playback zone. In one example, the one or more characteristics of the playback zone may include acoustic characteristics of the playback zone. Although the following discussion may generally refer to determining an audio processing algorithm to be stored as an entry in a database, one of ordinary skill in the art will appreciate that a similar function may also be performed to update an existing entry in a database. The database may be accessed to identify an audio processing algorithm to be applied by the playback device when the playback device plays audio content in a particular playback zone.
a.Example database of audio processing algorithms and corresponding playback zone acoustic characteristics
Fig. 5 illustrates an example flow diagram of a method 500 for maintaining a database of audio processing algorithms and playback zone acoustic characteristics. As described above, maintaining the database of audio processing algorithms may include determining the audio processing algorithms to store in the database. The method 500 shown in fig. 5 represents an embodiment of a method that can be implemented in an operating environment including, for example, the media playback system 100 of fig. 1, one or more playback devices 200 of fig. 2, and one or more control devices 300 of fig. 3. In one example, method 500 may be performed by a computing device in communication with a media playback system, such as media playback system 100. In another example, some or all of the functionality of method 500 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more playback devices, and/or one or more controller devices.
The method 500 may include one or more operations, functions, or actions as illustrated by one or more of blocks 502 through 510. Although the blocks are shown in a sequential order, the blocks may also be performed in parallel, and/or in a different order than described herein. Moreover, various blocks may be combined into fewer blocks, separated into additional blocks, and/or removed based on a desired implementation. Additionally, for the method 500 and other processes and methods disclosed herein, a flowchart illustrates the functionality and operation of one possible implementation of the current embodiment. In this regard, each block may represent a module, segment, or portion of program code, which comprises one or more instructions executable by a processor for implementing the specified logical function or step in the process. The program code may be stored on any type of computer readable medium, such as a storage device including a diskette or hard drive, for example.
The computer readable medium may include non-transitory computer readable media, such as computer readable media that store data for short periods of time, such as register memory, processor cache memory, and Random Access Memory (RAM), for example. For example, computer-readable media may also include non-transitory media such as secondary memory or persistent long-term memory, like Read Only Memory (ROM), optical or magnetic disks, compact disk read only memory (CD-ROM). The computer readable medium may also be any other volatile or non-volatile storage system. For example, the computer-readable medium may be considered a computer-readable storage medium or a tangible storage device. Additionally, for the method 500 disclosed herein as well as other processes and methods, each block may represent circuitry wired to perform a particular logical function in the process.
As shown in fig. 5, the method 500 includes: at block 502, the computing device causes the playback device to play a first audio signal in a playback zone; at block 504, receiving data indicative of a second audio signal detected by a microphone of a playback device; at block 506, determining an acoustic characteristic of the playback zone based on the second audio signal and the characteristic of the playback device; at block 508, an audio processing algorithm is determined based on the acoustic characteristics of the playback zone; and at block 510, causing an association between the audio processing algorithm and the acoustic characteristics of the playback zone to be stored in a database.
As previously described, the database can be accessed to identify an audio processing algorithm to be applied by the playback device when playing audio content in the playback zone. Thus, in one example, method 500 may be performed for various different playback zones to build a database of audio processing algorithms corresponding to various different playback environments.
At block 502, the method 500 includes causing a playback device to play a first audio signal in a playback zone. The playback device may be a playback device similar to playback device 200 shown in fig. 2. In one case, the computing device may cause the playback device to play the first audio signal by sending a command to play the first audio signal. In another case, the computing device may also provide the first audio signal to be played to the playback device.
In one example, the first audio signal may be used to determine an acoustic response of the playback zone. Accordingly, the first audio signal may be a test or measurement signal representative of audio content that the playback device might play during regular use by a user. The first audio signal may therefore include audio content having frequencies substantially covering the renderable frequency range of the playback device or the frequency range audible to humans.
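For illustration, one common way to realize such a signal is a logarithmic sine sweep spanning roughly the audible range; the sketch below is an assumption about the signal's form, not the patent's specified test signal:

```python
import numpy as np

def log_sweep(f_start=20.0, f_end=20_000.0, duration=5.0, rate=44_100):
    """Generate a logarithmic (exponential) sine sweep covering roughly
    the human-audible range of 20 Hz to 20 kHz."""
    t = np.arange(int(duration * rate)) / rate
    k = np.log(f_end / f_start)
    # Instantaneous phase of an exponential sweep: the frequency rises
    # from f_start to f_end over the duration.
    phase = 2 * np.pi * f_start * duration / k * (np.exp(t / duration * k) - 1)
    return np.sin(phase)

first_audio_signal = log_sweep()  # f(t) in the notation used below
```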
In one example, the playback zone may represent one of a plurality of playback environments in which the playback device may play audio content during regular use by a user. Referring to fig. 1, the playback zone may represent any of the various rooms and zone groups in the media playback system 100. For example, the playback zone may represent the dining room.
In one case, the playback zone can be a model playback zone constructed to simulate a listening environment in which the playback device can play audio content. In one example, the playback zone may be one of a plurality of playback zones constructed to simulate a plurality of playback environments. For the purpose of populating a database of such audio processing algorithms, multiple playback zones may be constructed. In this case, certain characteristics of the playback zone may be predetermined and/or known. For example, the size of the playback zone, the flooring or wall material of the playback zone (or other characteristics that may affect the audio reflection characteristics of the playback zone), the number of pieces of furniture in the playback zone, or the size and type of pieces of furniture in the playback zone, etc. may be characteristics of the playback zone that may be predetermined and/or known.
In another case, the playback zone may be a room in the home of the user of the playback device. For example, as part of building a database, a user of a playback device (e.g., a client and/or tester) may be invited to use their playback device to perform the functions of method 500 to build the database. In some cases, certain characteristics of the user playback zone may be unknown. In other cases, some or all of certain characteristics of the user playback zone may be provided by the user. The database populated from the functionality of performing method 500 may include entries based on simulated playback zones and/or user playback zones.
While block 502 includes the computing device causing the playback device to play the first audio signal, one of ordinary skill in the art will appreciate that playback of the first audio signal by the playback device is not necessarily caused or initiated by the computing device. For example, the controller device may send a command to the playback device to cause the playback device to play the first audio signal. In another example, the playback device may play the first audio signal without receiving a command from the computing device or the controller. Other examples are also possible.
At block 504, the method 500 includes receiving data indicative of a second audio signal detected by a microphone of a playback device. As described above, the playback device may be a playback device similar to the playback device 200 shown in fig. 2. Thus, the microphone may be the microphone 220. In one example, a computing device may receive data from a playback device. In another example, the computing device may receive the data via another playback device, a controller device, or another server.
The second audio signal may be detected by a microphone of the playback device while the playback device is playing the first audio signal, or shortly thereafter. The second audio signal may comprise a detectable audio signal present in the playback zone. For example, the second audio signal may include a portion corresponding to the first audio signal played by the playback device.
In one example, the computing device may receive data indicative of the detected second audio signal as a media stream from the playback device at the same time that the microphone detects the second audio signal. In another example, the computing device may receive data indicative of the second audio signal from the playback device once detection of the first audio signal by a microphone of the playback device is complete. In either case, the playback device may process the detected second audio signal (via an audio processing component, e.g., audio processing component 208 of playback device 200) to generate data indicative of the second audio signal, and transmit the data to the computing device. In one example, generating the data indicative of the second audio signal may include converting the second audio signal from an analog signal to a digital signal. Other examples are also possible.
At block 506, the method 500 includes determining an acoustic characteristic of the playback zone based on the second audio signal and the characteristic of the playback device. As described above, the second audio signal may include a portion corresponding to the first audio signal played by the playback device in the playback zone.
The characteristics of the playback device may include one or more of the following: the acoustic characteristics of the playback device, the specifications of the playback device (i.e., number of transducers, frequency range, amplifier wattage, etc.), and the model of the playback device. In some cases, the acoustic characteristics of the playback device and/or the specifications of the playback device may be associated with a model of the playback device. For example, a particular model of playback device may have substantially the same specifications and acoustic characteristics. In one example, a database of models of playback devices, acoustic characteristics of models of playback devices, and/or specifications of models of playback devices may be maintained on a computing device or another device in communication with the computing device.
In one example, the acoustic response from the playback device playing the first audio signal in the playback zone may be represented by a relationship between the first audio signal and the second audio signal. Mathematically, if the first audio signal is f(t), the second audio signal is s(t), and the acoustic response of the playback device playing the first audio signal in the playback zone is h_r(t), then

s(t) = f(t) ⊗ h_r(t)    (1)

where ⊗ represents the mathematical convolution operation. Thus, given the second audio signal s(t) detected by the microphone of the playback device and the first audio signal f(t) played by the playback device, h_r(t) can be calculated.
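Numerically, equation (1) can be inverted by frequency-domain deconvolution, since convolution in time corresponds to multiplication in frequency. A minimal sketch, assuming f(t) and s(t) are sampled at the same rate and ignoring the noise handling and regularization a practical calibration would need:

```python
import numpy as np

def estimate_response(f, s):
    """Estimate h_r(t) from the played signal f(t) and the detected
    signal s(t): since s = f (convolved with) h_r, in the frequency
    domain S = F * H_r, so H_r = S / F."""
    n = len(f) + len(s) - 1           # full linear-convolution length
    F = np.fft.rfft(f, n)
    S = np.fft.rfft(s, n)
    eps = 1e-12                       # guard against division by zero
    H_r = S / (F + eps)
    return np.fft.irfft(H_r, n)
```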
In one case, since the first audio signal f(t) is played by the playback device, the acoustic response h_r(t) may include (i) an acoustic characteristic of the playback device and (ii) an acoustic characteristic of the playback zone that is independent of the playback device. Mathematically, this relationship can be expressed as

h_r(t) = h_p(t) + h_room(t)    (2)

where h_p(t) is the acoustic characteristic of the playback device and h_room(t) is the acoustic characteristic of the playback zone, independent of the playback device. Thus, the acoustic characteristic of the playback zone that is independent of the playback device may be determined by removing the acoustic characteristic of the playback device from the acoustic response of the playback zone to the first audio signal played by the playback device. In other words,

h_room(t) = h_r(t) − h_p(t)    (3)
In one example, the acoustic characteristic h_p(t) of the playback device may be determined by the following steps: placing the playback device, or a representative playback device of the same model, in an anechoic chamber; causing the playback device to play a measurement signal in the anechoic chamber; and detecting the response signal with a microphone of the playback device. The measurement signal played by the playback device in the anechoic chamber may be similar to the first audio signal f(t) discussed above. For example, the measurement signal may include audio content with frequencies substantially covering the renderable frequency range of the playback device or the frequency range audible to humans.
The acoustic characteristic h_p(t) of the playback device may represent a relationship between the played measurement signal and the detected response signal. For example, if the measurement signal has a first signal magnitude at a particular frequency and the detected response signal has a second, different signal magnitude at that frequency, the acoustic characteristic h_p(t) may indicate signal amplification or attenuation at that frequency.
Mathematically, if the measurement signal is x(t), the detected response signal is y(t), and the acoustic characteristic of the playback device in the anechoic chamber is h_p(t), then

y(t) = x(t) ⊗ h_p(t)    (4)

Accordingly, h_p(t) can be calculated based on the measurement signal x(t) and the detected response signal y(t). As described above, h_p(t) may be a representative acoustic characteristic for playback devices of the same model as the playback device used in the anechoic chamber.
In one example, as described above, the acoustic characteristic h_p(t) may be stored in association with the model and/or the specifications of the playback device. In one example, h_p(t) may be stored on the computing device. In another example, h_p(t) may be stored on the playback device and on other playback devices of the same model. In another case, the inverse of h_p(t), denoted h_p^-1(t), may be stored instead of h_p(t).
Referring again to block 506, the acoustic characteristic h_room(t) of the playback zone may accordingly be determined based on the first audio signal f(t), the second audio signal s(t), and the acoustic characteristic h_p(t) of the playback device. In one example, the inverse h_p^-1(t) of the acoustic characteristic of the playback device may be applied, where the inverse satisfies

h_p(t) ⊗ h_p^-1(t) = I(t)    (5)

in which I(t) is an impulse signal. Combining equations (1) and (3), the acoustic characteristic of the playback zone h_room(t) can then be simplified as

h_room(t) = s(t) ⊗ f^-1(t) − h_p(t)    (6)
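Following equations (3) and (6), a sketch of the room estimate, reusing the estimate_response helper above and assuming h_p was sampled at the same rate and is no longer than the estimated h_r:

```python
import numpy as np

def estimate_room(f, s, h_p):
    """h_room(t) = s(t) (deconvolved by) f(t) minus h_p(t), per
    equation (6)."""
    h_r = estimate_response(f, s)                 # h_r(t) = s deconv f
    h_p = np.pad(h_p, (0, len(h_r) - len(h_p)))   # align lengths
    return h_r - h_p
```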
at block 506, the method 500 includes determining an audio processing algorithm based on the acoustic characteristics of the playback zone and the predetermined audio signal. In one example, an audio processing algorithm may be determined such that: when the playback device plays the first audio signal in the playback zone, the playback device applying the determined audio processing algorithm may generate a third audio signal having audio characteristics that are substantially the same as, or at least exhibit to some extent, the predetermined audio characteristics.
In one example, the predetermined audio characteristic may be an audio frequency equalization that is considered pleasing to the ear. In one case, the predetermined audio characteristic may include an equalization that is substantially uniform across the renderable frequency range of the playback device. In another case, the predetermined audio characteristic may include an equalization deemed pleasing to a typical listener. In yet another case, the predetermined audio characteristic may include a frequency response deemed suitable for a particular genre of music.
In either case, the computing device may determine the audio processing algorithm based on the acoustic characteristics and the predetermined audio characteristics. In one example, if the acoustic characteristic of the playback zone is an acoustic characteristic in which a particular audio frequency is attenuated more than other frequencies, and the predetermined audio characteristic includes an equalization in which the particular audio frequency is minimally attenuated, the corresponding audio processing algorithm may include increased amplification at the particular audio frequency.
If the predetermined audio characteristic is represented by a predetermined audio signal z(t) and the audio processing algorithm is represented by p(t), the relationship among the predetermined audio signal z(t), the audio processing algorithm p(t), and the acoustic characteristic h_room(t) of the playback zone can be described mathematically as

z(t) = p(t) ⊗ h_room(t)    (7)

Thus, the audio processing algorithm p(t) can be described mathematically as

p(t) = z(t) ⊗ h_room^-1(t)    (8)
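Equation (8) likewise reduces to a frequency-domain division. The sketch below adds a small regularization term, an assumption standing in for the more careful inversion a shipping calibration would use:

```python
import numpy as np

def derive_processing_algorithm(z, h_room, eps=1e-8):
    """Compute p(t) = z(t) convolved with h_room^-1(t): the filter
    that, applied ahead of the room, yields the predetermined
    characteristic z(t)."""
    n = len(z) + len(h_room) - 1
    Z = np.fft.rfft(z, n)
    H = np.fft.rfft(h_room, n)
    # Regularized inverse of the room response.
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(Z * H_inv, n)
```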
in some cases, determining the audio processing algorithm may include determining one or more parameters of the audio processing algorithm (i.e., coefficients of p (t)). For example, the audio processing algorithm may include certain signal amplification gains at certain respective frequencies of the audio signal. Thus, parameters indicative of certain signal amplifications and/or certain corresponding frequencies of the audio signal may be identified to determine the audio processing algorithm p (t).
At block 510, the method 500 includes causing an association between the audio processing algorithm and the acoustic characteristic of the playback zone to be stored in a database. Thus, an entry including the acoustic characteristic h_room(t) of the playback zone as determined at blocks 504 and 506, together with the corresponding audio processing algorithm p(t) determined at block 508, may be added to the database. In one example, the database may be stored on local storage of the computing device. In another example, if the database is stored on another device, the computing device may transmit the audio processing algorithm and the acoustic characteristic of the playback zone to that device for storage in the database. Other examples are also possible.
As described above, the playback zone for which the audio processing algorithm is determined may be a model playback zone for simulating a listening environment in which the playback device may play audio content or a room of a user of the playback device. In some cases, the database may include entries generated based on audio signals played and detected in the model playback zone and entries generated based on audio signals played and detected in a room of a user of the playback device.
Fig. 6A shows an example portion of a database 600 of audio processing algorithms in which the audio processing algorithm p(t) determined in the discussion above may be stored. As shown, the portion of the database 600 may include a plurality of entries 602 to 608. The entry 602 may include a playback zone acoustic characteristic h_room^-1(t)-1. As described above, h_room^-1(t)-1 may be a mathematical representation of the acoustic characteristic of a playback zone, calculated based on the audio signal detected by the playback device and the characteristics of the playback device. Also as described above, the entry 602 may associate with h_room^-1(t)-1 the coefficients w_1, x_1, y_1, and z_1 of the audio processing algorithm determined based on h_room^-1(t)-1 and the predetermined audio characteristic.

As further shown, the entry 604 of the database 600 may include a playback zone acoustic characteristic h_room^-1(t)-2 and processing algorithm coefficients w_2, x_2, y_2, and z_2; the entry 606 may include h_room^-1(t)-3 and processing algorithm coefficients w_3, x_3, y_3, and z_3; and the entry 608 may include h_room^-1(t)-4 and processing algorithm coefficients w_4, x_4, y_4, and z_4.
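One way to picture entries 602 to 608 is as rows keyed by a zone's acoustic characteristic, each carrying the four processing-algorithm coefficients; the field names below are illustrative assumptions, not the patent's schema:

```python
from dataclasses import dataclass

@dataclass
class AlgorithmEntry:
    """One row of database 600: a playback zone acoustic characteristic
    and the coefficients of its audio processing algorithm."""
    characteristic_id: str    # e.g. "h_room^-1(t)-1"
    coefficients: tuple       # (w, x, y, z)

database_600 = [
    AlgorithmEntry("h_room^-1(t)-1", ("w1", "x1", "y1", "z1")),
    AlgorithmEntry("h_room^-1(t)-2", ("w2", "x2", "y2", "z2")),
]
```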
Those of ordinary skill in the art will appreciate that database 600 is but one example of a database that may be populated and maintained by performing the functions of method 500. In one example, the playback zone acoustic characteristics may be stored in different formats or mathematical states (i.e., inverse function versus non-inverse function). In another example, the audio processing algorithm may be stored as a function and/or an equalization function. Other examples are also possible.
In one example, some of the functions described above may be performed multiple times for the same playback device in the same playback zone to determine the acoustic characteristic h_room(t) of the playback zone and the corresponding processing algorithm p(t). For example, blocks 502 to 506 may be performed multiple times to determine multiple acoustic characteristics of the playback zone. A combined (e.g., averaged) acoustic characteristic of the playback zone may be determined from the multiple acoustic characteristics, and the corresponding processing algorithm p(t) may be determined based on the combined acoustic characteristic. The association between the corresponding processing algorithm p(t) and the acoustic characteristic h_room(t), or h_room^-1(t), of the playback zone may then be stored in the database. In some cases, the first audio signal played by the playback device in the playback zone may be substantially the same audio signal during each iteration of the functions. In other cases, a different first audio signal may be played during some or all of the iterations. Other examples are also possible.
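Combining repeated measurements by averaging might look like the following sketch, assuming the measured responses have been aligned to a common length:

```python
import numpy as np

def combine_characteristics(responses):
    """Average several measured h_room(t) estimates for the same
    playback zone into one combined characteristic."""
    return np.mean(np.stack(responses), axis=0)
```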
The method 500 (or some variation of the method 500) as described above may also be performed to generate other entries in the database. For example, assuming that the playback device is a first playback device, the playback zone is a first playback zone, and the audio processing algorithm is a first audio processing algorithm, method 500 may additionally or alternatively be performed using a second playback device in a second playback zone. In one example, the second playback device may play the fourth audio signal in the second playback zone, and the microphone of the second playback device may detect a fifth audio signal that includes a portion of the fourth audio signal played by the second playback device. The computing device may then receive data indicative of the fifth audio signal and determine the acoustic characteristics of the second playback zone based on the fifth audio signal and the characteristics of the second playback device.
Based on the acoustic characteristics of the second playback zone, the computing device may determine a second audio processing algorithm such that, when the second playback device applies the determined second audio processing algorithm while playing the fourth audio signal in the second playback zone, the second playback device generates a sixth audio signal having audio characteristics substantially the same as the predetermined audio characteristic represented by the predetermined audio signal z(t) in equations (7) and (8). The computing device may then cause an association between the second audio processing algorithm and the acoustic characteristics of the second playback zone to be stored in a database.
While many playback zones may be similar in size, construction material, and/or furniture type and arrangement, it is unlikely that two playback zones will have exactly the same playback zone acoustic characteristics. Thus, rather than storing a separate entry for each unique playback zone acoustic characteristic and its respective audio processing algorithm, which could require an impractical amount of storage, entries for similar or substantially identical playback zone acoustic characteristics may be combined.
In one case, when the two playback zones are substantially similar rooms, the acoustic characteristics of the two playback zones may be similar. In another case, as set forth above, the computing device may perform method 500 multiple times for the same playback device in the same playback zone. In yet another case, the computing device may perform method 500 for different playback devices in the same playback zone. In yet another case, the computing device may perform method 500 for playback devices in the same playback zone but at different locations in the playback zone. Other examples are also possible.
In either case, during the process of generating the entry for the playback zone acoustic characteristic and the corresponding audio processing algorithm, the computing device may determine that the two playback zones have substantially the same playback zone acoustic characteristic. The computing device may then responsively determine a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm. For example, the computing device may determine the third audio processing algorithm by taking an average of the parameters of the first audio processing algorithm and the second audio processing algorithm.
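A sketch of that parameter averaging, assuming both algorithms are expressed as equal-length coefficient tuples such as (w, x, y, z):

```python
def average_algorithms(p1, p2):
    """Derive a third algorithm whose parameters are the element-wise
    average of two algorithms determined for substantially the same
    playback zone acoustic characteristic."""
    return tuple((a + b) / 2 for a, b in zip(p1, p2))
```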
The computing device may then store an association between the third audio processing algorithm and substantially the same acoustic characteristics in a database. In one example, the database entry for the third audio processing algorithm may have a respective acoustic characteristic determined based on an average of two substantially identical acoustic characteristics. In some cases, as set forth above, to save storage, the database may have only one entry for substantially the same acoustic characteristics. Thus, entries for the acoustic characteristics of the first playback zone and the second playback zone may be discarded in favor of entries for the third audio processing algorithm. Other examples are also possible.
While the above discussion generally refers to the method 500 as being performed by a computing device, one of ordinary skill in the art will appreciate that the functions of the method 500 may alternatively be performed by one or more other devices, such as one or more servers, one or more playback devices, and/or one or more controller devices, as described above. In other words, one or more of blocks 502 to 510 may be performed by a computing device, while one or more other blocks may be performed by one or more other computing devices.
In one example, as described above, playback of the first audio signal by the playback device at block 502 may be performed by the playback device without any external commands. Alternatively, the playback device may play the first audio signal in response to a command from the controller device and/or another playback device. In another example, blocks 502-506 may be performed by one or more playback devices or one or more controller devices, and the computing device may perform blocks 508 and 510. In yet another example, blocks 502-508 may be performed by one or more playback devices or one or more controller devices, and the computing device may perform the functions of storing audio processing algorithms only at block 510. Other examples are also possible.
b.Example database of audio processing algorithms and one or more corresponding playback zone characteristics
As previously described, a playback zone may have one or more playback zone characteristics. As described above, the one or more playback zone characteristics may include acoustic characteristics of the playback zone. The one or more characteristics of the playback zone may also include one or more of: (a) the size of the playback zone, (b) the audio reflection characteristics of the playback zone, (c) the intended use of the playback zone, (d) the number of pieces of furniture in the playback zone, (e) the size of pieces of furniture in the playback zone, and (f) the type of furniture in the playback zone. In one case, the audio reflectivity characteristics of the playback zone may be related to the flooring material and/or the wall material of the playback zone.
In some examples, an association between the determined audio processing algorithm (e.g., p(t) above) and one or more other characteristics of the playback zone may be stored in a database. Fig. 7 illustrates an example flow diagram of a method 700 for maintaining a database of audio processing algorithms and one or more playback zone characteristics. The method 700 shown in fig. 7 illustrates an embodiment of a method that can be implemented in an operating environment including, for example, the media playback system 100 of fig. 1, one or more playback devices 200 of fig. 2, and one or more control devices 300 of fig. 3. In one example, method 700 may be performed by a computing device in communication with a media playback system, such as media playback system 100. In another example, some or all of the functionality of method 700 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more playback devices, and/or one or more controller devices.
The method 700 may include one or more operations, functions, or actions as illustrated by one or more of blocks 702 through 708. Although the blocks are shown in a sequential order, the blocks may also be performed in parallel, and/or in a different order than described herein. Moreover, various blocks may be combined into fewer blocks, separated into additional blocks, and/or removed based on a desired implementation.
As shown in fig. 7, method 700 includes: at block 702, causing a playback device to play a first audio signal in a playback zone; at block 704, receiving (i) data indicative of one or more characteristics of a playback zone and (ii) data indicative of a second audio signal detected by a microphone of a playback device; at block 706, determining an audio processing algorithm based on the second audio signal and the characteristics of the playback device; and at block 708, causing an association between the determined audio processing algorithm and at least one of the one or more characteristics of the playback zone to be stored in a database.
At block 702, the method 700 includes the computing device causing the playback device to play a first audio signal in a playback zone. In one example, block 702 may include functionality that is the same as or substantially the same as the functionality of block 502 described in conjunction with fig. 5. For example, the first audio signal may include audio content having frequencies substantially covering the renderable frequency range of the playback device or the frequency range audible to humans. Accordingly, any discussion above regarding block 502 also applies to block 702.
At block 704, the method 700 includes receiving (i) data indicative of one or more characteristics of a playback zone and (ii) data indicative of a second audio signal detected by a microphone of a playback device. In one example, block 704 may include functionality that is the same as or substantially the same as the functionality of block 504 described in conjunction with fig. 5. For example, the second audio signal may include a portion corresponding to the first audio signal played by the playback device. Accordingly, any discussion above regarding block 504 also applies to block 704.
In addition to what was previously described with respect to block 504, block 704 also includes receiving data indicative of one or more characteristics of the playback zone. As described above, the playback zone may be a model playback zone that simulates a listening environment in which the playback device may play audio content. In this case, some of the one or more playback zone characteristics may be known. For example, the size, floor plan, building materials, and furniture of the playback zone may be known. In one case, the model playback zone may be constructed for the purpose of determining audio processing algorithms for the database, in which case some of the one or more playback zone characteristics may be predetermined. In another case, the playback zone may be a room of a user of the playback device. As mentioned above, such characteristics of the playback zone may affect the acoustic characteristics of the playback zone.
In one example, a computing device may receive data indicative of one or more playback zone characteristics via a controller interface of a controller device used by a user or an acoustic engineer. In another example, the computing device may receive data from a playback device in the playback zone indicating one or more characteristics of the playback zone. For example, data indicative of one or more characteristics may be received along with data indicative of the second audio signal. The data indicative of the one or more playback zone characteristics may be received before, during, or after the playback device plays back the first audio signal at block 702. Other examples are also possible.
At block 706, the method 700 includes determining an audio processing algorithm based on the second audio signal and the characteristics of the playback device. In one example, block 706 may include the same or similar functionality as described above in blocks 506 and 508 of fig. 5. For example, determining the audio processing algorithm may include determining an acoustic characteristic of the playback zone based on the second audio signal and a characteristic of the playback device, and then determining the audio processing algorithm based on the acoustic characteristic of the playback zone. As described above, the characteristics of the playback device may include one or more of the following: the acoustic characteristics of the playback device, the specifications of the playback device, and the model of the playback device.
As previously described, the playback device applying the determined audio processing algorithm while playing the first audio signal in the playback zone may generate a third audio signal having audio characteristics substantially the same as, or at least approximating, the predetermined audio characteristic. In one case, the predetermined audio characteristic may be the same or substantially the same as the predetermined audio characteristic represented by the predetermined audio signal z(t) described above. Other examples are also possible.
At block 708, the method 700 includes storing an association between the determined audio processing algorithm and at least one of the one or more characteristics of the playback zone in a database. In one example, block 708 may include the same or similar functionality as described above in block 510. In this case, however, the computing device may cause an association between the audio processing algorithm and at least one of the one or more characteristics of the playback zone, in addition to or instead of the acoustic characteristics of the playback zone, to be stored in a database.
As described above, the playback zone for which the audio processing algorithm is determined may be a model playback zone for simulating a listening environment in which the playback device may play audio content or a room of a user of the playback device. In some cases, the database may include entries generated based on audio signals played and detected in the model playback zone and entries generated based on audio signals played and detected in a room of a user of the playback device.
Fig. 6B illustrates an example portion of a database 650 of audio processing algorithms in which the audio processing algorithms determined in the discussion above and the associations between the audio processing algorithms and the playback zone acoustic characteristics may be stored. As shown, the portion of database 650 may include a plurality of entries 652 through 658, similar to entries 602 through 608 of database 600. For example, entry 652 and entry 602 may have the same playback zone acoustic characteristics and the same audio processing algorithm coefficients, entry 654 and entry 604 may have the same playback zone acoustic characteristics and the same audio processing algorithm coefficients, entry 656 and entry 606 may have the same playback zone acoustic characteristics and the same audio processing algorithm coefficients, and entry 658 and entry 608 may have the same playback zone acoustic characteristics and the same audio processing algorithm coefficients.
In addition to the playback zone acoustic characteristics, the database 650 may also include zone size information indicating the size of the playback zone having the corresponding playback zone acoustic characteristic and the audio processing algorithm determined based on that characteristic. For example, as shown, the entry 652 may have a zone size of a_1 × b_1 × c_1, the entry 654 may have a zone size of a_2 × b_2 × c_2, the entry 656 may have a zone size of a_3 × b_3 × c_3, and the entry 658 may have a zone size of a_4 × b_4 × c_4. Thus, in this example, the one or more characteristics stored in association with the determined audio processing algorithm include the acoustic characteristic of the playback zone and the size of the playback zone. Other examples are also possible.
Those of ordinary skill in the art will appreciate that database 650 is only one example of a database that may be populated and maintained by performing the functions of method 700. In one example, the playback zone acoustic characteristics may be stored in different formats or mathematical states (i.e., inverse and non-inverse functions). In another example, the audio processing algorithm may be stored as a function and/or an equalization function. In yet another example, the database 650 may include only the zone sizes and corresponding audio processing algorithms, and not the corresponding playback zone acoustic characteristics. Other examples are also possible.
Similar to method 500, method 700 (or some variation of method 700) may also be performed as described above to generate other entries in the database. For example, assuming that the playback device is a first playback device, the playback zone is a first playback zone, and the audio processing algorithm is a first audio processing algorithm, method 700 may additionally or alternatively be performed using a second playback device in a second playback zone. In one example, the second playback device may play a fourth audio signal in the second playback zone, and the microphone of the second playback device may detect a fifth audio signal that includes a portion of the fourth audio signal played by the second playback device. The computing device may then receive (i) data indicative of one or more characteristics of the second playback zone and (ii) data indicative of the fifth audio signal detected by the microphone of the second playback device in the second playback zone.
The computing device may then determine the acoustic characteristics of the second playback zone based on the fifth audio signal and the characteristics of the second playback device. The computing device may determine a second audio processing algorithm based on the acoustic characteristics of the second playback zone such that, when the second playback device applies the determined second audio processing algorithm while playing the fourth audio signal in the second playback zone, the second playback device generates a sixth audio signal having audio characteristics substantially the same as the predetermined audio characteristic represented by the predetermined audio signal z(t) in equations (7) and (8). The computing device may then cause an association between the second audio processing algorithm and at least one of the one or more characteristics of the second playback zone to be stored in a database.
Similar to that discussed above in connection with method 500, during the process of generating an entry for the database, the computing device may determine two playback zones with similar or substantially the same playback zone acoustic characteristics. Thus, as described above, the computing device may combine (i.e., by averaging) the playback zone acoustic characteristics and the determined audio processing algorithms corresponding to the playback zone acoustic characteristics, and store the combined playback zone acoustic characteristics and combined audio processing algorithms as a single entry in the database. Other examples are also possible.
Similar to the case of method 500, although the above discussion generally refers to method 700 as being performed by a computing device, one of ordinary skill in the art will appreciate that the functions of method 700 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more playback devices, and/or one or more controller devices. In other words, one or more of blocks 702 to 708 may be performed by a computing device, while one or more other blocks may be performed by one or more other computing devices, which may include one or more playback devices, one or more controller devices, and/or one or more servers.
In one example, as described above, playback of the first audio signal by the playback device at block 702 may be performed by the playback device without any external commands. Alternatively, the playback device may play the first audio signal in response to a command from the controller device and/or another playback device. In another example, blocks 702-706 may be performed by one or more playback devices or one or more controller devices, and the computing device may perform block 708. Other examples are also possible.
Calibrating playback devices based on playback zone characteristics
As described above, some examples described herein include calibrating a playback device for a playback zone. In some cases, calibration of the playback device may include determining an audio processing algorithm to be applied by the playback device when playing audio content in the playback zone.
Fig. 8 shows an example playback environment 800 in which a playback device may be calibrated. As shown, playback environment 800 includes a computing device 802, playback devices 804 and 806, a controller device 808, and a playback zone 810. The playback device 804 and the playback device 806 may be similar to the playback device 200 shown in fig. 2. Thus, the playback device 804 and the playback device 806 may each have a microphone, e.g., microphone 220. In some cases, only one of the playback device 804 and the playback device 806 may have a microphone.
In one example, the playback device 804 and the playback device 806 may be part of a media playback system and may be configured to play audio content synchronously, as shown and discussed above with respect to the media playback system 100 of fig. 1. In one case, the playback device 804 and the playback device 806 may be grouped together to play audio content synchronously in the playback zone 810. Referring again to fig. 1, the playback zone 810 may be any of the different rooms and zone groups in the media playback system 100. For example, the playback zone 810 may be the master bedroom. In this case, the playback device 804 and the playback device 806 may correspond to the playback devices 122 and 124, respectively.
In one example, the controller device 808 can be a device that can be used to control a media playback system. In one case, the controller device 808 may be similar to the control device 300 of fig. 3. Although the controller device 808 of fig. 8 is shown inside the playback zone 810, the controller device 808 may be outside the playback zone 810 or may move into or out of the playback zone 810 while communicating with the playback device 804, the playback device 806, and/or any other device in the media playback system.
In one example, the computing device 802 can be a server in communication with a media playback system. The computing device 802 may be configured to maintain a database of information associated with the media playback system (e.g., registration numbers associated with the playback device 804 and the playback device 806). As described in the previous section, the computing device 802 may also be configured to maintain a database of audio processing algorithms. Other examples are also possible.
Methods 900, 1000, and 1100, as will be discussed below, provide functionality that may be performed to calibrate playback devices in a playback zone, such as playback device 804 and playback device 806 in playback zone 810.
a.First example method for determining an audio processing algorithm based on a detected audio signal
Fig. 9 illustrates an example flow diagram of a method 900 for determining an audio processing algorithm based on one or more playback zone characteristics. The method 900 shown in fig. 9 represents an embodiment of a method that can be implemented in an operating environment including, for example, the media playback system 100 of fig. 1, one or more playback devices 200 of fig. 2, one or more control devices 300 of fig. 3, and the playback environment 800 of fig. 8. In one example, method 900 may be performed by a computing device in communication with a media playback system. In another example, some or all of the functionality of method 900 may alternatively be performed by one or more other computing devices associated with the media playback system, such as one or more servers, one or more playback devices, and/or one or more controller devices.
Method 900 may include one or more operations, functions, or actions as illustrated by one or more of blocks 902-908. Although the blocks are shown in a sequential order, the blocks may also be performed in parallel, and/or in a different order than described herein. Moreover, various blocks may be combined into fewer blocks, separated into additional blocks, and/or removed based on a desired implementation.
As shown in fig. 9, method 900 includes: at block 902, causing a playback device to play a first audio signal in a playback zone; at block 904, receiving data from the playback device indicative of a second audio signal detected by a microphone of the playback device; at block 906, an audio processing algorithm is determined based on the second audio signal and the acoustic characteristics of the playback device; and at block 908, transmitting data indicative of the determined audio processing algorithm to the playback device.
At block 902, the method 900 includes causing the playback device to play the first audio signal in the playback zone. Referring to fig. 8, the playback device may be the playback device 804, and the playback zone may be the playback zone 810. Thus, the playback device may be a playback device similar to the playback device 200 shown in fig. 2.
In one example, the computing device 802 may determine that the playback device 804 is to be calibrated for the playback zone 810 and responsively cause the playback device 804 to play the first audio signal in the playback zone 810. In one case, the computing device 802 may determine that the playback device 804 is to be calibrated based on input received from a user indicating that the playback device 804 is to be calibrated. In one example, input may be received from a user via the controller device 808. In another case, the computing device 802 may determine that the playback device 804 is to be calibrated because the playback device 804 is a new playback device or was recently moved to the playback zone 810. In yet another case, calibration of the playback device 804 (or any other playback device in the media playback system) may be performed periodically. Accordingly, the computing device 802 may determine that the playback device 804 is to be calibrated based on the calibration schedule of the playback device 804. Other examples are also possible. In response to determining that the playback device 804 is to be calibrated, the computing device 802 may then cause the playback device 804 to play the first audio signal.
While block 902 includes the computing device 802 causing the playback device 804 to play the first audio signal, one of ordinary skill in the art will appreciate that playback of the first audio signal by the playback device 804 is not necessarily caused or initiated by the computing device 802. For example, the controller device 808 may send a command to the playback device 804 to cause the playback device 804 to play the first audio signal. In another example, the playback device 806 can cause the playback device 804 to play the first audio signal. In yet another example, the playback device 804 can play the first audio signal without receiving a command from the computing device 802, the playback device 806, or the controller device 808. In one example, the playback device 804 may determine to perform calibration and responsively play the first audio signal based on movement of the playback device 804 or a change in a playback zone of the playback device 804. Other examples are also possible.
As indicated above, the first audio signal may be a test or measurement signal used to calibrate the playback device 804 for the playback zone 810. Thus, the first audio signal may represent audio content that the playback device might play during regular use by a user, and may include audio content having frequencies substantially covering the renderable frequency range of the playback device or the frequency range audible to humans. In another example, the first audio signal may be a favorite or frequently played audio track of a user of the playback device.
At block 904, the method 900 includes receiving, from the playback device, a second audio signal detected by a microphone of the playback device. Continuing with the above example, assuming that the playback device 804 is similar to the playback device 200 of fig. 2, the microphone of the playback device 804 may be similar to the microphone 220 of the playback device 200. In one example, the computing device 802 may receive data from the playback device 804. In another example, computing device 804 may receive the data via another playback device (e.g., playback device 806), a controller device (e.g., controller device 808), or another computing device, such as another server.
The microphone of the playback device 804 may detect the second audio signal while the playback device 804 is playing the first audio signal, or shortly thereafter. The second audio signal may include sound present in the playback zone. For example, the second audio signal may include a portion corresponding to the first audio signal played by the playback device 804.
In one example, the computing device 802 may receive data indicative of the detected second audio signal as a media stream from the playback device 804 while the microphone detects the second audio signal. In another example, the computing device 802 may receive the data indicative of the second audio signal from the playback device 804 once detection of the second audio signal by the microphone of the playback device 804 is complete. In either case, the playback device 804 may process the detected second audio signal (via an audio processing component, e.g., the audio processing component 208 of the playback device 200) to generate the data indicative of the second audio signal and transmit the data to the computing device 802. In one example, generating the data indicative of the second audio signal may include converting the second audio signal from an analog signal to a digital signal. Other examples are also possible.
At block 906, the method 900 includes determining an audio processing algorithm based on the second audio signal and the acoustic characteristic of the playback device. In one example, the acoustic characteristic of the playback device may be h_p(t) as discussed above with respect to block 506 of the method 500 shown in fig. 5. For example, as described above, the acoustic characteristic of the playback device may be determined by: causing a reference playback device to play a measurement signal in an anechoic chamber; receiving, from the reference playback device, data indicative of an audio signal detected by a microphone of the reference playback device; and determining the acoustic characteristic of the playback device based on a comparison between the detected audio signal and the measurement signal.
As set forth above, the reference playback device may have the same model as the playback device 804 calibrated for the playback zone 810. Also, similar to that discussed above with respect to block 506, the computing device may thereby determine characteristics of the playback zone based on the acoustic characteristics of the playback device and the second audio signal.
In one example, the computing device 802 may determine an audio processing algorithm based on the acoustic characteristics of the playback zone, similar to that discussed above with respect to block 508. Thus, the computing device 802 may determine an audio processing algorithm based on the acoustic characteristics of the playback zone and the predetermined audio characteristics such that: when the playback device 804 plays the first audio signal in the playback zone 810, the playback device 804 applying the determined audio processing algorithm may generate a third audio signal having audio characteristics that are substantially the same as, or at least that exhibit to some extent, the predetermined audio characteristics.
In another example, the computing device 802 may select, from a plurality of audio processing algorithms, an audio processing algorithm corresponding to the acoustic characteristic of the playback zone 810. For example, the computing device may access a database (e.g., database 600 of fig. 6A or database 650 of fig. 6B) and identify an audio processing algorithm based on the acoustic characteristic of the playback zone 810. For example, referring to database 600 of fig. 6A, if the acoustic characteristic of the playback zone 810 is determined to be h_room^-1(t)-3, the audio processing algorithm of database entry 606, with coefficients w_3, x_3, y_3, and z_3, may be identified.
In some cases, no acoustic characteristics may be found in the database that exactly match the determined acoustic characteristics of the playback zone 810. In this case, the audio processing algorithm corresponding to the acoustic characteristics in the database that are most similar to the acoustic characteristics of the playback zone 810 may be identified. Other examples are also possible.
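Closest-match selection can be as simple as a distance measure over the stored characteristics; the Euclidean metric and the entry layout below are assumptions for illustration:

```python
import numpy as np

def find_closest_algorithm(h_room, database):
    """Return the coefficients stored with the acoustic characteristic
    most similar to the measured one, by Euclidean distance; entries
    are assumed to hold equal-length sampled characteristics."""
    def distance(entry):
        return np.linalg.norm(entry["characteristic"] - h_room)
    return min(database, key=distance)["coefficients"]
```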
At block 908, the method 900 includes transmitting data indicative of the determined audio processing algorithm to the playback device. Continuing with the above example, the computing device 802 (or one or more other devices) may transmit data indicative of the determined audio processing algorithm to the playback device 804. The data indicative of the determined audio processing algorithm may also include commands that cause the playback device 804 to apply the determined audio processing algorithm when playing audio content in the playback zone 810. In one example, applying an audio processing algorithm to the audio content may modify the frequency equalization of the audio content. In another example, applying an audio processing algorithm to audio content may modify a volume range of the audio content. Other examples are also possible.
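Applied at playback time, the determined algorithm acts as a filter on the outgoing audio. A minimal sketch of the convolution step, assuming p(t) is a sampled filter as derived above:

```python
import numpy as np

def apply_processing(audio, p):
    """Filter outgoing audio with the determined processing algorithm
    p(t), e.g. to modify its frequency equalization."""
    return np.convolve(audio, p, mode="full")[: len(audio)]
```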
In some cases, the playback zone may include multiple playback devices configured to play audio content synchronously. For example, as described above, the playback device 804 and the playback device 806 may be configured to synchronously play audio content in the playback zone 810. In this case, the calibration of one of the playback devices may involve the other playback device(s).
In one example, a playback zone (e.g., playback zone 810) can include a first playback device (e.g., playback device 804) and a second playback device (e.g., playback device 806) configured to synchronously play audio content. Calibration of the playback device 804, as coordinated and performed by the computing device 802, may include having the playback device 804 play a first audio signal and having the playback device 806 play a second audio signal.
In one case, the computing device 802 may cause the playback device 806 to play the second audio signal in synchronization with the playback device 804 playing back the first audio signal. In one case, the second audio signal may be orthogonal to the first audio signal such that components of the synchronously played audio content played by either of the playback device 804 and the playback device 806 are discernable. In another case, the computing device may cause the playback device 806 to play the second audio signal after playback of the first audio signal by the playback device 804 is complete. Other examples are also possible.
Similar to that discussed with respect to block 904, the computing device 802 may then receive, from the playback device 804, a third audio signal detected by a microphone of the playback device 804. In this case, however, the third audio signal may include a portion corresponding to the first audio signal played by the playback device 804 and a portion corresponding to the second audio signal played by the playback device 806.
Similar to that described above with respect to blocks 906 and 908, the computing device 802 may then determine an audio processing algorithm based on the third audio signal and the acoustic characteristics of the playback device 804, and transmit data indicative of the determined audio processing algorithm to the playback device 804 for application by the playback device 804 when playing audio content in the playback zone 810.
In one case, the playback device 806 may also have a microphone, as described above, and may also be calibrated in a manner similar to that described above. In such a case, the first audio signal played by the playback device 804 and the second audio signal played by the playback device 806 may be orthogonal or otherwise distinguishable. For example, as also described above, the playback device 806 may play the second audio signal after playback of the first audio signal by the playback device 804 is complete. In another case, the second audio signal may have a phase that is orthogonal to the phase of the first audio signal. In yet another case, the second audio signal may have a different and/or varying frequency range than the first audio signal. Other examples are also possible.
In either case, the discernable first and second audio signals may allow the computing device 802 to resolve, from the third audio signal detected by the playback device 804, the contribution of the playback device 804 and the contribution of the playback device 806 to the detected third audio signal. Respective audio processing algorithms may then be determined for the playback device 804 and the playback device 806.
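A minimal sketch of such separation, assuming the orthogonal signals are independent noise bursts and using a matched-filter (cross-spectral) estimate; with sequential playback the separation would instead be simple time-windowing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two (nearly) orthogonal test signals: independent noise bursts.
sig_a = rng.standard_normal(4096)  # played by the device being calibrated
sig_b = rng.standard_normal(4096)  # played by the second device

# The microphone hears a mixture shaped by two unknown responses.
mix = (np.convolve(sig_a, [1.0, 0.3])[:4096]
       + np.convolve(sig_b, [0.8, -0.2])[:4096])

def contribution(mixture, reference):
    """Cross-spectral estimate of one device's path to the microphone;
    correlating against a device's own signal suppresses the other,
    orthogonal signal's cross term."""
    R = np.fft.rfft(reference)
    return np.fft.rfft(mixture) * np.conj(R) / (np.abs(R) ** 2 + 1e-8)

H_a = contribution(mix, sig_a)  # ~ contribution of the first device
H_b = contribution(mix, sig_b)  # ~ contribution of the second device
```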
The respective audio processing algorithms may be determined similar to that discussed above with respect to block 508. In one case, a first acoustic characteristic of the playback zone may be determined based on the third audio signal detected by the playback device 804, and a second acoustic characteristic of the playback zone may be determined based on a fourth audio signal detected by the playback device 806. Similar to the third audio signal, the fourth audio signal may also include a portion corresponding to the first audio signal played by the playback device 804 and a portion corresponding to the second audio signal played by the playback device 806.
Respective audio processing algorithms for the playback device 804 and the playback device 806 may then be determined based on the first acoustic characteristic and the second acoustic characteristic of the playback zone, either individually or in combination. In some cases, the combination of the first and second acoustic characteristics may represent a more comprehensive acoustic characteristic of the playback zone than either characteristic alone. The respective audio processing algorithms may then be sent to the playback device 804 and the playback device 806 for application when playing audio content in the playback zone 810. Other examples are also possible.
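One plausible way to form such a combined characteristic, assuming the two measurements are available as magnitude responses, is simply to average them; this is an illustrative choice, as the text does not specify how the combination is made:

```python
import numpy as np

def combine_characteristics(H_first, H_second):
    """Average two measured magnitude responses into one broader estimate."""
    return (np.abs(H_first) + np.abs(H_second)) / 2.0

# Example: two position-dependent measurements of the same playback zone.
H_first = np.fft.rfft([1.0, 0.30, 0.10], 256)
H_second = np.fft.rfft([1.0, 0.25, 0.15], 256)
H_zone = combine_characteristics(H_first, H_second)
```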
While the above discussion generally refers to the method 900 as being performed by the computing device 802 of fig. 8, one of ordinary skill in the art will appreciate that the functions of the method 900 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more playback devices, and/or one or more controller devices, as described above. For example, the functionality of method 900 for calibrating the playback device 804 for the playback zone 810 may be performed by the playback device 804, the playback device 806, the controller device 808, or another device not shown in fig. 8 in communication with the playback device 804.
Further, in some cases, one or more of blocks 902-908 may be performed by the computing device 802, while one or more other of blocks 902-908 may be performed by one or more other devices. For example, blocks 902 and 904 may be performed by one or more of the playback device 804, the playback device 806, and the controller device 808. In other words, a coordinating device other than the computing device 802 may coordinate calibrating the playback device 804 for the playback zone 810.
In some cases, at block 906, the coordinating device may transmit data indicative of the second audio signal to the computing device 802 such that the computing device 802 may determine an audio processing algorithm based on the second audio signal and the acoustic characteristics of the playback device. The acoustic characteristics of the playback device may be provided to the computing device 802 by the coordinating device or by another device that stores the characteristics of the playback device. In one case, the computing device 802 may perform the calculation of block 906 because the computing device 802 has greater processing power than the coordinating device.
In one example, when the audio processing algorithm is determined, the computing device 802 can send the determined audio processing algorithm directly to the playback device 804 for application by the playback device 804 when playing audio content in the playback zone 810. In another example, when an audio processing algorithm is determined, the computing device 802 may send the determined audio processing algorithm to the coordinating device, and the coordinating device may perform block 908 and send the determined processing algorithm to the playback device 804 (if the coordinating device is not the playback device 804). Other examples are also possible.
b. Second example method for determining an audio processing algorithm based on a detected audio signal
In some cases, as described above, calibration of playback devices in a playback zone may be coordinated and performed by a computing device (e.g., a server) or a controller device. In other cases, as also described above, calibration of the playback device may be coordinated and/or performed by the calibrated playback device.
Fig. 10 illustrates an example flow diagram of a method 1000 for determining an audio processing algorithm based on one or more playback zone characteristics, as performed by the playback device being calibrated. The method 1000 shown in fig. 10 represents an embodiment of a method that can be implemented in an operating environment including, for example, the media playback system 100 of fig. 1, one or more playback devices 200 of fig. 2, one or more control devices 300 of fig. 3, and the playback environment 800 of fig. 8. As such, method 1000 may be performed by the playback device to be calibrated for a playback zone. In some cases, some of the functions of method 1000 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more other playback devices, and/or one or more controller devices.
The method 1000 may include one or more operations, functions, or actions as illustrated by one or more of blocks 1002-1008. Although the blocks are shown in a sequential order, the blocks may also be performed in parallel, and/or in a different order than described herein. Moreover, various blocks may be combined into fewer blocks, separated into additional blocks, and/or removed based on a desired implementation.
As shown in fig. 10, the method 1000 includes: at block 1002, playing a first audio signal while in a playback zone; at block 1004, detecting, by the microphone, a second audio signal; at block 1006, an audio processing algorithm is determined based on the second audio signal and the acoustic characteristics of the playback device; and at block 1008, applying the determined audio processing algorithm to audio data corresponding to the media item as the media item is played.
At block 1002, the method 1000 includes playing a first audio signal while in a playback zone. Referring to fig. 8, the playback device performing the method 1000 may be the playback device 804, with the playback device 804 in the playback zone 810. In one example, block 1002 may be similar to block 902, but performed by the playback device 804 being calibrated rather than by the computing device 802. As such, any discussion above with respect to block 902 may also apply to block 1002, with some variations.
At block 1004, the method 1000 includes detecting, by the microphone, a second audio signal. The second audio signal may include a portion corresponding to the first audio signal played by the playback device. In one example, block 1004 may be similar to block 904, but performed by the playback device 804 being calibrated rather than by the computing device 802. As such, any discussion above with respect to block 904 may also apply to block 1004, with some variations.
At block 1006, the method 1000 includes determining an audio processing algorithm based on the second audio signal and the acoustic characteristics of the playback device. In one example, block 1006 may be similar to block 906, but performed by the playback device 804 being calibrated rather than by the computing device 802. As such, any discussion above with respect to block 906 may also apply to block 1006, with some variations.
In one case, as discussed with respect to block 906, the functions for determining an audio processing algorithm may be performed entirely by the playback device 804 being calibrated for the playback zone 810. Accordingly, the playback device 804 may determine the acoustic characteristics of the playback zone 810 based on the second audio signal and the acoustic characteristics of the playback device 804. In one case, the playback device 804 may have the acoustic characteristics of the playback device 804 stored locally. In another case, the playback device 804 may receive the acoustic characteristics of the playback device 804 from another device.
In one example, the playback device 804 may then select an audio processing algorithm, from a plurality of audio processing algorithms, that corresponds to the acoustic characteristics of the playback zone 810. For example, the playback device 804 may access a database (e.g., databases 600 and 650, shown in fig. 6A and 6B, respectively, and described above) and identify an audio processing algorithm in the database that corresponds to acoustic characteristics substantially similar to the acoustic characteristics of the playback zone 810.
In another example, similar to the functions described above with respect to block 906 of method 900 and/or block 508 of method 500, the playback device 804 may calculate an audio processing algorithm based on the acoustic characteristics of the playback zone 810 and the predetermined audio characteristics such that: when the playback device 804 plays the first audio signal in the playback zone 810, the playback device 804 applying the determined audio processing algorithm may generate a third audio signal having audio characteristics that are substantially the same as, or at least approximating, the predetermined audio characteristics.
In yet another example, as discussed in the previous section, a device other than the playback device 804 may perform some or all of the functions of block 1006. For example, the playback device 804 may transmit data indicative of the detected second audio signal to a computing device (e.g., computing device 802), another playback device (e.g., playback device 806), a controller device (e.g., controller device 808), and/or some other device in communication with the playback device 804, and request an audio processing algorithm. In another example, the playback device 804 may determine the acoustic characteristics of the playback zone 810 based on the detected second audio signal, transmit data indicative of the determined acoustic characteristics of the playback zone 810 to the other device, and request an audio processing algorithm based on the determined acoustic characteristics.
In other words, in one aspect, the playback device 804 may determine the audio processing algorithm by requesting the audio processing algorithm from another device, based on the detected second audio signal and/or the acoustic characteristics of the playback zone 810 that the playback device 804 provides to the other device.
In the event that the playback device 804 provides data indicative of the detected second audio signal but not the acoustic characteristics of the playback zone 810, the playback device 804 may also transmit the acoustic characteristics of the playback device 804 along with the data indicative of the detected second audio signal, so that the other device may determine the acoustic characteristics of the playback zone 810. In another case, the device receiving the data indicative of the detected second audio signal may determine, based on the data, a model of the playback device 804 that transmitted the data, and determine the acoustic characteristics of the playback device 804 based on the model of the playback device 804 (e.g., using a database of playback device acoustic characteristics). Other examples are also possible.
The playback device 804 may then receive the determined audio processing algorithm. In one case, the playback device 804 may send the second audio signal to another device because the other device has greater processing power than the playback device 804. In another case, the playback device 804 and one or more other devices may perform computations and functions in parallel to efficiently utilize processing power. Other examples are also possible.
At block 1008, the method 1000 includes applying the determined audio processing algorithm to audio data corresponding to a media item when the media item is played. In one example, applying the audio processing algorithm to the audio data of the media item may modify the frequency equalization of the media item as the playback device 804 plays the media item in the playback zone 810. In another example, applying the audio processing algorithm to the audio data of the media item may modify the volume range of the media item as the playback device 804 plays the media item in the playback zone 810. In one example, the playback device 804 may store the determined audio processing algorithm in local storage and apply the audio processing algorithm when playing audio content in the playback zone 810.
In one example, the playback device 804 may be calibrated for different configurations of the playback device 804. For example, the playback device 804 may be calibrated for a first configuration that involves playing alone in the playback zone 810 and a second configuration that involves playing in the playback zone 810 in synchronization with the playback device 806. In this case, a first audio processing algorithm may be determined, stored, and applied for the first playback configuration of the playback device, and a second audio processing algorithm may be determined, stored, and applied for the second playback configuration of the playback device.
The playback device 804 may then determine which audio processing algorithm to apply when playing the audio content in the playback zone 810 based on the playback configuration of the playback device 804 at a given time. For example, if the playback device 804 plays audio content in the playback zone 810 without the playback device 806, the playback device 804 may apply a first audio processing algorithm. On the other hand, if the playback device 804 plays audio content in the playback zone 810 in synchronization with the playback device 806, the playback device 804 may apply a second audio processing algorithm. Other examples are also possible.
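A minimal sketch of this configuration-dependent selection; the configuration names and algorithm identifiers below are hypothetical:

```python
# Hypothetical local store mapping a playback configuration to the
# algorithm calibrated for it.
algorithms = {
    "solo": "first_audio_processing_algorithm",
    "synchronized": "second_audio_processing_algorithm",
}

def algorithm_for(configuration: str) -> str:
    # Fall back to passthrough if a configuration was never calibrated.
    return algorithms.get(configuration, "passthrough")

print(algorithm_for("solo"))          # playing alone in the zone
print(algorithm_for("synchronized"))  # playing in sync with another device
```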
c. Example method for determining an audio processing algorithm based on playback zone characteristics
In the discussion above, the determination of the audio processing algorithm may generally be based on a determination of the acoustic characteristics of the playback zone, as determined based on an audio signal detected by a playback device in the playback zone. In some cases, the audio processing algorithms may also be identified based on other characteristics of the playback zone, in addition to or instead of the acoustic characteristics of the playback zone.
Fig. 11 illustrates an example flow diagram for providing an audio processing algorithm from a database of audio processing algorithms based on one or more characteristics of a playback zone. The method 1100 shown in fig. 11 represents an embodiment of a method that can be implemented in an operating environment including, for example, the media playback system 100 of fig. 1, one or more playback devices 200 of fig. 2, one or more control devices 300 of fig. 3, and the playback environment 800 of fig. 8. In one example, the method 1100 may be performed individually or collectively by one or more playback devices, one or more controller devices, one or more servers, or one or more computing devices in communication with a playback device to be calibrated for a playback zone.
The method 1100 may include one or more operations, functions, or actions as illustrated by one or more of blocks 1102-1108. Although the blocks are shown in a sequential order, the blocks may also be performed in parallel, and/or in a different order than described herein. Moreover, various blocks may be combined into fewer blocks, separated into additional blocks, and/or removed based on a desired implementation.
As shown in fig. 11, method 1100 includes: at block 1102, a database of (i) a plurality of audio processing algorithms and (ii) a plurality of playback zone characteristics is maintained; at block 1104, receiving data indicative of one or more characteristics of a playback zone; at block 1106, an audio processing algorithm is identified in a database based on the data; and at block 1108, data indicative of the identified audio processing algorithm is sent.
At block 1102, the method 1100 includes maintaining a database of (i) a plurality of audio processing algorithms and (ii) a plurality of playback zone characteristics. In one example, the database may be similar to databases 600 and 650 illustrated in fig. 6A and 6B, respectively, and described above. Thus, each of the plurality of audio processing algorithms may correspond to one or more of the plurality of playback zone characteristics. The database may be maintained as described above with respect to method 500 of fig. 5 and method 700 of fig. 7. As described above, the database may or may not be stored locally on the device that maintains the database.
At block 1104, the method 1100 includes receiving data indicative of one or more characteristics of a playback zone. In one example, the one or more characteristics of the playback zone may include acoustic characteristics of the playback zone. In another example, the one or more characteristics of the playback zone may include a size of the playback zone, flooring material of the playback zone, wall material of the playback zone, intended use of the playback zone, number of pieces of furniture in the playback zone, size of pieces of furniture in the playback zone, and type of furniture in the playback zone, among others.
In one example, referring again to fig. 8, the playback device 804 may be calibrated for the playback zone 810. As described above, the method 1100 may be performed, individually or collectively, by the playback device 804 being calibrated, the playback device 806, the controller device 808, the computing device 802, or another device in communication with the playback device 804.
In one case, the one or more characteristics may include acoustic characteristics of the playback zone 810. In this case, the playback device 804 in the playback zone 810 may play the first audio signal and detect, by a microphone of the playback device 804, the second audio signal that includes a portion corresponding to the first audio signal. In one example, the data indicative of the one or more characteristics may be data indicative of the detected second audio signal. In another example, similar to that previously discussed, based on the detected second audio signal and the acoustic characteristics of the playback device 804, the acoustic characteristics of the playback zone 810 may be determined. The data indicative of the one or more characteristics may then be indicative of acoustic characteristics of the playback zone. In either case, the data indicative of the one or more characteristics may then be received by at least one of the one or more devices performing method 1100.
In another case, the one or more characteristics may include a size of the playback zone, a flooring material of the playback zone, a wall material of the playback zone, and the like. In this case, the user may be prompted to enter or select one or more characteristics of the playback zone 810 via a controller interface provided by a controller device (e.g., controller device 808). For example, the controller interface may provide a list of playback zone sizes and/or a list of furniture arrangements, etc., from which the user selects. Data indicative of one or more characteristics of the playback zone 810 as provided by the user may then be received by at least one of the one or more devices performing the method 1100.
At block 1106, the method 1100 includes identifying an audio processing algorithm in the database based on the data. In the case where the one or more characteristics include acoustic characteristics of the playback zone 810, an audio processing algorithm may be identified in the database based on the acoustic characteristics of the playback zone 810. For example, referring to database 600 of FIG. 6A, if the received data indicates that the acoustic characteristic of playback zone 810 is h_room^{-1}(t)-3, or substantially the same as h_room^{-1}(t)-3, then the audio processing algorithm of database entry 606, having coefficients w3, x3, y3, and z3, may be identified. In the case where the data indicative of one or more characteristics of the playback zone includes only data indicative of the detected second audio signal, the acoustic characteristics of the playback zone may first be determined as described previously, before the audio processing algorithm is identified. Other examples are also possible.
In the case where the one or more characteristics include other characteristics, such as the size of the playback zone, the audio processing algorithm may be identified in the database based on the size of the playback zone. For example, referring to database 650 of FIG. 6B, if the received data indicates that the size of playback zone 810 is a4×b4×c4, or substantially the same as a4×b4×c4, then the audio processing algorithm of database entry 658, having coefficients w4, x4, y4, and z4, may be identified. Other examples are also possible.
In some cases, more than one audio processing algorithm may be identified based on the one or more characteristics of the playback zone indicated in the received data. For example, the acoustic characteristic of the playback zone 810 may be determined to be h_room^{-1}(t)-3, which corresponds to the audio processing algorithm parameters w3, x3, y3, and z3 provided in entry 656 of database 650 of FIG. 6B, while the user-provided size of the playback zone 810 may be a4×b4×c4, which corresponds to the audio processing algorithm parameters w4, x4, y4, and z4 provided in entry 658.
In one example, audio processing algorithms corresponding to matching or substantially matching acoustic characteristics may be prioritized. In another example, an average of the audio processing algorithms (i.e., an average of the parameters) may be calculated, and the average audio processing algorithm may be the identified audio processing algorithm. Other examples are also possible.
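Both options can be sketched with simple parameter vectors; the (w, x, y, z) values below are illustrative only:

```python
import numpy as np

# Two identified algorithms: one matched on acoustic characteristics,
# one matched on room size (illustrative parameter vectors).
params_acoustic = np.array([0.9, 1.1, 0.7, 1.0])
params_size = np.array([1.0, 1.0, 0.8, 0.9])

# Option 1: prioritize the algorithm matched on acoustic characteristics.
chosen = params_acoustic

# Option 2: average the parameters into a single identified algorithm.
averaged = (params_acoustic + params_size) / 2.0
print(averaged)  # -> [0.95 1.05 0.75 0.95]
```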
At block 1108, the method 1100 includes transmitting data indicative of the identified audio processing algorithm. Continuing with the above example, data indicative of the identified audio processing algorithm may be sent to the playback device 804 being calibrated for the playback zone 810. In one case, the data indicative of the identified audio processing algorithm may be sent directly to the playback device 804. In another case, for example, if calibration of the playback device 804 is coordinated by the controller device 808 and the audio processing algorithm is identified by the computing device 802, data indicative of the identified audio processing algorithm may be transmitted from the computing device 802 to the playback device 804 via the controller device 808. Other examples are also possible.
As described above, the functions of method 1100 may be performed by one or more of the following: one or more servers, one or more playback devices, and/or one or more controller devices. In one example, maintaining the database at block 1102 may be performed by the computing device 802, and receiving data indicative of one or more characteristics of the playback zone at block 1104 may be performed by the controller device 808 (to which the data may be provided by the playback device 804 being calibrated in the playback zone 810). Block 1106 may be performed by the controller device 808 in communication with the computing device 802, accessing the database maintained by the computing device 802 to identify the audio processing algorithm, and block 1108 may include the computing device 802 transmitting data indicative of the identified audio processing algorithm directly to the playback device 804 or to the playback device 804 via the controller device 808.
In another example, the functions of method 1100 may be performed entirely or substantially entirely by one device. For example, as discussed with respect to block 1102, the computing device 802 may maintain a database.
The computing device 802 may then coordinate calibration of the playback device 804. For example, the computing device 802 may cause the playback device 804 to play a first audio signal and detect a second audio signal, receive data indicative of the detected second audio signal from the playback device 804, and determine the acoustic characteristics of the playback zone 810 based on the data from the playback device 804. In another case, the computing device 802 may cause the controller device 808 to prompt the user to provide one or more characteristics (e.g., dimensions, etc., as described above) of the playback zone 810 and receive data indicative of the user-provided characteristics of the playback zone 810.
The computing device may then identify an audio processing algorithm based on the received data at block 1106 and transmit data indicative of the identified audio processing algorithm to the playback device 804 at block 1108. The computing device 802 may also send commands to the playback device 804 to apply the identified audio processing algorithms when audio content is played in the playback zone 810. Other examples are also possible.
Summary of the invention
The above description discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including components, such as firmware and/or software, executed on hardware. It should be understood that these examples are exemplary only, and should not be considered as limiting. For example, it is contemplated that any or all of these firmware, hardware, and/or software aspects or components could be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only way to implement such systems, methods, apparatuses, and/or articles of manufacture.
Furthermore, references herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one example embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Thus, those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following examples set forth further or alternative aspects of the disclosure. The device in any of the following examples may be a component of any device described herein or any configuration of a device described herein.
(feature 1) a computing device comprising:
a processor; and
a memory storing instructions executable by the processor to cause the computing device to perform functions comprising:
causing a playback device to play a first audio signal in a playback zone;
receiving, from the playback device, data indicative of a second audio signal detected by a microphone of the playback device, the second audio signal including a portion corresponding to the first audio signal;
determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device; and
transmitting data indicative of the determined audio processing algorithm to the playback device.
(feature 2) the computing device of any of the preceding features, wherein, when the playback device plays the first audio signal in the playback zone, the playback device applies the determined audio processing algorithm to generate a third audio signal having audio characteristics substantially the same as predetermined audio characteristics.
(feature 3) the computing device of any of the preceding features, wherein determining an audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the second audio signal and an acoustic characteristic of the playback device; and
selecting an audio processing algorithm from a plurality of audio processing algorithms corresponding to the determined acoustic characteristics of the playback zone.
(feature 4) the computing device of any of the preceding features, wherein determining an audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the second audio signal and an acoustic characteristic of the playback device; and
calculating the audio processing algorithm based on the acoustic characteristics of the playback zone and the predetermined audio characteristics.
(feature 5) the computing device of any of the preceding features, wherein determining an audio processing algorithm comprises:
one or more parameters of the audio processing algorithm are determined.
(feature 6) the computing device of any of the preceding features, wherein the functions further comprise:
causing the reference playback device to play the measurement signal in the anechoic chamber;
receiving data from the reference playback device indicative of an audio signal detected by a microphone of the reference playback device, wherein the detected audio signal includes a portion corresponding to the measurement signal played in the anechoic chamber; and
determining an acoustic characteristic of the playback device based on a comparison between the detected audio signal and the measurement signal.
(feature 7) a computing device comprising:
a processor; and
a memory storing instructions executable by the processor to cause the computing device to perform functions comprising:
causing a first playback device to play a first audio signal in a playback zone;
causing a second playback device to play a second audio signal in the playback zone;
receiving, from the first playback device, data indicative of a third audio signal detected by a microphone of the first playback device, the third audio signal comprising: (i) a portion corresponding to the first audio signal, and (ii) a portion corresponding to the second audio signal played by the second playback device;
determining an audio processing algorithm based on the third audio signal and the acoustic characteristics of the first playback device; and
transmitting data indicative of the determined audio processing algorithm to the first playback device.
(feature 8) the computing device of feature 7, wherein, when the first playback device plays the first audio signal in the playback zone, the first playback device applies the determined audio processing algorithm to generate a fourth audio signal having audio characteristics substantially the same as the predetermined audio characteristics.
(feature 9) the computing device of any of features 7-8, wherein determining an audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the third audio signal and an acoustic characteristic of the first playback device; and
an audio processing algorithm corresponding to the acoustic characteristics of the playback zone is selected from a plurality of audio processing algorithms.
(feature 10) the computing device of any of features 7-9, wherein causing a second playback device to play a second audio signal comprises causing the second playback device to play the second audio signal in synchronization with playback of the first audio signal by the first playback device.
(feature 11) the computing device of any of features 7-10, wherein causing a second playback device to play a second audio signal includes causing the second playback device to play the second audio signal after playback of the first audio signal by the first playback device is complete.
(feature 12) the computing device of any of features 7-11, wherein the first audio signal is orthogonal to the second audio signal.
(feature 13) the computing device of any of features 7-12, wherein the first playback device and the second playback device are in a zone group of playback devices configured to synchronously play audio content.
(feature 14) a playback device, comprising:
a processor;
a microphone; and
a memory storing instructions executable by the processor to cause the playback device to perform functions comprising:
playing the first audio signal while in the playback zone;
detecting, by the microphone, a second audio signal, the second audio signal including a portion corresponding to the first audio signal;
determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device; and
applying the determined audio processing algorithm to audio data corresponding to the media item when the media item is played in the playback zone.
(feature 15) the playback device of feature 14, wherein, when the first audio signal is played in the playback zone, applying the determined audio processing algorithm produces a third audio signal having audio characteristics substantially the same as the predetermined audio characteristics.
(feature 16) the playback device of any of features 14-15, wherein determining an audio processing algorithm further comprises:
determining one or more characteristics of the playback zone based on the second audio signal and acoustic characteristics of the playback device; and
selecting an audio processing algorithm from a plurality of audio processing algorithms corresponding to the one or more characteristics of the playback zone.
(feature 17) the playback device of any of features 14-16, wherein determining an audio processing algorithm comprises:
sending a transmission to a computing device indicating (i) the second audio signal and (ii) a characteristic of the playback device; and
receiving data indicative of the audio processing algorithm from the computing device.
(feature 18) the playback device of any of features 14-17, wherein the functions further comprise storing the determined audio processing algorithm in the memory.
(feature 19) the playback device of any of features 14-18, wherein applying the audio processing algorithm to the audio data comprises modifying frequency equalization of the media item.
(feature 20) the playback device of any of features 14-19, wherein applying the audio processing algorithm to the audio data includes modifying a volume range of the media item.
(feature 21) a computing device comprising:
a processor; and
a memory storing instructions executable by the processor to cause the computing device to perform functions comprising:
causing a playback device to play a first audio signal in a playback zone;
receiving data indicative of a second audio signal detected by a microphone of the playback device, wherein the second audio signal includes a portion corresponding to the first audio signal played by the playback device;
determining an acoustic characteristic of the playback zone based on the second audio signal and a characteristic of the playback device;
determining an audio processing algorithm based on the acoustic characteristics of the playback zone; and
causing an association between the audio processing algorithm and the acoustic characteristics of the playback zone to be stored in a database.
(feature 22) the computing device of any of the preceding features, wherein when the playback device plays the first audio signal in the playback zone, the playback device applies the determined audio processing algorithm to generate a third audio signal having audio characteristics substantially the same as predetermined audio characteristics.
(feature 23) the computing device of any of the preceding features, wherein the playback device is a first playback device, the playback zone is a first playback zone, the audio processing algorithm is a first audio processing algorithm, and wherein the functions further comprise:
causing the second playback device to play the fourth audio signal in the second playback zone;
receiving data indicative of a fifth audio signal detected by a microphone of the second playback device, wherein the fifth audio signal includes a portion corresponding to the fourth audio signal played by the second playback device;
determining an acoustic characteristic of the second playback zone based on the fifth audio signal and a characteristic of the second playback device;
determining a second audio processing algorithm based on the acoustic characteristics of the second playback zone; and
causing an association between the second audio processing algorithm and the acoustic characteristics of the second playback zone to be stored in the database.
(feature 24) the computing device of feature 23, wherein the first playback device applies the determined first audio processing algorithm to generate a third audio signal having audio characteristics that are substantially the same as the predetermined audio characteristics when the first playback device plays the first audio signal in the first playback zone, and wherein the second playback device applies the determined second audio processing algorithm to generate a sixth audio signal having audio characteristics that are substantially the same as the predetermined audio characteristics when the second playback device plays the fourth audio signal in the second playback zone.
(feature 25) the computing device of feature 23, wherein the functions further comprise:
determining that the acoustic characteristics of the second playback zone are substantially the same as the acoustic characteristics of the first playback zone;
responsively, determining a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm; and
causing an association between the third audio processing algorithm and the acoustic characteristics of the first playback zone to be stored in the database.
(feature 26) the computing device of any of features 21-25, wherein determining an audio processing algorithm comprises:
one or more parameters of the audio processing algorithm are determined.
(feature 27) the computing device of any of features 21-26, wherein the functions further comprise:
receiving data indicative of one or more characteristics of the playback zone; and
causing an association between the one or more characteristics of the playback zone and the second audio processing algorithm to be stored in the database.
(feature 28) the computing device of feature 27, wherein the one or more characteristics of the playback zone include one or more of: (a) a size of the playback zone, (b) audio reflection characteristics of the playback zone, (c) an intended use of the playback zone, (d) a number of pieces of furniture in the playback zone, (e) a size of pieces of furniture in the playback zone, and (f) a type of furniture in the playback zone.
(feature 29) a computing device comprising:
a processor; and
a memory storing instructions executable by the processor to cause the computing device to perform functions comprising:
causing a playback device to play a first audio signal in a playback zone;
receiving (i) data indicative of one or more characteristics of the playback zone and (ii) data indicative of a second audio signal detected by a microphone of the playback device, wherein the second audio signal includes a portion corresponding to the audio signal played by the playback device;
determining an audio processing algorithm based on the second audio signal and a characteristic of the playback device; and
causing an association between the determined audio processing algorithm and at least one of the one or more characteristics of the playback zone to be stored in a database.
(feature 30) the computing device of feature 29, wherein determining an audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the second audio signal and a characteristic of the playback device; and
determining an audio processing algorithm based on the acoustic characteristics of the playback zone such that: when the playback device plays the second audio signal in the playback zone, the playback device applies the determined audio processing algorithm to generate a third audio signal having audio characteristics substantially the same as the predetermined audio characteristics.
(feature 31) the computing device of any of features 29-30, wherein the playback device is a first playback device, the playback zone is a first playback zone, the audio processing algorithm is a first audio processing algorithm, and wherein the functions further comprise:
causing the second playback device to play the third audio signal in the second playback zone;
receiving (i) data indicative of one or more characteristics of the second playback zone and (ii) data indicative of a fourth audio signal detected by a microphone of a second playback device in the second playback zone, wherein the fourth audio signal includes a portion corresponding to the third audio signal played by the playback device;
determining an audio processing algorithm based on the fourth audio signal and characteristics of the second playback device; and
causing an association between the second audio processing algorithm and at least one of the one or more characteristics of the second playback zone to be stored in the database.
(feature 32) the computing device of feature 31, wherein determining a second audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the fourth audio signal and a characteristic of the playback device; and
determining an audio processing algorithm based on the acoustic characteristics of the playback zone such that: when the second playback device plays the third audio signal in the playback zone, the second playback device applies the determined audio processing algorithm to generate a fifth audio signal having audio characteristics substantially the same as the predetermined audio characteristics.
(feature 33) the computing device of feature 32, wherein the functions further comprise:
determining that the acoustic characteristics of the second playback zone are substantially the same as the acoustic characteristics of the first playback zone;
responsively, determining a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm; and
causing an association between the third audio processing algorithm and at least one of the one or more characteristics of the first playback zone to be stored in the database.
(feature 34) the computing device of any of features 29-33, wherein the one or more characteristics of the playback zone include one or more of: (a) a size of the playback zone, (b) audio reflection characteristics of the playback zone, (c) an intended use of the playback zone, (d) a number of pieces of furniture in the playback zone, (e) a size of pieces of furniture in the playback zone, (f) a type of furniture in the playback zone, and (g) acoustic characteristics of the playback zone.
(feature 35) a computing device comprising:
a processor; and
a memory storing instructions executable by the processor to cause the playback device to perform functions comprising:
maintaining a database of (i) a plurality of audio processing algorithms and (ii) a plurality of playback zone characteristics, wherein each audio processing algorithm of the plurality of audio processing algorithms corresponds to at least one playback zone characteristic of the plurality of playback zone characteristics;
receiving data indicative of one or more characteristics of a playback zone;
identifying an audio processing algorithm in a database based on the data; and
data indicative of the identified audio processing algorithm is transmitted.
(feature 36) the computing device of feature 35, wherein the data is further indicative of audio signals detected by microphones of playback devices in the playback zone.
(feature 37) the computing device of feature 36, wherein identifying in the database the audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the detected audio signal and a characteristic of the playback device; and
identifying an audio processing algorithm in the database based on the determined acoustic characteristics of the playback zone.
(feature 38) the computing device of feature 35, wherein the plurality of playback zone characteristics includes one or more of: (a) a size of the playback zone, (b) audio reflection characteristics of the playback zone, (c) an intended use of the playback zone, (d) a number of pieces of furniture in the playback zone, (e) a size of pieces of furniture in the playback zone, (f) a type of furniture in the playback zone, and (g) acoustic characteristics of the playback zone.
(feature 39) the computing device of feature 35, wherein the data indicative of one or more characteristics of the playback zone is received from a controller device.
(feature 40) the computing device of feature 35, wherein the data indicative of one or more characteristics of the playback zone is received from a playback device in the playback zone.
The description is presented primarily in terms of exemplary environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to a network. These process descriptions and representations are generally used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be understood by those skilled in the art that certain embodiments of the present disclosure may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than by the foregoing description of the embodiments.
Where any of the appended claims are understood to cover an implementation in pure software and/or firmware, at least one element in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, etc. storing the software and/or firmware.
Inventive concept
The invention provides the following inventive concepts:
1. a computing device (300) configured for:
causing a playback device (200) to play a first audio signal in a playback zone;
receiving data indicative of a second audio signal detected by a microphone (220) of the playback device, the second audio signal comprising a portion corresponding to the first audio signal;
determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device; and
performing at least one of:
transmitting data indicative of the determined audio processing algorithm to the playback device (200); and
causing an association between the audio processing algorithm and the acoustic characteristics of the playback zone to be stored in a database.
2. The computing device of inventive concept 1, wherein, when the playback device (200) plays the first audio signal in the playback zone, the playback device (200) applies the determined audio processing algorithm to generate a third audio signal having audio characteristics substantially the same as predetermined audio characteristics.
3. The computing device of one of inventive concepts 1 or 2, wherein determining the audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the second audio signal and an acoustic characteristic of the playback device (200); and
determining an audio processing algorithm based on the determined acoustic characteristics of the playback zone.
4. The computing device of inventive concept 3, wherein to determine an audio processing algorithm based on the determined acoustic characteristics of the playback zone comprises one of:
selecting an audio processing algorithm from a plurality of audio processing algorithms corresponding to the determined acoustic characteristics of the playback zone; and
calculating the audio processing algorithm based on the acoustic characteristics of the playback zone and predetermined audio characteristics.
5. The computing device of one of inventive concepts 1 to 4, wherein determining an audio processing algorithm comprises determining one or more parameters of the audio processing algorithm.
6. The computing device according to one of the inventive concepts 1 to 5, further configured for:
causing a reference playback device (200) to play a measurement signal in an anechoic chamber;
receiving data from the reference playback device (200) indicative of an audio signal detected by a microphone (220) of the reference playback device (200), wherein the detected audio signal comprises a portion corresponding to the measurement signal played in the anechoic chamber; and
determining an acoustic characteristic of the playback device (200) based on a comparison between the detected audio signal and the measurement signal.
7. The computing device of any of the preceding inventive concepts, further configured to:
prior to receiving data indicative of the second audio signal, causing a second playback device (200) to play a fourth audio signal in the playback zone,
wherein the second audio signal further comprises a portion corresponding to the fourth audio signal played by the second playback device (200).
8. The computing device of inventive concept 7, further configured to cause the second playback device (200) to play the fourth audio signal in one of the following ways:
playing the fourth audio signal in synchronization with playback of the first audio signal by the first playback device (200); and
playing the fourth audio signal after playback of the first audio signal by the first playback device (200) is complete.
9. The computing device of one of inventive concepts 7 to 8, wherein the first audio signal is orthogonal to the fourth audio signal.
10. The computing device according to one of the inventive concepts 7 to 9, wherein the first playback device (200) and the second playback device (200) are in a zone group of playback devices (200) configured to play audio content synchronously.
11. The computing device of one of inventive concepts 1-2 and 5-10, wherein determining the audio processing algorithm further comprises:
receiving data indicative of one or more characteristics of the playback zone; and
causing an association between the one or more characteristics of the playback zone and the audio processing algorithm to be stored in the database.
12. The computing device of one of inventive concepts 1 to 6 and 9 to 11, wherein the playback device is a first playback device, the playback zone is a first playback zone, the audio processing algorithm is a first audio processing algorithm, and wherein the functions further comprise:
causing the second playback device to play the fourth audio signal in the second playback zone;
receiving data indicative of a fifth audio signal detected by a microphone of the second playback device, wherein the fifth audio signal includes a portion corresponding to the fourth audio signal played by the second playback device;
determining an acoustic characteristic of the second playback zone based on the fifth audio signal and a characteristic of the second playback device;
determining a second audio processing algorithm based on the acoustic characteristics of the second playback zone; and
causing an association between the second audio processing algorithm and the acoustic characteristics of the second playback zone to be stored in the database.
13. The computing device of inventive concept 12, wherein the functions further comprise:
determining that the acoustic characteristics of the second playback zone are substantially the same as the acoustic characteristics of the first playback zone;
responsively, determining a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm; and
causing an association between the third audio processing algorithm and the acoustic characteristics of the first playback zone to be stored in the database.
14. The computing device of one of the inventive concepts 11 to 13, wherein the data indicative of the one or more characteristics of the playback zone is received from one of a controller device and a playback device in the playback zone.
15. The computing device of one of inventive concepts 11 to 14, wherein the one or more characteristics of the playback zone include one or more of: (a) a size of the playback zone, (b) audio reflection characteristics of the playback zone, (c) an intended use of the playback zone, (d) a number of pieces of furniture in the playback zone, (e) a size of pieces of furniture in the playback zone, and (f) a type of furniture in the playback zone.
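By way of illustration only, a minimal Python sketch of associating zone characteristics like those in concept 15 with an audio processing algorithm, using an in-memory dict as a stand-in for the database. The schema (size, intended use, furnishing) and the preset values are assumptions.

    # Hypothetical "database" keyed by coarse zone characteristics.
    algorithm_db = {
        ("small", "bedroom", "sparse"): {"bass_db": -3.0, "treble_db": 1.0},
        ("large", "living room", "furnished"): {"bass_db": 2.0, "treble_db": 0.0},
    }

    def store_association(characteristics, algorithm):
        # Cause an association to be stored in the "database".
        algorithm_db[characteristics] = algorithm

    def lookup_algorithm(characteristics):
        return algorithm_db.get(characteristics)

    store_association(("medium", "kitchen", "sparse"),
                      {"bass_db": -1.5, "treble_db": 0.5})
    print(lookup_algorithm(("medium", "kitchen", "sparse")))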
16. A playback device (200) in a playback zone, comprising:
a microphone (220); and
a processor (202) configured for:
playing the first audio signal;
detecting, by the microphone (220), a second audio signal, the second audio signal comprising a portion corresponding to the first audio signal;
determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device (200); and
applying the determined audio processing algorithm to audio data corresponding to the media item when the media item is played in the playback zone.
17. The playback device of inventive concept 16, wherein determining the audio processing algorithm further comprises:
determining one or more characteristics of the playback zone based on the second audio signal and acoustic characteristics of the playback device (200); and
selecting an audio processing algorithm from a plurality of audio processing algorithms corresponding to the one or more characteristics of the playback zone.
18. The playback device of inventive concept 16, wherein determining the audio processing algorithm comprises:
sending a transmission to a computing device (300) indicative of (i) the second audio signal and (ii) a characteristic of the playback device (200); and
receiving data indicative of the audio processing algorithm from the computing device (300).
19. The playback device according to one of the inventive concepts 16 to 18, further comprising a memory (206) configured to store the determined audio processing algorithm.
20. The playback device of one of the inventive concepts 16 to 19, wherein applying the audio processing algorithm to the audio data comprises modifying at least one of:
a frequency equalization of the media item; and
a volume range of the media item.
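By way of illustration only, a minimal Python sketch of the two modifications named in concept 20: a peaking-EQ biquad (standard RBJ Audio EQ Cookbook form) for frequency equalization, and a soft limiter that narrows the volume range. The 120 Hz cut and the tanh limiter are illustrative assumptions, not the patented algorithm.

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(fs, f0, gain_db, q=1.0):
        # RBJ Audio EQ Cookbook peaking biquad coefficients.
        a_lin = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2.0 * q)
        b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
        a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
        return b / a[0], a / a[0]

    def apply_algorithm(audio, fs=44_100):
        # Frequency equalization: cut an assumed 120 Hz room mode by 6 dB.
        b, a = peaking_eq(fs, f0=120.0, gain_db=-6.0, q=2.0)
        audio = lfilter(b, a, audio)
        # Volume range: soft-limit peaks, narrowing the dynamic range.
        return np.tanh(audio)

    processed = apply_algorithm(np.random.randn(44_100))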

Claims (14)

1. A method performed by a computing device (802, 808) in communication with a media playback system, the method comprising:
receiving (502, 702), from a playback device (200, 804, 806) located in a playback zone (810), data indicative of a second audio signal detected by a microphone (220) while the playback device is playing a first audio signal;
determining, based on the received data, an acoustic characteristic of the playback device from a first database that associates acoustic characteristics of playback devices with particular playback device models;
determining an acoustic characteristic of the playback zone by removing the determined acoustic characteristic of the playback device from the second audio signal; and
determining (508, 706) an audio processing algorithm based on the determined acoustic characteristic of the playback zone, in accordance with a second database comprising audio processing algorithms associated with respective acoustic characteristics of playback zones;
wherein the microphone (220) is in the playback device.
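By way of illustration only, a minimal Python sketch of the flow of claim 1 under simplifying assumptions: the playback device's acoustic characteristic is modeled as an impulse response, the playback zone's characteristic is estimated by frequency-domain deconvolution ("removing" the device from the capture), and the second database is searched by nearest stored response. All data and the nearest-match rule are hypothetical.

    import numpy as np

    def zone_response(detected, played, device_ir, eps=1e-8):
        # "Remove" the device characteristic: in the frequency domain,
        # Zone(f) ~ Detected(f) / (Played(f) * Device(f)).
        n = len(detected)
        num = np.fft.rfft(detected, n)
        den = np.fft.rfft(played, n) * np.fft.rfft(device_ir, n)
        return np.fft.irfft(num / (den + eps), n)

    def pick_algorithm(zone_ir, second_db):
        # second_db: list of (stored zone impulse response, algorithm)
        # pairs; choose the algorithm whose stored response is closest.
        dists = [np.linalg.norm(zone_ir - ir) for ir, _ in second_db]
        return second_db[int(np.argmin(dists))][1]

The small eps term regularizes the division where the played signal or the device response has little energy; a production calibrator would more likely use a smoothed, band-limited deconvolution.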
2. The method of claim 1, wherein receiving data indicative of the second audio signal further comprises receiving data indicative of a model of the playback device.
3. The method of claim 1, wherein the second database comprising audio processing algorithms associated with respective acoustic characteristics of playback zones is stored on local storage of the computing device.
4. The method of claim 1, wherein the first database comprising acoustic characteristics of playback devices associated with respective playback device models is generated by:
causing a reference playback device to play a measurement signal x(t) in an anechoic chamber;
receiving, from the reference playback device, data indicative of an audio signal y(t) detected by a microphone in the reference playback device, wherein the detected audio signal comprises a portion corresponding to the measurement signal played in the anechoic chamber, such that y(t) = x(t) ⊗ h_p(t); and
determining a predetermined acoustic characteristic h_p(t) of the playback device based on a comparison between the detected audio signal y(t) and the measurement signal x(t).
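By way of illustration only, a minimal Python sketch of estimating h_p(t) from the measurement signal x(t) and the anechoic capture y(t), consistent with y(t) = x(t) ⊗ h_p(t): divide in the frequency domain and transform back. The toy data and the regularization constant are assumptions.

    import numpy as np

    def estimate_device_ir(x, y, eps=1e-8):
        # With y(t) = x(t) convolved with h_p(t), pointwise division
        # in the frequency domain recovers an estimate of h_p(t).
        n = len(y)
        h_f = np.fft.rfft(y, n) / (np.fft.rfft(x, n) + eps)
        return np.fft.irfft(h_f, n)

    # Toy check: a 3-tap "device response" is recovered from the capture.
    x = np.random.randn(48_000)          # stand-in measurement signal x(t)
    h_true = np.array([1.0, 0.5, 0.25])
    y = np.convolve(x, h_true)           # idealized anechoic capture y(t)
    print(estimate_device_ir(x, y)[:3].round(3))  # -> [1.0, 0.5, 0.25]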
5. The method of any preceding claim, wherein the computing device is a server or controller device.
6. The method of any preceding claim, wherein applying, by the playback device, the audio processing algorithm to the audio data of the media item results in a modification of the frequency equalization of the media item when the playback device plays the media item in the playback zone.
7. The method of claim 1 or 2, wherein one or more characteristics of the playback zone comprise one or more of: (a) a size of the playback zone, (b) audio reflection characteristics of the playback zone, (c) an intended use of the playback zone, (d) a number of pieces of furniture in the playback zone, (e) a size of pieces of furniture in the playback zone, and (f) a type of furniture in the playback zone.
8. The method of any preceding claim, wherein data indicative of one or more characteristics of the playback zone is received by the computing device from one of a controller device and a playback device in the playback zone.
9. The method of claim 1 or 2, wherein the playback device is a first playback device, the playback zone is a first playback zone, and the audio processing algorithm is a first audio processing algorithm, the method further comprising:
causing a second playback device to play a third audio signal in a second playback zone;
receiving, by the computing device, data indicative of a fourth audio signal detected by a microphone in the second playback device, wherein the fourth audio signal includes a portion corresponding to the third audio signal played by the second playback device;
determining an acoustic characteristic of the second playback zone based on the fourth audio signal and a predetermined acoustic characteristic of the second playback device;
determining a second audio processing algorithm based on the acoustic characteristics of the second playback zone, in accordance with the second database comprising audio processing algorithms associated with respective acoustic characteristics of playback zones.
10. The method of claim 9, further comprising:
determining that the acoustic characteristics of the second playback zone are the same as the acoustic characteristics of the first playback zone;
responsively, determining a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm;
wherein the third audio processing algorithm is determined based on the acoustic characteristics of the first playback zone, in accordance with the second database.
11. A system for calibrating a playback device (200), comprising:
the playback device (200), the playback device (200) comprising a built-in microphone (220), a speaker (212), and a network interface (214); and
a controller (300), the controller (300) comprising a network interface (306) and a processor configured to perform the method of any of the preceding claims.
12. A method of calibrating a playback device (200, 804, 806), the method comprising:
playing (502, 902) a first audio signal f(t) by the playback device (200, 804, 806) located in a playback zone (810);
detecting a second audio signal using a microphone included in the playback device while the playback device is playing the first audio signal;
transmitting, by the playback device to a computing device, data indicative of the second audio signal and a model of the playback device;
receiving an audio processing algorithm from the computing device based on the model of the playback device and the second audio signal; and
applying, by the playback device, the received audio processing algorithm to the media item when the media item is played by the playback device.
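By way of illustration only, a minimal Python sketch of the device-side flow of claim 12. The stub class, model string, and request_algorithm lookup are hypothetical placeholders; a real implementation would play and capture actual audio and exchange data with the computing device over the network interface.

    import numpy as np

    class PlaybackDeviceStub:
        # Hypothetical stand-in for the playback device (200, 804, 806).
        model = "EXAMPLE-MODEL"            # assumed model identifier

        def play_and_record(self, signal):
            # Stub: a real device plays via its speaker (212) while
            # capturing the room response with its microphone (220).
            return signal

        def set_processing(self, algorithm):
            self.algorithm = algorithm

    def request_algorithm(payload):
        # Stub for the computing device's database-backed lookup.
        return {"eq_db": [-3.0, 0.0, 1.0]}

    def calibrate(device):
        f_t = np.random.randn(48_000)            # first audio signal f(t)
        second = device.play_and_record(f_t)     # second audio signal
        payload = {"model": device.model, "capture": second}
        device.set_processing(request_algorithm(payload))

    calibrate(PlaybackDeviceStub())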
13. A playback device (200, 804, 806) configured to be located in a playback zone, comprising:
a speaker (212);
a microphone (220); and
a processor (202) configured to calibrate the playback device by:
playing (502, 902) a first audio signal f(t) by the playback device (200, 804, 806) located in a playback zone (810);
detecting a second audio signal using a microphone included in the playback device while the playback device is playing the first audio signal;
transmitting, by the playback device to a computing device, data indicative of the second audio signal and a model of the playback device;
receiving an audio processing algorithm from the computing device based on the model of the playback device and the second audio signal; and
applying, by the playback device, the received audio processing algorithm to the media item when the media item is played by the playback device.
14. The playback device of claim 13, wherein determining the audio processing algorithm further comprises:
selecting an audio processing algorithm from a plurality of audio processing algorithms corresponding to the determined acoustic characteristics of the playback zone.
CN202010187024.8A 2014-09-09 2015-09-08 Method performed by computing device, playback device, calibration system and method thereof Active CN111565352B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US14/481,514 2014-09-09
US14/481,505 2014-09-09
US14/481,505 US9952825B2 (en) 2014-09-09 2014-09-09 Audio processing algorithms
US14/481,514 US9891881B2 (en) 2014-09-09 2014-09-09 Audio processing algorithm database
CN201580047998.3A CN106688248B (en) 2014-09-09 2015-09-08 Audio processing algorithms and databases

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201580047998.3A Division CN106688248B (en) 2014-09-09 2015-09-08 Audio processing algorithms and databases

Publications (2)

Publication Number Publication Date
CN111565352A (en) 2020-08-21
CN111565352B (en) 2021-08-06

Family

ID=54292894

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010187024.8A Active CN111565352B (en) 2014-09-09 2015-09-08 Method performed by computing device, playback device, calibration system and method thereof
CN201580047998.3A Active CN106688248B (en) 2014-09-09 2015-09-08 Audio processing algorithms and databases

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201580047998.3A Active CN106688248B (en) 2014-09-09 2015-09-08 Audio processing algorithms and databases

Country Status (4)

Country Link
EP (2) EP3111678B1 (en)
JP (4) JP6503457B2 (en)
CN (2) CN111565352B (en)
WO (1) WO2016040324A1 (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9084058B2 (en) 2011-12-29 2015-07-14 Sonos, Inc. Sound field calibration using listener localization
US9706323B2 (en) 2014-09-09 2017-07-11 Sonos, Inc. Playback device calibration
US9219460B2 (en) 2014-03-17 2015-12-22 Sonos, Inc. Audio settings based on environment
US9106192B2 (en) 2012-06-28 2015-08-11 Sonos, Inc. System and method for device playback calibration
US9690539B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration user interface
US9264839B2 (en) 2014-03-17 2016-02-16 Sonos, Inc. Playback device configuration based on proximity detection
US10127006B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Facilitating calibration of an audio playback device
US9910634B2 (en) 2014-09-09 2018-03-06 Sonos, Inc. Microphone calibration
US9891881B2 (en) 2014-09-09 2018-02-13 Sonos, Inc. Audio processing algorithm database
US9952825B2 (en) 2014-09-09 2018-04-24 Sonos, Inc. Audio processing algorithms
WO2016172593A1 (en) 2015-04-24 2016-10-27 Sonos, Inc. Playback device calibration user interfaces
US10664224B2 (en) 2015-04-24 2020-05-26 Sonos, Inc. Speaker calibration user interface
US9538305B2 (en) 2015-07-28 2017-01-03 Sonos, Inc. Calibration error conditions
JP6437695B2 (en) 2015-09-17 2018-12-12 Sonos, Inc. Method for facilitating calibration of an audio playback device
US9693165B2 (en) 2015-09-17 2017-06-27 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US9743207B1 (en) 2016-01-18 2017-08-22 Sonos, Inc. Calibration using multiple recording devices
US10003899B2 (en) 2016-01-25 2018-06-19 Sonos, Inc. Calibration with particular locations
US11106423B2 (en) 2016-01-25 2021-08-31 Sonos, Inc. Evaluating calibration of a playback device
US9860662B2 (en) 2016-04-01 2018-01-02 Sonos, Inc. Updating playback device configuration information based on calibration data
EP3226575B1 (en) * 2016-04-01 2019-05-15 Sonos, Inc. Updating playback device configuration information based on calibration data
US9864574B2 (en) 2016-04-01 2018-01-09 Sonos, Inc. Playback device calibration based on representation spectral characteristics
US9763018B1 (en) 2016-04-12 2017-09-12 Sonos, Inc. Calibration of audio playback devices
US9794710B1 (en) 2016-07-15 2017-10-17 Sonos, Inc. Spatial audio correction
US9860670B1 (en) 2016-07-15 2018-01-02 Sonos, Inc. Spectral correction using spatial calibration
US10372406B2 (en) 2016-07-22 2019-08-06 Sonos, Inc. Calibration interface
US10459684B2 (en) 2016-08-05 2019-10-29 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
US10299061B1 (en) 2018-08-28 2019-05-21 Sonos, Inc. Playback device calibration
US11206484B2 (en) 2018-08-28 2021-12-21 Sonos, Inc. Passive speaker authentication
US10959018B1 (en) * 2019-01-18 2021-03-23 Amazon Technologies, Inc. Method for autonomous loudspeaker room adaptation
US10734965B1 (en) 2019-08-12 2020-08-04 Sonos, Inc. Audio calibration of a portable playback device

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0828920B2 (en) * 1992-01-20 1996-03-21 Matsushita Electric Industrial Co., Ltd. Speaker measuring device
JP2870359B2 (en) * 1993-05-11 1999-03-17 Yamaha Corporation Acoustic characteristic correction device
JPH10307592A (en) * 1997-05-08 1998-11-17 Alpine Electron Inc Data distributing system for on-vehicle audio device
JP4187391B2 (en) * 2000-08-28 2008-11-26 Fujitsu Ten Limited In-vehicle audio service method
JP2004159037A (en) * 2002-11-06 2004-06-03 Sony Corp Automatic sound adjustment system, sound adjusting device, sound analyzer, and sound analysis processing program
US8234395B2 (en) 2003-07-28 2012-07-31 Sonos, Inc. System and method for synchronizing operations among a plurality of independently clocked digital data processing devices
JP2005086686A (en) * 2003-09-10 2005-03-31 Fujitsu Ten Ltd Electronic equipment
LV13342B (en) * 2005-05-18 2005-10-20 Real Sound Lab Sia Method and device for correction of acoustic parameters of electro-acoustic transducers
JP4407571B2 (en) * 2005-06-06 2010-02-03 Denso Corporation In-vehicle system, vehicle interior sound field adjustment system, and portable terminal
JP2007271802A (en) * 2006-03-30 2007-10-18 Kenwood Corp Content reproduction system and computer program
JP4725422B2 (en) * 2006-06-02 2011-07-13 Konica Minolta Holdings, Inc. Echo cancellation circuit, acoustic device, network camera, and echo cancellation method
JP2008035254A (en) * 2006-07-28 2008-02-14 Sharp Corp Sound output device and television receiver
US7845233B2 (en) * 2007-02-02 2010-12-07 Seagrave Charles G Sound sensor array with optical outputs
JP2008228133A (en) * 2007-03-15 2008-09-25 Matsushita Electric Ind Co Ltd Acoustic system
JP5313549B2 (en) * 2008-05-27 2013-10-09 Alpine Electronics, Inc. Acoustic information providing system and in-vehicle acoustic device
US8819554B2 (en) * 2008-12-23 2014-08-26 At&T Intellectual Property I, L.P. System and method for playing media
US8300840B1 (en) * 2009-02-10 2012-10-30 Frye Electronics, Inc. Multiple superimposed audio frequency test system and sound chamber with attenuated echo properties
JP2011164166A (en) * 2010-02-05 2011-08-25 D&M Holdings Inc Audio signal amplifying apparatus
US9307340B2 (en) * 2010-05-06 2016-04-05 Dolby Laboratories Licensing Corporation Audio system equalization for portable media playback devices
JP5533248B2 (en) * 2010-05-20 2014-06-25 Sony Corporation Audio signal processing apparatus and audio signal processing method
DE102011076484A1 (en) * 2011-05-25 2012-11-29 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. SOUND PLAYING DEVICE WITH HORIZONTAL SIMULATION
US9438996B2 (en) * 2012-02-21 2016-09-06 Intertrust Technologies Corporation Systems and methods for calibrating speakers
JP2013247456A (en) * 2012-05-24 2013-12-09 Toshiba Corp Acoustic processing device, acoustic processing method, acoustic processing program, and acoustic processing system
US9106192B2 (en) * 2012-06-28 2015-08-11 Sonos, Inc. System and method for device playback calibration
WO2014085510A1 (en) * 2012-11-30 2014-06-05 Dts, Inc. Method and apparatus for personalized audio virtualization

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1369188A (en) * 1999-08-11 2002-09-11 Pacific Microsonics, Inc. Compensation system and method for sound reproduction
CN1447624A (en) * 2002-03-25 2003-10-08 Bose Corporation Automatic audio system equalization
CN101032187A (en) * 2004-10-26 2007-09-05 Intel Corporation System and method for optimizing media center audio through microphones embedded in a remote control
CN102318325A (en) * 2009-02-11 2012-01-11 NXP B.V. Controlling an adaptation of a behavior of an audio device to a current acoustic environmental condition
CN103811010A (en) * 2010-02-24 2014-05-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus for generating an enhanced downmix signal, method for generating an enhanced downmix signal and computer program
CN102893633A (en) * 2010-05-06 2013-01-23 Dolby Laboratories Licensing Corporation Audio system equalization for portable media playback devices
CN102004823A (en) * 2010-11-11 2011-04-06 Zhejiang Zhongke Electro-Acoustics R&D Center Numerical simulation method for the vibration and acoustic characteristics of a loudspeaker
EP2747081A1 (en) * 2012-12-18 2014-06-25 Oticon A/S An audio processing device comprising artifact reduction

Also Published As

Publication number Publication date
JP2017528083A (en) 2017-09-21
JP6503457B2 (en) 2019-04-17
JP7110301B2 (en) 2022-08-01
JP6792015B2 (en) 2020-11-25
EP4243450A2 (en) 2023-09-13
EP3111678B1 (en) 2023-11-01
WO2016040324A1 (en) 2016-03-17
CN111565352B (en) 2021-08-06
CN106688248A (en) 2017-05-17
JP2019134470A (en) 2019-08-08
EP3111678A1 (en) 2017-01-04
JP2021044818A (en) 2021-03-18
CN106688248B (en) 2020-04-14
JP2022163061A (en) 2022-10-25
EP4243450A3 (en) 2023-11-15

Similar Documents

Publication number Publication date Title
CN111565352B (en) Method performed by computing device, playback device, calibration system and method thereof
US11625219B2 (en) Audio processing algorithms
CN110719561B (en) Computing device, computer readable medium, and method executed by computing device
US10127008B2 (en) Audio processing algorithm database
US10853027B2 (en) Calibration of a playback device based on an estimated frequency response
US10701501B2 (en) Playback device calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant