WO2021132852A1 - Audio data output method and electronic device supporting the same - Google Patents


Info

Publication number
WO2021132852A1
WO2021132852A1 (PCT/KR2020/012910)
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
user
processor
speakers
audio data
Prior art date
Application number
PCT/KR2020/012910
Other languages
English (en)
Korean (ko)
Inventor
고성환
김기훈
박영현
박의순
박진우
방경호
송은정
정문식
조준영
Original Assignee
Samsung Electronics Co., Ltd. (삼성전자 주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2021132852A1

Classifications

    • H04S 7/302: Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 7/303: Tracking of listener position or orientation
    • H04R 3/00: Circuits for transducers, loudspeakers or microphones
    • H04R 3/005: Circuits for combining the signals of two or more microphones
    • H04S 7/00: Indicating arrangements; control arrangements, e.g. balance control
    • H04R 2499/11: Transducers incorporated or for use in hand-held devices, e.g. mobile phones
    • H04S 2420/11: Application of ambisonics in stereophonic audio systems

Definitions

  • Various embodiments of the present disclosure relate to a method of outputting audio data and an electronic device supporting the same.
  • An electronic device such as a smart phone may provide various functions.
  • the electronic device may receive a user's voice through a microphone and may provide a function of outputting voice data through a speaker.
  • For example, during a call, the electronic device may transmit the user's voice received through the microphone to an external electronic device, and may output the other party's voice through the speaker.
  • Existing electronic devices support only dual-mono sound transmission and reception during a call. For example, even if an electronic device is equipped with stereo speakers, it outputs dual-mono audio data rather than stereo audio data during a call. Since most recently released electronic devices are equipped with stereo speakers, a function for outputting stereo audio data during a call is required.
  • Various embodiments of the present disclosure may provide an audio data output method for selecting and outputting audio data based on a positional relationship between an electronic device and a user, and an electronic device supporting the same.
  • According to various embodiments, an electronic device includes a plurality of microphones, a plurality of speakers, a sensor, a memory, and a processor operatively connected to the plurality of microphones, the plurality of speakers, the sensor, and the memory. The processor may be configured to receive a user's voice through each of the plurality of microphones, determine the positional relationship between the electronic device and the user based on the difference in reception times of the user's voice at each of the plurality of microphones, determine the posture of the electronic device based on sensor information measured through the sensor, and, based on the determined positional relationship and the determined posture, determine the audio data to be output through the plurality of speakers included in the electronic device.
  • According to various embodiments, a method for outputting audio data of an electronic device includes receiving a user's voice through each of a plurality of microphones included in the electronic device, determining the positional relationship between the electronic device and the user based on the difference in reception times of the user's voice at each of the plurality of microphones, determining the posture of the electronic device based on sensor information measured through a sensor included in the electronic device, and determining the audio data to be output through a plurality of speakers included in the electronic device based on the determined positional relationship and the determined posture.
  • According to various embodiments, an electronic device includes a plurality of microphones, a plurality of speakers, a camera, a memory, and a processor operatively connected to the plurality of microphones, the plurality of speakers, the camera, and the memory. The processor may be configured to receive a user's voice through each of the plurality of microphones, obtain an image captured by the camera, obtain a position value of an object corresponding to the user from the image, determine the positional relationship between the electronic device and the user based on the difference in reception times of the user's voice at each of the plurality of microphones and the position value of the object, and determine the audio data to be output through the plurality of speakers based on the determined positional relationship.
  • According to various embodiments, high-quality sound may be provided to the user by selectively outputting audio data based on the positional relationship between the electronic device and the user.
  • FIG. 1 is a block diagram of an electronic device in a network environment according to various embodiments of the present disclosure.
  • FIG. 2 is a block diagram of an electronic device related to output of audio data according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a method of outputting audio data according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a method of selectively outputting audio data based on a positional relationship between an electronic device and a user and a posture of the electronic device, according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating another method of selectively outputting audio data based on a positional relationship between an electronic device and a user and a posture of the electronic device, according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating another method of selectively outputting audio data based on a positional relationship between an electronic device and a user and a posture of the electronic device, according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a method of selectively outputting audio data based on a positional relationship between an electronic device and a user, according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating another method of selectively outputting audio data based on a positional relationship between an electronic device and a user, according to an embodiment of the present invention.
  • FIG. 9 is a diagram illustrating a preset area according to the arrangement positions of a plurality of speakers, according to an embodiment of the present invention.
  • FIG. 1 is a block diagram of an electronic device 101 in a network environment 100 according to various embodiments.
  • Referring to FIG. 1, the electronic device 101 may communicate with the electronic device 102 through a first network 198 (e.g., a short-range wireless communication network), or with the electronic device 104 or the server 108 through a second network 199 (e.g., a long-range wireless communication network). According to an embodiment, the electronic device 101 may communicate with the electronic device 104 through the server 108.
  • The electronic device 101 may include a processor 120, a memory 130, an input device 150, a sound output device 155, a display device 160, an audio module 170, a sensor module 176, an interface 177, a haptic module 179, a camera module 180, a power management module 188, a battery 189, a communication module 190, a subscriber identification module 196, or an antenna module 197. In some embodiments, at least one of these components (e.g., the display device 160 or the camera module 180) may be omitted, or one or more other components may be added to the electronic device 101. In some embodiments, some of these components may be implemented as one integrated circuit. For example, the sensor module 176 (e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be embedded in the display device 160 (e.g., a display).
  • The processor 120 may execute software (e.g., the program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101 connected to the processor 120, and may perform various data processing or operations. According to an embodiment, as at least part of the data processing or operations, the processor 120 may load commands or data received from other components (e.g., the sensor module 176 or the communication module 190) into the volatile memory 132, process the commands or data stored in the volatile memory 132, and store the resulting data in the non-volatile memory 134.
  • The processor 120 may include a main processor 121 (e.g., a central processing unit or an application processor) and an auxiliary processor 123 (e.g., a graphics processing unit, an image signal processor, a sensor hub processor, or a communication processor) that can operate independently of or together with the main processor. Additionally or alternatively, the auxiliary processor 123 may be configured to use less power than the main processor 121 or to be specialized for a designated function. The auxiliary processor 123 may be implemented separately from, or as a part of, the main processor 121.
  • The auxiliary processor 123 may control at least some of the functions or states related to at least one of the components of the electronic device 101 (e.g., the display device 160, the sensor module 176, or the communication module 190), on behalf of the main processor 121 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while the main processor 121 is in an active (e.g., application execution) state. According to an embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as a part of another functionally related component (e.g., the camera module 180 or the communication module 190).
  • the memory 130 may store various data used by at least one component (eg, the processor 120 or the sensor module 176 ) of the electronic device 101 .
  • the data may include, for example, input data or output data for software (eg, the program 140 ) and instructions related thereto.
  • the memory 130 may include a volatile memory 132 or a non-volatile memory 134 .
  • the program 140 may be stored as software in the memory 130 , and may include, for example, an operating system 142 , middleware 144 , or an application 146 .
  • the input device 150 may receive a command or data to be used by a component (eg, the processor 120 ) of the electronic device 101 from the outside (eg, a user) of the electronic device 101 .
  • the input device 150 may include, for example, a microphone, a mouse, a keyboard, or a digital pen (eg, a stylus pen).
  • the sound output device 155 may output a sound signal to the outside of the electronic device 101 .
  • the sound output device 155 may include, for example, a speaker or a receiver.
  • the speaker can be used for general purposes such as multimedia playback or recording playback, and the receiver can be used to receive incoming calls. According to one embodiment, the receiver may be implemented separately from or as part of the speaker.
  • the display device 160 may visually provide information to the outside (eg, a user) of the electronic device 101 .
  • the display device 160 may include, for example, a display, a hologram device, or a projector and a control circuit for controlling the corresponding device.
  • According to an embodiment, the display device 160 may include touch circuitry configured to sense a touch, or a sensor circuit (e.g., a pressure sensor) configured to measure the intensity of force generated by a touch.
  • The audio module 170 may convert sound into an electrical signal or, conversely, convert an electrical signal into sound. According to an embodiment, the audio module 170 may acquire sound through the input device 150, or may output sound through the sound output device 155 or through an external electronic device (e.g., the electronic device 102, such as a speaker or headphones) connected directly or wirelessly to the electronic device 101.
  • The sensor module 176 may detect an operating state (e.g., power or temperature) of the electronic device 101 or an external environmental state (e.g., a user state), and may generate an electrical signal or data value corresponding to the sensed state.
  • According to an embodiment, the sensor module 176 may include, for example, a gesture sensor, a gyro sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
  • the interface 177 may support one or more specified protocols that may be used by the electronic device 101 to directly or wirelessly connect with an external electronic device (eg, the electronic device 102 ).
  • the interface 177 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, an SD card interface, or an audio interface.
  • the connection terminal 178 may include a connector through which the electronic device 101 can be physically connected to an external electronic device (eg, the electronic device 102 ).
  • the connection terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (eg, a headphone connector).
  • the haptic module 179 may convert an electrical signal into a mechanical stimulus (eg, vibration or movement) or an electrical stimulus that the user can perceive through tactile or kinesthetic sense.
  • the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electrical stimulation device.
  • the camera module 180 may capture still images and moving images. According to an embodiment, the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes.
  • the power management module 188 may manage power supplied to the electronic device 101 .
  • the power management module 188 may be implemented as, for example, at least a part of a power management integrated circuit (PMIC).
  • the battery 189 may supply power to at least one component of the electronic device 101 .
  • the battery 189 may include, for example, a non-rechargeable primary cell, a rechargeable secondary cell, or a fuel cell.
  • The communication module 190 may support establishment of a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108), and communication through the established channel.
  • the communication module 190 may include one or more communication processors that operate independently of the processor 120 (eg, an application processor) and support direct (eg, wired) communication or wireless communication.
  • The communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication module).
  • A corresponding communication module may communicate with an external electronic device via the first network 198 (e.g., a short-range communication network such as Bluetooth, WiFi Direct, or infrared data association (IrDA)) or the second network 199 (e.g., a long-range communication network such as a cellular network, the Internet, or a computer network such as a LAN or WAN).
  • These various types of communication modules may be integrated into one component (eg, a single chip) or may be implemented as a plurality of components (eg, multiple chips) separate from each other.
  • The wireless communication module 192 may identify and authenticate the electronic device 101 within a communication network, such as the first network 198 or the second network 199, using subscriber information (e.g., an International Mobile Subscriber Identity (IMSI)) stored in the subscriber identification module 196.
  • the antenna module 197 may transmit or receive a signal or power to the outside (eg, an external electronic device).
  • The antenna module 197 may include one antenna including a radiator formed of a conductor or a conductive pattern formed on a substrate (e.g., a PCB).
  • According to an embodiment, the antenna module 197 may include a plurality of antennas. In this case, at least one antenna suitable for the communication scheme used in a communication network, such as the first network 198 or the second network 199, may be selected from the plurality of antennas. A signal or power may be transmitted or received between the communication module 190 and an external electronic device through the selected at least one antenna.
  • According to an embodiment, components other than the radiator (e.g., an RFIC) may be additionally formed as a part of the antenna module 197.
  • At least some of the above components may be connected to each other through a communication method between peripheral devices (e.g., a bus, general purpose input/output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)) and may exchange signals (e.g., commands or data) with each other.
  • the command or data may be transmitted or received between the electronic device 101 and the external electronic device 104 through the server 108 connected to the second network 199 .
  • Each of the electronic devices 102 and 104 may be a device of the same or a different type from the electronic device 101.
  • all or a part of operations executed in the electronic device 101 may be executed in one or more of the external electronic devices 102 , 104 , or 108 .
  • For example, when the electronic device 101 needs to perform a function or service automatically or in response to a request, the electronic device 101 may, instead of or in addition to executing the function or service itself, request one or more external electronic devices to perform at least a part of the function or service.
  • the one or more external electronic devices that have received the request may execute at least a part of the requested function or service, or an additional function or service related to the request, and transmit a result of the execution to the electronic device 101 .
  • The electronic device 101 may provide the result, as is or after additional processing, as at least a part of a response to the request.
  • To this end, cloud computing, distributed computing, or client-server computing technology may be used, for example.
  • FIG. 2 is a block diagram of an electronic device related to output of audio data according to an embodiment of the present invention.
  • Referring to FIG. 2, the electronic device 200 may provide audio data output through the plurality of speakers 202 as stereo sound or mono sound based on the positional relationship between the electronic device 200 and the user.
  • The electronic device 200 may selectively provide stereo sound or mono sound in order to offer the user a better-quality call environment during a hands-free call.
  • The electronic device 200 may determine the positional relationship between the electronic device 200 and the user, providing stereo sound when the user is located within a preset area and mono sound when the user is located outside the preset area.
  • The preset area is an area set according to the positions of the plurality of speakers 202 disposed on the electronic device 200, and may include a sweet spot in which stereo sound provides the best sound quality to the user.
  • To provide the above-described functions, the electronic device 200 may include a plurality of microphones 201, a plurality of speakers 202, a sensor 203, a camera 204, a memory 205, and a processor 206.
  • the configuration of the electronic device 200 is not limited thereto. According to various embodiments, the electronic device 200 may omit at least one component among the above-described components, and may further include at least one other component.
  • The plurality of microphones 201 may receive a user's voice and provide the received voice to the processor 206. Although FIG. 2 shows the plurality of microphones 201 as including a first microphone 201a and a second microphone 201b, the number of microphones included in the electronic device 200 is not limited thereto. According to an embodiment, the electronic device 200 may further include at least one other microphone.
  • the plurality of speakers 202 may output audio data received from the processor 206 .
  • the plurality of speakers 202 may output audio data selected by the processor 206 to provide sound to the user.
  • Although FIG. 2 shows the plurality of speakers 202 as including a first speaker 202a and a second speaker 202b, the number of speakers included in the electronic device 200 is not limited thereto.
  • The sensor 203 may be disposed inside the electronic device 200 to detect an operating state of the electronic device 200 or an external environmental state, and may generate an electrical signal or data value corresponding to the sensed state. According to an embodiment, the sensor 203 may acquire sensor information related to the posture of the electronic device 200. For example, whenever the posture of the electronic device 200 changes, the sensor 203 may measure the change angle of the electronic device 200 and provide the measured change angle to the processor 206 as sensor information.
  • For example, the sensor 203 may measure the change angle of the electronic device 200 based on an imaginary line passing through the plurality of speakers 202 disposed in the electronic device 200, and may provide the measured change angle of the electronic device 200 to the processor 206 as sensor information.
  • the sensor 203 may include, for example, at least one of a gyro sensor and an acceleration sensor. However, the type of the sensor 203 is not limited thereto.
  • the camera 204 may acquire image data by photographing an object (eg, a user).
  • the image data may include at least one of still image data and moving image data.
  • the memory 205 may store various data used by at least one component of the electronic device 200 .
  • The memory 205 may store various data, such as voices acquired from the plurality of microphones 201, audio data output through the plurality of speakers 202, and captured images acquired from the camera 204.
  • the processor 206 may be operatively connected to other components of the electronic device 200 to control operations of the other components.
  • The processor 206 may be operatively connected to the plurality of microphones 201, the plurality of speakers 202, the sensor 203, the camera 204, and the memory 205, and may control them.
  • The processor 206 may receive a user's voice through the plurality of microphones 201, and may determine the positional relationship between the electronic device 200 and the user based on the difference in reception times of the user's voice received through each of the plurality of microphones 201. For example, when receiving the user's voice through the first microphone 201a and the second microphone 201b included in the plurality of microphones 201, the processor 206 may determine the positional relationship between the electronic device 200 and the user based on the difference between a first time at which the user's voice is received through the first microphone 201a and a second time at which the user's voice is received through the second microphone 201b.
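The patent does not give a formula for this step, but the standard time-difference-of-arrival idea behind it can be sketched as follows. The function name, the far-field approximation, and the microphone spacing are assumptions for illustration; only the use of a reception-time difference between two microphones comes from the text above.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def arrival_angle(t1: float, t2: float, mic_spacing_m: float) -> float:
    """Estimate the direction of a sound source from the difference between
    arrival times t1 and t2 (seconds) at two microphones spaced
    mic_spacing_m apart.  Returns the angle in degrees from the broadside
    (perpendicular) direction of the microphone pair."""
    path_diff = (t1 - t2) * SPEED_OF_SOUND_M_S
    # Clamp to the physically possible range before taking the arcsine.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))
    return math.degrees(math.asin(ratio))

# A source directly in front of the pair arrives at both mics simultaneously.
print(arrival_angle(0.0, 0.0, 0.15))  # -> 0.0
```

A user centered between the microphones yields a near-zero time difference, which is why a small difference below suggests the user is inside the sweet spot.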
  • According to an embodiment, the processor 206 may compare the received first time with the second time, and may determine the positional relationship between the electronic device 200 and the user based on the comparison value and a first threshold value.
  • the electronic device 200 may determine whether the user is within a preset area based on the determined positional relationship.
  • For example, the processor 206 may compare the first time with the second time and obtain a comparison value (e.g., 3).
  • The processor 206 may compare the obtained comparison value (e.g., 3) with a first threshold value (e.g., 5). If the comparison value is equal to or greater than the first threshold value, the processor 206 may determine that the user is not located in the preset area; if the comparison value is less than the first threshold value, it may determine that the user is located in the preset area.
  • In the example above, since the comparison value of 3 is less than the first threshold value of 5, the processor 206 may determine that the user is currently within the preset area. That is, the first threshold value may be regarded as reference information for determining the positional relationship between the electronic device 200 and the user based on the reception times of the received voice. However, the first threshold value may be changed according to the size of the electronic device 200 and the position at which each of the plurality of speakers 202 is disposed in the electronic device 200.
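The comparison against the first threshold value described above can be sketched as a simple predicate. The function name and the use of an absolute time difference as the comparison value are assumptions, since the text does not specify exactly how the comparison value is formed.

```python
def in_preset_area(t_first_mic: float, t_second_mic: float,
                   first_threshold: float) -> bool:
    """The user is treated as inside the preset area only when the
    comparison value (here, the absolute difference between the reception
    times at the two microphones) is below the first threshold."""
    comparison_value = abs(t_first_mic - t_second_mic)
    return comparison_value < first_threshold

# With the values from the paragraph above: comparison value 3, threshold 5.
print(in_preset_area(10.0, 7.0, 5.0))  # -> True  (3 < 5)
print(in_preset_area(10.0, 3.0, 5.0))  # -> False (7 >= 5)
```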
  • The processor 206 may obtain sensor information related to the posture of the electronic device 200 through the sensor 203. According to an embodiment, the processor 206 may determine the posture of the electronic device 200 based on angle information of the electronic device 200 measured by the sensor 203. For example, the processor 206 may determine whether the electronic device is oriented vertically (portrait) or horizontally (landscape) based on the angle information of the electronic device 200.
  • For example, the processor 206 may obtain sensor information by measuring the change angle of the electronic device, and may compare the obtained information with a second threshold value. According to an embodiment, the processor 206 may determine, based on the sensor information acquired through the sensor 203, that the change angle of the electronic device 200 is 49 degrees. In this case, when the second threshold value is 45 degrees, the processor 206 may determine the posture of the electronic device 200 by comparing the change angle of 49 degrees with the second threshold value of 45 degrees.
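A minimal sketch of the posture check above. Which orientation corresponds to angles at or above the 45-degree threshold is an assumption, as the text only states that the change angle is compared with the second threshold value.

```python
def posture(change_angle_deg: float, second_threshold_deg: float = 45.0) -> str:
    """Classify the device posture from the change angle reported by the
    gyro/acceleration sensor: angles at or above the threshold are treated
    as landscape, smaller angles as portrait (assumed mapping)."""
    return "landscape" if change_angle_deg >= second_threshold_deg else "portrait"

print(posture(49.0))  # -> landscape (49 >= 45, as in the example above)
print(posture(30.0))  # -> portrait
```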
  • In this case, the processor 206 may determine, based on the positional relationship between the electronic device 200 and the user, that the user is not located in the preset area.
  • That is, the second threshold value is reference information for the processor 206 to determine whether the current posture of the electronic device 200 is a posture capable of providing stereo audio data to the user, and may be changed for each electronic device 200.
  • The processor 206 may determine the audio data to be output through the plurality of speakers 202 based on the positional relationship between the electronic device 200 and the user and the posture of the electronic device 200. The processor 206 may determine the audio data to be output through each of the plurality of speakers 202 according to whether the user is located in the preset area.
  • the processor 206 may output at least partially different audio data to each of the plurality of speakers 202 to provide stereo sound.
  • Otherwise, the processor 206 may output the same audio data to each of the plurality of speakers 202 to provide mono sound. That is, the processor 206 may determine, based on the first threshold value and the second threshold value, whether the user is located in a preset area in which stereo sound can be provided by the electronic device 200, and may decide, based on the result of the determination, whether to provide stereo sound or mono sound to the user.
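Combining the two checks, the stereo-versus-mono decision can be sketched as below. The function name, the absolute-difference comparison value, and the posture mapping are illustrative assumptions; the text only states that both threshold checks feed the decision.

```python
def select_output_mode(time_diff: float, first_threshold: float,
                       change_angle_deg: float,
                       second_threshold_deg: float) -> str:
    """Choose stereo only when the user is inside the preset area (small
    inter-microphone time difference) and the device posture allows a
    stereo sweet spot; otherwise fall back to mono."""
    in_area = abs(time_diff) < first_threshold
    posture_ok = change_angle_deg >= second_threshold_deg  # assumed mapping
    return "stereo" if (in_area and posture_ok) else "mono"

# Using the example values from the paragraphs above.
print(select_output_mode(3.0, 5.0, 49.0, 45.0))  # -> stereo
print(select_output_mode(7.0, 5.0, 49.0, 45.0))  # -> mono
```

In stereo mode the speakers receive at least partially different channels; in mono mode both receive identical audio data, matching the behavior described above.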
  • the preset area may be changed according to the location of each of the plurality of speakers 202 disposed in the electronic device 200 and/or the size of the electronic device 200 .
  • the first area and the third area may be set as preset areas according to the positions of the first speaker located above the electronic device 200 and the second speaker located below the electronic device 200 .
• the processor 206 may determine that the user is located in the preset area and, in order to provide an optimal sound to the user, may output at least partially different audio data to each of the plurality of speakers 202 to provide stereo sound to the user.
  • the processor 206 may provide a mono sound instead of a stereo sound.
• the processor 206 may receive a user's voice through each of the plurality of microphones 201 , and determine whether to provide a mono sound to the user based on a difference in reception time of the received voice. For example, the processor 206 may determine a positional relationship between the electronic device 200 and the user based on a difference in reception time of a voice received through each of the plurality of microphones 201 . At this time, if it is determined that the user is located outside a preset area based on the arrangement positions of the plurality of speakers 202 included in the electronic device 200 , the processor 206 may provide a mono sound to the user.
  • the processor 206 may acquire an image obtained by photographing the object from the camera 204 . Also, the processor 206 may more accurately determine the positional relationship between the electronic device 200 and the user based on the captured image, the time difference between the identified voice signals, and the determined posture of the electronic device. The processor 206 may determine audio data to be output through the plurality of speakers 202 based on the determined positional relationship between the electronic device 200 and the user.
• the processor 206 may determine the positional relationship between the electronic device 200 and the user by using the first threshold value, the second threshold value, and the camera 204 , and may set audio data to be output through each of the plurality of speakers 202 based on the determined positional relationship.
• when the processor 206 determines through the sensor 203 that the electronic device 200 is lying down, the processor 206 may determine the positional relationship between the electronic device 200 and the user again based on the user's voice received through each of the plurality of microphones 201 and the image captured by the camera 204 , and may reset audio data to be output through each of the plurality of speakers 202 .
  • the processor 206 may determine audio data to be output through the plurality of speakers 202 according to whether the user is located in a preset area. For example, when the user is located in the preset area, the processor 206 may output at least partially different audio data to each of the plurality of speakers 202 to provide stereo sound. As another example, when the user is located outside the preset area, the processor 206 may output the same audio data to each of the plurality of speakers 202 to provide mono sound.
• the reason that the processor 206 provides the mono sound when it determines that the user is not located inside the preset area is that, when stereo sound is provided to a user located outside the preset area, the at least partially different audio data output through each of the plurality of speakers 202 may cause a severe interference phenomenon. Accordingly, since the sound quality of stereo sound deteriorated by the interference phenomenon is inferior to that of mono sound, the processor 206 may provide a mono sound by outputting the same audio data through each of the plurality of speakers 202 if the user is not located within the preset area.
  • the preset area may be determined according to a location of a plurality of speakers 202 disposed in the electronic device 200 or a size of the electronic device 200 .
  • the preset area may include a sweet spot that provides the best sound quality when stereo sound is provided to the user based on the arrangement positions of the plurality of speakers 202 .
• when the processor 206 outputs stereo sound through each of the plurality of speakers 202 , the processor 206 may use a filter to prevent crosstalk, which is a phenomenon that occurs when audio data interfere with each other.
  • a filter may be applied to the stereo sound output by the processor 206 through each of the plurality of speakers 202 to cancel crosstalk.
• the electronic device includes a plurality of microphones (eg, the first microphone 201a and the second microphone 201b), a plurality of speakers (eg, the first speaker 202a and the second speaker 202b), a sensor (eg, the sensor 203), and a memory (eg, the memory 205).
• a processor (eg, the processor 206) operatively coupled to the sensor and the memory, wherein the processor may be set to receive a user's voice through each of the plurality of microphones, determine a positional relationship between the electronic device and the user based on a difference in reception time of the user's voice received through each of the plurality of microphones, determine the posture of the electronic device based on sensor information measured through the sensor, and determine audio data to be output through the plurality of speakers included in the electronic device based on the determined positional relationship and the determined posture of the electronic device.
• the processor may be set to determine whether the user is located in a preset area based on the determined positional relationship and the determined posture of the electronic device, to output the same audio data through each of the plurality of speakers when it is determined that the user is not located in the preset area, and to output at least partially different audio data through each of the plurality of speakers when it is determined that the user is located in the preset area.
  • the processor may set the preset area based on a location where the plurality of speakers are disposed in the electronic device.
• the electronic device further includes a camera (eg, a camera 204 ), and the processor may be set to determine a positional relationship between the electronic device and the user based on an image captured by the camera and a difference in reception time of the user's voice received through each of the plurality of microphones.
  • the electronic device further includes at least one other microphone
• the processor may be set to determine a positional relationship between the electronic device and the user based on a reception time of the user's voice received through each of the plurality of microphones and a reception time of the user's voice received through the at least one other microphone.
  • the electronic device may further include a filter for preventing a crosstalk phenomenon occurring between the audio data output through each of the plurality of speakers.
• the processor may be set to output the same audio data through each of the plurality of speakers when a value indicating the difference in reception time of the user's voice is greater than or equal to a preset first threshold value, and to output at least partially different audio data through each of the plurality of speakers when the value indicating the difference in reception time of the user's voice is smaller than the first threshold value.
• the processor may calculate the angle of the electronic device based on the sensor information, output the same audio data through each of the plurality of speakers when the calculated angle of the electronic device is greater than or equal to a preset second threshold value, and output at least partially different audio data through each of the plurality of speakers when the calculated angle of the electronic device is smaller than the second threshold value.
• the electronic device further includes a camera (eg, a camera 204 ), and the processor may be set to acquire an image captured by the camera, obtain a position value of an object corresponding to the user in the image, and reconstruct the determined audio data based on the determined positional relationship, the determined posture of the electronic device, and the position value of the object.
• the electronic device includes a plurality of microphones (eg, the first microphone 201a and the second microphone 201b), a plurality of speakers (eg, the first speaker 202a and the second speaker 202b), a camera (eg, the camera 204), and a memory (eg, the memory 205).
• the processor may be set to receive a user's voice through each of the plurality of microphones, acquire an image captured by the camera to obtain a position value of an object corresponding to the user in the image, determine a positional relationship between the electronic device and the user based on a difference in reception time of the user's voice received through each of the plurality of microphones and the position value of the object, and determine the audio data to be output through the plurality of speakers based on the determined positional relationship.
• the processor may be set to select at least two of the plurality of speakers based on the determined positional relationship, and to output at least partially different audio data through each of the selected at least two speakers.
  • FIG. 3 is a diagram illustrating a method of outputting audio data according to an embodiment of the present invention.
• the processor (eg, the processor 206) may receive the user's voice signal through each of a plurality of microphones (eg, the plurality of microphones 201 ) disposed in the electronic device (eg, the electronic device 200 ).
  • the processor may determine a positional relationship between the electronic device and the user based on a received time difference of a voice signal received through each of the plurality of microphones.
• when a first microphone (eg, the first microphone 201a) among the plurality of microphones is disposed on one side (eg, a left side) of the electronic device and a second microphone (eg, the second microphone 201b) among the plurality of microphones is disposed on the other side (eg, the right side) of the electronic device, if a user located in the direction of the one side (eg, the left side) of the electronic device speaks, the first microphone located closer to the user can receive the voice faster than the second microphone.
• the processor may compare T1, the reception time of the voice signal received by the first microphone 201a, with T2, the reception time of the voice signal received by the second microphone 201b, based on the cross-correlation, and thereby check the difference in reception time of the voice.
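The cross-correlation comparison of T1 and T2 can be sketched as below; the sample rate, signal shape, and sign convention are assumptions made for this illustration, not details taken from the disclosure.

```python
# Illustrative sketch: estimating the reception-time difference between two
# microphone signals with a discrete cross-correlation.
import numpy as np

def tdoa_samples(sig1: np.ndarray, sig2: np.ndarray) -> int:
    """Arrival time of sig1 minus arrival time of sig2, in samples
    (negative when sig1 hears the sound first)."""
    corr = np.correlate(sig1, sig2, mode="full")
    # Lags run from -(len(sig2) - 1) to +(len(sig1) - 1); take the peak.
    return int(np.argmax(corr) - (len(sig2) - 1))

fs = 16_000                                    # assumed sample rate in Hz
t = np.arange(1024) / fs
voice = np.sin(2 * np.pi * 440 * t) * np.hanning(1024)
delay = 8                                      # mic2 hears the voice 8 samples later
mic1 = np.concatenate([voice, np.zeros(delay)])
mic2 = np.concatenate([np.zeros(delay), voice])
print(tdoa_samples(mic1, mic2))                # -> -8: mic1 received the voice first
```

The sign of the lag tells which microphone heard the voice first, which is what lets the positional relationship be inferred.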
  • the processor may determine a positional relationship between the electronic device and the user based on a difference in reception time of the received voice.
  • the processor may determine the posture of the electronic device based on sensor information received from a sensor (eg, the sensor 203).
• the sensor may include a gyro sensor and an acceleration sensor, but is not limited thereto as long as sensor information for determining the posture of the electronic device can be obtained.
  • the sensor may measure a change angle of the electronic device based on a virtual line passing through the plurality of speakers disposed in the electronic device, and provide the measured change angle of the electronic device as sensor information to the processor.
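One plausible way to derive such a change angle, assumed here purely for illustration to come from an accelerometer with the speaker-to-speaker line along the device's y-axis, is:

```python
# Hypothetical sketch: change angle from the measured gravity vector.
# The axis convention (speaker line = device y-axis) is an assumption.
import math

def change_angle_deg(ax: float, ay: float, az: float) -> float:
    """Rotation of the assumed speaker axis (device y) away from upright,
    derived from accelerometer readings; az is unused in this 2-D tilt."""
    return abs(math.degrees(math.atan2(ax, ay)))

print(change_angle_deg(0.0, 9.81, 0.0))         # upright -> 0.0
print(round(change_angle_deg(9.81, 0.0, 0.0)))  # lying on its side -> 90
```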
  • the processor may determine the posture of the electronic device based on the sensor information.
• the processor may determine and output audio data to be output through each of a plurality of speakers (eg, a plurality of speakers 202) based on the positional relationship and the determined posture of the electronic device.
  • the processor may output stereo sound through the plurality of speakers.
  • the processor may output a mono sound through the plurality of speakers.
  • FIG. 4 is a diagram illustrating a method of selectively outputting audio data based on a positional relationship between an electronic device and a user and a posture of the electronic device, according to an embodiment of the present invention.
• the processor may compare the voices received through each of the plurality of microphones (eg, the plurality of microphones 201) based on the cross-correlation, and check the difference in reception time of the received voices based on the comparison result.
  • the processor may compare a first threshold value that is preset reference information for determining the positional relationship between the electronic device and the user and a difference in reception time of the received voice. When the difference in the reception time of the received voice is greater than or equal to the first threshold value, the processor may determine that the user is not located in the preset area.
• when the processor determines that the user is not located within the preset area, in operation 403 , the processor may output the same audio data through each of the plurality of speakers (eg, the plurality of speakers 202 ) to provide a mono sound to the user. If the processor provided stereo sound while the user's location is outside the preset area, interference could occur between the at least partially different audio data output through each of the plurality of speakers. Accordingly, when the user's location is outside the preset area, the processor may provide a mono sound by outputting the same audio data through the plurality of speakers.
  • the processor may compare the angle of the electronic device determined based on sensor information received from a sensor (eg, the sensor 203) with a preset second threshold value.
• the processor may determine the posture of the electronic device based on the comparison result. For example, when the angle of the electronic device is smaller than the second threshold value, the processor may determine that the electronic device is disposed along the vertical axis. Also, when the angle of the electronic device is greater than or equal to the second threshold value, the processor may determine that the electronic device is arranged along the horizontal axis.
• the processor may determine that the user is not located inside the preset area set according to the positions of the plurality of speakers disposed in the electronic device, and in operation 403 , the same audio data may be output through each of the plurality of speakers to provide a mono sound to the user.
• the processor may determine that the user is located in the preset area set according to the positions of the plurality of speakers disposed in the electronic device, and in operation 404 , at least partially different audio data may be output through each of the plurality of speakers to provide stereo sound to the user.
  • FIG. 5 is a diagram illustrating another method of selectively outputting audio data based on a positional relationship between an electronic device and a user and a posture of the electronic device, according to an embodiment of the present invention.
• the processor may compare the difference in reception time of the received voice with the preset first threshold value.
  • the processor may determine that the user is not located in the preset area.
• when it is determined that the user is not located inside the preset area, in operation 503 , the processor may output the same audio data through each of a plurality of speakers (eg, the plurality of speakers 202 ) to provide a mono sound to the user.
  • the processor may compare an angle of the electronic device determined based on sensor information received from a sensor (eg, the sensor 203) with a preset second threshold value.
• the processor may determine the posture of the electronic device based on the comparison result. For example, when the angle of the electronic device is smaller than the second threshold value, the processor may determine that the electronic device is disposed along the vertical axis. Also, when the angle of the electronic device is greater than or equal to the second threshold value, the processor may determine that the electronic device is arranged along the horizontal axis.
• the processor may determine that the user is not located inside the preset area set according to the positions of the plurality of speakers disposed in the electronic device, and in operation 503 , the same audio data may be output through each of the plurality of speakers to provide a mono sound to the user.
• the processor may determine that the user is located in the preset area set according to the positions of the plurality of speakers disposed in the electronic device, and in operation 504 , may configure stereo audio data to be output through each of the plurality of speakers. For example, the processor may configure at least partially different audio data to be output through each of the plurality of speakers.
  • the processor may specify a positional relationship between the electronic device and the user based on an image captured by a camera (eg, the camera 204). For example, the processor may obtain an image obtained by photographing an object (eg, a user) through the camera, and determine the position of the object in the image. Also, the processor may specify a positional relationship between the electronic device and the user based on a difference in reception time of a voice received through each of the plurality of microphones and a position of an object in the image. In more detail, the processor may roughly determine a positional relationship between the electronic device and the user based on a difference in reception time of a voice received through each of the plurality of microphones in operation 501 .
  • the positional relationship between the electronic device and the user determined based on the difference in the reception time of the voice may include information on the distance and the direction between the electronic device and the user.
• the processor may check the position value of the object corresponding to the user in the image captured through the camera, and may specify the direction between the electronic device and the user as any one direction.
  • the processor may apply a filter to the stereo audio data based on the specified positional relationship between the electronic device and the user.
• the processor may apply a crosstalk cancellation (XTC) filter to the at least partly different audio data to prevent a crosstalk phenomenon in which the at least partly different audio data interfere with each other.
• the processor may provide stereo sound to a user located in the preset area by outputting the at least partially different audio data to which the filter is applied through the plurality of speakers.
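A practical XTC filter is typically derived by inverting measured acoustic transfer functions; the sketch below instead makes the strong simplifying assumption that crosstalk is a single delayed, attenuated path, and is meant only to illustrate the cancellation idea.

```python
# Toy crosstalk-cancellation sketch under the assumption that the
# contralateral path is a pure delay d with gain g.
import numpy as np

def delayed(x: np.ndarray, d: int) -> np.ndarray:
    """x delayed by d samples (zero-padded, no wrap-around)."""
    y = np.zeros_like(x)
    y[d:] = x[: len(x) - d]
    return y

def xtc_feeds(left, right, d, g, taps=12):
    """Speaker feeds whose crosstalk (delay d, gain g) largely cancels at the ears."""
    s_l = np.zeros(len(left))
    s_r = np.zeros(len(right))
    term_l, term_r = left.astype(float), right.astype(float)
    for _ in range(taps):
        s_l += term_l
        s_r += term_r
        # Each correction itself leaks to the other ear; add the next-order term.
        term_l, term_r = -g * delayed(term_r, d), -g * delayed(term_l, d)
    return s_l, s_r

left = np.zeros(64); left[0] = 1.0          # impulse on the left channel only
right = np.zeros(64)
s_l, s_r = xtc_feeds(left, right, d=3, g=0.5)
ear_l = s_l + 0.5 * delayed(s_r, 3)         # direct path + modeled crosstalk path
ear_r = s_r + 0.5 * delayed(s_l, 3)
print(np.max(np.abs(ear_r)))                # ~0: right ear barely hears the left channel
```

Because each correction term itself leaks to the opposite ear, the feed is built as a truncated series of alternating, increasingly attenuated terms.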
• the processor may provide a stereo sound to the user by outputting at least partially different audio data through each of the plurality of speakers based on the determined positional relationship and the posture of the electronic device.
  • the processor may reconstruct the stereo sound based on the specified positional relationship between the electronic device and the user. For example, the processor may reconstruct audio data that is at least partially different from the audio data output through the plurality of speakers based on the specified positional relationship between the electronic device and the user.
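One conceivable form of such reconstruction, assumed here purely for illustration and not stated in the disclosure, is to re-derive constant-power channel gains from the user's bearing so that the stereo balance follows the listener:

```python
# Hypothetical re-panning sketch; the bearing-to-gain mapping and its sign
# convention (+ means the user is to the right) are assumptions.
import math

def repan(left, right, bearing_deg, max_deg=30.0):
    """Scale the two channels with constant-power gains derived from the
    listener's bearing."""
    p = max(-1.0, min(1.0, bearing_deg / max_deg))   # clamp to -1 .. +1
    theta = (p + 1.0) * math.pi / 4.0                # map to 0 .. pi/2
    g_l, g_r = math.cos(theta), math.sin(theta)      # g_l**2 + g_r**2 == 1
    return [s * g_l for s in left], [s * g_r for s in right]

l, r = repan([1.0], [1.0], 0.0)
print(round(l[0], 3), round(r[0], 3))   # 0.707 0.707: equal gains for a centred user
```

The constant-power constraint keeps the total loudness stable while the balance shifts with the user's position.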
  • FIG. 6 is a diagram illustrating another method of selectively outputting audio data based on a positional relationship between an electronic device and a user and a posture of the electronic device, according to an embodiment of the present invention.
• the processor may determine a positional relationship between the electronic device and the user by comparing the reception time difference of the voice received through each of the plurality of microphones (eg, the plurality of microphones 201 ) with a preset first threshold value.
• when the difference in reception time of the received voice is smaller than the preset first threshold value, in operation 602 , the processor may determine the posture of the electronic device by comparing the sensor information received from the sensor (eg, the sensor 203 ) with the second threshold value.
  • the processor may determine that the user is located in a preset area.
• the processor may configure at least partly different audio data based on the determined positional relationship and the determined posture of the electronic device in order to provide stereo sound to the user.
• the processor may check the difference in reception time of the received voices by comparing, based on the cross-correlation, each of the reception times of the voices received through the plurality of microphones and the reception time of the voice received through the at least one other microphone.
  • the processor may perform trilateration based on a difference in reception time of the received voice to more accurately identify a positional relationship between the electronic device and the user.
• for example, when T1 and T2 are the reception times of the voice signal received by each of the plurality of microphones (eg, the first microphone 201a and the second microphone 201b) and T3 is the reception time of the voice signal received through the at least one other microphone (eg, a third microphone (not shown)), trilateration may be performed based on the time differences among T1, T2, and T3 to specify the positional relationship between the electronic device and the user.
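A coarse numeric stand-in for the trilateration step is a grid search over candidate positions that best reproduce the measured pairwise time differences; the microphone coordinates, speed of sound, and search range below are assumptions for illustration only.

```python
# Hypothetical TDOA localization sketch; mic layout and constants assumed.
import itertools, math

SPEED_OF_SOUND = 343.0                              # m/s, assumed
MICS = [(-0.08, 0.0), (0.08, 0.0), (0.0, 0.15)]     # assumed mic layout in metres

def locate(t1, t2, t3, step=0.01):
    """Grid-search the (x, y) whose pairwise time differences best match
    the measured ones; a coarse stand-in for closed-form trilateration."""
    times = (t1, t2, t3)
    best, best_err = (0.0, 0.0), float("inf")
    for i in range(-100, 101):
        for j in range(-100, 101):
            x, y = i * step, j * step
            d = [math.hypot(x - mx, y - my) / SPEED_OF_SOUND for mx, my in MICS]
            # Compare predicted pairwise TDOAs with the measured ones.
            err = sum(((d[a] - d[b]) - (times[a] - times[b])) ** 2
                      for a, b in itertools.combinations(range(3), 2))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulate a user at (0.30, 0.40) m and recover a consistent position.
true_pos = (0.30, 0.40)
t = [math.hypot(true_pos[0] - mx, true_pos[1] - my) / SPEED_OF_SOUND
     for mx, my in MICS]
est = locate(*t)
print(round(est[0], 2), round(est[1], 2))  # -> 0.3 0.4
```

With only three microphones the time differences constrain mainly the direction; a finer grid or closed-form multilateration would sharpen the range estimate.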
  • the processor may output the stereo audio data configured in operation 604 through the plurality of speakers.
  • the processor may apply a filter to the stereo audio data based on the specified positional relationship between the electronic device and the user.
• the processor may apply a crosstalk cancellation (XTC) filter to the at least partly different audio data to prevent a crosstalk phenomenon in which the at least partly different audio data interfere with each other.
• the processor may provide stereo sound to a user located in the preset area by outputting the at least partially different audio data to which the filter is applied through the plurality of speakers.
  • the processor may reconstruct the stereo sound based on the specified positional relationship between the electronic device and the user. For example, the processor may reconstruct audio data that is at least partially different from the audio data output through the plurality of speakers based on the specified positional relationship between the electronic device and the user.
  • FIG. 7 is a diagram illustrating a method of selectively outputting audio data based on a positional relationship between an electronic device and a user, according to an embodiment of the present invention.
  • the processor may receive the user's voice through each of the plurality of microphones (eg, the plurality of microphones 201).
  • the processor may obtain an image of an object (eg, a user) from a camera (eg, the camera 204).
• the processor may determine a positional relationship between the electronic device and the user based on a difference in reception time of a voice received through each of the plurality of microphones and an image acquired through the camera. For example, the processor may determine a difference in reception time of the received voice by comparing the reception times of the voice signals received through each of the plurality of microphones based on the cross-correlation. Also, the processor may obtain a position value of an object corresponding to the user in the captured image. The processor may determine the positional relationship between the electronic device and the user based on the position value of the object corresponding to the user and the difference in reception time of the received voice.
  • the processor may roughly determine a positional relationship between the electronic device and the user based on a difference in reception time of the voice received through each of the plurality of microphones in operation 701 .
  • the positional relationship between the electronic device and the user determined based on the difference in the reception time of the voice may include information on the distance and the direction between the electronic device and the user.
• the processor may check the position value of the object corresponding to the user in the image captured through the camera, and may specify the direction between the electronic device and the user as any one direction.
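Refining the direction from the object's horizontal position in the frame can be sketched as a simple pixel-to-bearing mapping; the field of view and frame width below are assumed values, not parameters from the disclosure.

```python
# Hypothetical pixel-to-bearing mapping; camera parameters are assumptions.
H_FOV_DEG = 70.0     # assumed horizontal field of view of the camera
FRAME_WIDTH = 1280   # assumed image width in pixels

def direction_from_image(object_x_px: float) -> float:
    """Bearing of the object in degrees; 0 is straight ahead, + is to the right."""
    offset = (object_x_px - FRAME_WIDTH / 2) / (FRAME_WIDTH / 2)  # -1 .. +1
    return offset * (H_FOV_DEG / 2)

print(direction_from_image(640))    # centred in the frame -> 0.0
print(direction_from_image(1280))   # right edge of the frame -> 35.0
```

This visual bearing can then be fused with the microphone time-difference estimate to pin the user's direction to a single value.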
• the processor may determine audio data to be output through each of the plurality of speakers based on the determined positional relationship. According to an embodiment, when the user is located inside a preset area according to the arrangement positions of the plurality of speakers based on the determined positional relationship, the processor may provide stereo sound to the user by outputting at least partially different audio data through each of the plurality of speakers. According to an embodiment, when the user is not located inside the preset area, the processor may output the same audio data through each of the plurality of speakers to provide a mono sound to the user.
  • FIG. 8 is a diagram illustrating another method of selectively outputting audio data based on a positional relationship between an electronic device and a user, according to an embodiment of the present invention.
• the plurality of microphones may include at least three microphones.
• the processor (eg, the processor 206) may receive the user's voice through each of the plurality of microphones (eg, the plurality of microphones 201).
• the processor may determine a positional relationship between the electronic device and the user based on a difference in reception time of a voice received through each of the plurality of microphones. For example, the processor may determine the positional relationship between the electronic device and the user by performing trilateration based on the difference in reception time of the voice received through each of the plurality of microphones. According to an embodiment, the processor may check the difference in reception time of the received voice by comparing the times of the voice signals received through each of the plurality of microphones based on the cross-correlation.
• the processor may determine and output audio data to be output through a plurality of speakers (eg, a plurality of speakers 202) based on the determined positional relationship. For example, when the user is located in a preset area according to the arrangement positions of the plurality of speakers, the processor may output at least partially different audio data through each of the plurality of speakers to provide stereo sound to the user. As another example, when the user is not located in the preset area, the processor may output the same audio data through each of the plurality of speakers to provide a mono sound to the user.
• a method of outputting audio data of an electronic device may include an operation of receiving a user's voice through each of a plurality of microphones (eg, the first microphone 201a and the second microphone 201b) included in the electronic device, an operation of determining a positional relationship between the electronic device and the user based on a difference in reception time of the user's voice received through each of the plurality of microphones, an operation of determining a posture of the electronic device based on sensor information measured through a sensor (eg, the sensor 203) included in the electronic device, and an operation of determining the audio data to be output through a plurality of speakers (eg, the first speaker 202a and the second speaker 202b) included in the electronic device based on the determined positional relationship and the determined posture of the electronic device.
• the determining of the audio data may include determining whether the user is located within a preset area based on the determined positional relationship and the determined posture of the electronic device, outputting the same audio data through each of the plurality of speakers when it is determined that the user is not located within the preset area, and outputting at least partially different audio data through each of the plurality of speakers when it is determined that the user is located within the preset area.
  • the method of outputting the audio data may further include setting the preset region based on a position where the plurality of speakers are arranged in the electronic device.
• the determining of the positional relationship between the electronic device and the user may include determining the positional relationship between the electronic device and the user based on an image captured by a camera (eg, camera 204 ) included in the electronic device and a difference in reception time of the user's voice received through each of the plurality of microphones.
• the determining of the positional relationship between the electronic device and the user may include determining the positional relationship between the electronic device and the user based on a reception time of the user's voice received through each of the plurality of microphones and a reception time of the user's voice received through at least one other microphone included in the electronic device.
  • the method of outputting the audio data may further include preventing a crosstalk phenomenon occurring between the audio data output through each of the plurality of speakers through a filter included in the electronic device.
• the determining of the audio data may include determining the audio data to be output through each of the plurality of speakers as the same audio data when a value representing the difference in reception time of the user's voice is greater than or equal to a preset first threshold value, and determining the audio data to be output through each of the plurality of speakers as at least partially different audio data when the value representing the difference in reception time of the user's voice is less than the first threshold value.
  • The determining of the posture of the electronic device may include calculating an angle of the electronic device based on the sensor information, and the determining of the audio data may include: determining the audio data output through each of the plurality of speakers to be the same audio data when the calculated angle of the electronic device is greater than or equal to a preset second threshold value; and determining the audio data output through each of the plurality of speakers to be at least partially different audio data when the calculated angle is less than the second threshold value.
  • The method of outputting the audio data may further include: acquiring an image captured through a camera of the electronic device; acquiring a position value of an object corresponding to the user from the image; and reconstructing the determined audio data based on the determined positional relationship, the determined posture of the electronic device, and the position value of the object.
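The two threshold tests described above (arrival-time difference of the user's voice against a first threshold, device tilt angle against a second threshold) can be sketched as follows. This is a minimal illustration only: the function name, parameter names, and the sample threshold values are assumptions and do not come from the patent.

```python
def choose_audio_mode(arrival_times, tilt_deg,
                      time_threshold=0.25e-3, angle_threshold=60.0):
    """Decide between stereo output (at least partially different audio
    data per speaker) and mono output (the same audio data on every
    speaker), applying the two threshold tests in the description.

    arrival_times: per-microphone arrival time of the user's voice, in seconds
    tilt_deg:      device angle computed from the motion-sensor information
    time_threshold / angle_threshold: illustrative placeholder values
    """
    # Test 1: a large arrival-time difference places the user far
    # off-axis, outside the preset area -> output the same audio data.
    tdoa = max(arrival_times) - min(arrival_times)
    if tdoa >= time_threshold:
        return "mono"
    # Test 2: a steep device angle also puts the user outside the
    # preset area -> output the same audio data.
    if tilt_deg >= angle_threshold:
        return "mono"
    # Otherwise the user is inside the preset (sweet-spot) area:
    # output at least partially different audio data per speaker.
    return "stereo"
```

Both tests gate the same decision, so either condition alone is enough to fall back to mono output.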
  • FIG. 9 is a view for explaining a preset area according to the arrangement positions of a plurality of speakers (e.g., the plurality of speakers 202) according to an embodiment of the present invention.
  • The processor may set the preset area based on the locations at which the plurality of speakers are disposed in the electronic device (e.g., the electronic device 200).
  • The preset area is an area set according to the positions of the plurality of speakers disposed on the electronic device, and may include a sweet spot that provides stereo sound to the user with the best sound quality.
  • The first area and the third area are preset areas defined by the positions of the first speaker 901, located at the upper portion of the electronic device, and the second speaker 902, located at the lower portion.
  • the processor may provide stereo sound to the user by outputting at least partially different audio data through a plurality of speakers.
  • When the user is located in the second area or the fourth area, the processor may provide mono sound by outputting the same audio data through the first speaker 901 located at the upper portion and the second speaker 902 located at the lower portion.
  • Based on the positional relationship between the electronic device and the user, the processor may select at least two of the plurality of speakers to provide stereo sound to the user. For example, when the user is located in the first area (or the third area), the processor may select, from among the plurality of speakers, at least two speakers capable of providing stereo sound to a user located in that area. For example, the processor may provide stereo sound to the first area (or the third area) by outputting at least partially different audio data through the first and second speakers 901 and 902 disposed at the upper and lower portions of the electronic device.
  • Similarly, when the user is located in the second area (or the fourth area), the processor may select, from among the plurality of speakers, at least two speakers capable of providing stereo sound to that user. For example, the processor may provide stereo sound to the user located in the second area (or the fourth area) by outputting at least partially different audio data through a third speaker (not shown) and a fourth speaker (not shown) disposed on both sides of the electronic device.
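The area-based speaker selection of FIG. 9 can be sketched as follows. The region numbering (1/3 facing the top-bottom speaker pair, 2/4 facing the sides) follows the description above, but the `speakers` dictionary layout and key names are assumptions made for illustration.

```python
def select_speakers(region, speakers):
    """Pick the speaker pair able to form a stereo image toward the
    user's region: the top/bottom pair (e.g., speakers 901/902) for
    regions 1 and 3, and the side pair for regions 2 and 4.

    region:   which preset region (1-4) the user was located in
    speakers: dict mapping placement ("top", "bottom", "left",
              "right") to a speaker identifier
    Returns (speaker_pair, output_mode).
    """
    if region in (1, 3):
        # User faces the top/bottom pair: stereo is achievable.
        return (speakers["top"], speakers["bottom"]), "stereo"
    # Regions 2 and 4: stereo needs side speakers. If the device has
    # none, the top/bottom pair can only deliver the same audio data
    # (mono) toward these regions, as in the description above.
    if "left" in speakers and "right" in speakers:
        return (speakers["left"], speakers["right"]), "stereo"
    return (speakers["top"], speakers["bottom"]), "mono"
```

A two-speaker device thus degrades gracefully to mono for the side regions, while a four-speaker device can serve all four regions in stereo.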
  • The electronic device may be one of various types of devices.
  • the electronic device may include, for example, a portable communication device (eg, a smart phone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance device.
  • Terms such as "first" and "second" may simply be used to distinguish one element from another, and do not limit the elements in other aspects (e.g., importance or order). When one (e.g., a first) component is referred to as being "coupled" or "connected" to another (e.g., a second) component, with or without the terms "functionally" or "communicatively", it means that the one component can be connected to the other component directly (e.g., by wire), wirelessly, or through a third component.
  • The term "module" may refer to a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit.
  • A module may be an integrally formed part, or a minimum unit or a portion thereof that performs one or more functions.
  • For example, the module may be implemented in the form of an application-specific integrated circuit (ASIC).
  • Various embodiments of this document may be implemented as software (e.g., the program 140) including one or more instructions stored in a storage medium (e.g., the internal memory 136 or the external memory 138) readable by a machine (e.g., the electronic device 101).
  • For example, the processor (e.g., the processor 120) of the device may call at least one of the one or more instructions stored in the storage medium and execute it. This enables the device to perform at least one function according to the called instruction.
  • the one or more instructions may include code generated by a compiler or code executable by an interpreter.
  • the device-readable storage medium may be provided in the form of a non-transitory storage medium.
  • Here, "non-transitory" only means that the storage medium is a tangible device and does not contain a signal (e.g., an electromagnetic wave); this term does not distinguish between a case where data is stored semi-permanently in the storage medium and a case where it is stored temporarily.
  • the method according to various embodiments disclosed in this document may be provided as included in a computer program product.
  • Computer program products may be traded between sellers and buyers as commodities.
  • The computer program product may be distributed in the form of a device-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), or distributed (e.g., downloaded or uploaded) online, either through an application store (e.g., Play Store™) or directly between two user devices (e.g., smartphones).
  • In the case of online distribution, at least a part of the computer program product may be temporarily stored in, or temporarily created in, a machine-readable storage medium such as a memory of a manufacturer's server, an application store's server, or a relay server.
  • Each component (e.g., a module or a program) of the above-described components may include a single entity or a plurality of entities.
  • one or more components or operations among the above-described corresponding components may be omitted, or one or more other components or operations may be added.
  • Alternatively, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, the integrated component may perform one or more functions of each of the plurality of components identically or similarly to the way they were performed by the corresponding component before the integration.
  • Operations performed by a module, a program, or another component may be executed sequentially, in parallel, repeatedly, or heuristically; one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Telephone Function (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Stereophonic System (AREA)

Abstract

Disclosed is an electronic device comprising: multiple microphones; multiple speakers; a sensor; a memory; and a processor operatively connected to the multiple microphones, the multiple speakers, the sensor, and the memory, wherein the processor is configured to: receive a user's voice through each of the multiple microphones; determine a positional relationship between the electronic device and the user based on a difference in the reception times at which the user's voice is received through each of the multiple microphones; determine the posture of the electronic device based on sensor information measured through the sensor; and determine audio data to be output through the multiple speakers included in the electronic device based on the determined positional relationship and the determined posture of the electronic device. Various other embodiments are also possible.
PCT/KR2020/012910 2019-12-26 2020-09-24 Procédé de sortie de données audio et dispositif électronique prenant en charge ledit procédé WO2021132852A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190175666A KR20210083059A (ko) 2019-12-26 2019-12-26 오디오 데이터의 출력 방법 및 이를 지원하는 전자 장치
KR10-2019-0175666 2019-12-26

Publications (1)

Publication Number Publication Date
WO2021132852A1 true WO2021132852A1 (fr) 2021-07-01

Family

ID=76574863

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/012910 WO2021132852A1 (fr) 2019-12-26 2020-09-24 Procédé de sortie de données audio et dispositif électronique prenant en charge ledit procédé

Country Status (2)

Country Link
KR (1) KR20210083059A (fr)
WO (1) WO2021132852A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023040515A1 (fr) * 2021-09-16 2023-03-23 Oppo广东移动通信有限公司 Procédé de commande audio et dispositif de lecture audio

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230023494A (ko) * 2021-08-10 2023-02-17 삼성전자주식회사 사운드 신호를 보정하는 전자 장치 및 전자 장치의 제어 방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170022727A (ko) * 2015-08-21 2017-03-02 삼성전자주식회사 전자 장치의 음향 처리 방법 및 그 전자 장치
US20180204574A1 (en) * 2012-09-26 2018-07-19 Amazon Technologies, Inc. Altering Audio to Improve Automatic Speech Recognition
KR20180108878A (ko) * 2013-11-22 2018-10-04 애플 인크. 핸즈프리 빔 패턴 구성
KR20180132276A (ko) * 2017-06-02 2018-12-12 네이버 주식회사 사용자의 위치 및 공간에 알맞은 정보를 능동적으로 제공하는 방법 및 장치
KR20190119948A (ko) * 2018-04-13 2019-10-23 삼성전자주식회사 전자 장치 및 이의 스테레오 오디오 신호 처리 방법

Also Published As

Publication number Publication date
KR20210083059A (ko) 2021-07-06

Similar Documents

Publication Publication Date Title
WO2020204365A1 (fr) Dispositif électronique et procédé de communication avec un dispositif externe via une ligne de source d'alimentation
WO2020204611A1 (fr) Procédé de détection du port d'un dispositif acoustique, et dispositif acoustique prenant le procédé en charge
WO2019045394A1 (fr) Dispositif électronique pour vérifier la proximité d'un objet externe à l'aide d'un signal dans une bande de fréquence spécifiée, et procédé de commande de dispositif électronique
WO2020096413A1 (fr) Caméra escamotable et rotative et dispositif électronique comprenant celle-ci
WO2020141793A1 (fr) Dispositif électronique ayant un affichage pliable et son procédé de commande
WO2019221466A1 (fr) Dispositif électronique et procédé de transmission d'informations à un dispositif externe pour régler une énergie sans fil à transmettre à partir d'un dispositif externe sur la base de la proximité d'un objet externe
WO2020067639A1 (fr) Dispositif électronique d'appariement avec un stylet et procédé associé
WO2021085902A1 (fr) Dispositif électronique pour délivrer des données audio d'une pluralité d'applications, et son procédé d'utilisation
WO2019208930A1 (fr) Dispositif électronique apte à fournir une communication wi-fi et une communication de point d'accès sans fil mobile, et procédé de commande associé
WO2021132852A1 (fr) Procédé de sortie de données audio et dispositif électronique prenant en charge ledit procédé
WO2019172518A1 (fr) Appareil et procédé de détermination d'un indice de faisceau d'un réseau antennaire
WO2019209075A1 (fr) Dispositif électronique et procédé de commande de dispositif électronique externe
WO2019164079A1 (fr) Procédé pour effectuer une authentification biométrique en fonction de l'affichage d'un objet lié à une authentification biométrique et dispositif électronique associé
WO2020153738A1 (fr) Dispositif électronique et procédé de connexion d'un nœud de masse à un module de caméra
WO2019231296A1 (fr) Dispositif électronique et procédé destiné à empêcher la corrosion d'une fiche audio
WO2020171342A1 (fr) Dispositif électronique permettant de fournir un service d'intelligence artificielle visualisé sur la base d'informations concernant un objet externe, et procédé de fonctionnement pour dispositif électronique
WO2019172723A1 (fr) Interface connectée à un capteur d'image et dispositif électronique comprenant des interfaces connectées parmi une pluralité de processeurs
WO2019151604A1 (fr) Appareil et procédé pour réaliser une fonction d'antenne à l'aide d'un connecteur usb
WO2021112500A1 (fr) Dispositif électronique et procédé pour corriger une image dans une commutation de caméra
WO2021125875A1 (fr) Dispositif électronique pour fournir un service de traitement d'image à travers un réseau
WO2020256318A1 (fr) Dispositif électronique et procédé d'identification d'un objet inséré dans une prise jack d'écouteur
WO2020262835A1 (fr) Dispositif électronique et procédé de détermination de dispositif audio destinés au traitement d'un signal audio au moyen dudit dispositif électronique
WO2021033921A1 (fr) Module de caméra comprenant une carte de circuit imprimé, et dispositif électronique le comprenant
WO2020130729A1 (fr) Dispositif électronique pliable destiné à fournir des informations associées à un événement et son procédé de fonctionnement
WO2019172610A1 (fr) Dispositif électronique et procédé pour réaliser un paiement à l'aide d'un module audio

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20905729

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20905729

Country of ref document: EP

Kind code of ref document: A1