WO2022154546A1 - Wearable device for performing automatic volume control - Google Patents

Wearable device for performing automatic volume control

Info

Publication number
WO2022154546A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio signal
volume
external device
processor
user
Prior art date
Application number
PCT/KR2022/000690
Other languages
English (en)
Korean (ko)
Inventor
남명우
권오채
김희진
황인제
Original Assignee
Samsung Electronics Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2022154546A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F 1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/1613 Constructional details or arrangements for portable computers
    • G06F 1/163 Wearable computers, e.g. on a belt
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification techniques
    • G10L 17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G10L 21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 Noise filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 3/005 Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H04R 2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R 2430/01 Aspects of volume control, not necessarily automatic, in sound systems

Definitions

  • Various embodiments according to the present disclosure relate to a wearable device that controls the volume of an external device that outputs sound, and more particularly, to technology for controlling the volume of the external device based on the location and the space of a user wearing the wearable device.
  • Wearable electronic devices that are worn on the user's body (e.g., on the wrist) to control external electronic devices, such as headsets, earphones, smart glasses, head mounted devices (HMDs), and smart watches, are becoming popular.
  • the number of users who use various services (eg, listening to music, watching a video, and making a voice call) by using a wearable electronic device is increasing.
  • A wearable device may connect to an external electronic device (e.g., a TV, Bluetooth speaker, smartphone, tablet PC, notebook, desktop, or another wearable device) by wire or wirelessly to provide various services (e.g., watching broadcasts, listening to music, or watching videos).
  • the wearable device may provide various services by being connected to an external electronic device that outputs audio by wire or wirelessly.
  • The position of the user wearing the wearable device may change continuously; when the external electronic device is in a fixed position, this may hinder effective service provision.
  • The volume perceived by the user may vary according to the user's distance from the external electronic device, such as a video display device or an audio device.
  • For example, as the user moves away from the external electronic device, the perceived volume of the sound output from the external electronic device may decrease.
  • Conversely, as the user moves closer, the perceived volume of the sound output from the external electronic device may increase.
  • Since the external electronic device is in a fixed position, even when the user moves to a different space and no longer intends to listen to the sound output from the external electronic device, the external electronic device continues to output sound, which can generate unnecessary noise.
  • When there are a plurality of external electronic devices, the perceived volume of the sound output from a specific external electronic device may vary according to the distance between each of the external electronic devices and the user.
  • When the user is having a conversation, the conversation may be interrupted by the sound of the external electronic device.
  • According to an embodiment, the wearable device includes a microphone, a communication circuit for transmitting and receiving control signals to and from an external device, and at least one processor electrically connected to the microphone and the communication circuit. The at least one processor obtains a first audio signal from the external device using the microphone while the wearable device is worn on the user's body, obtains a second audio signal from the external device using the microphone in response to the lapse of a set time, determines the type of the user's wearing space based on the first audio signal and the second audio signal, and controls the volume of the external device using the communication circuit based on an audio output control method corresponding to the type of the wearing space.
  • the wearable device receives a first audio signal from an external device using a microphone while the wearable device is worn on a user's body.
  • According to an embodiment, a wearable device includes a microphone, a communication circuit for transmitting and receiving control signals to and from a plurality of external devices, and at least one processor electrically connected to the microphone and the communication circuit. The at least one processor obtains a third audio signal from each of the plurality of external devices using the microphone while the wearable device is worn on the user's body, obtains a fourth audio signal from each of the plurality of external devices using the microphone in response to the lapse of a set time, determines the distances between the user and the plurality of external devices based on the third audio signal and the fourth audio signal, and controls the volume of at least one of the plurality of external devices using the communication circuit based on the determination result. A sketch of this control loop is given below.
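  • The following is a minimal illustrative sketch of this loop in Python, not the claimed implementation: capture_audio, classify_space, and send_volume_command are hypothetical callbacks standing in for the microphone, the space-type classifier, and the communication circuit, and the 10-second period mirrors the periodic acquisition example given later in the description.

        import time

        def auto_volume_loop(capture_audio, classify_space, send_volume_command,
                             sample_period_s=10.0):
            # First audio signal: captured once the user has set the volume.
            reference = capture_audio()
            while True:
                time.sleep(sample_period_s)      # "lapse of a set time"
                current = capture_audio()        # second audio signal
                # Classify the wearing space (first/second/third space).
                space = classify_space(reference, current)
                # Apply the volume control method for that space type.
                send_volume_command(space, reference, current)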
  • a wearable device may automatically control an audio output of an external device including an audio device or an image display device.
  • A wearable device may automatically control the volume of an external device using only a microphone and a processor included in the wearable device, without a separate sensor installed in the space for detecting or calculating the user's space or location.
  • the wearable device according to the present disclosure may increase the user's convenience by controlling the volume of the external device using a different control method according to the type of the user's wearing space.
  • Even if the user's location changes, the wearable device may cause the external device to output sound at a perceived volume similar to the initially set volume, without receiving a user input for adjusting the volume of the external device.
  • the wearable device may control the volume of the external device so that the sound output by the external device does not interfere with conversation.
  • FIG. 1 is a diagram illustrating an example of a system including a wearable device and an external device connected through wireless communication according to an embodiment.
  • FIG. 2 is a block diagram illustrating a configuration of a wearable device according to an exemplary embodiment.
  • FIG. 3 is a diagram illustrating an example of a type of wearing space divided based on an external device.
  • FIG. 4A is a diagram illustrating controlling a volume of an external device based on a change in a user's position in a first space, according to an exemplary embodiment.
  • FIG. 4B is a diagram illustrating a change in an audio signal based on a user's movement in a first space according to an exemplary embodiment.
  • FIG. 5 is a diagram for describing a signal processing method for determining a distance between a user and an external device based on acquired audio signals, according to an exemplary embodiment.
  • FIG. 6A is a diagram illustrating movement of a user from a first space to a second space, according to an exemplary embodiment.
  • FIG. 6B is a diagram illustrating a change in a received audio signal as a user moves from a first space to a second space, according to an embodiment.
  • FIG. 7A is a diagram illustrating movement of a user from a first space to a third space, according to an exemplary embodiment.
  • FIG. 7B is a diagram illustrating a change in a received audio signal as a user moves from a first space to a third space, according to an exemplary embodiment.
  • FIG. 8 is a flowchart of a process of controlling a volume of an external device based on a user's location in a wearable device according to an exemplary embodiment.
  • FIG. 9 is a flowchart of a process of controlling a volume of an external device by classifying a user's space type in a wearable device according to an exemplary embodiment.
  • FIG. 10 is a diagram illustrating control of a volume of an external device according to a user's utterance in a wearable device according to an exemplary embodiment.
  • FIG. 11A is a diagram illustrating a location of a user with respect to a plurality of external devices, according to an exemplary embodiment.
  • FIG. 11B is a diagram illustrating controlling a volume of at least one external device based on a distance from a plurality of external devices, according to an exemplary embodiment.
  • FIG. 12 is a block diagram of an electronic device in a network environment according to an embodiment.
  • FIG. 1 is a diagram illustrating an example in which a wearable device is connected to an external device by wireless communication according to an exemplary embodiment.
  • the wearable device 100 of FIG. 1 may be a smart watch as shown.
  • the present invention is not limited thereto, and the wearable device 100 may be a device of various types that can be used while being attached to a user's body.
  • the wearable device 100 may include a head mounted device (HMD) or smart glasses.
  • the wearable device 100 may include the strap 130 , and the strap 130 may be attached to the user's body by being wound around the user's wrist.
  • the present invention is not limited thereto, and the wearable device 100 may be attached to various body parts of the user according to the shape and size of the wearable device 100 .
  • the wearable device 100 may also be attached to a hand, the back of a hand, a finger, a fingernail, a fingertip, or the like.
  • the external device 110 of FIG. 1 may be a speaker as shown.
  • the present invention is not limited thereto, and the external device 110 may be various types of electronic devices that output sound at a temporarily fixed position.
  • For example, the external device 110 may include an audio device (e.g., a connected speaker) or an image display device (e.g., a network TV, HBBTV, smart TV, smartphone, notebook computer, or tablet PC).
  • the external device 110 may output audio data (eg, music data, audio data included in a moving picture) stored in the internal memory.
  • the external device 110 may establish a wireless communication channel with the wearable device 100 .
  • The communication circuit 230 may use a Bluetooth, radio frequency (RF), infrared (IR), Wi-Fi, ultra-wideband (UWB), and/or Zigbee communication module.
  • the external device 110 may receive and output audio data from the wearable device 100 through a channel connection.
  • FIG. 2 is a block diagram illustrating a configuration of a wearable device according to an exemplary embodiment.
  • the wearable device 100 may include a processor 210 , a microphone 220 , a communication circuit 230 , and a memory 240 .
  • the wearable device 100 may include additional components in addition to the components illustrated in FIG. 2 or may omit at least one of the components illustrated in FIG. 2 .
  • the components listed above may be operatively or electrically connected to each other.
  • the wearable device 100 may omit at least some of the components illustrated in FIG. 2 or may further include other components. Some of the components of the wearable device 100 illustrated in FIG. 2 may be replaced with other components performing similar functions.
  • the components illustrated in FIG. 2 may include at least one piece of hardware.
  • the processor 210 may control the overall operation of the wearable device 100 .
  • the processor 210 may be electrically connected to the microphone 220 , the communication circuit 230 , and the memory 240 to perform a specified operation.
  • the processor 210 may execute an operation or data processing related to control and/or communication of at least one other component of the wearable device 100 using instructions stored in the memory 240 .
  • The processor 210 may include at least one of a central processing unit (CPU), a graphics processing unit (GPU), a micro controller unit (MCU), a sensor hub, a supplementary processor, a communication processor, an application processor, an application specific integrated circuit (ASIC), and a field programmable gate array (FPGA), and may have a plurality of cores.
  • the microphone 220 may acquire an external sound and analyze the acquired external sound. For example, the microphone 220 may obtain a sound from the external device 110 and convert it into an audio signal. According to an embodiment, the microphone 220 may receive the user's utterance of the wearable device 100 and convert it into an electrical signal.
  • the communication circuit 230 may communicate with the external device 110 described with reference to FIG. 1 .
  • the wearable device 100 may transmit a control signal to the external device 110 using the communication circuit 230 .
  • the wearable device 100 may transmit audio data to the external device 110 using the communication circuit 230 .
  • the memory 240 may store various data acquired or used by at least one component (eg, a processor, a microphone) of the wearable device 100 .
  • the memory 240 may store audio data to be output through the external device 110 .
  • the memory 240 may store information about the user's utterance of the wearable device 100 acquired using the microphone 220 .
  • the processor 210 may obtain an audio signal from the external device 110 using the microphone 220 while the wearable device 100 is worn on the user's body. For example, when the external device 110 outputs sound (audio data), the microphone 220 may convert the detected sound into an electrical signal and provide it to the processor 210 . According to an embodiment, the external device 110 may output a sound based on audio data stored in an internal memory of the external device 110 or audio data received from the wearable device 100 .
  • the processor 210 may control the volume of the external device 110 using the communication circuit 230 .
  • the processor 210 may control the volume of the external device 110 based on a user input (eg, a volume control button touch input, a voice signal input) for controlling the volume of the external device 110 .
  • the processor 210 transmits a volume control signal to the external device 110 using the communication circuit 230 , and the external device 110 adjusts the volume of the output sound based on the received volume control signal.
  • According to an embodiment, the processor 210 may acquire the first audio signal through the microphone 220 after the volume of the external device 110 has been adjusted through a user input. For example, when no user input is received for a predetermined time after a series of user inputs for volume control, the processor 210 may obtain the first audio signal through the microphone 220. Accordingly, the first audio signal contains information on the sound detected by the wearable device 100 through the microphone 220 while the external device 110 outputs sound at the volume adjusted by the user input. That is, once the user has adjusted the volume of the external device 110 to the desired level, the processor 210 may obtain the first audio signal from the external device 110 through the microphone 220 based on that user input. The processor 210 may store the first audio signal in the memory 240.
  • the processor 210 may obtain the second audio signal from the external device 110 using the microphone 220 .
  • the processor 210 may acquire the second audio signal after a set time elapses after acquiring the first audio signal. For example, the processor 210 may periodically acquire the second audio signal every 10 seconds after there is a user input for volume control.
  • the processor 210 may continuously acquire the second audio signal from the external device 110 using the microphone 220 .
  • the processor 210 may acquire the second audio signal in response to the movement of the wearable device 100 .
  • The processor 210 may detect movement information (e.g., an acceleration value) of the wearable device 100 using a motion sensor included in the wearable device 100, and may acquire the second audio signal when the magnitude of the movement information is greater than or equal to a threshold value.
  • The processor 210 may determine the type of the space in which the user wears the wearable device 100 based on the first audio signal and the second audio signal. The types of wearing space according to an embodiment are described later with reference to FIG. 3.
  • The processor 210 may determine the type of the wearing space of the wearable device 100 by comparing the first audio signal and the second audio signal. For example, it may determine whether the user is in the same one of a plurality of separated spaces as the external device 110. According to an embodiment, the processor 210 may determine the distance from the external device 110 to the user wearing the wearable device 100 by comparing the first audio signal and the second audio signal. For example, the processor 210 may determine the distance between the user and the external device 110 at the time of acquiring the first audio signal and at the time of acquiring the second audio signal. Accordingly, the processor 210 may determine the change in the user's distance from the external device 110 between the time point at which the first audio signal is obtained and the time point at which the second audio signal is obtained.
  • The processor 210 may control the volume of the external device 110 using the communication circuit 230 based on a volume control method corresponding to the type of the wearing space of the user wearing the wearable device 100.
  • The processor 210 may define a plurality of wearing-space types and store, in the memory 240, data on the volume control method of the external device 110 corresponding to each type of wearing space.
  • the processor 210 may control the volume of the external device 110 in a volume control method corresponding to the determined type of wearing space based on the volume control method stored in the memory 240 .
  • the processor 210 may transmit the volume control signal to the external device 110 through the communication circuit 230 based on the volume control method corresponding to the type of the wearing space.
  • the external device 110 may receive a volume control signal and output a sound based on the received volume control signal.
  • FIG. 3 is a diagram illustrating an example of a type of wearing space divided based on an external device.
  • the type of the wearing space of the user who wears the wearable device 100 may be divided into a plurality of types. According to various embodiments, the type of the wearing space is not limited to the type of the wearing space described with reference to FIG. 3 , and may be distinguished through various criteria.
  • the type of the wearing space may be classified based on various criteria. For example, the type of the wearing space may be classified based on the size of the space and the existence of an object disposed in the space. According to an embodiment, the type of the wearing space may be classified based on the location of the external device 300 (eg, the external device 110 of FIG. 1 ) in the space and the location of the user. According to an embodiment, the type of the wearing space may be divided into a first space 310 , a second space 320 , and a third space 330 .
  • the first space 310 may represent an open space in which there is no object such as a wall disposed between the external device 300 and the user.
  • the first space 310 may represent a living space in which the external device 300 for outputting audio is located.
  • the second space 320 may represent a space in which an object such as a wall is disposed between the external device 300 and the user.
  • the second space 320 may represent a space in which an audio signal output from the external device 300 is refracted and delivered to the user by being blocked by a structure such as a wall between the external device 300 and the user.
  • The second space is a space partially blocked by a structure such as a wall between the external device 300 and the user, and may represent a half-open space.
  • For example, taking the general home of the wearable device 100 user as a reference, the second space 320 may represent a kitchen partially separated by a structure such as a wall from the living room in which the external device 300 for outputting audio is located.
  • The third space 330 may represent a space completely separated by an object such as a wall between the external device 300 and the user. Specifically, the third space 330 may represent a space blocked in all directions by a structure such as a wall between the external device 300 and the user. According to an embodiment, the third space 330 is a space in which the user is blocked from the external device 300 by a structure such as a wall, and may represent a closed space. For example, taking the general home of the wearable device 100 user as a reference, the third space 330 may represent a room that is separated by a structure such as a wall from the living room in which the external device 300 for outputting audio is located and is accessed through a door. Accordingly, when the user is located in the third space 330, it may be assumed that the user has no intention to listen to the audio output through the external device 110.
  • the wearable device 100 may identify whether the type of space in which the wearable device 100 is located has changed based on a comparison result between audio signals received through the microphone 220 .
  • the division of the wearing space shown in FIG. 3 is for explaining an embodiment, and the type of space divided by the wearable device 100 may be divided into other forms.
  • FIG. 4A is a diagram illustrating controlling a volume of an external device based on a change in a user's position in a first space, according to an exemplary embodiment.
  • FIG. 4B is a diagram illustrating a change in an audio signal based on a user's movement in a first space according to an exemplary embodiment.
  • In FIG. 4B, an audio signal may be expressed as a function of frequency: the vertical axis may represent volume, and the horizontal axis may represent frequency, from the low-pitched sound part LF to the high-pitched sound part HF.
  • a user wearing the wearable device 100 on the body may move in the first space 310 .
  • The user may move from a first location 410, separated from the external device 300 by a first distance a, to a second location 420.
  • The second location 420 may indicate a location separated from the external device 300 by a second distance b.
  • the processor 210 may obtain the first audio signal 411 from the external device 300 using the microphone 220 at the first location 410 .
  • the processor 210 may determine a position where the volume of the external device 300 is controlled as the first position 410 based on a user input received by the wearable device 100 .
  • The processor 210 obtains at least one user input for controlling the volume of the external device 300 at the first location 410, and, if no additional user input is obtained for a specified time after the at least one user input, may obtain the first audio signal 411.
  • For example, after repeatedly receiving user inputs that increase the volume of the external device 300 by three steps, the processor 210 may obtain an audio signal through the microphone 220 if no further user input is received for 2 seconds.
  • the processor 210 may obtain the second audio signal 421 from the external device 300 using the microphone 220 at the second location 420 .
  • As the user's location changes, the volume of the audio signal output from the external device 300 and acquired through the microphone 220 may change.
  • the volume of the audio signal received through the microphone 220 may be decreased or increased.
  • the processor 210 may determine that the user's location has changed based on a change in the volume of the received audio signal. For example, the processor 210 may determine that the user's location has changed with respect to the external device 300 based on the first audio signal 411 and the second audio signal 421 .
  • the processor 210 may determine that the volume of the low-pitched sound part LF and the high-pitched sound part HF has been reduced in the second audio signal 421 as compared to the first audio signal 411 . Accordingly, the processor 210 may determine that the distance of the user from the external device 300 has increased based on the overall decrease in the volume. For convenience of explanation, only the second audio signal 421 based on the user's distance from the external device 300 is shown through FIG. 4B , but the distance between the user and the external device 300 may be variously changed.
  • the processor 210 may identify a change in the distance between the user and the external device 300 based on the second audio signal 421 .
  • the processor 210 may determine the distance between the external device 300 and the user based on a time or volume difference between the first audio signal 411 and the second audio signal 421 .
  • the content of determining the change in the user's location will be described later with reference to FIG. 5 .
  • the processor 210 may transmit a control signal for controlling the volume of the external device 300 based on a change in the distance between the external device 300 and the user through the communication circuit 230 .
  • When the first distance a is smaller than the second distance b (or when the volume of the first audio signal is greater than the volume of the second audio signal, or when a time delay occurs in the second audio signal relative to the first audio signal), that is, when the user's location changes from the first location 410 to the second location 420, it may be assumed that the user has moved away from the external device 300.
  • When the user's distance from the external device 300 increases, the volume of the audio output from the external device 300 as perceived by the user may be reduced.
  • Accordingly, when the user's location at the second location 420 is farther from the external device 300 than at the first location 410, the processor 210 may transmit a control signal for increasing the volume of the external device 300 to the external device 300. According to an embodiment, the external device 300 may increase the volume based on the received control signal.
  • Conversely, when the first distance a is greater than the second distance b, that is, when the user's location changes from the first location 410 to the second location 420, it may be assumed that the user has moved closer to the external device 300. According to an embodiment, when the user's distance from the external device 300 decreases, the volume of the audio output from the external device 300 as perceived by the user may increase. Accordingly, when the position of the wearable device at the second position 420 is closer to the external device 300 than at the first position 410, the processor 210 may transmit a control signal for decreasing the volume of the external device 300 to the external device 300. According to an embodiment, the external device 300 may reduce the volume based on the received control signal.
  • Since the wearable device 100 determines the change in the user's position based on the audio signal of the external device 300 acquired through the microphone 220 and automatically controls the volume, the user of the wearable device 100 can listen to the audio output from the external device 300 at a similar perceived volume.
  • the processor 210 may control the volume of the external device 300 within a predetermined range. For example, the processor 210 may set a minimum volume and a maximum volume of the volume of the external device 300 , and may control the volume of the external device 300 in a range from the minimum volume to the maximum volume. According to an embodiment, the minimum volume and the maximum volume may be set based on a user input received by the wearable device 100 .
  • FIG. 5 is a diagram illustrating determination of a distance between a user and an external device based on acquired audio signals 500 according to an exemplary embodiment.
  • The acquired audio signals 500 may include an original audio signal 510 of the audio data output through the external device 300, a first audio signal 520 acquired at a first position, and a second audio signal 523 acquired according to a change in position.
  • The processor 210 may determine the distance between the user and the external device 300 based on the acquired audio signals 500. According to an embodiment, when the external device 300 outputs audio data stored in its internal memory, the processor 210 may obtain the corresponding original audio signal 510 from the external device 300. According to another embodiment, when the external device 300 receives and outputs audio data from the wearable device 100, the processor 210 may acquire the original audio signal 510 stored in the memory 240. According to another embodiment, the processor 210 may receive the original audio signal 510 from the external device 300 or a separate external server (not shown) through the communication circuit 230.
  • the processor 210 may remove the first noise 521 included in the first audio signal 520 with reference to the original audio signal 510 . According to an embodiment, the processor 210 may obtain the second audio signal 523 from which noise has been removed by referring to the original audio signal 510 . Accordingly, the processor 210 may obtain the first audio signal and the second audio signal from which the noise has been removed.
  • The processor 210 may determine the type of the user's wearing space based on the noise-removed first audio signal and the noise-removed second audio signal 523, and may determine the distance between the user and the external device 300. For example, the distance between the external device 300 and the user can be determined based on the time difference t or the volume difference a between the noise-removed first audio signal and the noise-removed second audio signal 523. A sketch of this estimation is given below.
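  • As an illustration only, the time difference t and the volume difference a could be estimated as in the following Python sketch, assuming the two noise-removed captures are available as NumPy arrays sampled at the same rate:

        import numpy as np

        def delay_and_level_diff(ref, cur, sample_rate):
            # The lag at which the cross-correlation peaks approximates
            # the time difference t between the two captures.
            corr = np.correlate(cur, ref, mode="full")
            t = (np.argmax(corr) - (len(ref) - 1)) / sample_rate
            # Volume difference a, in dB, from the RMS of each capture.
            rms = lambda x: np.sqrt(np.mean(np.square(x)) + 1e-12)
            a = 20.0 * np.log10(rms(cur) / rms(ref))
            return t, a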
  • The processor 210 may determine the distance between the user and the external device 300 by using the acquired audio signals 500 and various distance measurement techniques. For example, the processor 210 may determine the distance between the electronic device 100 and the external device 300 using ultra-wideband communication with the external device 300. As another example, the processor 210 may determine the distance between the electronic device 100 and the external device 300 based on a received signal strength indicator (RSSI) of a wireless communication signal received from the external device 300, as sketched below.
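  • A minimal sketch of RSSI-based ranging using the standard log-distance path-loss model; the calibration constants here are assumptions for illustration, not values from this disclosure:

        def rssi_to_distance_m(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_n=2.0):
            # Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d),
            # so d = 10 ** ((RSSI(1 m) - RSSI(d)) / (10 * n)).
            # n is roughly 2 in free space and larger indoors.
            return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_n))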
  • FIG. 6A is a diagram illustrating movement of a user from a first space to a second space, according to an exemplary embodiment.
  • FIG. 6B is a diagram illustrating a change in a received audio signal as a user moves from a first space to a second space, according to an embodiment.
  • a user wearing the wearable device 100 on the body may move from the first space 310 to the second space 320 .
  • the user may move from a first location 410 in the first space 310 to a third location 610 in the second space 320 .
  • the processor 210 may obtain the first audio signal 411 from the external device 300 using the microphone 220 at the first location 410 .
  • the processor 210 may acquire the third audio signal 611 from the external device 300 using the microphone 220 at the third location 610 .
  • As the user moves from the first space 310 to the second space 320, the volume of the audio output of the external device 300 obtained through the microphone 220 may change.
  • In addition, the audio signal may be received with a different volume at each frequency. For example, as the type of the user's wearing space changes to the second space 320, which is a half-open space, sound waves in the low-frequency band are delivered to the microphone 220 by diffraction, while sound waves in the high-frequency band can be blocked.
  • the treble frequency region may indicate a frequency region of 1 kHz or more.
  • The processor 210 may determine, based on the change in the volume at each frequency of the third audio signal 611 acquired using the microphone 220 at the third location 610, that the type of the user's wearing space has changed from the first space 310 to the second space 320. For example, the processor 210 may compare the third audio signal 611 with the first audio signal 411 and determine that the wearing space is the second space 320 based on the difference between the volume decay rate of the high-pitched frequency region and the volume decay rate of the low-pitched frequency region.
  • Based on the change of the user's wearing space from the first space 310 to the second space 320, the processor 210 may transmit a control signal through the communication circuit 230 so that the external device 300 maintains the volume. For example, if the type of the user's wearing space corresponds to the second space 320, the processor 210 may control the external device 300 to maintain the volume of the sound output from the external device 300. Accordingly, even if the user's location changes with respect to the external device 300, the processor 210 may keep the volume of the external device 300 within a predetermined range. For example, when the high-frequency band volume of the third audio signal 611 is attenuated, the electronic device 100 may increase the high-frequency band volume of the sound output from the external device 300. A sketch of the band-wise comparison is given below.
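  • For illustration, the decay-rate comparison could be computed as in the following sketch, assuming noise-removed reference and current captures and treating frequencies at or above 1 kHz as the treble region; the 0.5 decision ratio is an assumed, illustrative value:

        import numpy as np

        def band_levels(signal, sample_rate, split_hz=1000.0):
            # Mean spectral magnitude below and above the 1 kHz split.
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
            return (spectrum[freqs < split_hz].mean(),
                    spectrum[freqs >= split_hz].mean())

        def looks_like_second_space(ref, cur, sample_rate, ratio=0.5):
            ref_low, ref_high = band_levels(ref, sample_rate)
            cur_low, cur_high = band_levels(cur, sample_rate)
            low_decay = cur_low / (ref_low + 1e-12)
            high_decay = cur_high / (ref_high + 1e-12)
            # Diffraction carries lows around a wall while blocking highs,
            # so in a half-open space the highs decay much faster.
            return high_decay < ratio * low_decay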
  • FIG. 7A is a diagram illustrating movement of a user from a first space to a third space, according to an exemplary embodiment.
  • FIG. 7B is a diagram illustrating a change in a received audio signal as a user moves from a first space to a third space, according to an exemplary embodiment.
  • a user wearing the wearable device 100 on the body may move from the first space 310 to the third space 330 .
  • the user may move from the first location 410 to the fourth location 710 in the third space 330 .
  • the processor 210 may acquire the first audio signal 411 from the external device 300 using a microphone at the first location 410 . In an embodiment, the processor 210 may obtain the fourth audio signal 711 from the external device 300 using the microphone 220 at the fourth location 710 .
  • As the user moves from the first space 310 to the third space 330, the volume of the audio output of the external device 300 obtained through the microphone 220 may change.
  • Since the third space 330 is blocked from the external device 300 by a structure such as a wall, the volume of the audio output of the external device 300 obtained through the microphone 220 can be greatly reduced.
  • Accordingly, the volume of the fourth audio signal 711 may be greatly reduced at every frequency.
  • The processor 210 may determine, based on the change in the volume of the fourth audio signal 711 acquired using the microphone 220 at the fourth location 710, that the type of the user's wearing space has changed from the first space 310 to the third space 330. For example, the processor 210 may set a threshold value for determining that the space is the third space 330. The processor 210 may determine that the user's location has changed from the first space 310 to the third space 330 when the volume at each frequency of the fourth audio signal 711 is equal to or less than the threshold value. According to various embodiments, the threshold value may be set in various ways. According to an embodiment, the threshold value may be determined based on the first audio signal 411. For example, a value reduced by a predetermined level from the first audio signal 411 (e.g., a quarter of its volume) may be set as the threshold value, as in the sketch below.
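  • A minimal sketch of this check, assuming per-frequency-band levels have already been computed for the reference and current captures:

        def looks_like_third_space(ref_band_levels, cur_band_levels, factor=0.25):
            # Closed space: every band has dropped to or below the threshold
            # derived from the reference capture (e.g., a quarter of its volume).
            return all(cur <= factor * ref
                       for ref, cur in zip(ref_band_levels, cur_band_levels))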
  • Based on the change in the type of the user's wearing space from the first space 310 to the third space 330, the processor 210 may transmit a control signal through the communication circuit 230 so that the external device 300 sets the volume to the minimum volume.
  • In this case, the processor 210 determines that the user no longer intends to listen, and controls the volume of the external device 300 to the minimum volume.
  • the reference of the minimum volume may be preset to various values by the processor 210 .
  • The processor 210 may reduce unnecessary sound output by controlling the volume of the external device 300 to the minimum volume based on the change in the type of the user's wearing space to the third space 330.
  • FIG. 8 is a flowchart 800 of a process of controlling a volume of an external device based on a user's location in a wearable device according to an embodiment.
  • the processor 210 of the wearable device 100 may acquire a first audio signal at a first location through the microphone 220 in operation 801 .
  • the first audio signal may be obtained from the external device 300 through the microphone 220 .
  • the microphone 220 may obtain a sound output from the external device 300 and convert the obtained sound into a first audio signal.
  • the first location may be determined based on a user input controlling the volume of the external device 300 .
  • The processor 210 may set the first position based on a user input for controlling the volume of the external device 300, and may obtain the first audio signal based on the sound acquired using the microphone 220 at the first position.
  • According to an embodiment, when the volume is adjusted through a series of user inputs, the processor 210 may acquire the first audio signal based on the last user input. According to another embodiment, the processor 210 may acquire the first audio signal when audio reproduction through the external device 300 starts.
  • the processor 210 may acquire a second audio signal from the external device 300 at a second location using the microphone 220 .
  • the second location may indicate a location of the first space 310 , the second space 320 , or the third space 330 described with reference to FIG. 3 .
  • The processor 210 may determine the type of the wearing space of the user of the wearable device 100 based on the first audio signal and the second audio signal. For example, the processor 210 may obtain the sound output from the external device 300 using the microphone 220 and determine the type of the wearing space from the first audio signal and the second audio signal based on the obtained sound. The processor 210 may determine the type of the wearing space based on the amount of change of the second audio signal relative to the first audio signal. According to an embodiment, the type of the wearing space may include the first space 310, the second space 320, or the third space 330.
  • The processor 210 may control the volume of the external device 300 based on the audio output control method of the external device 300 corresponding to the determined type of wearing space.
  • the processor 210 may transmit a control signal through the communication circuit 230 based on an audio output control method corresponding to the type of the wearing space.
  • the external device 300 may control the audio output based on the received control signal.
  • the processor 210 may control the audio output of the external device 300 differently according to the type of the wearing space.
  • FIG. 9 is a flowchart 900 of a process of controlling a volume of an external device by classifying a user's space type in a wearable device according to an embodiment.
  • the processor 210 may acquire the first audio signal in response to the user's volume setting.
  • the processor 210 may obtain the first audio signal from the external device 300 using the microphone 220 .
  • The processor 210 may set the position at which the volume of the external device 300 is controlled as the first position based on a user input of the wearable device 100, and the audio signal obtained at the first position may be the first audio signal.
  • the processor 210 may share information related to the original audio signal with the external device 300 .
  • audio data stored in the internal memory of the external device 300 may be output.
  • the external device 300 may receive and output audio data from the wearable device 100 .
  • the processor 210 may receive information about the original audio data from the external device 300 using the communication circuit 230 .
  • the processor 210 may omit operation 903 .
  • The wearable device 100 may receive the audio data from an external server or the external device 300.
  • the processor 210 may acquire the second audio signal from the external device 300 using the microphone 220 in operation 905 . According to an embodiment, the processor 210 may acquire the second audio signal as a set time elapses after acquiring the first audio signal. In an embodiment, the processor 210 may continuously acquire the second audio signal.
  • the processor 210 may determine whether the type of the user's wearing space is the first space 310 based on the first audio signal and the second audio signal. For example, when the volume of the second audio signal is decreased or increased compared to the first audio signal, the processor 210 may determine that the type of the user's wearing space is the first space 310 . According to an embodiment, when it is determined that the user is in the first space 310 , the processor 210 may determine the distance between the user and the external device 300 in operation 909 . According to an embodiment, the processor 210 may separate noise from the first audio signal based on the information related to the original audio signal obtained in operation 903 .
  • The processor 210 may separate noise from the second audio signal based on the information related to the original audio signal. According to an embodiment, the processor 210 may determine the distance between the external device 300 and the user based on the noise-separated first audio signal and second audio signal. For example, the processor 210 may determine whether the distance between the external device 300 and the user at the time of acquiring the second audio signal is greater or smaller than at the time of acquiring the first audio signal. In an embodiment, when the volume of the second audio signal has increased compared to the first audio signal, the processor 210 may determine that the distance between the external device 300 and the user has decreased. In another embodiment, when the volume of the second audio signal has decreased compared to the first audio signal, the processor 210 may determine that the distance has increased.
  • The processor 210 may control the volume of the external device 300 based on the determined distance in operation 911. According to an embodiment, the processor 210 may transmit a control signal for controlling the volume of the external device 300 through the communication circuit 230 based on the determined distance. In an embodiment, when the distance between the user and the external device 300 is smaller than when the first audio signal was acquired, the processor 210 may generate a control signal for reducing the volume of the external device 300 and transmit it using the communication circuit 230. In another embodiment, when the distance is greater than when the first audio signal was acquired, the processor 210 may generate a control signal for increasing the volume of the external device 300 and transmit it using the communication circuit 230. Accordingly, the user may listen at a perceived volume similar to that at the time of acquiring the first audio signal without directly controlling the volume of the external device 300. A sketch of this adjustment is given below.
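  • As an illustration of one possible control law (the disclosure does not specify one), the deviation of the captured level from the reference could be mapped to a clamped volume step; the step scale and volume range here are assumptions:

        import math

        def volume_step(ref_rms, cur_rms, current_level, min_level=0, max_level=15):
            # Positive deviation: the captured sound got quieter, i.e., the
            # user moved away, so the volume should be raised (and vice versa).
            delta_db = 20.0 * math.log10((ref_rms + 1e-12) / (cur_rms + 1e-12))
            # Map roughly 1 dB of deviation to one volume step (assumed scale)
            # and clamp to the user-configured minimum/maximum volume.
            new_level = current_level + int(round(delta_db))
            return max(min_level, min(max_level, new_level))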
  • the processor 210 determines whether the type of the user's wearing space is the second space 320 .
  • the processor 210 may determine whether the type of the user's wearing space is the second space 320 based on the first audio signal and the second audio signal.
  • Based on this comparison, the processor 210 may determine that the type of the user's wearing space is the second space 320.
  • the processor 210 may transmit a control signal through the communication circuit 230 so that the volume of the external device 300 is not increased or decreased even when the user's location changes. According to an embodiment, the processor 210 may control the external device 300 to maintain the last volume before it is determined as the second space 320 . According to an embodiment, the external device 300 may maintain the volume based on a control signal for maintaining the volume.
  • According to an embodiment, the processor 210 may determine that the type of the user's wearing space is the third space 330.
  • For example, when the volume of the second audio signal is equal to or less than a threshold value, the processor 210 determines that the type of the user's wearing space is the third space 330.
  • the threshold value may be a value predefined by the processor 210 .
  • the processor 210 may set a threshold value based on the first audio signal.
  • the processor 210 may generate a control signal to control the volume of the external device 300 to a minimum volume, and transmit the control signal through the communication circuit 230 .
  • the processor 210 may set the minimum volume.
  • the external device 300 may output audio at a minimum volume based on the received control signal.
  • FIG. 10 is a diagram illustrating control of a volume of an external device according to a user's utterance in a wearable device according to an exemplary embodiment.
  • the processor 210 may determine whether the user's utterance is included in the second audio signal described with reference to FIG. 9 .
  • the processor 210 may acquire the sound of the user and the external device 300 through the microphone 220 .
  • the processor 210 may use the user's utterance information stored in the memory 240 to determine whether the user's utterance is included.
  • The memory 240 may store information about the voice of the user of the wearable device 100. The information about the voice may be obtained when the user makes an utterance.
  • the processor 210 may control the volume of the external device 300 to be less than or equal to a reference value.
  • the processor 210 may set the reference value at a volume level that does not interfere with the user's utterance.
  • the processor 210 may generate a control signal for controlling the volume of the external device 300 to be less than or equal to a reference value, and transmit the generated control signal through the communication circuit 230 .
  • The processor 210 may reduce the volume of the external device 300 to the reference value or less when the user's speech starts, determine when the speech ends, and restore the volume of the external device 300 to the original volume.
  • When the user's voice is included in the audio signal acquired through the microphone 220, the processor 210 may determine that the user's speech has started. In addition, when the user's voice is not included in the audio signal acquired through the microphone 220 for a specified time, the processor 210 may determine that the user's utterance has ended. A sketch of this behavior is given below.
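  • A minimal illustrative sketch of this ducking behavior; detect_user_voice and set_volume are hypothetical callbacks for the stored-voice matcher and the communication circuit, and the level and timeout values are assumptions:

        import time

        def duck_while_speaking(detect_user_voice, set_volume, saved_level,
                                duck_level=3, silence_timeout_s=1.5):
            # Hold the external device at or below the reference value while
            # the wearer is talking.
            set_volume(min(saved_level, duck_level))
            last_voice = time.monotonic()
            while time.monotonic() - last_voice < silence_timeout_s:
                if detect_user_voice():   # e.g., match against stored voice info
                    last_voice = time.monotonic()
                time.sleep(0.1)
            set_volume(saved_level)       # utterance ended: restore the volume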
  • FIG. 11A is a diagram illustrating a location of a user with respect to a plurality of external devices, according to an exemplary embodiment.
  • FIG. 11B is a diagram illustrating controlling a volume of at least one external device based on a distance from a plurality of external devices, according to an exemplary embodiment.
  • The user of the wearable device 100 may listen to audio output from a plurality of external devices A to D.
  • the user's location may change to the first location 1110 , the second location 1120 , and/or the third location 1130 based on the plurality of external devices A to D. .
  • the first location 1110 may indicate a location where the distance of the user from each of the plurality of external devices A to D is the same.
  • Compared to the first location 1110, the second location 1120 may indicate a location where the distance between the first external device A and the user is shorter and the distances to the second to fourth external devices B, C, and D are greater.
  • Compared to the first location 1110, the third location 1130 may indicate a location closer to the third and fourth external devices C and D and farther from the first and second external devices A and B.
  • Although the description is based on four external devices A to D and the first to third positions 1110 to 1130, the number of external devices and the location of the user are not limited thereto and may vary.
  • Depending on the user's location, the user's perceived loudness of each external device can vary.
  • The processor 210 may acquire the sound output from the plurality of external devices A to D using the microphone 220 and determine, using an audio signal based on the acquired sound, that the user's position is the first position 1110.
  • The processor 210 may control the plurality of external devices A to D so that the volumes of the plurality of external devices A to D are all the same, based on the user's location being the first location 1110.
  • At the second location 1120, the distance between the first external device A and the user may be short, while the distances to the second to fourth external devices B, C, and D may be greater. Accordingly, the perceived volume of each of the plurality of external devices A to D perceived by the user may be different. For example, as the user's position changes to the second position 1120, the perceived volume of the sound output from the first external device A increases, and the perceived volume of the sound output from the second to fourth external devices B to D may decrease.
  • The processor 210 may acquire the sound output from the plurality of external devices A to D using the microphone 220 and determine, using an audio signal based on the acquired sound, that the user's position is the second position 1120. In an embodiment, based on the user's location being the second location 1120, the processor 210 may control the plurality of external devices A to D to reduce the volume of the first external device A and increase the volume of the second to fourth external devices B to D.
  • A third location graph 1131 may indicate the distance and the volume of the plurality of external devices A to D based on the third location.
  • At the third location 1130, the distance from the third and fourth external devices C and D may be short, and the distance from the first and second external devices A and B may be great. Accordingly, the perceived volume of each of the plurality of external devices A to D perceived by the user may be different. For example, as the user's location changes to the third location 1130, the perceived volume of the sounds output from the third and fourth external devices C and D increases, and the perceived volume of the sound output from the first and second external devices A and B may be reduced.
  • The processor 210 may acquire the sound output from the plurality of external devices A to D using the microphone 220 and determine, using an audio signal based on the acquired sound, that the user's position is the third position 1130. In an embodiment, based on the user's location being the third location 1130, the processor 210 may control the plurality of external devices A to D to increase the volume of the first and second external devices A and B and reduce the volume of the third and fourth external devices C and D. A sketch of such balancing is given below.
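  • For illustration, equalizing perceived loudness across several devices could be reduced to per-device adjustments as below; the level values and the step scale are assumptions for the example, not measurements from this disclosure:

        def equalize_volumes(levels_db, target_db, steps_per_db=0.5):
            # levels_db: per-device sound level measured at the wearer's
            # position. Devices heard louder than target_db get a negative
            # (turn-down) step; quieter ones get a positive (turn-up) step.
            return {dev: round((target_db - lvl) * steps_per_db)
                    for dev, lvl in levels_db.items()}

        # At the third location, the nearby devices C and D are turned down
        # while the farther devices A and B are turned up:
        print(equalize_volumes({"A": -35.0, "B": -33.0, "C": -20.0, "D": -22.0},
                               target_db=-30.0))
        # {'A': 2, 'B': 2, 'C': -5, 'D': -4}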
  • the processor 210 may utilize various distance measurement techniques to determine the user's location.
  • the processor 210 may utilize not only the audio signal acquired through the microphone 220 but also the ultra-wideband wireless distance measurement technology.
  • FIG. 12 is a block diagram of an electronic device 1201 in a network environment 1200, according to various embodiments.
  • In the network environment 1200, the electronic device 1201 may communicate with the electronic device 1202 through a first network 1298 (e.g., a short-range wireless communication network), or may communicate with at least one of the electronic device 1204 and the server 1208 through a second network 1299 (e.g., a long-range wireless communication network). According to an embodiment, the electronic device 1201 may communicate with the electronic device 1204 through the server 1208.
  • a first network 1298 eg, a short-range wireless communication network
  • a second network 1299 e.g., a second network 1299
  • the electronic device 1201 may communicate with the electronic device 1204 through the server 1208 .
• The electronic device 1201 may include a processor 1220, a memory 1230, an input module 1250, a sound output module 1255, a display module 1260, an audio module 1270, a sensor module 1276, an interface 1277, a connection terminal 1278, a haptic module 1279, a camera module 1280, a power management module 1288, a battery 1289, a communication module 1290, a subscriber identification module 1296, or an antenna module 1297.
• In some embodiments, at least one of these components (eg, the connection terminal 1278) may be omitted from the electronic device 1201, or one or more other components may be added. In some embodiments, some of these components may be integrated into one component (eg, the display module 1260).
• The processor 1220 may execute, for example, software (eg, a program 1240) to control at least one other component (eg, a hardware or software component) of the electronic device 1201 connected to the processor 1220, and may perform various data processing or operations. According to an embodiment, as at least part of the data processing or operations, the processor 1220 may store commands or data received from another component (eg, the sensor module 1276 or the communication module 1290) in the volatile memory 1232, process the commands or data stored in the volatile memory 1232, and store the resulting data in the non-volatile memory 1234.
• The processor 1220 may include a main processor 1221 (eg, a central processing unit or an application processor) or an auxiliary processor 1223 (eg, a graphics processing unit, a neural processing unit (NPU), an image signal processor, a sensor hub processor, or a communication processor) that is operable independently from, or together with, the main processor 1221.
• The auxiliary processor 1223 may control at least some of the functions or states related to at least one of the components of the electronic device 1201 (eg, the display module 1260, the sensor module 1276, or the communication module 1290), for example, instead of the main processor 1221 while the main processor 1221 is in an inactive (eg, sleep) state, or together with the main processor 1221 while the main processor 1221 is in an active (eg, application-executing) state.
• According to an embodiment, the auxiliary processor 1223 (eg, an image signal processor or a communication processor) may be implemented as part of another functionally related component (eg, the camera module 1280 or the communication module 1290).
  • the auxiliary processor 1223 may include a hardware structure specialized for processing an artificial intelligence model.
• An artificial intelligence model may be created through machine learning. Such learning may be performed, for example, in the electronic device 1201 itself in which the artificial intelligence model is executed, or may be performed through a separate server (eg, the server 1208).
• The learning algorithm may include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but is not limited to the above examples.
  • the artificial intelligence model may include a plurality of artificial neural network layers.
• The artificial neural network may be one of a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), or a deep Q-network, or a combination of two or more thereof, but is not limited to the above examples.
• The artificial intelligence model may, additionally or alternatively, include a software structure in addition to the hardware structure.
  • the memory 1230 may store various data used by at least one component of the electronic device 1201 (eg, the processor 1220 or the sensor module 1276).
  • the data may include, for example, input data or output data for software (eg, a program 1240) and instructions related thereto.
  • the memory 1230 may include a volatile memory 1232 or a non-volatile memory 1234 .
  • the program 1240 may be stored as software in the memory 1230 , and may include, for example, an operating system 1242 , middleware 1244 , or an application 1246 .
  • the input module 1250 may receive a command or data to be used by a component (eg, the processor 1220 ) of the electronic device 1201 from the outside (eg, a user) of the electronic device 1201 .
  • the input module 1250 may include, for example, a microphone, a mouse, a keyboard, a key (eg, a button), or a digital pen (eg, a stylus pen).
  • the sound output module 1255 may output a sound signal to the outside of the electronic device 1201 .
  • the sound output module 1255 may include, for example, a speaker or a receiver.
  • the speaker can be used for general purposes such as multimedia playback or recording playback.
  • the receiver can be used to receive incoming calls. According to one embodiment, the receiver may be implemented separately from or as part of the speaker.
  • the display module 1260 may visually provide information to the outside (eg, a user) of the electronic device 1201 .
  • the display module 1260 may include, for example, a display, a hologram device, or a projector and a control circuit for controlling the corresponding device.
  • the display module 1260 may include a touch sensor configured to sense a touch or a pressure sensor configured to measure the intensity of a force generated by the touch.
• The audio module 1270 may convert a sound into an electrical signal or, conversely, convert an electrical signal into a sound. According to an embodiment, the audio module 1270 may acquire a sound through the input module 1250, or output a sound through the sound output module 1255 or an external electronic device (eg, the electronic device 1202, such as a speaker or headphones) directly or wirelessly connected to the electronic device 1201.
• The sensor module 1276 may detect an operating state (eg, power or temperature) of the electronic device 1201 or an external environmental state (eg, a user state), and may generate an electrical signal or data value corresponding to the detected state.
• The sensor module 1276 may include, for example, a gesture sensor, a gyro sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
  • the interface 1277 may support one or more specified protocols that may be used for the electronic device 1201 to directly or wirelessly connect with an external electronic device (eg, the electronic device 1202 ).
  • the interface 1277 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, an SD card interface, or an audio interface.
  • connection terminal 1278 may include a connector through which the electronic device 1201 can be physically connected to an external electronic device (eg, the electronic device 1202 ).
  • the connection terminal 1278 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (eg, a headphone connector).
  • the haptic module 1279 may convert an electrical signal into a mechanical stimulus (eg, vibration or movement) or an electrical stimulus that the user can perceive through tactile or kinesthetic sense.
  • the haptic module 1279 may include, for example, a motor, a piezoelectric element, or an electrical stimulation device.
  • the camera module 1280 may capture still images and moving images. According to one embodiment, the camera module 1280 may include one or more lenses, image sensors, image signal processors, or flashes.
  • the power management module 1288 may manage power supplied to the electronic device 1201 .
  • the power management module 1288 may be implemented as, for example, at least a part of a power management integrated circuit (PMIC).
  • the battery 1289 may supply power to at least one component of the electronic device 1201 .
  • battery 1289 may include, for example, a non-rechargeable primary cell, a rechargeable secondary cell, or a fuel cell.
• The communication module 1290 may support establishment of a direct (eg, wired) communication channel or a wireless communication channel between the electronic device 1201 and an external electronic device (eg, the electronic device 1202, the electronic device 1204, or the server 1208), and may support communication through the established channel.
  • the communication module 1290 operates independently of the processor 1220 (eg, an application processor) and may include one or more communication processors supporting direct (eg, wired) communication or wireless communication.
• The communication module 1290 may include a wireless communication module 1292 (eg, a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 1294 (eg, a local area network (LAN) communication module or a power line communication module).
• A corresponding communication module among these communication modules may communicate with an external electronic device through the first network 1298 (eg, a short-range communication network such as Bluetooth, wireless fidelity (WiFi) direct, or infrared data association (IrDA)) or the second network 1299 (eg, a long-range communication network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network).
• The wireless communication module 1292 may identify or authenticate the electronic device 1201 within a communication network, such as the first network 1298 or the second network 1299, using subscriber information (eg, an International Mobile Subscriber Identity (IMSI)) stored in the subscriber identification module 1296.
• The wireless communication module 1292 may support a 5G network after a 4G network, and a next-generation communication technology, for example, new radio (NR) access technology.
• The NR access technology may support high-speed transmission of high-capacity data (enhanced mobile broadband (eMBB)), minimization of terminal power and access by multiple terminals (massive machine type communications (mMTC)), or high reliability and low latency (ultra-reliable and low-latency communications (URLLC)).
• The wireless communication module 1292 may support a high frequency band (eg, the mmWave band) to achieve, for example, a high data rate.
• The wireless communication module 1292 may support various technologies for securing performance in a high frequency band, for example, beamforming, massive multiple-input and multiple-output (massive MIMO), full dimensional MIMO (FD-MIMO), an array antenna, analog beamforming, or a large scale antenna.
  • the wireless communication module 1292 may support various requirements specified in the electronic device 1201 , an external electronic device (eg, the electronic device 1204 ), or a network system (eg, the second network 1299 ).
• The wireless communication module 1292 may support a peak data rate (eg, 20 Gbps or more) for realizing eMBB, loss coverage (eg, 164 dB or less) for realizing mMTC, or U-plane latency (eg, 0.5 ms or less for each of downlink (DL) and uplink (UL), or a round trip of 1 ms or less) for realizing URLLC.
  • the antenna module 1297 may transmit or receive a signal or power to the outside (eg, an external electronic device).
  • the antenna module 1297 may include an antenna including a conductor formed on a substrate (eg, a PCB) or a radiator formed of a conductive pattern.
• The antenna module 1297 may include a plurality of antennas (eg, an array antenna). In this case, at least one antenna suitable for a communication method used in a communication network, such as the first network 1298 or the second network 1299, may be selected from the plurality of antennas by, for example, the communication module 1290. A signal or power may be transmitted or received between the communication module 1290 and an external electronic device through the selected at least one antenna.
• According to some embodiments, another component (eg, a radio frequency integrated circuit (RFIC)) other than the radiator may additionally be formed as part of the antenna module 1297.
  • the antenna module 1297 may form a mmWave antenna module.
• The mmWave antenna module may include a printed circuit board, an RFIC disposed on or adjacent to a first surface (eg, the bottom surface) of the printed circuit board and capable of supporting a designated high frequency band (eg, the mmWave band), and a plurality of antennas (eg, an array antenna) disposed on or adjacent to a second surface (eg, the top or side surface) of the printed circuit board and capable of transmitting or receiving signals of the designated high frequency band.
• At least some of the above-described components may be connected to each other and exchange signals (eg, commands or data) through an inter-peripheral communication scheme (eg, a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)).
• According to an embodiment, commands or data may be transmitted or received between the electronic device 1201 and the external electronic device 1204 through the server 1208 connected to the second network 1299.
• Each of the external electronic devices 1202 and 1204 may be a device of the same type as, or a different type from, the electronic device 1201.
  • all or a part of the operations executed in the electronic device 1201 may be executed in one or more of the external electronic devices 1202 , 1204 , or 1208 .
• For example, when the electronic device 1201 needs to perform a function or service automatically, or in response to a request from a user or another device, the electronic device 1201 may, instead of or in addition to executing the function or service itself, request one or more external electronic devices to perform at least a part of the function or service.
  • One or more external electronic devices that have received the request may execute at least a part of the requested function or service, or an additional function or service related to the request, and transmit a result of the execution to the electronic device 1201 .
• The electronic device 1201 may provide the result, as is or after additional processing, as at least a part of a response to the request.
  • cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used.
  • the electronic device 1201 may provide an ultra-low latency service using, for example, distributed computing or mobile edge computing.
  • the external electronic device 1204 may include an Internet of things (IoT) device.
  • the server 1208 may be an intelligent server using machine learning and/or neural networks. According to an embodiment, the external electronic device 1204 or the server 1208 may be included in the second network 1299 .
• The electronic device 1201 may be applied to an intelligent service (eg, a smart home, a smart city, a smart car, or healthcare) based on 5G communication technology and IoT-related technology.
• The electronic device according to various embodiments may be one of various types of devices.
  • the electronic device may include, for example, a portable communication device (eg, a smart phone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance device.
• Terms such as "first" and "second" may simply be used to distinguish an element from other such elements, and do not limit the elements in other aspects (eg, importance or order). When one (eg, a first) component is referred to as being "coupled" or "connected" to another (eg, a second) component, with or without the terms "functionally" or "communicatively", it means that the one component can be connected to the other component directly (eg, by wire), wirelessly, or through a third component.
• The term "module" used in various embodiments of this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit.
• A module may be an integrally formed part, or a minimum unit or a part thereof, that performs one or more functions.
  • the module may be implemented in the form of an application-specific integrated circuit (ASIC).
• Various embodiments of this document may be implemented as software (eg, the program 1240) including one or more instructions stored in a storage medium (eg, the internal memory 1236 or the external memory 1238) readable by a machine (eg, the electronic device 1201).
• For example, a processor (eg, the processor 1220) of the machine (eg, the electronic device 1201) may call at least one of the one or more instructions stored in the storage medium and execute it. This enables the machine to be operated to perform at least one function according to the called at least one instruction.
  • the one or more instructions may include code generated by a compiler or code executable by an interpreter.
  • the device-readable storage medium may be provided in the form of a non-transitory storage medium.
• Here, "non-transitory" only means that the storage medium is a tangible device and does not contain a signal (eg, an electromagnetic wave); this term does not distinguish between a case where data is semi-permanently stored in the storage medium and a case where data is temporarily stored.
  • the method according to various embodiments disclosed in this document may be provided as included in a computer program product.
  • Computer program products may be traded between sellers and buyers as commodities.
• The computer program product may be distributed in the form of a machine-readable storage medium (eg, a compact disc read-only memory (CD-ROM)), or may be distributed (eg, downloaded or uploaded) online through an application store (eg, Play Store™) or directly between two user devices (eg, smartphones).
• In the case of online distribution, at least a part of the computer program product may be temporarily stored in, or temporarily created in, a machine-readable storage medium such as a memory of a manufacturer's server, a server of an application store, or a relay server.
• Each component (eg, a module or a program) of the above-described components may include a singular entity or a plurality of entities, and some of the plurality of entities may be separately disposed in another component.
  • one or more components or operations among the above-described corresponding components may be omitted, or one or more other components or operations may be added.
• According to various embodiments, a plurality of components (eg, modules or programs) may be integrated into a single component. In this case, the integrated component may perform one or more functions of each of the plurality of components identically or similarly to the way they were performed by the corresponding component among the plurality of components prior to the integration.
• According to various embodiments, operations performed by a module, program, or other component may be executed sequentially, in parallel, repeatedly, or heuristically; one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.
• A wearable device (eg, the wearable device 100 of FIG. 2) according to an embodiment may include a microphone (eg, the microphone 220 of FIG. 2), a communication circuit (eg, the communication circuit 230 of FIG. 2) for transmitting and receiving a control signal to and from an external device, and at least one processor (eg, the processor 210 of FIG. 2) electrically connected to the microphone and the communication circuit. The at least one processor may, in a state in which the wearable device is worn on the user's body, acquire a first audio signal from the external device using the microphone, acquire a second audio signal from the external device using the microphone in response to the lapse of a set time, determine the type of the user's wearing space based on the first audio signal and the second audio signal, and control the volume of the external device using the communication circuit based on an audio output control method corresponding to the type of the wearing space. A sketch of this overall flow follows.
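As a rough sketch of this flow (not the patent's implementation), the Python fragment below strings the described steps together. `mic.record`, `classify_space`, and the per-type policies in `control_policy` are hypothetical stand-ins, and the 5-second set time is an assumed value.

```python
import time

def auto_volume_cycle(mic, device, classify_space, control_policy, set_time_s=5.0):
    """One cycle of the described control flow, as a sketch.

    Captures a first audio signal, waits for the set time, captures a second
    signal, classifies the wearing space, then applies the audio output
    control method registered for that space type.
    """
    first = mic.record()       # first audio signal from the external device
    time.sleep(set_time_s)     # lapse of the set time
    second = mic.record()      # second audio signal from the external device
    space_type = classify_space(first, second)         # eg, first to third type
    control_policy[space_type](device, first, second)  # adjust the device volume
```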
• According to an embodiment, the at least one processor may receive an original audio signal from the external device using the communication circuit, separate noise from each of the first audio signal and the second audio signal based on the original audio signal, and determine the type of the wearing space based on the first audio signal and the second audio signal from which the noise has been separated.
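One plausible way to carry out such a separation, sketched below under simplifying assumptions (a single propagation path, no reverberation model), is to time-align the known original signal to each capture by cross-correlation, fit a least-squares gain, and subtract; the residual is treated as noise. The function is hypothetical, not the patent's actual procedure.

```python
import numpy as np

def separate_noise(received, original):
    """Split a microphone capture into an estimated playback part and noise.

    Aligns the known original signal to the capture via cross-correlation,
    scales it by a least-squares gain, and subtracts it; what remains is
    treated as ambient noise. Simplified sketch: single path, no reverb.
    """
    n = len(received)
    corr = np.correlate(received, original, mode="full")
    lag = int(np.argmax(corr)) - (len(original) - 1)  # best alignment lag

    aligned = np.zeros(n)
    if lag >= 0:
        src = original[: max(0, n - lag)]
        aligned[lag:lag + len(src)] = src
    else:
        src = original[-lag:-lag + n]
        aligned[: len(src)] = src

    gain = float(received @ aligned) / (float(aligned @ aligned) + 1e-12)
    playback = gain * aligned             # estimated device-playback component
    return playback, received - playback  # (playback estimate, residual noise)
```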
• According to an embodiment, in response to the type of the wearing space being the first type, the at least one processor may determine a change in the distance between the external device and the user based on a time or volume difference between the first audio signal and the second audio signal, and may adjust the volume of the external device based on the change in the distance.
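Under a free-field assumption, the volume difference between the two signals maps directly to a distance ratio, since the received level changes by about 20·log10(d1/d2) dB; the following one-liner is a hypothetical illustration, not the patent's formula.

```python
import math

def distance_ratio_from_level_change(delta_db):
    """Estimate d2/d1 from the change in received level (dB).

    delta_db is the second signal's level minus the first's; under the
    inverse-square assumption, delta_db = -20*log10(d2/d1). Hypothetical.
    """
    return 10.0 ** (-delta_db / 20.0)

# The second audio signal arrives 6 dB quieter than the first:
print(distance_ratio_from_level_change(-6.0))  # ~2.0: distance roughly doubled
```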
  • the at least one processor may determine the type of the wearing space as the third type in response to a volume level of the second audio signal being equal to or less than a first reference value.
• In response to determining that the type of the wearing space is the third type, the at least one processor may control the volume of the external device to a minimum volume.
• According to an embodiment, the at least one processor may determine whether the type of the wearing space is the second type based on a result of comparing the volume for each frequency of the first audio signal and the second audio signal.
• In response to determining that the type of the wearing space is the second type, the at least one processor of an embodiment may transmit a control signal such that the volume for each frequency of the second audio signal is maintained at the volume for each frequency of the first audio signal.
• According to an embodiment, the at least one processor may determine whether the type of the wearing space is the second type based on a difference between a first volume decay rate of a high-frequency region included in the second audio signal and a second volume decay rate of a low-frequency region included in the second audio signal. One way to realize such a band-wise comparison is sketched below.
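A band-wise comparison of this kind could, for example, measure average spectral levels in a low band and a high band for each capture and compare how much each band decayed; the band edges and decision threshold below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def band_level_db(signal, rate_hz, lo_hz, hi_hz):
    """Average magnitude (dB) of `signal` within the band [lo_hz, hi_hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate_hz)
    band = spectrum[(freqs >= lo_hz) & (freqs < hi_hz)]
    return 20.0 * np.log10(float(np.mean(band)) + 1e-12)

def looks_like_second_type(first, second, rate_hz, threshold_db=6.0):
    """Heuristic second-type test: highs decayed much more than lows.

    Compares the drop in a high band between the two captures against the
    drop in a low band (a covering that muffles treble shows this pattern).
    Band edges and threshold are illustrative assumptions.
    """
    high_decay = (band_level_db(first, rate_hz, 2000, 8000)
                  - band_level_db(second, rate_hz, 2000, 8000))
    low_decay = (band_level_db(first, rate_hz, 100, 500)
                 - band_level_db(second, rate_hz, 100, 500))
    return (high_decay - low_decay) > threshold_db
```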
  • the at least one processor may control the microphone to acquire the first audio signal in response to receiving a user input for adjusting the volume of the external device.
  • the at least one processor may determine the location of the electronic device with respect to the external device using ultra-wideband communication with the external device.
• According to an embodiment, the at least one processor may determine whether the user's utterance is included in the second audio signal and, in response to determining that the user's utterance is included in the second audio signal, control the volume of the external device to be less than or equal to a third reference value.
• According to an embodiment, the at least one processor may identify the end of the user's utterance based on the second audio signal and, based on identifying the end of the utterance, control the external device to change the volume, which was set to be less than or equal to the third reference value, back to the previous volume. A sketch of this ducking behavior follows.
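This utterance-triggered behavior amounts to a duck-and-restore loop; the following is a minimal sketch with hypothetical callbacks (`detect_speech`, `get_volume`, `set_volume`) standing in for the wearable's voice-activity detection and its control-signal path to the external device.

```python
import time

def duck_while_speaking(detect_speech, get_volume, set_volume,
                        third_reference=3, poll_s=0.1):
    """Lower the external device's volume during the wearer's utterance,
    then restore the previous volume once the utterance ends.

    All callbacks are hypothetical stand-ins; the reference value and the
    polling interval are illustrative assumptions.
    """
    previous = get_volume()
    set_volume(min(previous, third_reference))  # at or below the reference
    while detect_speech():                      # utterance still in progress
        time.sleep(poll_s)
    set_volume(previous)                        # end of utterance: restore
```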
• A method of operating a wearable device including a microphone (eg, the microphone 220 of FIG. 2) and a communication circuit (eg, the communication circuit 230 of FIG. 2) for transmitting and receiving a control signal to and from an external device may include: acquiring a first audio signal from the external device using the microphone while the wearable device is worn on the user's body; acquiring a second audio signal from the external device using the microphone in response to the lapse of a set time; determining the type of the user's wearing space based on the first audio signal and the second audio signal; and controlling the volume of the external device using the communication circuit based on an audio output control method corresponding to the type of the wearing space.
• According to an embodiment, the controlling of the volume of the external device may include, in response to the type of the wearing space being the first type, determining the distance between the external device and the user based on the first audio signal and the second audio signal, and adjusting the volume of the external device in proportion to the determined distance.
• According to an embodiment, the determining of the type of the wearing space may include determining the type of the wearing space as the third type in response to a volume level of the second audio signal being less than or equal to a first reference value, and the controlling of the volume of the external device may include controlling the volume of the external device to a predetermined minimum volume in response to determining that the type of the wearing space is the third type.
• According to an embodiment, the determining of the type of the wearing space may include determining the type of the wearing space as the second type based on a first frequency of the first audio signal and a second frequency of the second audio signal, and the controlling of the volume of the external device may include, in response to determining that the type of the wearing space is the second type, transmitting a control signal such that the volume for each frequency of the second audio signal is maintained at the volume for each frequency of the first audio signal.
• According to an embodiment, the acquiring of the first audio signal may be performed in response to the user adjusting the volume of the external device.
• According to an embodiment, the method of operating the wearable device may further include determining whether the user's utterance is included in the second audio signal and, in response to determining that the user's utterance is included in the second audio signal, controlling the volume of the external device to be less than or equal to a third reference value.
  • the determining of the distance of the user may include determining the location of the electronic device with respect to the external device using ultra-wideband communication with the external device.
• A wearable device (eg, the wearable device 100 of FIG. 2) according to an embodiment may include a microphone (eg, the microphone 220 of FIG. 2), a communication circuit (eg, the communication circuit 230 of FIG. 2) for transmitting and receiving a control signal to and from a plurality of external devices, and at least one processor (eg, the processor 210 of FIG. 2). The at least one processor may, in a state in which the wearable device is worn on the user's body, acquire a third audio signal from each of the plurality of external devices using the microphone, acquire a fourth audio signal from each of the plurality of external devices using the microphone in response to the lapse of a set time, determine the distances between the user and each of the plurality of external devices based on the third audio signal and the fourth audio signal, and control the volume of at least one of the plurality of external devices using the communication circuit based on the determination result.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Otolaryngology (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Selective Calling Equipment (AREA)

Abstract

In an embodiment, a wearable device comprises: a microphone; a communication circuit for transmitting/receiving a control signal to/from an external device; and at least one processor electrically connected to the microphone and the communication circuit, wherein the processor may acquire a first audio signal from the external device using the microphone while the wearable device is worn on a user's body, acquire a second audio signal from the external device using the microphone in response to the lapse of the set time, determine the type of the user's wearing space on the basis of the first audio signal and the second audio signal, and control the volume of the external device using the communication circuit on the basis of an audio output control method corresponding to the type of the wearing space.
PCT/KR2022/000690 2021-01-15 2022-01-14 Wearable device for performing automatic volume control WO2022154546A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210006300A KR20220103543A (ko) Wearable device performing automatic volume control
KR10-2021-0006300 2021-01-15

Publications (1)

Publication Number Publication Date
WO2022154546A1 true WO2022154546A1 (fr) 2022-07-21

Family

ID=82447696

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/000690 WO2022154546A1 (fr) Wearable device for performing automatic volume control 2021-01-15 2022-01-14

Country Status (2)

Country Link
KR (1) KR20220103543A (fr)
WO (1) WO2022154546A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024029644A1 (fr) * 2022-08-03 2024-02-08 엘지전자 주식회사 Appareil de réception audio sans fil, appareil de transmission audio sans fil et système de sortie audio sans fil comprenant ceux-ci
WO2024106796A1 (fr) * 2022-11-15 2024-05-23 삼성전자 주식회사 Procédé de commande de réglage audio et dispositif électronique portable le prenant en charge

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130156207A1 (en) * 2011-12-16 2013-06-20 Qualcomm Incorporated Optimizing audio processing functions by dynamically compensating for variable distances between speaker(s) and microphone(s) in an accessory device
KR20160108051A * 2015-03-06 2016-09-19 Samsung Electronics Co., Ltd. Wearable electronic device and control method thereof
KR20180048044A * 2016-11-02 2018-05-10 Hyundai Motor Company Vehicle electronic device and method for providing geofencing service
KR20200065930A * 2018-11-30 2020-06-09 Samsung Electronics Co., Ltd. Electronic device and control method of electronic device
KR20200084080A * 2019-01-02 2020-07-10 Olive Union Inc. Adaptive 3D hearing system according to environmental change and noise change, and method therefor


Also Published As

Publication number Publication date
KR20220103543A (ko) 2022-07-22

Similar Documents

Publication Publication Date Title
WO2022055068A1 Electronic device for identifying command contained in voice and operating method thereof
WO2022154546A1 Wearable device for performing automatic volume control
WO2022119287A1 Electronic device including flexible display and operating method thereof
WO2022030882A1 Electronic device for processing audio data, and operating method thereof
WO2022154440A1 Electronic device for processing audio data, and operating method thereof
WO2022005248A1 Method and electronic device for detecting ambient audio signal
WO2021201429A1 Electronic device and method for controlling audio output thereof
WO2022025452A1 Electronic device and operating method of electronic device
WO2021221440A1 Sound quality improvement method and device therefor
WO2022154321A1 Electronic device for switching communication connection according to noise environment, and method for controlling same
WO2024076043A1 Electronic device and method for generating vibration sound signal
WO2023158268A1 Microphone and sensor control method based on external noise, and electronic device
WO2023080401A1 Method and device for sound recording by electronic device using earphones
WO2024076061A1 Foldable electronic device and method for reducing echo generation
WO2024034784A1 Electronic device, method, and non-transitory computer-readable storage medium for performing advertising process synchronized with advertising process in another electronic device
WO2024053931A1 Microphone switching method and electronic device
WO2024136196A1 Electronic device for outputting sound from external device, operating method thereof, and storage medium
WO2023149720A1 Electronic device comprising sensor module
WO2023085642A1 Operation control method and electronic device therefor
WO2023063627A1 Electronic device for controlling ambient sound based on audio scene, and operating method thereof
WO2022154394A1 Electronic device for reducing internal noise and operating method thereof
WO2022203456A1 Electronic device and method for processing voice signal
WO2022114648A1 Electronic device for setting background screen and operating method therefor
WO2024029728A1 Wearable electronic device for touch recognition, operating method thereof, and storage medium
WO2022186471A1 Method for providing group call service, and electronic device supporting same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22739739

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22739739

Country of ref document: EP

Kind code of ref document: A1