CN114690113A - Method and device for determining position of equipment - Google Patents

Method and device for determining position of equipment

Info

Publication number
CN114690113A
CN114690113A (application CN202011645221.6A)
Authority
CN
China
Prior art keywords
audio signal
indication information
smart
mobile phone
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011645221.6A
Other languages
Chinese (zh)
Inventor
张勇智
任革林
赵文华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011645221.6A priority Critical patent/CN114690113A/en
Publication of CN114690113A publication Critical patent/CN114690113A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S1/00: Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith
    • G01S1/72: Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith, using ultrasonic, sonic or infrasonic waves
    • G01S1/76: Systems for determining direction or position line
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S1/00: Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith
    • G01S1/72: Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith, using ultrasonic, sonic or infrasonic waves
    • G01S1/76: Systems for determining direction or position line
    • G01S1/78: Systems for determining direction or position line using amplitude comparison of signals transmitted from transducers or transducer systems having differently-oriented characteristics
    • G01S1/786: Systems for determining direction or position line using amplitude comparison of signals transmitted from transducers or transducer systems having differently-oriented characteristics, the signals being transmitted simultaneously
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00: Information sensed or collected by the things
    • G16Y20/20: Information sensed or collected by the things relating to the thing itself
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00: IoT characterised by the purpose of the information processing
    • G16Y40/30: Control

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Telephone Function (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application provides a method and a device for determining the position of equipment. The method includes: a first device sends first indication information to a second device, where the first indication information instructs the second device to send a first audio signal; the first device detects the first audio signal; and the first device determines the positional relationship between the first device and the second device according to the detection result of the first audio signal. The specific form of the first audio signal may be determined by the first device and communicated to the second device, or determined by the second device and communicated to the first device. In either case, the first device, serving as the detecting device, can more accurately distinguish the audio signal to be detected from environmental noise and apply targeted detection measures. Compared with blind detection, in which the first device does not know which audio signal to detect, this improves the accuracy of audio positioning.

Description

Method and device for determining position of equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a method and an apparatus for determining a device location.
Background
The internet of things (IoT), i.e., the internet in which everything is connected, is a network that combines various sensors with the internet to enable interconnection and intercommunication among people, machines, and things.
In some scenarios, a user wishes to determine a positional relationship between IoT devices in order to control IoT device interaction. For example, when a user enters a room where the smart sound box is located with a mobile phone, the mobile phone can transfer an audio playing function to the smart sound box to enhance a playing effect; when a user leaves a room where the smart television is located with the mobile phone, the mobile phone can turn off the screen projection function, so that power consumption is reduced.
One method for determining the position of an IoT device is positioning through audio signals. Because audio signals penetrate walls poorly, the detection result when two devices are in the same room differs greatly from the result when they are in different rooms, which makes such signals well suited to room-level positioning. However, in some scenarios the accuracy of audio positioning is low. For example, if the ambient noise is loud when an IoT device emits a sound, the sound may be masked by the noise, reducing the accuracy of audio positioning.
Disclosure of Invention
The application provides a method and a device for determining the position of equipment, which can improve the accuracy of audio positioning.
In a first aspect, a method for determining a device location is provided, including: a first device sends first indication information to a second device, where the first indication information instructs the second device to send a first audio signal; the first device detects the first audio signal; and the first device determines the positional relationship between the first device and the second device according to the detection result of the first audio signal.
In the above embodiment, the specific form of the first audio signal may be preset, or the first device may determine the specific form and notify the second device through other information, or the second device may determine the specific form and notify the first device through other information. In any case, the first device, serving as the detecting device, can more accurately distinguish the audio signal to be detected from environmental noise and apply targeted detection measures. Compared with blind detection, in which the first device does not know which audio signal to detect, this improves the accuracy of audio positioning.
Optionally, the method further comprises: the first device sends second indication information to the second device, wherein the second indication information is used for indicating physical characteristics of the first audio signal; or, the first device receives third indication information from the second device, the third indication information being used for indicating a physical characteristic of the first audio signal; the first device detecting the first audio signal, comprising: the first device detects the first audio signal according to the physical characteristic.
The first device may determine the physical characteristic of the first audio signal itself and indicate it through the second indication information. Alternatively, the second device may determine the physical characteristic, and the first device learns it from the third indication information. In either case, the first device, serving as the detecting device, can more accurately distinguish the audio signal to be detected from environmental noise and apply targeted detection measures. Compared with blind detection, in which the first device does not know the physical characteristic of the audio signal to be detected, this embodiment improves the accuracy of audio positioning.
Optionally, the physical characteristic comprises a frequency band and/or a playing volume of the first audio signal.
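As a concrete illustration of why knowing the frequency band helps, the sketch below (hypothetical Python, not part of the patent; the naive DFT and every function name are the author's assumptions) measures signal energy only inside the announced band rather than blind-scanning the whole spectrum:

```python
import math

def band_energy(samples, sample_rate, f_lo, f_hi):
    """Energy of `samples` within the band [f_lo, f_hi] Hz, via a naive DFT."""
    n = len(samples)
    energy = 0.0
    for k in range(n // 2 + 1):
        if f_lo <= k * sample_rate / n <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            energy += re * re + im * im
    return energy

def detected(samples, sample_rate, f_lo, f_hi, threshold):
    # Inspect only the band announced by the indication information,
    # rather than blind-detecting over the whole spectrum.
    return band_energy(samples, sample_rate, f_lo, f_hi) > threshold

# A 1 kHz test tone is found in a 900-1100 Hz band but not elsewhere.
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(256)]
assert detected(tone, rate, 900, 1100, threshold=100.0)
assert not detected(tone, rate, 2900, 3100, threshold=100.0)
```

A real detector would use an FFT and could combine the band check with the indicated playing volume; the threshold here is an arbitrary illustrative value.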
Optionally, the method further comprises: the first device sends fourth indication information to the second device, wherein the fourth indication information is used for indicating the content of the first audio signal; or, the first device receives fifth indication information from the second device, the fifth indication information being used for indicating the content of the first audio signal; the first device detecting the first audio signal, comprising: the first device detects the first audio signal from the content.
The content of the first audio signal may be, for example, music or birdsong. The first device may determine the content itself and indicate it through the fourth indication information. Alternatively, the second device may determine the content, and the first device learns it from the fifth indication information. In either case, the first device, serving as the detecting device, can more accurately distinguish the audio signal to be detected from environmental noise and apply targeted detection measures. Compared with blind detection, in which the first device does not know the content of the audio signal to be detected, this embodiment improves the accuracy of audio positioning.
Optionally, the second device is located in the same area as a third device, and the method further includes: the first device sends sixth indication information to the third device, where the sixth indication information instructs the third device to send a second audio signal; the first device detects the second audio signal; and the first device determines the positional relationship between the first device and the second device according to the detection result of the second audio signal.
When a plurality of devices are located in the same area as the second device, the first device can instruct these devices to emit sound and determine the positional relationship between the first device and the second device from multiple sound sources, which can improve the accuracy of audio positioning.
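One plausible way to combine several sound sources is a simple majority vote over the per-source decisions. This fusion rule, and all names and figures below, are assumptions for illustration, not something the application specifies:

```python
def same_area_vote(attenuations_db, threshold_db):
    """Decide 'same area' if most sources show attenuation at or below the threshold."""
    votes = [a <= threshold_db for a in attenuations_db]
    return sum(votes) > len(votes) / 2

# Two of three hypothetical sources show low attenuation, so the first
# device concludes it shares the area with the second device.
assert same_area_vote([12.0, 45.0, 18.0], threshold_db=30.0)
assert not same_area_vote([40.0, 45.0, 18.0], threshold_db=30.0)
```

A single source masked by noise thus no longer decides the outcome on its own, which is the robustness benefit the paragraph above describes.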
Optionally, the method further comprises: when the location relationship is that the first device and the second device are located in the same area, the first device executes a shared service with the second device; when the location relationship is that the first device and the second device are located in different areas, the first device stops executing the shared service with the second device, or the first device prompts a user to stop executing the shared service with the second device.
When the first device and the second device are in the same area, the user can use the first device and the second device at the same time, and the first device executes a sharing service (such as screen projection) with the second device, so that the experience of the user is enhanced; when the first device and the second device are in different areas, the user cannot use the first device and the second device at the same time, and at the moment, the first device stops executing the sharing service with the second device, so that the power consumption of the first device and the second device can be saved.
Optionally, the second device and the fourth device are located in a target area, and the method further includes: and when the position relationship is that the first device and the second device are located in the target area, the first device executes a sharing service with the fourth device.
The fourth device may be the same device as the third device described above, or may be a different device. When the first device and the second device are located in the same area and the second device and the fourth device are located in the same area, the first device and the fourth device are also located in the same area, and the first device can directly interact with the fourth device without detecting the position relation of the first device and the fourth device, so that the power consumption of the first device can be reduced.
Optionally, the determining, by the first device, of the positional relationship between the first device and the second device according to the detection result of the first audio signal includes: when the energy attenuation value of the first audio signal is less than or equal to an energy threshold, the first device determines that the first device and the second device are located in the same area; when the energy attenuation value of the first audio signal is greater than the energy threshold, the first device determines that the first device and the second device are located in different areas.
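The decision rule above can be sketched as follows; the dB values and the threshold are illustrative assumptions, not figures from the application:

```python
def positional_relationship(emitted_db, received_db, energy_threshold_db):
    """Compare the energy attenuation of the first audio signal to a threshold."""
    attenuation = emitted_db - received_db
    if attenuation <= energy_threshold_db:
        return "same area"        # little loss: no wall between the devices
    return "different areas"      # heavy loss: the signal likely crossed a wall

assert positional_relationship(70.0, 55.0, 30.0) == "same area"        # 15 dB loss
assert positional_relationship(70.0, 30.0, 30.0) == "different areas"  # 40 dB loss
```

The threshold would in practice be calibrated to the emitting device's playing volume and the detecting device's microphone sensitivity.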
Optionally, the first audio signal comprises audio signals of a plurality of channels.
The first device detecting based on the multi-channel sound emitted by the second device may improve the accuracy of audio localization.
In a second aspect, the present application provides another method of determining a location of a device, comprising:
the first device receives seventh indication information from the second device, where the seventh indication information indicates that the second device is about to send, or is sending, the first audio signal;
the first device detecting the first audio signal;
and the first equipment determines the position relation between the first equipment and the second equipment according to the detection result of the first audio signal.
In the above embodiment, the specific form of the first audio signal may be preset, or the second device may notify the first device of the specific form through other information. In any case, the first device, serving as the detecting device, knows which audio signal to detect and can apply targeted detection measures. Compared with blind detection, in which the first device does not know which audio signal to detect, this embodiment improves the accuracy of audio positioning.
Optionally, the method further comprises:
the first device receiving eighth indication information from the second device, the eighth indication information indicating a physical characteristic of the first audio signal;
the first device detecting the first audio signal, comprising:
the first device detects the first audio signal according to the physical characteristic.
The first device can determine the physical characteristics of the audio signal to be detected (i.e., the first audio signal) according to the eighth indication information, so that a targeted detection measure can be taken for detection.
Optionally, the physical characteristic comprises a frequency band and/or a playing volume of the first audio signal.
Optionally, the method further comprises:
the first device receiving ninth indication information from the second device, the ninth indication information indicating content of the first audio signal;
the first device detecting the first audio signal, comprising:
the first device detects the first audio signal from the content.
The content of the first audio signal may be music or a bird song. The first device may determine the content of the first audio signal through the ninth indication information, so as to perform detection by taking a targeted detection measure.
Optionally, the second device is located in the same area as a third device, and the method further includes:
the first device sends sixth indication information to the third device, wherein the sixth indication information is used for indicating the third device to send a second audio signal;
the first device detecting the second audio signal;
and the first device determines the positional relationship between the first device and the second device according to the detection result of the second audio signal.
When a plurality of devices are located in the same area as the second device, the first device can instruct these devices to emit sound and determine the positional relationship between the first device and the second device from multiple sound sources, which can improve the accuracy of audio positioning.
Optionally, the method further comprises:
when the location relationship is that the first device and the second device are located in the same area, the first device executes a sharing service with the second device;
when the location relationship is that the first device and the second device are located in different areas, the first device stops executing the shared service with the second device, or the first device prompts a user to stop executing the shared service with the second device.
When the first device and the second device are in the same area, a user can use the first device and the second device at the same time, and at the moment, the first device executes a shared service (such as screen projection) with the second device, so that the experience of the user is enhanced; when the first device and the second device are in different areas, the user cannot use the first device and the second device at the same time, and at the moment, the first device stops executing the sharing service with the second device, so that the power consumption of the first device and the second device can be saved.
Optionally, the second device and the fourth device are located in a target area, and the method further includes:
and when the position relationship is that the first device and the second device are located in the target area, the first device executes a sharing service with the fourth device.
The fourth device may be the same device as the third device described above, or may be a different device. When the first device and the second device are located in the same area and the second device and the fourth device are located in the same area, the first device and the fourth device are also located in the same area, and the first device can directly interact with the fourth device without detecting the position relation of the first device and the fourth device, so that the power consumption of the first device can be reduced.
Optionally, the determining, by the first device, a position relationship between the first device and the second device according to the detection result of the first audio signal includes:
when the energy attenuation value of the first audio signal is smaller than or equal to an energy threshold value, the first device determines that the position relationship is that the first device and the second device are located in the same region;
when the energy attenuation value of the first audio signal is greater than the energy threshold, the first device determines that the location relationship is that the first device and the second device are located in different regions.
Optionally, the first audio signal comprises audio signals of a plurality of channels.
The first device detecting based on the multi-channel sound emitted by the second device may improve the accuracy of audio localization.
In a third aspect, the present application provides an apparatus for determining a device location, comprising means for performing the method of the first aspect or the second aspect. The device can be a terminal device or a chip in the terminal device. The apparatus may include a transmitting unit, a receiving unit, and a processing unit. The processing unit may be a processor, the transmitting unit and the receiving unit may be transceivers or communication interfaces; the terminal device may further include a storage unit, which may be a memory; the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit to cause the apparatus to perform the method according to the first aspect or the second aspect.
In a fourth aspect, the present application provides a system for determining a device location, comprising a sound source device and the apparatus of the third aspect, the sound source device being configured to: receiving first indication information, wherein the first indication information is used for indicating the sound source equipment to send a first audio signal; and transmitting the first audio signal according to the first indication information.
In a fifth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the method of the first or second aspect.
In a sixth aspect, the present application provides a computer program product comprising: computer program code which, when executed by a processor, causes the processor to perform the method of the first or second aspect.
Drawings
Fig. 1 is a schematic diagram of an IoT system suitable for use in the present application;
FIG. 2 is a schematic diagram of a method of determining a device location provided herein;
FIG. 3 is a schematic diagram of another method of determining a device location provided herein;
FIG. 4 is a schematic diagram of an audio detection method provided herein;
FIG. 5 is a schematic diagram of a method of determining a device location based on multiple sound sources provided herein;
FIG. 6 is a schematic diagram of another method provided herein for determining a device location based on multiple sound sources;
FIG. 7 is a schematic diagram of yet another method of determining a device location provided herein;
FIG. 8 is a schematic diagram of an apparatus for determining a location of a device provided herein;
FIG. 9 is a schematic diagram of another apparatus for determining a location of a device provided herein;
fig. 10 is a schematic diagram of another apparatus for determining a device location provided herein.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an IoT system 100 suitable for use in the present application, where the IoT system 100 includes a handset 110, a smart tv 120, a smart speaker 130, a smart watch 140, and a router 150, which may be referred to as IoT devices.
The user can send an instruction to the smart tv 120 through the mobile phone 110, the instruction is transmitted to the smart tv 120 via the router 150, and the smart tv 120 performs corresponding operations according to the instruction, such as turning on a camera, a screen, a microphone and a speaker. The mobile phone 110 may also send the instruction directly to the smart tv 120, for example, send the instruction to the smart tv 120 through an infrared signal.
The user may also send an instruction to the smart sound box 130 through the mobile phone 110, where the instruction is transmitted to the smart sound box 130 through the bluetooth connection between the mobile phone 110 and the smart sound box 130, and the smart sound box 130 performs a corresponding operation according to the instruction, such as turning on a speaker or a microphone.
The user may also send an instruction to the smart watch 140 through the mobile phone 110, where the instruction is transmitted to the smart watch 140 through the bluetooth connection between the mobile phone 110 and the smart watch 140, and the smart watch 140 performs a corresponding operation according to the instruction, such as turning on a speaker or a microphone.
IoT system 100 is one example, but not all, of an IoT system suitable for use in the present application. For example, in the IoT system applied to the present application, IoT devices may also communicate with each other through a wired connection; the user may control the smart tv 120, the smart speaker 130, and the smart watch 140 through an Augmented Reality (AR) device or a Virtual Reality (VR) device.
In the IoT system 100, IoT devices need to determine the positional relationship between each other in order to interact. The positional relationship may be the relative location of two devices in a connected space. For example, the mobile phone 110 and the smart tv 120 both being located in the living room is one positional relationship between the mobile phone 110 and the smart tv 120; the smart tv 120 being in the living room while the mobile phone 110 is in the bedroom is another.
When the mobile phone 110 determines that it is located in the same room as the smart sound box 130, the mobile phone 110 may transfer the audio playing function to the smart sound box 130 to enhance the playing effect; when the mobile phone 110 determines that it is in a different room than the smart tv 120, the mobile phone 110 may turn off the screen projection function to reduce power consumption.
One method for determining the position of an IoT device is positioning through audio signals. Because audio signals penetrate walls poorly, the detection result when two devices are in the same room differs greatly from the result when they are in different rooms, which makes such signals well suited to room-level positioning. However, in some scenarios the accuracy of audio positioning is low. For example, if the ambient noise is loud when an IoT device emits a sound, the sound may be masked by the noise, reducing the accuracy of audio positioning.
The present application provides a method 200 of determining a device location that can improve the accuracy of audio positioning. The method 200 may be performed by an IoT device or a chip in an IoT device. As shown in fig. 2, the method 200 includes the following steps.
S210, the first device sends first indication information to the second device, wherein the first indication information is used for indicating the second device to send a first audio signal.
In this application, the adjectives "first", "second", etc. are used only to distinguish different individuals of the same type of object and carry no other limitation. For example, the first device and the second device may be any two IoT devices in the IoT system 100. The method 200 is described below by taking the first device as the mobile phone 110 and the second device as the smart tv 120 as an example.
The specific form (physical characteristics and/or content) of the first audio signal may be preset, or determined by the mobile phone 110 and notified to the smart tv 120, or determined by the smart tv 120 and notified to the mobile phone 110 through other messages. How the mobile phone 110 determines the specific form of the first audio signal is not limited in this application.
For example, the mobile phone 110 may instruct the smart tv 120 to transmit the first audio signal through the first indication information, and the smart tv 120 may transmit a preset first audio signal according to the first indication information.
For another example, after the mobile phone 110 sends the first indication information to the smart tv 120, it sends second indication information and/or fourth indication information to the smart tv 120, where the second indication information is used to indicate a physical characteristic of the first audio signal, and the fourth indication information is used to indicate content of the first audio signal; alternatively, the first indication information, the second indication information, and the fourth indication information may be transmitted simultaneously. The mobile phone 110 may also directly send the playing file of the first audio signal to the smart tv 120 instead of sending the second indication information and/or the fourth indication information.
For another example, after receiving the first indication information from the mobile phone 110, the smart tv 120 may download the first audio signal from a server and send it. Optionally, before transmitting the first audio signal, the smart tv 120 may transmit third indication information and/or fifth indication information to the mobile phone 110, where the third indication information indicates a physical characteristic of the first audio signal and the fifth indication information indicates the content of the first audio signal. Optionally, the third indication information and the fifth indication information may be sent at the same time, or separately.
The physical characteristic of the first audio signal may be a frequency band and/or a playing volume. The physical characteristics of the first audio signal are not limited by this application.
The mobile phone 110 may determine the physical characteristic of the first audio signal itself and indicate it through the second indication information. Alternatively, the smart tv 120 may determine the physical characteristic, and the mobile phone 110 learns it from the third indication information. In either case, the mobile phone 110, serving as the detecting device, can more accurately distinguish the audio signal to be detected from environmental noise and apply targeted detection measures. Compared with blind detection, in which the mobile phone 110 does not know the physical characteristic of the audio signal to be detected, this embodiment improves the accuracy of audio positioning.
The content of the first audio signal may be music or a bird song. The content of the first audio signal is not limited in this application.
The handset 110 may determine the content of the first audio signal by itself and indicate the content through the fourth indication information. Alternatively, the content of the first audio signal may be determined by the smart tv 120, in which case the cell phone 110 may learn it from the fifth indication information. In either case, the mobile phone 110 serving as the detection device can more accurately pick out the audio signal to be detected from the environmental noise and take a targeted detection measure; compared with the mobile phone 110 performing blind detection without knowing the content of the audio signal to be detected, the present embodiment can improve the accuracy of audio positioning.
After receiving the first indication information, the smart tv 120 may send the first audio signal immediately, or send the first audio signal at a time agreed by both parties (the smart tv 120 and the mobile phone 110).
After the first device transmits the first indication information, the following steps may be performed.
S220, the first device detects the first audio signal.
The detection of the first audio signal may be performed by a low-power audio detection device.
For example, the mobile phone 110 may enter a sleep state after sending the first indication information, reducing the operating frequency of the processor while keeping the audio detection device running normally; after the audio detection device receives the first audio signal, it triggers the mobile phone 110 to exit the sleep state, and the processor can restore its normal operating frequency to analyze the first audio signal. In this way, the power consumption of the audio detection process can be reduced.
S230, the first device determines the position relation between the first device and the second device according to the detection result of the first audio signal.
For example, the mobile phone 110 may identify the first audio signal according to its physical characteristics and/or content, detect the received energy of the first audio signal, and determine an energy attenuation value of the first audio signal from the transmitted energy and the received energy; this attenuation value is the detection result of the first audio signal. The mobile phone 110 may then determine the position relationship between the mobile phone 110 and the smart tv 120 according to the energy attenuation value.
In the above example, the energy of the audio signal is related to its amplitude: the larger the amplitude, the larger the energy value. The handset 110 may sample the first audio signal and detect its amplitude to determine the received energy of the first audio signal.
Optionally, when the energy attenuation value of the first audio signal is less than or equal to the energy threshold, the mobile phone 110 determines that the location relationship is that the mobile phone 110 and the smart television 120 are located in the same region; when the energy attenuation value of the first audio signal is greater than the energy threshold value, the mobile phone 110 determines that the location relationship is that the mobile phone 110 and the smart television 120 are located in different areas.
In addition, the delay of a sound signal reflects the distance it has traveled, so the mobile phone 110 can also determine the position relationship between the mobile phone 110 and the smart tv 120 based on the delay of the first audio signal. For example, the mobile phone 110 may instruct the smart tv 120, through the first indication information, to transmit the first audio signal at time A; the mobile phone 110 then detects the first audio signal at time B, which differs from time A by 10 ms, and based on the propagation speed of sound in air being 340 m/s, the mobile phone 110 may determine that it is 3.4 m away from the smart tv 120. If the cell phone 110 determines that the current area is the living room according to other information (e.g., a strong Wi-Fi signal in the living room), the cell phone 110 may determine that the current area is the same as the area of the smart television 120; if the cell phone 110 determines that the current area is a bedroom based on other information (e.g., the Wi-Fi signal in the living room is weak), the cell phone 110 may determine that the current area is different from the area of the smart tv 120.
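The delay-to-distance computation in this example is simple enough to sketch directly; the function name and interface below are illustrative, not from the patent:

```python
# Illustrative sketch of the delay-based distance estimate described above.
SPEED_OF_SOUND_M_S = 340.0  # propagation speed of sound in air, per the text

def distance_from_delay(send_time_s: float, detect_time_s: float) -> float:
    """Estimate the distance to the sound source from the propagation delay."""
    delay_s = detect_time_s - send_time_s
    if delay_s < 0:
        raise ValueError("detection time must not precede the send time")
    return SPEED_OF_SOUND_M_S * delay_s

# A 10 ms delay (time B minus time A) yields the 3.4 m figure in the example.
```

In practice the two devices' clocks would need to be synchronized for time A and time B to be comparable; the patent leaves that mechanism open.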
The specific manner in which the handset 110 detects the first audio signal is not limited in this application.
In the method 200, the mobile phone 110 may determine the physical characteristics and/or the content of the audio signal to be detected before the detection, so that it can more accurately pick out the audio signal to be detected from the environmental noise and adopt a targeted detection measure; compared with performing blind detection without knowing the audio signal to be detected, the method 200 can improve the accuracy of audio positioning.
The above-mentioned targeted detection measures may be the following schemes.
Scheme 1: the smart tv 120 may add a preamble to the first audio signal, i.e., the first audio signal is composed of a preamble part and a non-preamble part. The preamble may comprise tones of 600 Hz, 1200 Hz, 2100 Hz, and 2700 Hz: the 600 Hz and 2100 Hz tones are transmitted simultaneously for 300 ms, then the 1200 Hz and 2700 Hz tones are transmitted simultaneously for 300 ms, and the two groups of tones may alternate several times. The specific frequencies and the alternating pattern make it easier for the mobile phone 110 to distinguish the first audio signal from the ambient noise.
After the preamble is sent, the smart tv 120 starts to send the non-preamble part, which may be a sound with a fixed duration (e.g., 1000 ms); when calculating the energy of the first audio signal, the handset 110 only accumulates the energy of the sound within that fixed duration. In addition, the handset 110 needs to pass the received sound through a band-pass filter to remove the environmental noise, so that the energy of the non-preamble part can be calculated accurately.
Scheme 2: the smart tv 120 sends an audio signal with a specific sequence, such as the preamble in scheme 1, as the first audio signal, and the handset 110 only detects the energy of the signal at specific frequencies (e.g., 600 Hz and 2100 Hz).
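The scheme-1 preamble can be sketched as follows; the sampling rate is an assumption for illustration, since the patent does not fix one:

```python
import math

SAMPLE_RATE_HZ = 8000  # assumed sampling rate; the patent does not specify one

def tone_pair(freq_a_hz: float, freq_b_hz: float, duration_ms: int) -> list:
    """Synthesize the sum of two simultaneous tones: one preamble segment."""
    n = SAMPLE_RATE_HZ * duration_ms // 1000
    return [math.sin(2 * math.pi * freq_a_hz * m / SAMPLE_RATE_HZ)
            + math.sin(2 * math.pi * freq_b_hz * m / SAMPLE_RATE_HZ)
            for m in range(n)]

def preamble(repetitions: int = 3) -> list:
    """Alternate (600 Hz + 2100 Hz) and (1200 Hz + 2700 Hz), 300 ms each."""
    samples = []
    for _ in range(repetitions):
        samples += tone_pair(600, 2100, 300)
        samples += tone_pair(1200, 2700, 300)
    return samples
```

The resulting sample list would be written to the speaker; on the detection side, the handset would look for energy at exactly these frequencies, as scheme 2 describes.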
The method for determining the device location provided by the present application will be further described below with reference to specific examples.
Fig. 3 illustrates a method for determining the position relationship between the mobile phone 110 and the smart tv 120.
When the handset 110 determines that the handset 110 and the smart tv 120 are located in the same local area network, such as a wireless fidelity (Wi-Fi) network of the router 150, the handset 110 may perform the following steps.
S310, the mobile phone 110 sends an audio detection request to the smart tv 120 through the router 150, where the audio detection request is used to instruct the smart tv 120 to send a first audio signal. The audio detection request may or may not include a playback file of the first audio signal. When the audio detection request does not include the playing file of the first audio signal, the cell phone 110 may indicate the physical characteristics and/or content of the first audio signal through the audio detection request, or the smart tv 120 may determine the physical characteristics and/or content of the first audio signal by itself.
S320, after receiving the audio detection request, the smart television 120 may send a response message to the cell phone 110, where the response message indicates that the smart television 120 has received the audio detection request. Optionally, the response message may include the physical characteristics and/or content of the first audio signal; for example, the response message includes the frequency band and playing volume of a piece of music (the physical characteristics of the first audio signal) and the music itself (the content of the first audio signal).
In S310 and S320, the audio detection request is sent by the detection device (the handset 110). Optionally, the audio detection request may also be sent by the playback device (smart tv 120), and the playback device may also send the physical characteristics and/or content of the first audio signal to the detection device.
And S330, audio detection.
The steps of audio detection are shown in fig. 4.
The handset 110 performs feature detection after initiating detection, i.e., determining whether a first audio signal is received.
For example, the handset 110 determines that the first audio signal is characterized as music at 80-100 Hz. When the handset 110 does not detect music at 80-100 Hz, it determines that the feature detection fails, which may be because a wall blocks the first audio signal and weakens its energy. The mobile phone 110 may then determine, according to this detection result, that the mobile phone 110 and the smart tv 120 are located in different rooms.
In the above example, the first audio signal may be obtained by modulating a digital signal, for example, the smart television 120 may synthesize two voltage signals with frequencies of 80Hz and 100Hz by a Direct Digital Synthesizer (DDS), then the two voltage signals are modulated by a hardware multiplier and then output to a speaker, and the speaker converts the modulated voltage signal into the first audio signal and sends the first audio signal.
The mobile phone 110 may convert all received audio signals into voltage signals through a microphone, and then remove the voltage signals caused by noise (e.g., voltage signals outside 60-120 Hz) through a filter, so as to extract the first audio signal; the filtered voltage signal is sampled by an analog-to-digital converter (ADC) into a digital signal, and Fourier analysis can then determine whether the digital signal includes components at 80 Hz and 100 Hz. Alternatively, if the mobile phone 110 determines that the first audio signal is a monophonic signal, the digital signal may be analyzed using the Goertzel algorithm; if the mobile phone 110 determines that the first audio signal is a non-monophonic signal (e.g., a chord signal), the digital signal may be analyzed using an envelope detection method such as the Hilbert transform. The Goertzel algorithm and the Hilbert transform are both Fourier-transform-based analysis methods.
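The Goertzel single-frequency check mentioned above can be sketched as follows; the sampling rate in the usage note is an illustrative assumption:

```python
import math

def goertzel_power(samples: list, sample_rate_hz: float, target_hz: float) -> float:
    """Goertzel algorithm: signal power at a single target frequency bin."""
    n = len(samples)
    k = round(n * target_hz / sample_rate_hz)  # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the k-th DFT bin
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2
```

Applied to one second of samples at, say, a 1 kHz sampling rate, the power at the 80 Hz and 100 Hz bins of a signal containing those tones dominates the power at any empty bin, which is how the feature check could be decided.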
When the mobile phone 110 detects music at 80-100 Hz based on the above method, it is determined that the feature detection is successful, and the mobile phone 110 may perform subsequent detection, that is, energy detection.
The handset 110 may determine the positional relationship of the handset 110 and the smart tv 120 based on the energy attenuation value of the audio signal.
For example, the handset 110 may determine the energy of the detected audio signal according to equation (1).
E = Σ_{m=T0}^{T0+N-1} x(m)²   (1)

where E is the detected energy of the audio signal (i.e., Edetection), N is the total number of sampling points in the statistical period, T0 is the start time of the statistical period, m denotes the m-th sampling point, and x(m) is the signal amplitude of the m-th sampling point.
When Ereference - Edetection is less than or equal to the energy threshold X, the mobile phone 110 may determine that the mobile phone 110 and the smart television 120 are located in the same room; when Ereference - Edetection is greater than the energy threshold X, the cell phone 110 may determine that the cell phone 110 is located in a different room from the smart tv 120. Ereference is the energy of the audio signal as played by the smart television 120, and Ereference - Edetection is the energy attenuation value.
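Equation (1) and the attenuation check can be sketched directly; the function names and the example threshold are illustrative, not from the patent:

```python
def signal_energy(samples: list) -> float:
    """Equation (1): sum of squared amplitudes over the statistical period."""
    return sum(x * x for x in samples)

def same_room(e_reference: float, e_detection: float, threshold_x: float) -> bool:
    """Same room when the attenuation Ereference - Edetection is at most X."""
    return (e_reference - e_detection) <= threshold_x
```

The reference energy Ereference would come from the playback device (e.g., via the response message of S320), while Edetection is computed from the microphone samples.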
The handset 110 may also determine the positional relationship of the handset 110 to the smart tv 120 based on the audio signal coefficients.
The audio signal coefficient Elosscoef can be determined by equation (2).
Elosscoef=1-sqrt(Edetection/Ereference) (2);
When Edetection is less than or equal to M1, or when Edetection > M2 and Elosscoef > M3, the mobile phone 110 may determine that the mobile phone 110 and the smart television 120 are located in different rooms; in other cases, for example, when Edetection > M1 and the above conditions are not met, the cell phone 110 may determine that the cell phone 110 is located in the same room as the smart tv 120. The specific manner of audio detection is not limited in this application.
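Equation (2) and this decision rule can be written out directly; M1, M2, and M3 are the thresholds named in the text, whose values the patent leaves open:

```python
import math

def loss_coefficient(e_detection: float, e_reference: float) -> float:
    """Equation (2): Elosscoef = 1 - sqrt(Edetection / Ereference)."""
    return 1.0 - math.sqrt(e_detection / e_reference)

def in_different_rooms(e_detection: float, e_reference: float,
                       m1: float, m2: float, m3: float) -> bool:
    """Different rooms when Edetection <= M1, or when Edetection > M2
    and Elosscoef > M3; otherwise the devices are taken to share a room."""
    if e_detection <= m1:
        return True
    return e_detection > m2 and loss_coefficient(e_detection, e_reference) > m3
```

Intuitively, M1 catches signals too weak to be in the same room at all, while the M2/M3 pair catches signals that are audible but heavily attenuated relative to the reference.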
After the mobile phone 110 determines the position relationship between the mobile phone 110 and the smart tv 120, the detection result (the position relationship) may be sent to the smart tv 120, that is, S340 is executed.
When the mobile phone 110 and the smart tv 120 are located in the same room, the mobile phone 110 may execute a sharing service with the smart tv 120; for example, the mobile phone 110 may project its screen onto the smart tv 120, which can enhance the user's experience. When the mobile phone 110 and the smart tv 120 are located in different rooms, the mobile phone 110 may stop executing the sharing service with the smart tv 120, or may prompt the user to stop executing it; for example, the mobile phone 110 may stop projecting its screen onto the smart tv 120, or may prompt the user to stop the screen projection, which can save power on both the mobile phone 110 and the smart tv 120.
In some scenarios, the handset 110 may also perform shared traffic with other IoT devices based on the positional relationships of the smart tv 120 and the other IoT devices.
For example, the smart tv 120 and the smart speaker 130 can communicate through a Bluetooth connection, so the smart tv 120 may determine that the smart tv 120 and the smart speaker 130 are located in the same room, and the smart tv 120 may notify the mobile phone 110 of this location relationship. When the mobile phone 110 and the smart tv 120 are located in the same room, the mobile phone 110 may determine that the mobile phone 110 and the smart speaker 130 are also located in the same room, and the mobile phone 110 may interact with the smart speaker 130, for example, playing an audio file from the mobile phone 110 through the smart speaker 130.
In the above embodiment, the mobile phone 110 does not need to detect the position relationship between the mobile phone 110 and the smart sound box 130, and can directly interact with the smart sound box 130, thereby reducing the power consumption of the mobile phone 110.
The above describes a method in which the cellular phone 110 determines the positional relationship based on a single sound source, and alternatively, the cellular phone 110 may determine the positional relationship from a plurality of sound sources.
As shown in fig. 5, the smart tv 120 and the smart speaker 130 can communicate through a Bluetooth connection, so the smart tv 120 may determine that the smart tv 120 and the smart speaker 130 are located in the same room, and the smart tv 120 may notify the mobile phone 110 of this location relationship. The mobile phone 110 may send sixth indication information to the smart speaker 130, where the sixth indication information is used to instruct the smart speaker 130 to send the second audio signal; the mobile phone 110 may send the sixth indication information to the smart speaker 130 directly, or through the smart television 120.
The specific form (physical characteristics and/or content) of the second audio signal may be preset, or the specific form of the second audio signal may be determined by the handset 110 and communicated to the smart speaker 130, or the specific form of the second audio signal may be determined by the smart speaker 130 and communicated to the handset 110 via other messages. The method of determining the specific form of the second audio signal by the handset 110 is not limited in this application.
Subsequently, the mobile phone 110 may detect the second audio signal, and determine the position relationship between the mobile phone 110 and the smart tv 120 according to the detection result of the second audio signal.
For example, when the energy attenuation value of the second audio signal is less than or equal to the energy threshold, the cell phone 110 determines that the cell phone 110 and the smart sound box 130 are located in the same room; when the energy attenuation value of the second audio signal is greater than the energy threshold, the cell phone 110 determines that the cell phone 110 and the smart sound box 130 are located in different rooms.
The second audio signal and the first audio signal may or may not be played simultaneously. Since the positional relationship between the smart tv 120 and the smart sound box 130 is known, the mobile phone 110 can determine whether the mobile phone 110 is located in the same area based on a plurality of sound sources located in the area, so that the accuracy of audio localization can be improved.
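One way to combine several colocated sound sources is a vote over the per-source attenuation checks. The patent does not specify a combining rule, so the majority vote below is an assumption for illustration:

```python
def same_area_multi(attenuation_values: list, threshold: float) -> bool:
    """Majority vote over per-source attenuation checks (assumed combining
    rule): each source whose attenuation is at most the threshold votes
    'same area' as the group of colocated sources."""
    votes = sum(1 for a in attenuation_values if a <= threshold)
    return 2 * votes >= len(attenuation_values)
```

Because the sources are known to share one area, agreement among several of them is stronger evidence than any single measurement, which is the stated reason multiple sources improve accuracy.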
Alternatively, the multiple sound sources used by the handset 110 to determine the positional relationship may also originate from the same device.
As shown in fig. 6, the first audio signal played by the smart television 120 includes a left channel audio signal and a right channel audio signal, and the left channel audio signal and the right channel audio signal may be played at the same time or may not be played at the same time. The cellular phone 110 can determine whether the cellular phone 110 is located in the same area based on a plurality of sound sources located in the area, so that the accuracy of audio localization can be improved.
In the method for determining the device location described above, the mobile phone 110 plays the role of a master device and instructs the smart tv 120 to transmit the first audio signal based on a request of a user (e.g., a detection command triggered by the user on the mobile phone 110) or a preset rule. Optionally, the mobile phone 110 may also detect the first audio signal based on an instruction of the smart tv 120 and determine the location relationship between the devices according to the detection result of the first audio signal.
The method for the handset 110 to detect the first audio signal based on the indication of the smart tv 120 is shown in fig. 7. The method 700 includes the following steps.
S710, the first device receives seventh indication information from the second device, where the seventh indication information is used to indicate that the second device is about to send the first audio signal or is sending the first audio signal.
S720, the first device detects the first audio signal.
And S730, the first device determines the position relation between the first device and the second device according to the detection result of the first audio signal.
The first device is, for example, a cell phone 110 and the second device is, for example, a smart tv 120. The specific form of the first audio signal in the method 700 is the same as the specific form of the first audio signal in the method 200, for example, the first audio signal in the method 700 may be sounds of various frequency bands or volumes, and the sounds may be music or bird sounds.
The smart television 120 may download the playing file of the first audio signal from the server, or may obtain the playing file of the first audio signal from the mobile phone 110, and the source of the first audio signal in the method 700 is not limited in this application.
For example, the smart tv 120 is playing a music file downloaded from a music server, and at this time the smart tv 120 learns that the mobile phone 110 has accessed the wireless network through the home gateway (the router 150). The smart tv 120 may send the seventh indication information to the mobile phone 110 through the home gateway, instructing the mobile phone 110 to start the device location detection process and begin detecting the first audio signal. In this example, the smart tv 120 does not need to interrupt the music playback, so the mobile phone 110 can determine the position relationship between the mobile phone 110 and the smart tv 120 without affecting the current service.
Optionally, the smart tv 120 may notify the cell phone 110 of the physical characteristics (e.g., volume and/or frequency) of the music being played through the eighth indication information, and the smart tv 120 may also notify the cell phone 110 of the content (e.g., mozart music) of the music being played through the ninth indication information, so that the cell phone 110 may take a specific detection measure to perform detection.
For another example, when the smart tv 120 knows that the mobile phone 110 accesses the wireless network from the home gateway (the router 150), the smart tv 120 may send a seventh indication message to the mobile phone 110 through the home gateway to indicate the mobile phone 110 to start the device location detection process and start detecting the first audio signal; the smart tv 120 may play a preset voice of "welcome home" after transmitting the seventh indication message.
In the above embodiment, the specific form of the first audio signal may be preset, or the second device may notify the first device of the specific form through other information. In either case, the first device, as the detection device, knows the audio signal to be detected and can take a targeted detection measure; compared with the first device performing blind detection without knowing the audio signal to be detected, the present embodiment can improve the accuracy of audio positioning.
In the method 700, the specific manner of detecting the first audio signal by the mobile phone 110 is the same as the specific manner of detecting the first audio signal by the mobile phone 110 in the method 200, and is not described herein again.
In addition, in the method 700, the mobile phone 110 may further determine the position relationship between the mobile phone 110 and the smart television 120 according to the detection result of the second audio signal, and the specific implementation manner may refer to an embodiment of the method 200 in which the mobile phone 110 detects the second audio signal.
After the position relationship is determined, the mobile phone 110 may execute a sharing service with the smart tv 120, such as screen projection, and the specific implementation may refer to an embodiment of the method 200 in which the mobile phone 110 executes the sharing service.
Examples of the methods of determining a device location provided herein are described in detail above. It is understood that the corresponding apparatus contains hardware structures and/or software modules corresponding to the respective functions for implementing the functions described above. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The present application may perform the division of the functional units for the apparatus for determining the device location according to the method example described above, for example, each function may be divided into each functional unit, or two or more functions may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the units in the present application is schematic, and is only one division of logic functions, and there may be another division manner in actual implementation.
Fig. 8 shows a schematic structural diagram of an apparatus for determining a device location provided in the present application. The apparatus 800 comprises a processing unit 810, a transmitting unit 820 and a receiving unit 830. The transmitting unit 820 can perform a transmitting operation under the control of the processing unit 810, and the receiving unit 830 can perform a receiving operation under the control of the processing unit 810.
The sending unit 820 is configured to: send first indication information to a second device, where the first indication information is used to instruct the second device to send a first audio signal;
the receiving unit 830 is configured to: detecting the first audio signal;
the processing unit 810 is configured to: and determining the position relation between the device 800 and the second equipment according to the detection result of the first audio signal.
Optionally, the sending unit 820 is further configured to: sending second indication information to the second device, the second indication information indicating a physical characteristic of the first audio signal; alternatively, the receiving unit 830 is further configured to: receiving third indication information from the second device, the third indication information indicating a physical characteristic of the first audio signal;
the processing unit 810 is specifically configured to: detecting the first audio signal according to the physical feature.
Optionally, the physical characteristic comprises a frequency band and/or a playing volume of the first audio signal.
Optionally, the sending unit 820 is further configured to: sending fourth indication information to the second device, wherein the fourth indication information is used for indicating the content of the first audio signal; alternatively, the receiving unit 830 is further configured to: receiving fifth indication information from the second device, the fifth indication information indicating content of the first audio signal;
the processing unit 810 is specifically configured to: the first audio signal is detected according to the content.
Optionally, the second device is located in the same area as a third device,
the sending unit 820 is further configured to: sending sixth indication information to the third device, where the sixth indication information is used to indicate the third device to send a second audio signal;
the receiving unit 830 is further configured to: detecting the second audio signal;
the processing unit 810 is further configured to: and determining the position relation between the device 800 and the second equipment according to the detection result of the second audio signal.
Optionally, the processing unit 810 is further configured to:
when the position relationship is that the apparatus 800 and the second device are located in the same area, executing a sharing service with the second device;
when the position relationship indicates that the apparatus 800 and the second device are located in different areas, stopping executing the shared service with the second device, or prompting the user to stop executing the shared service with the second device.
Optionally, the second device and the fourth device are located in a target area, and the processing unit 810 is further configured to:
and when the position relationship is that the apparatus 800 and the second device are located in the target area, executing a shared service with the fourth device.
Optionally, the processing unit 810 is specifically configured to:
when the energy attenuation value of the first audio signal is less than or equal to an energy threshold, determining that the position relationship is that the apparatus 800 and the second device are located in the same region;
when the energy attenuation value of the first audio signal is greater than the energy threshold, it is determined that the apparatus 800 and the second device are located in different regions.
Optionally, the first audio signal comprises audio signals of a plurality of channels.
The specific manner in which the apparatus 800 performs the method for determining the location of a device and the resulting beneficial effects may be seen in the associated description of the method embodiments.
Fig. 9 is a schematic structural diagram of another apparatus for determining a device location provided in the present application. The apparatus 900 comprises a processing unit 910 and a receiving unit 920. The receiving unit 920 can perform a receiving operation under the control of the processing unit 910.
The receiving unit 920 is configured to: receiving seventh indication information from a second device, the seventh indication information indicating that the second device is about to transmit a first audio signal or is transmitting a first audio signal; detecting the first audio signal;
the processing unit 910 is configured to: and determining the position relation between the device 900 and the second equipment according to the detection result of the first audio signal.
Optionally, the receiving unit 920 is further configured to: receive eighth indication information from the second device, the eighth indication information indicating a physical characteristic of the first audio signal; the processing unit 910 is specifically configured to: detect the first audio signal according to the physical characteristic.
Optionally, the physical characteristic comprises a frequency band and/or a playing volume of the first audio signal.
Optionally, the receiving unit 920 is further configured to: receiving ninth indication information from the second device, the ninth indication information indicating content of the first audio signal; the processing unit 910 is specifically configured to: the first audio signal is detected according to the content.
Optionally, the second device and the third device are located in the same area, and the apparatus 900 further includes a sending unit, configured to: sending sixth indication information to the third device, where the sixth indication information is used to indicate the third device to send a second audio signal;
the receiving unit 920 is further configured to: detecting the second audio signal;
the processing unit 910 is further configured to: and determining the position relationship between the apparatus 900 and the second device according to the detection result of the second audio signal.
Optionally, the processing unit 910 is further configured to:
when the position relationship is that the apparatus 900 and the second device are located in the same area, executing a sharing service with the second device;
when the location relationship indicates that the apparatus 900 and the second device are located in different areas, the execution of the shared service with the second device is stopped, or the user is prompted to stop executing the shared service with the second device.
Optionally, the second device and the fourth device are located in a target area, and the processing unit 910 is further configured to:
when the location relationship is that the apparatus 900 and the second device are located in the target area, a sharing service with the fourth device is executed.
Optionally, the processing unit 910 is specifically configured to:
when an energy attenuation value of the first audio signal is less than or equal to an energy threshold, determine that the position relationship is that the apparatus 900 and the second device are located in the same area;
when the energy attenuation value of the first audio signal is greater than the energy threshold, determine that the position relationship is that the apparatus 900 and the second device are located in different areas.
Optionally, the first audio signal comprises audio signals of a plurality of channels.
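The energy-attenuation rule can be illustrated numerically. The reference level, the decibel formulation, and the 30 dB threshold below are assumptions made for this sketch (a wall between rooms typically adds large extra loss, which is what makes such a threshold usable):

```python
import math

def position_relationship(received_rms, reference_rms, threshold_db=30.0):
    """Compare the measured attenuation of the first audio signal against an
    energy threshold: small attenuation -> same area, large -> different areas."""
    attenuation_db = 20.0 * math.log10(reference_rms / max(received_rms, 1e-12))
    return "same area" if attenuation_db <= threshold_db else "different areas"

print(position_relationship(0.2, 1.0))    # ~14 dB of loss -> same area
print(position_relationship(0.001, 1.0))  # ~60 dB of loss -> different areas
```

The announced playing volume (the physical characteristic above) is what lets the receiver fix `reference_rms` and hence estimate the attenuation in the first place.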
The apparatus 800 and the apparatus 900 are examples of an apparatus for determining a device location. Optionally, the apparatus for determining a device location may alternatively have the structure shown in fig. 10.
Fig. 10 is a schematic structural diagram of another apparatus for determining a device location provided in the present application. The apparatus 1000 may be used to implement the methods described in the method embodiments above. Apparatus 1000 may be a terminal device.
The apparatus 1000 includes one or more processors 1001, and the one or more processors 1001 may enable the apparatus 1000 to implement the methods in the foregoing method embodiments. The processor 1001 may be a general-purpose processor or a special-purpose processor, for example, a central processing unit (CPU). The CPU may be configured to control the apparatus 1000, execute software programs, and process data of the software programs. The apparatus 1000 may further include a communication unit 1005 configured to input (receive) and/or output (send) signals; for example, the communication unit 1005 may be a transceiver circuit of the terminal device.
The apparatus 1000 may further include a microphone 1006 configured to convert the first audio signal into an electrical signal.
The apparatus 1000 may include one or more memories 1002, on which programs 1004 are stored, and the programs 1004 may be executed by the processor 1001 to generate instructions 1003, so that the processor 1001 executes the method described in the above method embodiments according to the instructions 1003. Optionally, data (e.g., code to perform method 200 or method 700) may also be stored in memory 1002. Alternatively, the processor 1001 may also read data stored in the memory 1002, the data may be stored at the same memory address as the program 1004, or the data may be stored at a different memory address from the program 1004.
The processor 1001 and the memory 1002 may be provided separately or integrated together, for example, on a System On Chip (SOC) of the terminal device.
For a specific manner in which the processor 1001 performs the method embodiments, refer to the related descriptions of the method embodiments.
It should be understood that the steps of the foregoing method embodiments may be completed by logic circuits in the form of hardware or by instructions in the form of software in the processor 1001. The processor 1001 may be a CPU, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The application also provides a computer program product which, when executed by the processor 1001, implements the method according to any of the method embodiments of the application.
The computer program product may be stored in the memory 1002, for example, as the program 1004, and the program 1004 is converted, through processes such as preprocessing, compilation, assembly, and linking, into an executable object file that can be executed by the processor 1001.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a computer, implements the method of any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
The computer-readable storage medium may be, for example, the memory 1002. The memory 1002 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example rather than limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), and a direct Rambus RAM (DR RAM).
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and the generated technical effects of the above-described apparatuses and devices may refer to the corresponding processes and technical effects in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in this application, the disclosed system, apparatus, and method may be implemented in other manners. For example, some features of the method embodiments described above may be omitted or not performed. The described apparatus embodiments are merely examples; the division into units is merely a division by logical function, and there may be other division manners in actual implementation, and a plurality of units or components may be combined or integrated into another system. In addition, the coupling between the units or between the components may be direct coupling or indirect coupling, and includes electrical, mechanical, or other forms of connection.
In the embodiments of this application, the sequence numbers of the processes do not imply an execution order. The execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of this application.
In addition, the term "and/or" herein describes only an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should be noted that in the embodiments of this application, the terms "first", "second", and the like are used for descriptive purposes only and shall not be construed as indicating or implying relative importance or order. The features defined by "first" and "second" may explicitly or implicitly include one or more of the features. In the description of the embodiments of this application, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs; rather, the use of these words is intended to present related concepts in a concrete fashion.
The foregoing descriptions are merely preferred embodiments of this application, and are not intended to limit the protection scope of this application. Any modification, equivalent replacement, improvement, and the like made within the principles of this application shall fall within the protection scope of this application.

Claims (12)

1. A method of determining a location of a device, comprising:
a first device sends first indication information to a second device, wherein the first indication information is used for instructing the second device to send a first audio signal;
the first device detects the first audio signal; and
the first device determines a position relationship between the first device and the second device according to a detection result of the first audio signal.
2. The method according to claim 1, wherein
the method further comprises the following steps:
the first device sends second indication information to the second device, wherein the second indication information is used for indicating a physical characteristic of the first audio signal; or
the first device receiving third indication information from the second device, the third indication information indicating a physical characteristic of the first audio signal;
the first device detecting the first audio signal, comprising:
the first device detects the first audio signal according to the physical characteristic.
3. The method of claim 2, wherein the physical characteristic comprises a frequency band and/or a playback volume of the first audio signal.
4. The method according to any one of claims 1 to 3,
the method further comprises the following steps:
the first device sends fourth indication information to the second device, wherein the fourth indication information is used for indicating the content of the first audio signal; or
the first device receiving fifth indication information from the second device, the fifth indication information indicating content of the first audio signal;
the first device detecting the first audio signal, comprising:
the first device detects the first audio signal from the content.
5. The method according to any one of claims 1 to 4, wherein the second device is located in the same area as a third device, and the method further comprises:
the first device sends sixth indication information to the third device, wherein the sixth indication information is used for indicating the third device to send a second audio signal;
the first device detects the second audio signal; and
the first device determines the position relationship between the first device and the second device according to a detection result of the second audio signal.
6. The method according to any one of claims 1 to 5, further comprising:
when the position relationship is that the first device and the second device are located in the same area, the first device executes a sharing service with the second device;
when the position relationship is that the first device and the second device are located in different areas, the first device stops executing the sharing service with the second device, or the first device prompts a user to stop executing the sharing service with the second device.
7. The method according to any one of claims 1 to 6, wherein the second device and a fourth device are located in a target area, and the method further comprises:
and when the position relationship is that the first device and the second device are located in the target area, the first device executes a sharing service with the fourth device.
8. The method according to any one of claims 1 to 7, wherein the first device determines the position relationship of the first device and the second device according to the detection result of the first audio signal, including:
when an energy attenuation value of the first audio signal is less than or equal to an energy threshold, the first device determines that the position relationship is that the first device and the second device are located in the same area;
when the energy attenuation value of the first audio signal is greater than the energy threshold, the first device determines that the position relationship is that the first device and the second device are located in different areas.
9. The method according to any one of claims 1 to 8, wherein the first audio signal comprises audio signals of a plurality of channels.
10. An apparatus for determining a device location, comprising a processor and a memory coupled to the processor, wherein the memory is configured to store a computer program that, when executed by the processor, causes the apparatus to perform the method of any one of claims 1 to 9.
11. A system for determining a device location, comprising a sound source device and the apparatus according to claim 10, wherein the sound source device is configured to:
receive first indication information, wherein the first indication information is used for instructing the sound source device to send a first audio signal; and
send the first audio signal according to the first indication information.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to carry out the method of any one of claims 1 to 9.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011645221.6A CN114690113A (en) 2020-12-31 2020-12-31 Method and device for determining position of equipment


Publications (1)

Publication Number Publication Date
CN114690113A true CN114690113A (en) 2022-07-01

Family

ID=82136194



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024099212A1 (en) * 2022-11-08 2024-05-16 华为技术有限公司 Spatial position determination method and system, and device therefor



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination