CN114812381B - Positioning method of electronic equipment and electronic equipment

Positioning method of electronic equipment and electronic equipment

Info

Publication number
CN114812381B
CN114812381B (application CN202110121715.2A)
Authority
CN
China
Prior art keywords
electronic device
pose
camera
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202110121715.2A
Other languages
Chinese (zh)
Other versions
CN114812381A (en)
Inventor
朱应成
毛春静
曾以亮
Current Assignee (listed assignee may be inaccurate)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110121715.2A priority Critical patent/CN114812381B/en
Publication of CN114812381A publication Critical patent/CN114812381A/en
Application granted granted Critical
Publication of CN114812381B publication Critical patent/CN114812381B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates


Abstract

The embodiments of this application relate to the field of positioning technology, and provide a positioning method for an electronic device and an electronic device. The method comprises the following steps: a first electronic device constructs an environment map of the current environment; the first electronic device performs self-localization based on the environment map to obtain an initial pose of the first electronic device in the environment map; the first electronic device identifies, in the current environment, a second electronic device having a camera; the first electronic device determines, using the second electronic device, a pose to be fused of the first electronic device; and the first electronic device fuses the initial pose and the pose to be fused to obtain a target pose of the first electronic device. By using the second electronic device in the current environment to assist in positioning the first electronic device, the method improves the positioning accuracy of the first electronic device.

Description

Positioning method of electronic equipment and electronic equipment
Technical Field
The embodiments of this application relate to the field of positioning technology, and in particular to a positioning method for an electronic device and an electronic device.
Background
In recent years, virtual reality (VR) and augmented reality (AR) technologies have developed rapidly and are increasingly applied in fields such as education, training, and medical treatment. One of the core algorithms of VR and AR is simultaneous localization and mapping (SLAM). SLAM means that an electronic device equipped with specific sensors builds a model of the environment during motion, without prior information about that environment, while simultaneously estimating its own motion.
In general, to reduce self-positioning errors and improve robustness to the environment, computer vision can be applied to self-positioning, forming a visual odometer. Using the image sequence captured by a camera, a visual odometer obtains the device's motion pose parameters and a map of the surroundings through feature tracking and relative motion estimation.
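For illustration only, the following Python sketch (using OpenCV) shows one frame-to-frame step of such a visual odometer: feature tracking followed by relative motion estimation. The helper name, the two grayscale frames, and the intrinsic matrix K are assumptions for this sketch, not part of this application; for a monocular camera the recovered translation is only up to scale.

```python
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    # Feature tracking: detect corners in the previous frame and follow
    # them into the current frame with pyramidal Lucas-Kanade flow.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_curr[good]
    # Relative motion estimation: essential matrix with RANSAC, then
    # decomposition into rotation R and unit-scale translation t.
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t
```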
When using SLAM for motion pose tracking, the electronic device relies on feature information from objects in the real scene. The extent of the scene visible within the field of view (FoV) of the device's camera lens is fixed. In some scenarios, for example when the electronic device faces a large weakly textured area such as a white wall or the floor, SLAM positioning accuracy is poor and the positioning result is prone to large jump offsets, which severely affects the user's experience with the device.
Disclosure of Invention
The embodiments of this application provide a positioning method for an electronic device and an electronic device, to address the poor positioning accuracy of prior-art electronic devices in certain scenarios.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
In a first aspect, a positioning method of an electronic device is provided, including:
the first electronic device constructs an environment map of the current environment;
the first electronic device performs self-localization based on the environment map to obtain an initial pose of the first electronic device in the environment map;
the first electronic device identifies, in the current environment, a second electronic device having a camera;
the first electronic device determines, using the second electronic device, a pose to be fused of the first electronic device; and
the first electronic device fuses the initial pose and the pose to be fused to obtain a target pose of the first electronic device.
This positioning method has the following beneficial effect: by using a second electronic device in the current environment to assist in positioning the first electronic device, the positioning accuracy and robustness of the first electronic device can be improved.
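As a non-authoritative illustration of the fusion step, the sketch below blends the initial pose with the pose to be fused, with each pose represented as a translation vector plus a quaternion. The interpolation rule and the confidence weight w are assumptions; this application does not prescribe a particular fusion algorithm.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_poses(t_init, q_init, t_assist, q_assist, w=0.5):
    # w is a hypothetical confidence in the camera-assisted pose.
    t_fused = (1.0 - w) * np.asarray(t_init) + w * np.asarray(t_assist)
    # Quaternions in (x, y, z, w) order; spherical interpolation keeps
    # the blended orientation on the rotation manifold.
    rots = Rotation.from_quat([q_init, q_assist])
    q_fused = Slerp([0.0, 1.0], rots)(w).as_quat()
    return t_fused, q_fused
```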
In a possible implementation of the first aspect, the first electronic device determining, using the second electronic device, the pose to be fused of the first electronic device includes: the first electronic device determines a first pose of the camera in the environment map; the first electronic device determines a second pose of the first electronic device in the coordinate system corresponding to the camera; and the first electronic device generates the pose to be fused of the first electronic device from the first pose and the second pose. By drawing on other cameras in the current environment that face in different directions, rather than relying only on its own camera, the first electronic device can still be positioned when it faces textureless or weakly textured scenes such as white walls and floors.
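The relation between the first pose, the second pose, and the pose to be fused can be written as a composition of rigid transforms. The sketch below assumes 4x4 homogeneous matrices and hypothetical variable names: T_map_cam is the first pose and T_cam_dev is the second pose.

```python
import numpy as np

def pose_to_fuse(T_map_cam: np.ndarray, T_cam_dev: np.ndarray) -> np.ndarray:
    # First pose: camera of the second device in the environment map.
    # Second pose: first device in that camera's coordinate system.
    # Their product expresses the first device in the map frame.
    return T_map_cam @ T_cam_dev
```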
In a possible implementation of the first aspect, the first electronic device determining the first pose of the camera in the environment map includes: the first electronic device captures an image of the second electronic device; and the first electronic device determines the first pose of the camera in the environment map according to the image of the second electronic device.
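One plausible realization of this step, sketched under stated assumptions, is a perspective-n-point (PnP) estimate: if the first device knows a set of 3D model points on the second device and detects their 2D projections in its own image, it can recover the second device's pose in its own frame and chain that with its SLAM pose in the map. The model points, detections, and helper names below are hypothetical, and the first device's body frame is assumed to coincide with its camera frame.

```python
import cv2
import numpy as np

def second_device_pose_in_map(model_pts, img_pts, K, T_map_dev1):
    # PnP: pose of the second device's model frame in the first
    # device's camera frame (assumed equal to its body frame here).
    ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, None)
    if not ok:
        return None
    T_dev1_dev2 = np.eye(4)
    T_dev1_dev2[:3, :3], _ = cv2.Rodrigues(rvec)
    T_dev1_dev2[:3, 3] = tvec.ravel()
    # Chain with the first device's own SLAM pose in the environment map.
    return T_map_dev1 @ T_dev1_dev2
```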
In a possible implementation of the first aspect, the first electronic device determining the first pose of the camera in the environment map further includes: the first electronic device acquires a positioning signal sent by the second electronic device; and the first electronic device determines the first pose of the camera in the environment map according to the positioning signal.
In a possible implementation of the first aspect, the first electronic device determining the second pose of the first electronic device in the coordinate system corresponding to the camera includes: the first electronic device controls the camera to acquire image information including the first electronic device; and the first electronic device determines, according to the image information, the second pose of the first electronic device in the coordinate system corresponding to the camera.
In a possible implementation of the first aspect, the first electronic device determining, according to the image information, the second pose of the first electronic device in the coordinate system corresponding to the camera includes: the first electronic device extracts a plurality of feature points from the image information; the first electronic device matches the feature points against a preset feature dictionary to obtain target feature points representing the first electronic device, the feature dictionary being built by the first electronic device and/or the second electronic device extracting features from images of the first electronic device; and the first electronic device determines, according to the target feature points, the second pose of the first electronic device in the coordinate system corresponding to the camera.
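A hedged sketch of the matching step follows, assuming the feature dictionary stores ORB descriptors of the first device together with the corresponding 3D coordinates on the device body; the dictionary layout and helper names are assumptions for illustration only.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate_first_device(image, dict_desc, dict_pts3d, K):
    # Extract feature points from the camera image that contains the
    # first electronic device.
    kps, desc = orb.detectAndCompute(image, None)
    if desc is None:
        return None
    # Keep only the points that match the device's feature dictionary;
    # these are the "target feature points" representing the device.
    matches = bf.match(desc, dict_desc)
    if len(matches) < 6:
        return None
    img_pts = np.float32([kps[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([dict_pts3d[m.trainIdx] for m in matches])
    # Second pose: the first device in the camera's coordinate system.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None
```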
In a possible implementation of the first aspect, the first electronic device determining the second pose of the first electronic device in the coordinate system corresponding to the camera includes: the first electronic device controls the second electronic device to calculate the second pose of the first electronic device in the coordinate system corresponding to the camera; and the first electronic device receives the second pose sent by the second electronic device. The second pose can thus be computed on the second electronic device, which transmits the result to the first electronic device, reducing the amount of data transmitted overall and the first electronic device's resource usage during positioning.
In a possible implementation of the first aspect, the second electronic device includes a plurality of electronic devices with cameras, and the pose to be fused includes a plurality of poses determined using the plurality of second electronic devices. Correspondingly, the first electronic device fusing the initial pose and the pose to be fused to obtain the target pose of the first electronic device includes: the first electronic device processes the plurality of poses to be fused to obtain a target pose to be fused; and the first electronic device fuses the initial pose and the target pose to be fused to obtain the target pose of the first electronic device. Embodiments of this application can thus use multiple second electronic devices to assist in positioning the first electronic device. The camera of each second electronic device yields one second pose; the first electronic device can process these poses, for example by weighted summation, and then fuse the result with the initial pose, further improving positioning accuracy and robustness.
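The weighted-summation processing mentioned above could look like the sketch below, where the per-device weights are hypothetical confidences. Rotations are combined with a weighted mean rather than a naive sum so that the result remains a valid orientation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def combine_poses(translations, quats, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # Weighted sum of positions.
    t = (w[:, None] * np.asarray(translations)).sum(axis=0)
    # Weighted (chordal) mean of orientations.
    q = Rotation.from_quat(quats).mean(weights=w).as_quat()
    return t, q  # the target pose to be fused with the SLAM initial pose
```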
In a second aspect, there is provided a positioning apparatus for an electronic device, including:
the environment map construction module is used for constructing an environment map of the current environment;
the initial pose calculation module is used for performing self-localization based on the environment map to obtain an initial pose of the first electronic device in the environment map;
the electronic device identification module is used for identifying a second electronic device with a camera in the current environment;
the to-be-fused pose calculation module is used for determining, using the second electronic device, the pose to be fused of the first electronic device; and
the pose fusion module is used for fusing the initial pose and the pose to be fused to obtain the target pose of the first electronic device.
In a possible implementation of the second aspect, the to-be-fused pose calculation module is specifically configured to: determine a first pose of the camera in the environment map; determine a second pose of the first electronic device in the coordinate system corresponding to the camera; and generate the pose to be fused of the first electronic device from the first pose and the second pose.
In a possible implementation of the second aspect, the to-be-fused pose calculation module is further specifically configured to: acquire an image of the second electronic device; and determine the first pose of the camera in the environment map according to the image of the second electronic device.
In a possible implementation of the second aspect, the to-be-fused pose calculation module is further specifically configured to: acquire a positioning signal sent by the second electronic device; and determine the first pose of the camera in the environment map according to the positioning signal.
In a possible implementation of the second aspect, the to-be-fused pose calculation module is further specifically configured to: control the camera to acquire image information including the first electronic device; and determine the second pose of the first electronic device in the coordinate system corresponding to the camera according to the image information.
In a possible implementation of the second aspect, the to-be-fused pose calculation module is further specifically configured to: extract a plurality of feature points from the image information; match the feature points against a preset feature dictionary to obtain target feature points representing the first electronic device, the feature dictionary being built by the first electronic device and/or the second electronic device extracting features from images of the first electronic device; and determine the second pose of the first electronic device in the coordinate system corresponding to the camera according to the target feature points.
In a possible implementation of the second aspect, the to-be-fused pose calculation module is further specifically configured to: control the second electronic device to calculate the second pose of the first electronic device in the coordinate system corresponding to the camera; and receive the second pose sent by the second electronic device.
In a possible implementation of the second aspect, the second electronic device includes a plurality of electronic devices with cameras, and the pose to be fused includes a plurality of poses determined using the plurality of second electronic devices.
Correspondingly, the pose fusion module is specifically configured to: process the plurality of poses to be fused to obtain a target pose to be fused; and fuse the initial pose and the target pose to be fused to obtain the target pose of the first electronic device.
In a third aspect, an electronic device is provided, which may be the first electronic device of any implementation of the first aspect. The electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the positioning method of any implementation of the first aspect is implemented.
In a fourth aspect, a computer storage medium is provided, storing computer instructions which, when run on an electronic device, cause the electronic device to perform the relevant method steps to implement the positioning method of an electronic device of any implementation of the first aspect.
In a fifth aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the relevant steps to implement the positioning method of an electronic device of any implementation of the first aspect.
In a sixth aspect, a positioning system is provided, comprising the first electronic device and the second electronic device of any implementation of the first aspect.
In a seventh aspect, a chip is provided, the chip including a processor, where the processor may be a general purpose processor or a special purpose processor. Wherein the processor is configured to support the electronic device to perform the relevant steps to implement the positioning method of the electronic device in any one of the above first aspects.
It will be appreciated that the advantages of the second to seventh aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
Fig. 1 is an exemplary diagram of a prior-art four-camera VR headset;
Fig. 2 (a) is a schematic diagram of an application scenario of a positioning method of an electronic device according to an embodiment of this application;
Fig. 2 (b) is a schematic diagram of an application scenario of another positioning method of an electronic device according to an embodiment of this application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of this application;
Fig. 4 is a software structure block diagram of an electronic device according to an embodiment of this application;
Fig. 5 is a schematic flowchart of steps of a positioning method of an electronic device according to an embodiment of this application;
Fig. 6 (a) is a schematic flowchart of steps of another positioning method of an electronic device according to an embodiment of this application;
Fig. 6 (b) is a schematic flowchart of steps of yet another positioning method of an electronic device according to an embodiment of this application;
Fig. 7 is a structural block diagram of a positioning apparatus of an electronic device according to an embodiment of this application.
Detailed Description
To clearly describe the technical solutions of the embodiments of this application, words such as "first" and "second" are used herein to distinguish identical or similar items having substantially the same function and effect. For example, "first electronic device" and "second electronic device" merely distinguish different electronic devices and do not limit their number or order of execution.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The service scenario described in the embodiment of the present application is for more clearly describing the technical solution of the embodiment of the present application, and does not constitute a limitation on the technical solution provided in the embodiment of the present application, and as a person of ordinary skill in the art can know that, with the appearance of a new service scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
In the embodiments of this application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association between associated objects and indicates three possible relationships; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or similar expressions means any combination of those items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or plural.
The steps involved in the positioning method provided in the embodiments of this application are only examples: not all steps must be performed, not all contents of each information item or message are mandatory, and steps or contents may be added or omitted as needed in use.
In the embodiments of the present application, the same steps or technical features having the same function may be referred to and referred to each other between different embodiments.
As previously mentioned, the extent of the scene within the field of view is fixed, limited by the angle of view of the camera lens carried by the electronic device. The positioning accuracy of electronic devices using SLAM is poor when facing a large weakly textured area such as a white wall or the ground. Taking a VR headset as an example, if SLAM positioning accuracy is poor while a user wears the headset, large jump deviations occur. To improve positioning accuracy in such scenes, some researchers have proposed widening the headset's viewing angle by increasing the number of cameras; binocular and even four-camera VR headset positioning schemes have appeared in the prior art.
As shown in fig. 1, an exemplary prior-art four-camera VR headset 100 carries four cameras (cameras 101, 102, 103, and 104 in fig. 1). Tracking and positioning texture features across the four images improves the robustness of the system. However, whether in the binocular or the four-camera scheme, the cameras on VR headset 100 still face the same direction. When facing a weakly textured area such as a white wall or the ground, the images acquired by all the cameras may be weak-texture images, so SLAM positioning can still fail and cause drift jumps in the VR scene.
To solve the above problems, the embodiments of this application provide a positioning method for an electronic device that uses other cameras in the scene to visually recognize the electronic device and estimate its pose, and fuses that pose with the pose obtained by the device's own SLAM positioning, improving tracking accuracy and robustness in weakly textured and similar scenes.
Fig. 2 (a) is a schematic diagram of an application scenario of a positioning method of an electronic device according to an embodiment of this application; the scenario is indoors. The indoor scene shown in fig. 2 (a) includes a first electronic device 21 and a second electronic device 22a; the second electronic device 22a carries a camera 221a whose angle of view is V1. Illustratively, the first electronic device 21 in fig. 2 (a) may be a VR headset and the second electronic device 22a a mobile phone. In a specific application, the first electronic device 21 may perform self-localization based on SLAM to obtain an initial pose in the current environment. Since the accuracy of this SLAM-based self-localization may be low, the first electronic device 21 may additionally use the camera 221a of the second electronic device 22a for assisted positioning, obtaining a pose to be fused of the first electronic device 21. Finally, by fusing the initial pose and the pose to be fused, the first electronic device 21 obtains a more accurate target pose, improving its positioning accuracy and robustness.
In a possible implementation of the embodiments of this application, building on the indoor scenario shown in fig. 2 (a), fig. 2 (b) is a schematic diagram of an application scenario of another positioning method of an electronic device according to an embodiment of this application. In this scenario, in addition to the second electronic device 22a, there is a second electronic device 22b carrying a camera 222b whose angle of view is V2. The second electronic device 22b may be, for example, a television with a camera function. When the first electronic device 21 is positioned, the initial pose may be fused with the pose to be fused obtained via the camera 222b of the second electronic device 22b; alternatively, the first electronic device 21 may first process the pose to be fused obtained via the second electronic device 22a and the pose to be fused obtained via the second electronic device 22b into a target pose to be fused, and then fuse the initial pose with that target pose to be fused to obtain the target pose of the first electronic device 21. The number of second electronic devices is not limited in the embodiments of this application.
In this embodiment of the present application, the first electronic device or the second electronic device may be an electronic device with a camera, such as a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an AR/VR device, a notebook computer, a personal computer (personal computer, PC), a netbook, a personal digital assistant (personal digital assistant, PDA), or the like. The specific type of the first electronic device or the second electronic device is not limited in the embodiments of the present application.
By way of example, fig. 3 shows a schematic structural diagram of an electronic device 300. The structures of the first electronic device and the second electronic device described above may refer to the structure of the electronic device 300.
Electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (universal serial bus, USB) interface 330, a charge management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an ear-piece interface 370D, a sensor module 380, keys 390, a motor 391, an indicator 392, a camera 393, a display screen 394, a user identification module (subscriber identification module, SIM) card interface 395, and the like. Among other things, the sensor module 380 may include a pressure sensor 380A, a gyroscope sensor 380B, a barometric pressure sensor 380C, a magnetic sensor 380D, an acceleration sensor 380E, a distance sensor 380F, a proximity light sensor 380G, a fingerprint sensor 380H, a temperature sensor 380J, a touch sensor 380K, an ambient light sensor 380L, a bone conduction sensor 380M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 300. In some embodiments of the present application, electronic device 300 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 310 may include one or more processing units. For example, the processor 310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-Network Processor (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments of the present application, the memory in processor 310 is a cache memory. The memory may hold instructions or data that the processor 310 has just used or recycled. If the processor 310 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 310 is reduced, thereby improving the efficiency of the system.
In some embodiments of the present application, processor 310 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments of this application, the processor 310 may include multiple sets of I2C buses. The processor 310 may be coupled to the touch sensor 380K, a charger, a flash, the camera 393, etc., through different I2C bus interfaces. For example, the processor 310 may be coupled to the touch sensor 380K through an I2C interface, so that the processor 310 communicates with the touch sensor 380K through the I2C bus interface to implement the touch function of the electronic device 300.
The I2S interface may be used for audio communication. In some embodiments of the present application, the processor 310 may include multiple sets of I2S buses. The processor 310 may be coupled to the audio module 370 via an I2S bus to enable communication between the processor 310 and the audio module 370. In some embodiments of the present application, the audio module 370 may transmit an audio signal to the wireless communication module 360 through the I2S interface, so as to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments of the present application, the audio module 370 and the wireless communication module 360 may be coupled by a PCM bus interface. In some embodiments of the present application, the audio module 370 may also transmit audio signals to the wireless communication module 360 through the PCM interface, so as to implement a function of answering a call through the bluetooth headset.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments of the present application, a UART interface is typically used to connect the processor 310 with the wireless communication module 360. For example, the processor 310 communicates with a bluetooth module in the wireless communication module 360 through a UART interface to implement a bluetooth function. In some embodiments of the present application, the audio module 370 may transmit audio signals to the wireless communication module 360 through a UART interface, so as to realize a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 310 with peripheral devices such as the display screen 394, the camera 393, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like.
In some embodiments of the present application, the processor 310 and the camera 393 communicate through the CSI interface to implement the photographing function of the electronic device 300. The processor 310 and the display screen 394 communicate via a DSI interface to implement the display functions of the electronic device 300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments of the present application, a GPIO interface may be used to connect the processor 310 with the camera 393, display screen 394, wireless communication module 360, audio module 370, sensor module 380, etc. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 330 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 330 may be used to connect a charger to charge the electronic device 300, or may be used to transfer data between the electronic device 300 and a peripheral device. The USB interface 330 may also be used to connect headphones through which audio is played. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 300. In other embodiments of the present application, the electronic device 300 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 340 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 340 may receive a charging input of a wired charger through the USB interface 330. In some wireless charging embodiments, the charge management module 340 may receive wireless charging input through a wireless charging coil of the electronic device 300. The battery 342 is charged by the charge management module 340, and the electronic device may be powered by the power management module 341.
The power management module 341 is configured to connect the battery 342, the charge management module 340, and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340 to power the processor 310, the internal memory 321, the display screen 394, the camera 393, the wireless communication module 360, and the like. The power management module 341 may also be configured to monitor parameters such as battery capacity, battery cycle number, battery health (leakage, impedance), etc.
In other embodiments, the power management module 341 may also be disposed in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may also be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 300 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example, the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution for wireless communication, including 2G/3G/4G/5G, etc., applied on the electronic device 300. The mobile communication module 350 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 350 may receive electromagnetic waves from the antenna 1, perform processes such as filtering and amplifying on the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 350 may amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate the electromagnetic waves.
In some embodiments of the present application, at least some of the functional modules of the mobile communication module 350 may be provided in the processor 310. In some embodiments of the present application, at least some of the functional modules of the mobile communication module 350 may be provided in the same device as at least some of the modules of the processor 310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 370A, receiver 370B, etc.) or displays images or video through display screen 394.
In some embodiments of the present application, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 350 or other functional module, independent of the processor 310.
The wireless communication module 360 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 300. The wireless communication module 360 may be one or more devices that integrate at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 310. The wireless communication module 360 may also receive a signal to be transmitted from the processor 310, frequency modulate and amplify the signal, and convert the signal into electromagnetic waves to radiate the electromagnetic waves through the antenna 2.
In some embodiments of the present application, antenna 1 and mobile communication module 350 of electronic device 300 are coupled, and antenna 2 and wireless communication module 360 are coupled, such that electronic device 300 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include a global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 300 implements display functions through a GPU, a display screen 394, an application processor, and the like. The GPU is a microprocessor for image processing, connected to the display screen 394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 394 is used for displaying images, videos, and the like. The display screen 394 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments of this application, the electronic device 300 may include 1 or N display screens 394, where N is a positive integer greater than 1.
The electronic device 300 may implement a photographing function through an ISP, a camera 393, a video codec, a GPU, a display screen 394, an application processor, and the like.
The ISP is used to process the data fed back by camera 393. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also perform algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature, etc. of the photographed scene. In some embodiments of the present application, the ISP may be provided in the camera 393.
Camera 393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments of the present application, electronic device 300 may include 1 or N cameras 393, where N is a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 300 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs. Thus, the electronic device 300 may play or record video in a variety of encoding formats, such as moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent recognition of the electronic device 300, for example, image recognition, face recognition, voice recognition, text understanding, etc., may be implemented by the NPU.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 300. The external memory card communicates with the processor 310 through an external memory interface 320 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 321 may be used to store computer executable program code comprising instructions. The internal memory 321 may include a storage program area and a storage data area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. The storage data area may store data created during use of the electronic device 300 (e.g., audio data, phonebook, etc.), and so on.
In addition, the internal memory 321 may include a high-speed random access memory, and may also include a nonvolatile memory. Such as at least one disk storage device, flash memory device, universal flash memory (universal flash storage, UFS), etc.
The processor 310 performs various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 321, and/or instructions stored in a memory provided in the processor.
The electronic device 300 may implement audio functionality through an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an ear-headphone interface 370D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments of the present application, the audio module 370 may be disposed in the processor 310, or some of the functional modules of the audio module 370 may be disposed in the processor 310.
Speaker 370A, also known as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 300 may listen to music, or to hands-free conversations, through the speaker 370A.
A receiver 370B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 300 is answering a telephone call or voice message, voice may be received by placing receiver 370B close to the human ear.
Microphone 370C, also referred to as a "mike" or a "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak close to the microphone 370C to input a sound signal into it. The electronic device 300 may be provided with at least one microphone 370C. In other embodiments, the electronic device 300 may be provided with two microphones 370C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 300 may be provided with three, four, or more microphones 370C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The earphone interface 370D is for connecting a wired earphone. The earphone interface 370D may be a USB interface 330, or may be a 3.5mm open mobile electronic device platform (open mobile terminal platform, OMTP) standard interface, or a american cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 380A is configured to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 380A may be disposed on the display screen 394. Pressure sensors 380A come in many types, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates with conductive material; when a force is applied to the pressure sensor 380A, the capacitance between the electrodes changes, and the electronic device 300 determines the strength of the pressure from that change. When a touch operation acts on the display screen 394, the electronic device 300 detects the intensity of the touch operation via the pressure sensor 380A. The electronic device 300 may also calculate the location of the touch from the detection signal of the pressure sensor 380A.
In some embodiments of the present application, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. And executing an instruction for newly creating the short message when the touch operation with the touch operation intensity being greater than or equal to the first pressure threshold acts on the short message application icon.
The gyro sensor 380B may be used to determine a motion gesture of the electronic device 300. In some embodiments of the present application, the angular velocity of electronic device 300 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 380B. The gyro sensor 380B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 380B detects the shake angle of the electronic device 300, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 300 through the reverse motion, so as to realize anti-shake. The gyro sensor 380B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 380C is used to measure air pressure. In some embodiments of the present application, electronic device 300 calculates altitude, aids in positioning, and navigation from barometric pressure values measured by barometric pressure sensor 380C.
The magnetic sensor 380D includes a Hall sensor. The electronic device 300 may use the magnetic sensor 380D to detect the opening and closing of a flip holster. In some embodiments of this application, when the electronic device 300 is a flip phone, the electronic device 300 may detect the opening and closing of the flip cover according to the magnetic sensor 380D, and then set features such as automatic unlocking upon flip opening based on the detected open or closed state of the holster or of the flip cover.
The acceleration sensor 380E may detect the magnitude of acceleration of the electronic device 300 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 300 is stationary. It may also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
A distance sensor 380F for measuring distance. The electronic device 300 may measure the distance by infrared or laser. In some embodiments of the present application, for example, shooting a scene, the electronic device 300 may range using the distance sensor 380F to achieve quick focus.
The proximity light sensor 380G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 300 emits infrared light outward through the light emitting diode. The electronic device 300 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that an object is in the vicinity of the electronic device 300. When insufficient reflected light is detected, the electronic device 300 may determine that there is no object in the vicinity of the electronic device 300. The electronic device 300 can detect that the user holds the electronic device 300 close to the ear by using the proximity light sensor 380G, so as to automatically extinguish the screen to achieve the purpose of saving power. The proximity light sensor 380G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 380L is used to sense ambient light level. The electronic device 300 may adaptively adjust the brightness of the display screen 394 based on the perceived ambient light level. The ambient light sensor 380L may also be used to automatically adjust white balance during photographing. The ambient light sensor 380L may also cooperate with the proximity light sensor 380G to detect if the electronic device 300 is in a pocket to prevent false touches.
The fingerprint sensor 380H is used to collect a fingerprint. The electronic device 300 may utilize the collected fingerprint characteristics to realize fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 380J is used to detect temperature. In some embodiments of this application, the electronic device 300 executes a temperature processing strategy using the temperature detected by the temperature sensor 380J. For example, when the reported temperature exceeds a threshold, the electronic device 300 reduces the performance of a processor near the temperature sensor 380J to lower power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 300 heats the battery 342 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 300 boosts the output voltage of the battery 342 to avoid an abnormal shutdown caused by low temperature.
Touch sensor 380K, also known as a "touch device". The touch sensor 380K may be disposed on the display screen 394, and the touch sensor 380K and the display screen 394 form a touch screen, which is also referred to as a "touch screen". The touch sensor 380K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display screen 394. In other embodiments, touch sensor 380K may also be located on a surface of electronic device 300 other than at display 394.
The bone conduction sensor 380M may acquire a vibration signal. In some embodiments of the present application, the bone conduction sensor 380M may acquire the vibration signal of the vibrating bone mass of the human voice part. The bone conduction sensor 380M may also contact the human pulse to receive the blood pressure beating signal.
In some embodiments of the present application, the bone conduction sensor 380M may also be disposed in a headset, combined into a bone conduction headset. The audio module 370 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the voice part obtained by the bone conduction sensor 380M, to implement a voice function. The application processor may parse heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 380M, to implement a heart rate detection function.
The keys 390 include a power-on key, a volume key, etc. The keys 390 may be mechanical keys or touch keys. The electronic device 300 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 300.
The motor 391 may generate a vibration alert. The motor 391 may be used for incoming-call vibration alerts as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 394 may also correspond to different vibration feedback effects of the motor 391. Different application scenarios (e.g., time reminder, message receipt, alarm clock, game) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also be customized.
The indicator 392 may be an indicator light, and may be used to indicate the charging status or a change in battery level, or to indicate messages, missed calls, notifications, and the like.
The SIM card interface 395 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 395 or removed from it to achieve contact with and separation from the electronic device 300. The electronic device 300 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 395 may support a Nano SIM card, a Micro SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 395 simultaneously; the cards may be of the same type or of different types. The SIM card interface 395 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 300 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments of the present application, the electronic device 300 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 300 and cannot be separated from it.
The software system of the electronic device 300 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present application take an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 300.
Fig. 4 is a software architecture block diagram of an electronic device 300 according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments of the present application, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 4, the application package may include applications such as Camera, Gallery, Calendar, Phone, Map, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager may obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 300, for example, management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which may disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt sound is played, the electronic device vibrates, or the indicator light flashes.
Android Runtime includes a core library and virtual machines. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing functions such as management of object life cycle, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules, for example, a surface manager, media libraries (Media Libraries), a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following embodiments take an electronic device having the above hardware structure/software structure as an example, and describe a positioning method of the electronic device provided in the embodiments of the present application.
Referring to fig. 5, a schematic step diagram of a positioning method of an electronic device provided in an embodiment of the present application is shown, where the method specifically may include the following steps:
S501, the first electronic equipment constructs an environment map of the current environment.
In the embodiment of the present application, the first electronic device may refer to an electronic device that needs to perform self-positioning currently. At least one camera should be configured on the first electronic device. The current environment may refer to a scene where the first electronic device is currently located, and the scene may be an indoor scene or an outdoor scene. When the first electronic device performs self-positioning in the current environment, an environment map of the current environment may be first constructed, and the environment map may be a three-dimensional map.
In one possible implementation manner of the embodiment of the present application, the first electronic device may call its own camera to shoot the current environment, and construct an environment map of the current environment by identifying the shot image.
S502, the first electronic equipment performs self-positioning based on the environment map, and the initial pose of the first electronic equipment in the environment map is obtained.
Self-positioning means that, without any prior information about the environment, the electronic device builds a model of the environment during its motion while simultaneously estimating its own motion; this is commonly referred to as simultaneous localization and mapping (SLAM).
In the embodiment of the present application, the first electronic device may perform self-positioning based on the environment map constructed in S501, and determine an initial pose of itself in the environment map.
It should be noted that a pose consists of a position and an orientation, and the initial pose of the first electronic device refers to its current position and orientation. The term "initial pose" in this step is used only to distinguish the current position and orientation of the first electronic device from the positions and orientations in subsequent steps; in this embodiment, "initial pose" has no other special meaning.
In one possible implementation manner of the embodiment of the present application, the first electronic device may be configured with a SLAM function. Therefore, when the first electronic device is placed in the current environment, the SLAM can start working, and positioning and map construction are automatically performed, so that the initial pose of the first electronic device is obtained.
S503, the first electronic device identifies a second electronic device with a camera in the current environment.
In the embodiment of the present application, the second electronic device may be the same type of electronic device as the first electronic device, or may be a different type of electronic device from the first electronic device. The second electronic device should have a camera or the like capable of image acquisition.
The first electronic device may be, for example, a VR headset. Thus, the second electronic device may be another VR headset, or the second electronic device may be an electronic device with a camera such as a cell phone, notebook computer, or the like. The embodiments of the present application are not limited in this regard.
In one possible implementation manner of the embodiment of the application, the first electronic device may identify the second electronic device in the current environment by means of deep learning or visual recognition. For example, the first electronic device may capture images of the current environment and then process the captured images to identify whether a second electronic device having a camera exists in the current environment.
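As a hedged illustration of this identification step (not the patent's own algorithm), the sketch below runs an off-the-shelf COCO-trained detector over a frame from the first electronic device's camera and keeps detections whose classes typically carry a camera. The model weights, class list, and file names are illustrative assumptions.

```python
# Hypothetical sketch: spot camera-equipped devices (phones, laptops) in a
# frame using a pre-trained COCO detector.  Model file and class choices
# are illustrative assumptions, not part of the patent.
import cv2
from ultralytics import YOLO

CAMERA_DEVICE_CLASSES = {"cell phone", "laptop"}  # COCO labels of interest

model = YOLO("yolov8n.pt")  # assumed off-the-shelf COCO weights

def find_candidate_devices(frame):
    results = model(frame)
    candidates = []
    for box in results[0].boxes:
        label = model.names[int(box.cls)]
        if label in CAMERA_DEVICE_CLASSES:
            candidates.append((label, box.xyxy[0].tolist()))  # label + bbox
    return candidates

frame = cv2.imread("current_environment.jpg")
print(find_candidate_devices(frame))
```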
In another possible implementation manner of the embodiment of the present application, the first electronic device may also determine, in a wired or wireless manner, whether there is a connectable second electronic device with a camera in the current environment. For example, the first electronic device may broadcast a connection request via Bluetooth pairing. If another connectable electronic device exists, then after the Bluetooth connection is completed, the first electronic device may determine through message interaction whether that device has a camera, as in the sketch below.
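The following is a minimal sketch of such a message interaction; it assumes the devices already share a link, and the JSON request/response format and port number are assumptions not specified by this application.

```python
# Hypothetical capability handshake: once a connection (Bluetooth, Wi-Fi,
# etc.) is established, ask the peer whether it has a camera.  The message
# format and port are assumptions made for this sketch.
import json
import socket

def peer_has_camera(host: str, port: int = 52800, timeout: float = 2.0) -> bool:
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(json.dumps({"query": "capabilities"}).encode())
        reply = json.loads(conn.recv(4096).decode())
    return bool(reply.get("has_camera", False))
```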
In this embodiment of the present application, after recognizing that a second electronic device with a camera exists in a current environment, the first electronic device may apply for permission to invoke the camera of the second electronic device to perform image capturing. After obtaining the authority of the camera capable of calling the second electronic device, the first electronic device may execute S504, and determine the pose to be fused of the first electronic device by using the second electronic device.
S504, the first electronic device adopts the second electronic device to determine the pose to be fused of the first electronic device.
In the embodiment of the application, the pose to be fused of the first electronic device refers to a pose that the first electronic device computes from additional data; it is fused with the initial pose to obtain the target pose of the first electronic device in the current environment. The additional data may include image data captured by the camera of the first electronic device and by the camera of the second electronic device.
In one possible implementation manner of the embodiment of the present application, when the first electronic device determines the pose to be fused by using the second electronic device, the first pose of the camera of the second electronic device in the environment map may be determined first.
In one example, a first electronic device may employ a visual method to locate a camera of a second electronic device to obtain a first pose of the camera of the second electronic device in an environment map. For example, the first electronic device may collect an image of the second electronic device using its own camera, and then determine, according to the collected image, a first pose of the camera of the second electronic device in the environment map.
In another example, the first electronic device may also determine a first pose of a camera of the second electronic device in the environment map using wireless positioning. The first electronic device may determine the first pose of the second electronic device and its camera in the environment map by acquiring a positioning signal sent by the second electronic device.
In the embodiment of the application, when the first electronic device determines the pose to be fused by adopting the second electronic device, the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device may also be determined.
The process of determining the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device may be performed at the first electronic device end or performed at the second electronic device end.
In one example in which the second pose is determined on the first electronic device side, after the first electronic device obtains permission to invoke the camera of the second electronic device, it may control that camera to capture image information containing the first electronic device. For example, the first electronic device may send a control instruction to the second electronic device, instructing the camera of the second electronic device to capture image information containing the first electronic device. The second electronic device may then send the image information to the first electronic device, which processes the image information and determines from it the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device.
In one possible implementation manner of the embodiment of the present application, when the first electronic device determines, from the received image information, the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device, it may first extract a plurality of feature points from the image information. The feature points may be extracted by any feature extraction algorithm, such as FAST (features from accelerated segment test), ORB (oriented FAST and rotated BRIEF), or SIFT (scale-invariant feature transform), and the corresponding feature descriptors may be BRIEF or the like. The first electronic device may then match the plurality of feature points against a preset feature dictionary to obtain target feature points that characterize the first electronic device, and determine, according to the matched target feature points, the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device. The feature dictionary may be obtained by the first electronic device and/or the second electronic device extracting features from images of the first electronic device; for example, the first electronic device and/or the second electronic device may collect ORB feature points from images containing the first electronic device in advance and train a feature dictionary that characterizes the first electronic device. The feature extraction algorithm and descriptor used by the first electronic device when extracting feature points from the image information should be consistent with those used when constructing the feature dictionary.
When the first electronic device matches the extracted feature points against the feature dictionary, a feature point that hits an entry in the feature dictionary can be regarded as a target feature point that characterizes the first electronic device.
In the embodiment of the application, the first electronic device may determine whether two feature points match by calculating the Euclidean distance or the Hamming distance between their descriptors. For example, if the Euclidean distance between the descriptor of a feature point extracted from the image information and the descriptor of a feature point in the feature dictionary is smaller than a certain threshold, the first electronic device may determine that the two match and take that feature point as a target feature point.
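This extraction-and-matching step can be sketched with OpenCV as follows; the Hamming threshold, the dictionary file, and its storage format are assumptions, and a real system would use whatever dictionary was trained for the first electronic device.

```python
# Sketch of the feature-matching step: extract ORB feature points from the
# image captured by the second device's camera, then keep those whose
# descriptors fall within a Hamming-distance threshold of the dictionary.
import cv2
import numpy as np

HAMMING_THRESHOLD = 40  # assumed threshold; tuned per deployment

orb = cv2.ORB_create(nfeatures=1000)
dictionary_desc = np.load("first_device_dictionary.npy")  # uint8 ORB descriptors

def target_feature_points(image_gray):
    keypoints, desc = orb.detectAndCompute(image_gray, None)
    if desc is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.match(desc, dictionary_desc)
    # A feature point "hits" the dictionary when its best match is close enough.
    return [keypoints[m.queryIdx] for m in matches if m.distance < HAMMING_THRESHOLD]
```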
According to the target feature points obtained by matching, the first electronic device can determine the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device by using a geometric method or an optimization-based solver.
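One way to realize the geometric solve, assuming the 3D positions of the matched points on the first electronic device's body are known (for example from a device model, which is an assumption of this sketch), is a perspective-n-point solve:

```python
# Hedged PnP sketch: 2D target feature points in the second device's image
# plus their assumed 3D coordinates on the first device give the second pose
# (rotation R and translation t) in the camera's coordinate system.
import cv2
import numpy as np

def second_pose(points_3d, points_2d, camera_matrix):
    dist_coeffs = np.zeros(5)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed: too few reliable correspondences")
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the device in camera frame
    return R, tvec
```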
In another example in which the second pose is determined on the second electronic device side, after the first electronic device obtains permission to invoke the camera of the second electronic device, it may send a control instruction to the second electronic device, instructing it to calculate the second pose of the first electronic device in the coordinate system corresponding to its camera. After the second electronic device completes the calculation and obtains the second pose, it may send the information of the second pose to the first electronic device.
It should be noted that, the process of calculating the second pose by the second electronic device is similar to the process of calculating the second pose by the first electronic device, and reference may be made to the foregoing description.
In this embodiment of the present application, after determining a first pose of a camera of a second electronic device in an environment map and a second pose of the first electronic device in a coordinate system corresponding to the camera of the second electronic device, the first electronic device may generate a pose to be fused of the first electronic device according to the first pose and the second pose.
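Written as homogeneous transforms, this generation step reduces to a single composition; the sketch below assumes poses are stored as 4x4 matrices.

```python
# Minimal sketch: compose the first pose (camera in the environment map)
# with the second pose (first device in the camera's coordinate system)
# to get the pose to be fused (first device in the environment map).
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 transform from the R, t of the preceding steps."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def pose_to_fuse(T_map_camera, T_camera_device):
    return T_map_camera @ T_camera_device
```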
S505, the first electronic equipment fuses the initial pose and the pose to be fused to obtain the target pose of the first electronic equipment.
In the embodiment of the application, the initial pose is a pose obtained when the first electronic device performs self-positioning, and the pose to be fused is a pose calculated by combining with other cameras in the current environment. The first electronic equipment can obtain the target pose of the first electronic equipment by fusing the initial pose and the pose to be fused.
In one example, if the first electronic device is facing a weak texture area such as a white wall or the floor, the initial pose obtained by positioning on images captured by the first electronic device's own camera may be inaccurate. By using other cameras in the current environment to assist positioning and obtain the pose to be fused, the first electronic device can fuse the initial pose with the pose to be fused to obtain a more accurate target pose, improving positioning accuracy and robustness.
It should be noted that, depending on the algorithm actually used, the definition of the weak texture region may differ. For example, a region where the number of extractable feature points is smaller than a certain value may be regarded as a weak texture region; alternatively, the weak texture region may be determined from the image gradient, and a region whose mean gradient falls within a certain interval may be regarded as a weak texture region. The embodiment of the present application does not limit this.
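Both criteria are easy to sketch; the thresholds below are placeholders, since the application leaves them to the concrete algorithm.

```python
# Illustrative weak-texture test: few extractable feature points, or a mean
# gradient inside an assumed "flat" interval, marks a region as weak texture.
import cv2
import numpy as np

MIN_FEATURES = 50               # assumed feature-count threshold
GRADIENT_INTERVAL = (0.0, 4.0)  # assumed low-gradient interval

def is_weak_texture(image_gray):
    keypoints = cv2.ORB_create().detect(image_gray, None)
    gx = cv2.Sobel(image_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(image_gray, cv2.CV_32F, 0, 1)
    mean_grad = float(np.mean(cv2.magnitude(gx, gy)))
    return (len(keypoints) < MIN_FEATURES
            or GRADIENT_INTERVAL[0] <= mean_grad <= GRADIENT_INTERVAL[1])
```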
In one possible implementation manner of the embodiment of the present application, the second electronic device in the current environment may include a plurality of electronic devices, that is, there are a plurality of electronic devices in the current environment, each electronic device having a camera, and being capable of assisting in positioning the first electronic device. Therefore, when the first electronic equipment determines the target pose, each camera can be used for assisting in positioning, and the positioning accuracy and robustness are further improved.
In the embodiment of the application, for each second electronic device, the first electronic device may use the camera on that second electronic device for assisted positioning, obtaining one pose to be fused. Thus, a plurality of second electronic devices yield a plurality of poses to be fused. The specific process of obtaining a pose to be fused for each electronic device may refer to the description of the foregoing steps, and is not repeated here.
Then, the first electronic device can process the multiple poses to be fused to obtain a target pose to be fused. For example, the first electronic device may combine the multiple poses to be fused by weighted summation to obtain the target pose to be fused.
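A minimal sketch of such a weighted combination follows, assuming 4x4 pose matrices and per-camera weights (for example by match quality, an assumption here): translations are averaged directly, rotations via a weighted rotation mean. The same routine also covers the final two-pose fusion of S505.

```python
# Sketch of weighted pose fusion: translations are averaged with the given
# weights; rotations are combined with scipy's weighted rotation mean.
import numpy as np
from scipy.spatial.transform import Rotation

def fuse_poses(poses, weights):
    """poses: list of 4x4 transforms; weights: positive floats, same length."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    t = sum(wi * T[:3, 3] for wi, T in zip(w, poses))
    R = Rotation.from_matrix([T[:3, :3] for T in poses]).mean(weights=w)
    fused = np.eye(4)
    fused[:3, :3] = R.as_matrix()
    fused[:3, 3] = t
    return fused
```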
The first electronic equipment can fuse the initial pose and the target pose to be fused to obtain the target pose with higher accuracy.
For easy understanding, a description will be made below of a positioning method of an electronic device according to an embodiment of the present application in conjunction with a specific example.
Referring to fig. 6 (a), a flowchart of steps of another positioning method of an electronic device according to an embodiment of the present application is shown. Taking the first electronic device as a VR headset and the second electronic device as a mobile phone as an example, the positioning method of the electronic device comprises the following steps:
In S601a, the VR headset SLAM works, performs positioning and mapping, and obtains an initial pose.
In this step, the VR headset is configured with SLAM functionality, and when the VR headset is activated, the SLAM begins to operate and the VR headset can acquire an initial pose. The initial pose is obtained by the VR headset automatically locating in the current environment and constructing an environment map (i.e., SLAM map) of the current environment.
In S602a, the VR headset identifies an electronic device in the current environment that includes a camera.
In this step, the electronic device having a camera in the current environment may be a mobile phone in the environment. The VR headset can identify electronic devices in the environment that include cameras by means of deep learning or visual recognition.
In S603a, the VR headset obtains the rights of the environmental camera.
In this step, for the identified mobile phone, the VR headset may send a message to the mobile phone requesting permission to invoke the mobile phone's camera (an environmental camera) to assist in locating the VR headset.
In S604a, the VR headset records a first pose of an environmental camera in a SLAM map.
In this step, the VR headset may use a visual method: it captures an image of the environmental camera with its own camera and then determines the first pose of the environmental camera in the SLAM map from the captured image. Alternatively, the VR headset may use wireless positioning to determine the first pose of the environmental camera in the SLAM map. It should be noted that when the VR headset uses the visual method to determine the first pose of the environmental camera in the SLAM map, the environmental camera should be within the field of view of the VR headset.
In S605a, the VR headset obtains image information of an environmental camera.
In this step, the VR headset may send a control instruction to the mobile phone, instructing the mobile phone to invoke its camera to photograph the VR headset and obtain image information containing an image of the VR headset. The mobile phone sends the image information to the VR headset, and the subsequent processing is completed in the VR headset.
In S606a, the VR headset extracts feature points in the image information.
In this step, the VR headset may extract feature points from the received image information using any feature point extraction algorithm, for example ORB or FAST, and the feature descriptors may be BRIEF.
In S607a, feature points of the VR headset are collected offline, and a feature dictionary is trained for storage.
In this step, the algorithm and descriptor used to collect the feature points of the VR headset offline should remain consistent with the algorithm and descriptor employed in S606a. The offline collection of the VR headset's feature points is performed on images of the VR headset; these images may be captured by the mobile phone in the preceding steps and transmitted to the VR headset, or collected by other equipment and then transmitted to the VR headset. The purpose of training the feature dictionary is to match the feature points in a subsequent step and find target feature points that characterize the VR headset.
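A hedged offline-training sketch follows; it clusters ORB descriptors from a set of headset images into a small visual vocabulary. The cluster count and file names are assumptions, and casting binary ORB descriptors to float32 for k-means is a common workaround rather than something the application prescribes.

```python
# Offline dictionary training sketch: gather ORB descriptors from many
# images of the headset and cluster them into a visual vocabulary.
import cv2
import numpy as np

orb = cv2.ORB_create()
trainer = cv2.BOWKMeansTrainer(200)  # assumed vocabulary size

for path in ["headset_view_01.jpg", "headset_view_02.jpg"]:  # offline image set
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    if desc is not None:
        trainer.add(np.float32(desc))  # k-means needs float32 descriptors

vocabulary = trainer.cluster()  # cluster centers form the feature dictionary
np.save("vr_headset_dictionary.npy", vocabulary)
```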
In S608a, the VR headset matches the feature points.
In this step, the VR headset may match the feature points extracted in S606a with the feature dictionary trained in S607 a. For a matching hit feature point, it can be considered a target feature point for characterizing a VR headset.
In S609a, a second pose of the VR headset in the environmental camera coordinate system is calculated.
In this step, the VR headset may apply a geometric method or an optimization-based solver to the image of the VR headset captured by the environmental camera, thereby obtaining the second pose of the VR headset in the coordinate system of the environmental camera.
In S610a, the VR headset combines the first pose and the second pose, and calculates a pose to be fused.
In this step, the VR headset may process the first pose obtained in S604a and the second pose obtained in S609a to obtain the pose to be fused. The pose to be fused can be regarded as the pose of the VR headset calculated with the assistance of the environmental camera.
In S611a, the initial pose and the pose to be fused are fused to obtain the target pose of the VR headset.
In this step, the VR headset fuses the initial pose and the pose to be fused to obtain a more accurate target pose, improving positioning accuracy and robustness.
In the positioning method of the electronic device shown in fig. 6 (a), the electronic device with the environmental camera transmits the images acquired by that camera to the VR headset, and the VR headset performs the processing of each step. That is, each of the above steps, including S605a-S609a, is completed by the VR headset.
In the embodiment of the application, transmitting the images acquired by the camera in the environment to the VR headset, so that the current pose of the VR headset is calculated jointly, can improve the positioning accuracy and robustness of the VR headset. Because images are acquired by other cameras in the environment facing different directions, rather than only by the VR headset's own camera, this also solves the problem that the VR headset cannot be positioned when facing a textureless or weakly textured scene such as a white wall or the floor.
In a possible implementation manner of the embodiment of the present application, after the mobile phone camera obtains the image information of the VR headset, the pose calculation may also be performed directly on the mobile phone to obtain the second pose, and the mobile phone then transmits the calculated second pose to the VR headset for further processing. Taking the first electronic device as a VR headset and the second electronic device as a mobile phone as an example, referring to fig. 6 (b), a flowchart of steps of another positioning method of an electronic device according to an embodiment of the present application is shown, where the method includes the following steps:
In S601b, the VR headset SLAM works, positioning and mapping are performed, and an initial pose is obtained.
In this step, the VR headset is configured with SLAM functionality, and when the VR headset is activated, the SLAM begins to operate and the VR headset can acquire an initial pose. The initial pose is obtained by automatically positioning the VR headset in the current environment and constructing an environment map (SLAM map) of the current environment.
In S602b, the VR headset identifies an electronic device in the current environment that includes a camera.
In this step, the electronic device having a camera in the current environment may be a mobile phone in the environment. The VR headset can identify electronic devices in the environment that include cameras by means of deep learning or visual recognition.
In S603b, the VR headset obtains the rights of the environmental camera.
In this step, for the identified mobile phone, the VR headset may send a message to the mobile phone requesting permission to invoke the mobile phone's camera (an environmental camera) to assist in locating the VR headset.
In S604b, the VR headset records a first pose of the environmental camera in the SLAM map.
In this step, the VR headset may use a visual method: it captures an image of the environmental camera with its own camera and then determines the first pose of the environmental camera in the SLAM map from the captured image. Alternatively, the VR headset may use wireless positioning to determine the first pose of the environmental camera in the SLAM map. It should be noted that when the VR headset uses the visual method to determine the first pose of the environmental camera in the SLAM map, the environmental camera should be within the field of view of the VR headset.
In S605b, the mobile phone acquires image information of the environmental camera.
In this step, the VR headset may send a control instruction to the mobile phone, instructing the mobile phone to invoke its camera to photograph the VR headset and to calculate the second pose of the VR headset in the environmental camera's coordinate system.
In S606b, the mobile phone extracts feature points in the image information.
In this step, the mobile phone may extract feature points from the acquired image information using any feature point extraction algorithm, for example ORB or FAST, and the feature descriptors may be BRIEF.
In S607b, feature points of the VR headset are collected offline, and a feature dictionary is trained for storage.
In this step, the algorithm and descriptor used to acquire the feature points of the VR headset offline should remain consistent with the algorithm and descriptor employed in S606 b. The offline acquisition of feature points of the VR headset may be performed after an image of the VR headset is acquired. For example, multiple images of the VR headset may be captured using an environmental camera, and then feature point extraction and training may be performed on the multiple images by the phone to obtain a feature dictionary that may be used to characterize the VR headset. The purpose of training the feature dictionary is to match the feature points in a subsequent step to find target feature points that can be used to characterize the VR headset.
In S608b, the mobile phone matches the feature points.
In this step, the mobile phone may match the feature points extracted in S606b with the feature dictionary trained in S607 b. For a matching hit feature point, it can be considered a target feature point for characterizing a VR headset.
In S609b, a second pose of the VR headset in the environmental camera coordinate system is calculated.
In this step, the mobile phone may apply a geometric method or an optimization-based solver to the image of the VR headset captured by the environmental camera, thereby obtaining the second pose of the VR headset in the coordinate system of the environmental camera.
It should be noted that, in the positioning method of the electronic device shown in fig. 6 (b), the mobile phone calculates the second pose. That is, S605b-S609b are all completed in the mobile phone with the environmental camera.
After the second pose is calculated, the mobile phone transmits it to the VR headset, and the VR headset performs further processing.
In S610b, the VR headset combines the first pose and the second pose, and calculates a pose to be fused.
In this step, after receiving the information of the second pose transmitted by the mobile phone, the VR headset may process the second pose together with the first pose obtained in S604b to obtain the pose to be fused. The pose to be fused can be regarded as the pose of the VR headset calculated with the assistance of the environmental camera.
In S611b, the initial pose and the pose to be fused are fused to obtain the target pose of the VR headset.
In this step, the VR headset fuses the initial pose and the pose to be fused to obtain a more accurate target pose, improving positioning accuracy and robustness.
In this embodiment of the application, images are acquired by other cameras in the environment facing different directions, the pose of the VR headset in the environmental camera's coordinate system is calculated from those images, and the calculated pose is transmitted to the VR headset. This helps reduce the amount of data transmitted during positioning and reduces the consumption of the VR headset's resources when calculating the pose.
The embodiment of the application may divide the functional modules of the electronic device according to the above method example, for example, each functional module may be divided corresponding to each function, or one or more functions may be integrated into one functional module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation. Each functional module is described below as an example of division of each function.
Corresponding to the above embodiments, referring to fig. 7, there is shown a block diagram of a positioning device of an electronic device provided in an embodiment of the present application, where the device may be applied to the first electronic device in the foregoing embodiments, and the device may specifically include the following modules: an environment map construction module 701, an initial pose calculation module 702, an electronic device identification module 703, a pose to be fused calculation module 704 and a pose fusion module 705, wherein:
the environment map construction module is used for constructing an environment map of the current environment;
the initial pose calculation module is used for carrying out self-positioning based on the environment map to obtain the initial pose of the first electronic equipment in the environment map;
the electronic equipment identification module is used for identifying second electronic equipment with a camera in the current environment;
the pose to be fused calculation module is used for determining the pose to be fused of the first electronic device by adopting the second electronic device;
and the pose fusion module is used for fusing the initial pose and the pose to be fused to obtain the target pose of the first electronic equipment.
In this embodiment of the present application, the pose computing module to be fused is specifically configured to: determining a first pose of the camera in the environment map; determining a second pose of the first electronic device in a coordinate system corresponding to the camera; and generating the pose to be fused of the first electronic equipment according to the first pose and the second pose.
In this embodiment of the present application, the pose computing module to be fused is further specifically configured to: acquiring an image of the second electronic device; and determining a first pose of the camera in the environment map according to the image of the second electronic device.
In this embodiment of the present application, the pose computing module to be fused is further specifically configured to: acquiring a positioning signal sent by the second electronic equipment; and determining a first pose of the camera in the environment map according to the positioning signal.
In this embodiment of the present application, the pose computing module to be fused is further specifically configured to: controlling the camera to acquire image information comprising the first electronic equipment; and determining a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the image information.
In this embodiment of the present application, the pose computing module to be fused is further specifically configured to: extracting a plurality of characteristic points in the image information; matching the plurality of feature points with a preset feature dictionary to obtain target feature points for representing the first electronic equipment; and determining a second pose of the first electronic device in a coordinate system corresponding to the camera according to the target feature points, wherein the feature dictionary is obtained by extracting features of the first electronic device and/or the second electronic device on the image of the first electronic device.
In this embodiment of the present application, the pose computing module to be fused is further specifically configured to: controlling the second electronic equipment to calculate a second pose of the first electronic equipment in a coordinate system corresponding to the camera; and receiving the second pose sent by the second electronic equipment.
In the embodiment of the application, the second electronic device includes a plurality of electronic devices with the cameras, and the pose to be fused includes a plurality of poses determined by using the plurality of second electronic devices;
correspondingly, the pose fusion module is specifically used for: processing the multiple poses to be fused to obtain a target pose to be fused; and fusing the initial pose and the target pose to be fused to obtain the target pose of the first electronic equipment.
It should be noted that, all relevant contents of each step related to the above method embodiment may be cited to the functional description of the corresponding functional module, which is not described herein.
The embodiment of the application also provides an electronic device, which may be the first electronic device in the foregoing embodiments, where the electronic device includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the positioning method of the electronic device in the foregoing embodiments is implemented.
Embodiments of the present application also provide a computer storage medium having stored therein computer instructions that, when executed on an electronic device, cause the electronic device to perform the above-described related method steps to implement the positioning method of the electronic device in the above-described embodiments.
Embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to perform the above-mentioned related steps to implement the positioning method of the electronic device in the above-mentioned embodiments.
The embodiment of the application also provides a positioning system, which comprises the first electronic device and the second electronic device in each embodiment.
The embodiment of the application also provides a chip, which comprises a processor, wherein the processor can be a general-purpose processor or a special-purpose processor. The processor is configured to support the electronic device to perform the related steps, so as to implement the positioning method of the electronic device in each embodiment.
Optionally, the chip further includes a transceiver, where the transceiver is configured to receive control of the processor, and is configured to support the electronic device to perform the related steps, so as to implement the positioning method of the electronic device in the foregoing embodiments.
Optionally, the chip may further comprise a storage medium.
It should be noted that the chip may be implemented using the following circuits or devices: one or more field programmable gate arrays (field programmable gate array, FPGA), programmable logic devices (programmable logic device, PLD), controllers, state machines, gate logic, discrete hardware components, any other suitable circuit or combination of circuits capable of performing the various functions described throughout this application.
Finally, it should be noted that: the foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application.

Claims (11)

1. A method for locating an electronic device, comprising:
the first electronic equipment constructs an environment map of the current environment;
the first electronic equipment performs self-positioning based on the environment map, and obtains an initial pose of the first electronic equipment in the environment map;
the first electronic device identifies a second electronic device with a camera in the current environment;
The first electronic device determines a first pose of the camera in the environment map;
the first electronic equipment determines a second pose of the first electronic equipment in a coordinate system corresponding to the camera;
the first electronic equipment generates a pose to be fused of the first electronic equipment according to the first pose and the second pose;
and the first electronic equipment fuses the initial pose and the pose to be fused to obtain the target pose of the first electronic equipment.
2. The method of claim 1, wherein the first electronic device determining a first pose of the camera in the environment map comprises:
the first electronic device collects images of the second electronic device;
and the first electronic equipment determines a first pose of the camera in the environment map according to the image of the second electronic equipment.
3. The method of claim 1, wherein the first electronic device determining a first pose of the camera in the environment map comprises:
the first electronic device acquires a positioning signal sent by the second electronic device;
And the first electronic equipment determines a first pose of the camera in the environment map according to the positioning signal.
4. A method according to any of claims 1-3, wherein the first electronic device determining a second pose of the first electronic device in a coordinate system corresponding to the camera comprises:
the first electronic equipment controls the camera to acquire image information comprising the first electronic equipment;
and the first electronic equipment determines a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the image information.
5. The method of claim 4, wherein the determining, by the first electronic device, a second pose of the first electronic device in a coordinate system corresponding to the camera according to the image information comprises:
the first electronic device extracts a plurality of feature points in the image information;
the first electronic device matches the plurality of feature points with a preset feature dictionary to obtain target feature points used for representing the first electronic device, and the feature dictionary is obtained by extracting features of the first electronic device and/or the second electronic device from images of the first electronic device;
And the first electronic equipment determines a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the target feature points.
6. A method according to any of claims 1-3, wherein the first electronic device determining a second pose of the first electronic device in a coordinate system corresponding to the camera comprises:
the first electronic equipment controls the second electronic equipment to calculate a second pose of the first electronic equipment in a coordinate system corresponding to the camera;
the first electronic device receives the second pose sent by the second electronic device.
7. The method of any of claims 1-3 or 5, wherein the second electronic device comprises a plurality of electronic devices having the camera, the pose to be fused comprising a plurality of poses determined using a plurality of second electronic devices;
correspondingly, the first electronic device fuses the initial pose and the pose to be fused to obtain a target pose of the first electronic device, which comprises the following steps:
the first electronic equipment processes the multiple poses to be fused to obtain a target pose to be fused;
And the first electronic equipment fuses the initial pose and the target pose to be fused to obtain the target pose of the first electronic equipment.
8. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the positioning method of the electronic device according to any of claims 1-7 when executing the computer program.
9. A positioning system comprising a first electronic device as claimed in any one of claims 1-7 and a second electronic device.
10. A computer storage medium comprising computer instructions which, when run on an electronic device, perform the method of positioning an electronic device according to any of claims 1-7.
11. A computer program product, characterized in that, when the computer program product is run on a computer, the computer performs the positioning method of an electronic device according to any of claims 1-7.
GR01 Patent grant