CN111627114A - Indoor visual navigation method, device and system and electronic equipment - Google Patents

Indoor visual navigation method, device and system and electronic equipment

Info

Publication number
CN111627114A
CN111627114A (application number CN202010292954.XA)
Authority
CN
China
Prior art keywords
indoor
coordinate system
image
camera pose
mobile equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010292954.XA
Other languages
Chinese (zh)
Inventor
王金戈
谢航
庹东成
陈南
李正权
刘诗文
刘骁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN202010292954.XA
Publication of CN111627114A
Priority to PCT/CN2020/119479 (published as WO2021208372A1)
Priority to JP2022566506A (published as JP2023509099A)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose


Abstract

The invention provides an indoor visual navigation method, device and system and electronic equipment. A mobile device uploads a collected indoor image to be positioned to a server; the server determines the camera pose of the mobile device at the moment the indoor image was acquired and issues the camera pose corresponding to the indoor image back to the mobile device. The mobile device then establishes an AR coordinate system aligned with the world coordinate system based on the camera pose corresponding to the indoor image, plans the shortest route in a pre-imported indoor topological map based on destination information set by a user, and finally displays the current preview image acquired by the mobile device on its interface, superimposing on that preview image, based on the AR coordinate system and the shortest route, a three-dimensional identifier indicating the route traveling direction.

Description

Indoor visual navigation method, device and system and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an indoor visual navigation method, device and system and electronic equipment.
Background
Electronic map navigation has become the way-finding method people mainly rely on when going out. However, existing navigation technology mainly combines GPS technology to perform outdoor navigation. When a user is indoors, for example in a shopping mall, the user can only learn the position of a desired shop (the destination) from a floor plan provided at the entrance of the mall. As the user's position changes on the way to the destination, the user often cannot clearly determine a navigation route from the current position to the destination, and therefore usually spends much time and effort finding the way.
Disclosure of Invention
In view of the above, the present invention is directed to an indoor visual navigation method, apparatus, system and electronic device, which can provide an indoor navigation service for a user and guide the user to conveniently arrive at a destination.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, an embodiment of the present invention provides an indoor visual navigation method, where the method is performed by a mobile device, and the method includes: if an indoor image to be positioned is acquired, uploading the indoor image to a server so that the server determines the camera pose of the mobile device when the indoor image is acquired; receiving the camera pose corresponding to the indoor image returned by the server, and establishing an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image; planning a shortest route in a pre-imported indoor topological map based on destination information set by a user; and displaying the current preview image acquired by the mobile device on an interface of the mobile device, and superimposing a three-dimensional identifier for indicating the route traveling direction on the current preview image based on the AR coordinate system and the shortest route.
Further, the step of establishing an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image includes: establishing an initial AR coordinate system; adjusting the initial AR coordinate system based on a camera pose corresponding to the indoor image to align the AR coordinate system with a world coordinate system.
Further, the step of planning the shortest route in the pre-imported indoor topology map based on the destination information set by the user includes: and planning the shortest route in the pre-imported indoor topological map by utilizing a path planning algorithm based on the destination information set by the user.
Further, the step of superimposing a three-dimensional identifier indicating the route traveling direction on the current preview image includes: detecting a ground plane on the current preview image; determining the three-dimensional coordinates of the shortest route in the AR coordinate system, and generating a three-dimensional identifier for indicating the route traveling direction based on the determined three-dimensional coordinates; and drawing the three-dimensional identifier on the ground plane of the current preview image.
Further, the method further comprises: and if the current camera pose sent by the server is received in the navigation process, correcting the AR coordinate system based on the current camera pose so as to keep the corrected AR coordinate system aligned with the world coordinate system.
In a second aspect, an embodiment of the present invention further provides an indoor visual navigation method, where the method is performed by a server, and the method includes: if an indoor image to be positioned uploaded by a mobile device is received, determining the camera pose of the mobile device when the mobile device acquires the indoor image; issuing the camera pose corresponding to the indoor image to the mobile device so that the mobile device establishes an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image, and superimposes a three-dimensional identifier for indicating the route traveling direction on a current preview image acquired by the mobile device based on the AR coordinate system and a shortest route; the shortest route is planned in an indoor topological map by the mobile device based on destination information set by a user.
Further, the step of determining a camera pose at which the mobile device captures the indoor image comprises: carrying out feature matching on the indoor image and a visual map in a pre-established visual map library to obtain a camera pose when the mobile equipment acquires the indoor image; wherein the visual map is characterized by a sparse point cloud model of an indoor scene.
Further, the process of establishing the visual map library comprises the following steps: acquiring a plurality of scene images collected by the mobile device in an indoor scene; and performing three-dimensional reconstruction on the plurality of scene images based on a Structure-from-Motion (SFM) algorithm to obtain a visual map library containing sparse point cloud models corresponding to the plurality of scene images.
Further, the method further comprises: aligning the visual map with a pre-imported indoor floor plan.
Further, the method further comprises: acquiring, at regular intervals, the current preview image collected by the mobile device during navigation, and determining the current camera pose when the mobile device collected the current preview image; and issuing the current camera pose to the mobile device so that the mobile device corrects the AR coordinate system based on the current camera pose.
In a third aspect, an embodiment of the present invention provides an indoor visual navigation apparatus, where the apparatus is disposed on a mobile device side, and the apparatus includes: the image uploading module is used for uploading the indoor image to a server if the indoor image to be positioned is acquired, so that the server determines the camera pose of the mobile equipment when the indoor image is acquired; the coordinate system establishing module is used for receiving the camera pose corresponding to the indoor image returned by the server and establishing an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image; the route planning module is used for planning the shortest route in a pre-imported indoor topological map based on destination information set by a user; and the navigation display module is used for displaying the current preview image acquired by the mobile equipment on an interface of the mobile equipment and superposing a three-dimensional identifier for indicating the route traveling direction on the current preview image based on the AR coordinate system and the shortest route.
In a fourth aspect, an embodiment of the present invention provides an indoor visual navigation device, where the device is disposed on a server side, and the device includes: the pose determining module is used for determining the camera pose when the mobile equipment acquires the indoor image if the indoor image to be positioned uploaded by the mobile equipment is received; the device navigation module is used for issuing the camera pose corresponding to the indoor image to the mobile device so that the mobile device establishes an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image, and superimposes a three-dimensional identifier for indicating a route traveling direction on a current preview image acquired by the mobile device based on the AR coordinate system and a shortest route; the shortest route is obtained by planning the mobile equipment in an indoor topological map according to destination information set by a user.
In a fifth aspect, an embodiment of the present invention provides an indoor visual navigation system, where the system includes a mobile device and a server that are connected in communication; wherein the mobile device is configured to perform the method according to any of the first aspect and the server is configured to perform the method according to any of the second aspect.
In a sixth aspect, an embodiment of the present invention provides an electronic device, including: a processor and a storage device; the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any of the first aspects, or the method of any of the second aspects.
In a seventh aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method in any one of the above first aspects or the steps of the method in any one of the above second aspects.
Embodiments of the invention provide an indoor visual navigation method, device, system and electronic equipment. A collected indoor image to be positioned is uploaded to a server by the mobile device; the server determines the camera pose of the mobile device at the moment the indoor image was collected and issues the camera pose corresponding to the indoor image to the mobile device. The mobile device then establishes an AR coordinate system aligned with the world coordinate system based on that camera pose, plans the shortest route in a pre-imported indoor topological map based on destination information set by the user, displays the current preview image collected by the mobile device on its interface, and superimposes on the preview image, based on the AR coordinate system and the shortest route, a three-dimensional identifier indicating the route traveling direction, thereby realizing indoor visual navigation. The method provided by this embodiment can guide the user to the destination indoors along the shortest route in an AR manner, improving the user experience.
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the embodiments of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flow chart of an indoor visual navigation method provided by an embodiment of the invention;
FIG. 3 is a flow chart of another indoor visual navigation method provided by an embodiment of the invention;
FIG. 4 is a flow chart of another indoor visual navigation method provided by an embodiment of the invention;
fig. 5 shows a block diagram of an indoor visual navigation apparatus provided in an embodiment of the present invention;
fig. 6 shows a block diagram of another indoor visual navigation device provided by the embodiment of the invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, not all, embodiments of the present invention.
In view of the problem that a user cannot use a mobile terminal such as a mobile phone to perform indoor navigation in the prior art, embodiments of the present invention provide an indoor visual navigation method, apparatus, system and electronic device.
The first embodiment is as follows:
first, an example electronic device 100 for implementing an indoor visual navigation method, apparatus, system and electronic device according to an embodiment of the present invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA) or a Programmable Logic Array (PLA). The processor 102 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), another form of processing unit having data processing capabilities and/or instruction execution capabilities, or a combination of several of these, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
The exemplary electronic device for implementing the indoor visual navigation method, apparatus and system according to embodiments of the present invention may be implemented as a smart terminal such as a smartphone, a tablet, a wearable electronic device, a computer or a server.
Example two:
In this embodiment, an indoor visual navigation method on the mobile device side is provided. The method may be executed by a mobile device such as a mobile phone, a tablet computer or a wearable electronic device. Referring to the flow chart of the indoor visual navigation method shown in fig. 2, the method mainly includes the following steps S202 to S208:
step S202, if the indoor image to be positioned is collected, the indoor image is uploaded to a server, so that the server can determine the camera pose when the mobile equipment collects the indoor image.
In the following, the mobile device is taken to be a mobile phone. A user in an indoor scene may not know his or her current position, let alone how to get from it to a destination. The user may therefore first take an indoor image of the current scene with the mobile phone and upload it to the server, which performs visual positioning based on the indoor image.
And step S204, receiving the camera pose corresponding to the indoor image returned by the server, and establishing an AR coordinate system aligned with the world coordinate system based on the camera pose corresponding to the indoor image.
AR (Augmented Reality) technology fuses virtual information with the real world: by superimposing virtual information (such as a virtual image) on a real scene, it can present the virtual image and the real scene image in the same picture or the same space, so that the user experiences a scene combining the virtual and the real. To provide a better navigation experience for the user, this embodiment is implemented by AR navigation. Since a virtual graphic needs to be displayed in a real scene, an AR coordinate system must be established and then adjusted based on the camera pose corresponding to the indoor image so that it is aligned with the world coordinate system. The AR coordinate system may be established using an existing SLAM (Simultaneous Localization and Mapping) algorithm, and details are not repeated herein.
In step S206, the shortest route is planned in the indoor topological map imported in advance based on the destination information set by the user.
The user may input a destination name in the mobile phone APP, or directly click the destination on an indoor plan presented by the APP, which is not limited herein. After the mobile phone acquires the destination information set by the user, route planning can be performed in the pre-imported indoor topological map. In the indoor topological map, each shop may be a node, and each indoor route may be an edge. In practical applications, venues that need to provide indoor navigation services, such as shopping malls, libraries and museums, may provide indoor topological maps to the server in advance, or the server may convert them directly from indoor floor plans.
Step S208, displaying the current preview image acquired by the mobile device on an interface of the mobile device, and superimposing a three-dimensional identifier for indicating the route traveling direction on the current preview image based on the AR coordinate system and the shortest route.
The camera of the mobile device is kept in a shooting state and continuously acquires images, which are displayed on the screen of the mobile device; this may also be described as the camera being in image preview mode. When the user navigates holding the mobile phone, the phone camera is in preview mode, and through the phone interface the user sees the superimposed three-dimensional identifier indicating the route traveling direction marked in the indoor scene (such as an arrow drawn on the ground).
Through the indoor visual navigation method provided by this embodiment, the mobile device can guide the user to the destination indoors along the shortest route in an AR manner, improving the user experience.
When establishing the AR coordinate system aligned with the world coordinate system based on the camera pose corresponding to the indoor image, an initial AR coordinate system may be established first, and then the initial AR coordinate system may be adjusted based on the camera pose corresponding to the indoor image to align the AR coordinate system with the world coordinate system.
An AR system may be provided within the phone, typically a visual-inertial odometry (VIO) system including loop detection; the AR coordinate system may accordingly also be called the VIO coordinate system. The AR system may be the built-in ARKit on iOS or ARCore on Android, or any third-party system capable of implementing the navigation function of the mobile device, which is not limited herein.
The AR coordinate system established in the initial stage usually takes the camera position at the first acquired image frame as the coordinate origin. For the virtual image drawn in the AR coordinate system to fuse well with the real scene image in the world coordinate system, the initial AR coordinate system must be adjusted using the camera pose corresponding to the indoor image; the phone can then smoothly convert the planned path into the world coordinate system, and the virtual image and the real scene image are better combined.
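As a minimal illustration of this alignment step (not the patent's implementation), the adjustment can be expressed as computing the rigid transform that maps the initial AR (VIO) frame into the world frame, given the camera's pose in both frames. The sketch below works in 2D for brevity; the function names and numbers are illustrative assumptions:

```python
import math

def compose(a, b):
    """Compose two 2D rigid poses a then b, each given as (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + math.cos(at) * bx - math.sin(at) * by,
            ay + math.sin(at) * bx + math.cos(at) * by,
            at + bt)

def invert(p):
    """Inverse of a 2D rigid pose."""
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-c * x - s * y, s * x - c * y, -t)

def align_ar_to_world(cam_in_ar, cam_in_world):
    """Correction T_world_ar = T_world_cam composed with inv(T_ar_cam):
    applying it to coordinates in the AR frame yields world coordinates."""
    return compose(cam_in_world, invert(cam_in_ar))

# The VIO origin is near the first frame; the server-returned pose says the
# camera is actually at (6, 3) in the world, facing along the world x-axis.
T = align_ar_to_world((1.0, 0.0, 0.0), (6.0, 3.0, 0.0))
print(T)  # a pure translation of (5, 3)
```

Once `T` is known, every planned waypoint expressed in world coordinates can be mapped into the AR frame (or vice versa) with the same composition, which is what lets the phone draw the route in the live preview.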
In practical applications, this embodiment provides a specific implementation of planning the shortest route in the pre-imported indoor topological map based on the destination information set by the user: the shortest route is planned in the pre-imported indoor topological map with a path planning algorithm, based on the destination information set by the user. The path planning algorithm may be the A* algorithm, or of course another path planning algorithm, which is not limited herein.
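The planning step can be illustrated with a small A* search over a toy topological map, where shops and junctions are nodes and walkable routes are edges. The node names, coordinates and edges below are made up for illustration only:

```python
import heapq
import math

# Hypothetical indoor topological map: node -> floor-plan position (metres).
NODES = {
    "entrance": (0.0, 0.0),
    "atrium":   (10.0, 0.0),
    "shop_a":   (10.0, 8.0),
    "shop_b":   (20.0, 0.0),
}
# Walkable edges between nodes.
EDGES = {
    "entrance": ["atrium"],
    "atrium":   ["entrance", "shop_a", "shop_b"],
    "shop_a":   ["atrium"],
    "shop_b":   ["atrium"],
}

def dist(a, b):
    (x1, y1), (x2, y2) = NODES[a], NODES[b]
    return math.hypot(x2 - x1, y2 - y1)

def astar(start, goal):
    """A* search: straight-line distance to the goal as the heuristic."""
    frontier = [(0.0, start, [start])]   # (f = g + h, node, path so far)
    best_g = {start: 0.0}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        g = best_g[node]
        for nxt in EDGES[node]:
            ng = g + dist(node, nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier,
                               (ng + dist(nxt, goal), nxt, path + [nxt]))
    return None

print(astar("entrance", "shop_a"))  # ['entrance', 'atrium', 'shop_a']
```

The returned node sequence is the shortest route whose waypoints the mobile device would then convert into AR coordinates for display.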
In order to provide the user with clear navigation directions, this embodiment may perform step S208 mainly through the following steps: (1) detecting a ground plane on the current preview image; (2) determining the three-dimensional coordinates of the shortest route in the AR coordinate system, and generating a three-dimensional identifier for indicating the route traveling direction based on the determined three-dimensional coordinates; (3) drawing the three-dimensional identifier on the ground plane of the current preview image. For example, the three-dimensional identifier may be a virtual arrow, a dashed path marker, or the like. The user walks in the direction indicated by the three-dimensional identifier and thus finally reaches the destination by the shortest path.
During AR navigation, the navigation route may gradually become inaccurate due to errors in the initial visual positioning pose, drift of the AR system itself, and the like; for example, the navigation route may begin to intersect with the building. Therefore, to keep navigation accurate throughout, in this embodiment, if the mobile device receives a current camera pose issued by the server during navigation, it corrects the AR coordinate system based on that pose so that the corrected AR coordinate system remains aligned with the world coordinate system. That is, the server can periodically acquire the current preview image collected by the mobile device during navigation, determine the current camera pose at which it was collected, and issue that pose to the mobile device; the mobile device can then correct the AR coordinate system based on the current camera pose, improving navigation accuracy.
Example three:
The present embodiment provides an indoor visual navigation method on the server side, which may be executed by, for example, a cloud server. Referring to the flow chart of the indoor visual navigation method shown in fig. 3, the method mainly includes the following steps S302 to S304:
step S302, if an indoor image to be positioned uploaded by the mobile equipment is received, determining the camera pose of the mobile equipment when the mobile equipment collects the indoor image. Among other things, the camera pose may include XY coordinates and camera direction orientation.
Step S304, issuing the camera pose corresponding to the indoor image to the mobile device so that the mobile device establishes an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image, and superimposes a three-dimensional identifier for indicating the route traveling direction on a current preview image acquired by the mobile device based on the AR coordinate system and the shortest route; the shortest route is planned in the indoor topological map by the mobile device based on destination information set by the user.
According to the indoor visual navigation method provided by this embodiment, the server undertakes the computation-heavy steps required during navigation, such as camera pose calculation, and the mobile device then guides the user to the destination indoors in an AR manner quickly and conveniently, improving the user experience.
In this embodiment, time- and space-consuming computation can be executed on the server side, such as determining the camera pose at which the mobile device acquired the indoor image. In a specific implementation, the server can perform feature matching between the indoor image and a visual map in a pre-established visual map library to obtain the camera pose at which the mobile device acquired the indoor image; the visual map is characterized by a sparse point cloud model of the indoor scene, which can be understood as a large collection of visual features of the scene. In practical application, a PnP (Perspective-n-Point) problem can be solved from the 3D-2D matching relation, that is, the camera pose is recovered from correspondences between 3D map points and 2D image feature points.
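The PnP step can be understood through the reprojection error it minimises: given a candidate pose, 3D map points are projected through a pinhole camera model and compared against the matched 2D feature points. The sketch below only scores a pose; a real system would use a full solver such as OpenCV's solvePnP, and the intrinsics here are illustrative assumptions:

```python
import math

def project(point3d, pose, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D point into the image, given camera pose
    (R, t): R a 3x3 row-major rotation, t a translation (world to camera)."""
    R, t = pose
    X = [sum(R[i][j] * point3d[j] for j in range(3)) + t[i] for i in range(3)]
    return (fx * X[0] / X[2] + cx, fy * X[1] / X[2] + cy)

def reprojection_error(matches, pose):
    """Mean pixel error of 3D-2D matches under a candidate pose; a PnP
    solver searches for the pose minimising exactly this quantity."""
    errs = []
    for p3d, p2d in matches:
        u, v = project(p3d, pose)
        errs.append(math.hypot(u - p2d[0], v - p2d[1]))
    return sum(errs) / len(errs)

# Identity pose, points straight ahead of the camera: zero error expected.
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
matches = [((0.0, 0.0, 5.0), (320.0, 240.0)),
           ((1.0, 0.0, 5.0), (420.0, 240.0))]
print(reprojection_error(matches, (I, (0.0, 0.0, 0.0))))  # 0.0
```

A pose whose reprojection error is small is accepted as the camera pose for the query image, which is the criterion the fine-positioning stage below applies.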
This embodiment provides a specific implementation of performing feature matching between the indoor image and a visual map in a pre-established visual map library to obtain the camera pose at which the mobile device acquired the indoor image. The implementation mainly comprises two stages, coarse positioning and fine positioning, which are introduced respectively as follows:
In the coarse positioning stage, a global image descriptor of the indoor image is computed using a deep hashing algorithm, and the k key frames whose global descriptors are most similar are retrieved from the visual map library to obtain key frame IDs. The stored key frame information, including the pose of each key frame, its local feature points, local descriptors, and the world-coordinate positions of the corresponding map points, is then looked up by key frame ID. The k key frames are clustered by pose, grouping key frames with similar positions into one cluster. For each cluster, its cluster center provides a preliminary coarse positioning result for the subsequent fine positioning.
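The retrieval and clustering in the coarse positioning stage can be sketched as below. The hash values, the greedy clustering rule, and the `radius` parameter are assumptions for illustration; the patent only specifies deep hashing, k-nearest retrieval, and clustering of key frames by pose:

```python
import heapq

def hamming(a, b):
    """Hamming distance between two binary hash codes stored as ints."""
    return bin(a ^ b).count("1")

def top_k_keyframes(query_hash, keyframes, k=4):
    """Retrieve the k key frames most similar to the query's global hash.

    keyframes: {keyframe_id: (hash_code, (x, y))} with a 2D key-frame position.
    """
    return heapq.nsmallest(
        k, keyframes.items(), key=lambda kv: hamming(query_hash, kv[1][0]))

def cluster_by_position(frames, radius=2.0):
    """Greedily group retrieved key frames whose positions lie within `radius`
    of a cluster's first member; each cluster seeds one fine-positioning pass."""
    clusters = []
    for fid, (h, pos) in frames:
        for c in clusters:
            sx, sy = c[0][1][1]
            if (pos[0] - sx) ** 2 + (pos[1] - sy) ** 2 <= radius ** 2:
                c.append((fid, (h, pos)))
                break
        else:
            clusters.append([(fid, (h, pos))])
    return clusters

# Toy visual map library: hash codes and key-frame positions are invented.
db = {
    "kf1": (0b10110010, (0.0, 0.0)),
    "kf2": (0b10110011, (0.5, 0.5)),    # near kf1, similar hash
    "kf3": (0b01001100, (10.0, 10.0)),  # far away, dissimilar hash
    "kf4": (0b10100010, (9.5, 9.8)),    # similar hash but far away
}
hits = top_k_keyframes(0b10110010, db, k=3)
clusters = cluster_by_position(hits)
assert hits[0][0] == "kf1"    # the exact hash match ranks first
assert len(clusters) == 2     # two spatial groups among the retrieved frames
```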
In the fine positioning stage, each cluster is traversed in turn. Local feature points are extracted from the indoor image and local descriptors are computed, then matched against the local features of all key frames in the cluster. The 3D map points corresponding to the successfully matched feature points are retrieved, and if the number of successfully matched 3D-2D point pairs exceeds a preset threshold (e.g., greater than 5), the PnP problem is solved to obtain the camera pose corresponding to the indoor image. The pose obtained by PnP can then be used as the initial value of a Bundle Adjustment graph optimization problem, which refines the pose corresponding to the indoor image by minimizing the reprojection error. After the pose is optimized, the edges with larger reprojection errors are removed, the Bundle Adjustment problem is rebuilt from the remaining edges, and a more accurate pose for the indoor image is finally obtained. If, during this process, the number of 3D-2D point pairs is too small or the reprojection error after optimization remains too large, the key frames in the current cluster are considered mismatched and the cluster is abandoned. If the reprojection error after optimization is small, the pose solution is considered correct, the result is output directly, and the remaining clusters are not processed.
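The inlier/outlier split and the accept-or-abandon decision per cluster can be sketched as follows. The concrete thresholds (`threshold`, `min_matches`, `max_error`) and the flat edge representation are hypothetical; the patent only states "greater than a preset number (such as greater than 5)" and "larger reprojection error":

```python
def prune_outlier_edges(residuals, threshold=2.0):
    """Split Bundle Adjustment edges into inliers and outliers by reprojection
    error; the optimization is then rebuilt from the inliers only.

    residuals: {edge_id: reprojection_error_in_pixels} after a first
    optimization pass (an illustrative stand-in for the graph edges).
    """
    inliers = {e: r for e, r in residuals.items() if r <= threshold}
    outliers = {e: r for e, r in residuals.items() if r > threshold}
    return inliers, outliers

def accept_pose(n_matches, mean_error, min_matches=5, max_error=1.5):
    """Decision rule from the text: reject a cluster when too few 3D-2D
    matches survive or the post-optimization error stays too large."""
    return n_matches > min_matches and mean_error <= max_error

# Toy residuals: one edge (e3) is a gross outlier and gets pruned.
edges = {"e1": 0.4, "e2": 0.9, "e3": 7.5, "e4": 1.1,
         "e5": 0.6, "e6": 0.8, "e7": 0.7}
inl, outl = prune_outlier_edges(edges)
assert set(outl) == {"e3"}
assert accept_pose(len(inl), sum(inl.values()) / len(inl))
assert not accept_pose(3, 0.5)  # too few matches -> cluster abandoned
```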
After the server completes the two-step coarse-to-fine positioning, an accurate camera pose corresponding to the indoor image is obtained. Of course, the above is only one pose determination method provided in this embodiment; any other method for determining the camera pose may also be used, which is not limited herein.
In practical applications, the server also aligns the visual map with a pre-imported indoor floor plan, so that the planned path can be smoothly transformed into the world coordinate system on the mobile device for indoor navigation. The indoor floor plan can be understood as a building structure diagram; it may, for example, be uploaded to the server in advance by a shop.
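One way this map/floor-plan alignment could be realized is a least-squares 2D similarity transform fitted from corresponding control points; the patent does not specify a fitting method, and the control points below are invented. Representing 2D points as complex numbers reduces the fit to solving z' = a·z + b:

```python
def fit_similarity(src_pts, dst_pts):
    """Fit a 2D similarity transform (scale + rotation + translation) mapping
    visual-map coordinates onto floor-plan coordinates, from two or more
    corresponding control points, via least squares on z' = a*z + b."""
    src = [complex(x, y) for x, y in src_pts]
    dst = [complex(x, y) for x, y in dst_pts]
    n = len(src)
    sz, sw = sum(src), sum(dst)
    szz = sum(z.conjugate() * z for z in src)
    szw = sum(z.conjugate() * w for z, w in zip(src, dst))
    # Normal equations of the complex least-squares problem.
    a = (n * szw - sz.conjugate() * sw) / (n * szz - sz.conjugate() * sz)
    b = (sw - a * sz) / n
    return a, b

def apply_similarity(a, b, pt):
    """Map a single visual-map point into floor-plan coordinates."""
    z = a * complex(*pt) + b
    return (z.real, z.imag)

# Hypothetical control points: a unit square in map coordinates that appears
# scaled x2 and shifted on the floor plan.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 5), (12, 5), (12, 7), (10, 7)]
a, b = fit_similarity(src, dst)
x, y = apply_similarity(a, b, (0.5, 0.5))
assert abs(x - 11.0) < 1e-9 and abs(y - 6.0) < 1e-9  # square center maps correctly
```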
The server can pre-construct the visual map library; the establishment process comprises the following steps: (1) Acquiring a plurality of scene images collected by the mobile device in the indoor scene. In practical application, a large number of images of each indoor scene can be collected in advance so as to construct a relatively accurate sparse point cloud model. (2) Performing three-dimensional reconstruction on the scene images based on an SFM (Structure from Motion) algorithm to obtain a visual map library containing the sparse point cloud models corresponding to the scene images. The SFM mapping process may be implemented with an open-source toolkit such as COLMAP, Theia, VisualSFM or OpenMVG, which is not limited herein. In practical application, to save server disk space, the visual maps in the library can be compressed: for example, the original features in a visual map can be encoded with methods such as product quantization, and only the encoded result is stored rather than the original features, which greatly reduces the size of the map. When the server uses the visual map library for visual positioning, the encoded visual features are decoded, and the decoded features are used for matching and pose estimation.
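The product-quantization compression mentioned above can be sketched as below. The segment count, codebooks, and feature values are toy assumptions; real systems learn the codebooks (e.g., by k-means) over high-dimensional descriptors:

```python
def pq_encode(vec, codebooks):
    """Product quantization: split the feature vector into sub-vectors and
    replace each with the index of its nearest codeword. Only the small tuple
    of indices is stored in the visual map, not the raw feature.

    codebooks: one list of codewords (sub-vectors) per segment.
    """
    m = len(codebooks)
    d = len(vec) // m
    code = []
    for i, book in enumerate(codebooks):
        sub = vec[i * d:(i + 1) * d]
        dists = [sum((a - b) ** 2 for a, b in zip(sub, cw)) for cw in book]
        code.append(dists.index(min(dists)))
    return tuple(code)

def pq_decode(code, codebooks):
    """Reconstruct an approximate feature from the stored codeword indices,
    as done before matching and pose estimation at query time."""
    out = []
    for idx, book in zip(code, codebooks):
        out.extend(book[idx])
    return out

# Two segments of length 2 with four codewords each (toy codebooks).
books = [
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],
    [(0.5, 0.5), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)],
]
feature = [0.9, 0.1, 1.9, 0.2]
code = pq_encode(feature, books)
assert code == (1, 1)                      # nearest codewords: (1,0) and (2,0)
approx = pq_decode(code, books)
assert approx == [1.0, 0.0, 2.0, 0.0]      # lossy but compact reconstruction
```

Storing two small indices instead of four floats is the space saving the text describes, at the cost of a bounded quantization error during matching.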
Through the method, the server can bear links of visual map library construction and camera pose determination with large calculation amount, so that the hardware requirement of the mobile equipment is reduced, and the mobile equipment can provide navigation service for users more quickly based on the calculation result of the server.
In addition, considering that in the AR navigation process the navigation route may gradually become inaccurate due to errors in the initial visual positioning pose, drift of the AR system, and the like, the server can also periodically acquire the current preview image collected by the mobile device during navigation and determine the current camera pose at which the mobile device acquired it; the current camera pose is then issued to the mobile device so that the mobile device corrects the AR coordinate system based on it.
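The drift correction amounts to re-estimating the world-from-AR transform whenever a fresh server pose arrives. A 2D rigid-pose sketch of that idea follows; the pose representation (theta, tx, ty) and the numeric values are illustrative assumptions, and a real AR system would work with full 3D transforms:

```python
import math

def inv(p):
    """Invert a 2D rigid pose (theta, tx, ty)."""
    th, x, y = p
    c, s = math.cos(th), math.sin(th)
    return (-th, -(c * x + s * y), s * x - c * y)

def compose(a, b):
    """Compose 2D rigid poses: apply b, then a."""
    th, x, y = a
    c, s = math.cos(th), math.sin(th)
    return (th + b[0], c * b[1] - s * b[2] + x, s * b[1] + c * b[2] + y)

def correct_ar_frame(world_from_cam, ar_from_cam):
    """Recompute the world-from-AR transform from a fresh server positioning
    result (world_from_cam) and the tracker's current pose (ar_from_cam).
    Re-anchoring AR content with this transform absorbs accumulated drift."""
    return compose(world_from_cam, inv(ar_from_cam))

# If the tracker has drifted by a small offset, the correction absorbs it.
world_from_cam = (math.pi / 2, 4.0, 1.0)       # server visual-positioning result
ar_from_cam = (math.pi / 2 - 0.05, 4.2, 0.9)   # drifted AR tracker pose
w_from_ar = correct_ar_frame(world_from_cam, ar_from_cam)
check = compose(w_from_ar, ar_from_cam)        # must reproduce the server pose
assert all(abs(u - v) < 1e-9 for u, v in zip(check, world_from_cam))
```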
Example four:
This embodiment provides a specific implementation of the indoor visual navigation method based on the second and third embodiments; referring to the flowchart of the indoor visual navigation method shown in fig. 4, it specifically includes the following steps:
step S410: collecting a plurality of indoor images;
step S412: and (4) carrying out SFM mapping based on the indoor images to generate a visual map database.
Step S414: aligning the visual map in the visual map database with the indoor floor plan. The indoor floor plan may also be called a building structure diagram.
Step S420: receiving an indoor image to be positioned uploaded by a mobile phone;
step S422: extracting image features of an indoor image to be positioned, matching the extracted image features with a visual map, and estimating a camera pose when the indoor image is acquired by a mobile phone;
step S424: returning the estimated camera pose to the mobile phone;
step S430: establishing an AR coordinate system;
step S432: aligning the AR coordinate system with the camera pose when the indoor image is collected by the mobile phone;
step S434: receiving destination information set by a user, and planning the shortest route in an indoor topological map;
step S436: detecting a ground plane;
step S438: the shortest route is converted into three-dimensional coordinates in an AR coordinate system, and the route is plotted on the ground plane with arrows.
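The route planning of step S434 can be sketched with a standard shortest-path algorithm such as Dijkstra's; the patent only requires "the shortest route" over the indoor topological map and does not name an algorithm, and the toy map below is invented:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over an indoor topological map.

    graph: {node: [(neighbor, distance_in_meters), ...]} — a hypothetical
    adjacency list of corridor junctions and destinations.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if goal not in dist:
        return None
    # Walk back through predecessors to recover the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

# Toy topological map of corridor junctions (directed edges for brevity).
topo = {
    "entrance": [("A", 5.0), ("B", 9.0)],
    "A": [("B", 3.0), ("shop", 8.0)],
    "B": [("shop", 2.0)],
}
assert shortest_route(topo, "entrance", "shop") == ["entrance", "A", "B", "shop"]
```

The resulting node sequence is what step S438 converts into three-dimensional AR coordinates for drawing on the detected ground plane.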
For the specific implementation operation of the above steps, reference may be made to the contents of the second embodiment and the third embodiment, which are not described herein again.
The steps S410 to S414 may be collectively referred to as a visual map construction operation, the steps S420 to S424 may be collectively referred to as a cloud visual positioning operation, both the visual map construction operation and the cloud visual positioning operation may be performed by a server, and the steps S430 to S438 may be collectively referred to as a mobile terminal AR navigation operation, and may be specifically performed by a mobile device such as a mobile phone.
The indoor visual navigation method provided by this embodiment allows a user to determine their position anytime and anywhere by photographing the surrounding environment with a mobile phone. After selecting a destination, the user can see on the phone screen the optimal route planned by the path selection algorithm, and can walk along the route to reach the destination. In addition, because the indoor visual navigation method performs the time- and space-consuming computation on the server side, the user can achieve real-time positioning and navigation on the mobile device.
Example five:
corresponding to the indoor visual navigation method provided by the second embodiment, the present embodiment further provides an indoor visual navigation apparatus disposed on a mobile device side, referring to the structural block diagram of the indoor visual navigation apparatus shown in fig. 5, including the following modules:
the image uploading module 502 is used for uploading the indoor image to the server if the indoor image to be positioned is acquired, so that the server determines the camera pose when the mobile device acquires the indoor image;
a coordinate system establishing module 504, configured to receive a camera pose corresponding to the indoor image returned by the server, and establish an AR coordinate system aligned with the world coordinate system based on the camera pose corresponding to the indoor image;
a route planning module 506, configured to plan a shortest route in a pre-imported indoor topological map based on destination information set by a user;
and the navigation display module 508 is configured to display the current preview image acquired by the mobile device on an interface of the mobile device, and superimpose a three-dimensional identifier for indicating a route traveling direction on the current preview image based on the AR coordinate system and the shortest route.
Through the indoor visual navigation apparatus provided by this embodiment, the mobile device can guide the user to the destination indoors along the shortest route in AR form, which improves the user experience.
In one embodiment, the coordinate system establishing module 504 is configured to establish an initial AR coordinate system; and adjusting the initial AR coordinate system based on the camera pose corresponding to the indoor image so as to align the AR coordinate system with the world coordinate system.
In one embodiment, the route planning module 506 is configured to plan the shortest route in a pre-imported indoor topology map using a path planning algorithm based on the destination information.
In one embodiment, the navigation presentation module 508 is configured to detect a ground plane on the current preview image; determining the three-dimensional coordinates of the shortest route in an AR coordinate system, and generating a three-dimensional identifier for indicating the route traveling direction based on the determined three-dimensional coordinates; and drawing the three-dimensional identification on the ground plane of the current preview image.
In an implementation manner, the apparatus further includes a coordinate system correction module, configured to correct the AR coordinate system based on a current camera pose received from the server in a navigation process, so that the corrected AR coordinate system and the world coordinate system are aligned.
The device provided by the embodiment has the same implementation principle and technical effect as the foregoing embodiment, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiment for the portion of the embodiment of the device that is not mentioned.
Example six:
corresponding to the indoor visual navigation method provided by the second embodiment, the present embodiment further provides an indoor visual navigation device disposed on the server side, referring to the structural block diagram of the indoor visual navigation device shown in fig. 6, including the following modules:
the pose determining module 602 is configured to determine, if an indoor image to be positioned uploaded by the mobile device is received, a camera pose at which the mobile device acquires the indoor image;
the device navigation module 604 is configured to issue a camera pose corresponding to the indoor image to the mobile device, so that the mobile device establishes an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image, and superimposes a three-dimensional identifier for indicating a route travel direction on a current preview image acquired by the mobile device based on the AR coordinate system and a shortest route; the shortest route is obtained by planning the mobile equipment in an indoor topological map according to destination information set by a user.
Through the indoor visual navigation apparatus provided by this embodiment, the server can undertake the computation-heavy steps required in the navigation process, such as camera pose calculation, so that the mobile device can quickly and conveniently guide the user to the destination indoors in AR form, which improves the user experience.
In one embodiment, the pose determination module 602 is configured to perform feature matching on the indoor image and a visual map in a pre-established visual map library to obtain a camera pose when the mobile device acquires the indoor image; the visual map is represented by a sparse point cloud model of an indoor scene.
In one embodiment, the apparatus further includes a map building module, configured to obtain a plurality of scene images acquired by the mobile device in an indoor scene; and performing three-dimensional reconstruction on the multiple scene images based on the SFM algorithm to obtain a visual map library containing sparse point cloud models corresponding to the multiple scene images.
In one embodiment, the above apparatus further comprises: and the alignment module is used for aligning the visual map with the pre-imported indoor plane distribution diagram.
In one embodiment, the above apparatus further comprises: the current pose determining module is used for acquiring a current preview image acquired by the mobile equipment in a navigation process at regular time and determining the current camera pose when the mobile equipment acquires the current preview image; and the correction module is used for issuing the current camera pose to the mobile equipment so that the mobile equipment corrects the AR coordinate system based on the current camera pose.
The device provided by the embodiment has the same implementation principle and technical effect as the foregoing embodiment, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiment for the portion of the embodiment of the device that is not mentioned.
Example seven:
the embodiment first provides an indoor visual navigation system, which comprises the mobile device provided by the embodiment two and the server provided by the embodiment three.
The present embodiment also provides an electronic device, including: a processor and a storage device; the storage device has stored thereon a computer program which, when executed by the processor, performs the method as provided in embodiment two or the method as provided in embodiment three.
The embodiment further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the method provided in the second embodiment or the method provided in the third embodiment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiments, and is not described herein again.
The computer program products of the indoor visual navigation method, apparatus, system, and electronic device provided in the embodiments of the present invention include a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments. For specific implementation, reference may be made to the method embodiments, which are not described again herein.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (15)

1. An indoor visual navigation method, the method being performed by a mobile device, the method comprising:
if an indoor image to be positioned is acquired, uploading the indoor image to a server so that the server determines the camera pose of the mobile equipment when the indoor image is acquired;
receiving a camera pose corresponding to the indoor image returned by the server, and establishing an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image;
planning a shortest route in a pre-imported indoor topological map based on destination information set by a user;
and displaying the current preview image acquired by the mobile device on an interface of the mobile device, and superimposing a three-dimensional identifier for indicating the route traveling direction on the current preview image based on the AR coordinate system and the shortest route.
2. The method of claim 1, wherein the step of establishing an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image comprises:
establishing an initial AR coordinate system;
adjusting the initial AR coordinate system based on a camera pose corresponding to the indoor image to align the AR coordinate system with a world coordinate system.
3. The method according to claim 1, wherein the step of planning the shortest route in the pre-imported indoor topological map based on the destination information set by the user comprises:
and planning the shortest route in the pre-imported indoor topological map by utilizing a path planning algorithm based on the destination information set by the user.
4. The method of claim 1, wherein the step of superimposing a three-dimensional marker indicating a direction of travel of the route on the current preview image comprises:
detecting a ground plane on the current preview image;
determining the three-dimensional coordinates of the shortest route in the AR coordinate system, and generating a three-dimensional identifier for indicating the route traveling direction based on the determined three-dimensional coordinates;
and drawing the three-dimensional identification on the ground plane of the current preview image.
5. The method of claim 1, further comprising:
and if the current camera pose sent by the server is received in the navigation process, correcting the AR coordinate system based on the current camera pose so as to keep the corrected AR coordinate system aligned with the world coordinate system.
6. An indoor visual navigation method, the method being performed by a server, the method comprising:
if an indoor image to be positioned uploaded by mobile equipment is received, determining the camera pose of the mobile equipment when the mobile equipment acquires the indoor image;
issuing the camera pose corresponding to the indoor image to the mobile device so that the mobile device establishes an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image, and superimposes a three-dimensional identifier for indicating a route traveling direction on a current preview image acquired by the mobile device based on the AR coordinate system and a shortest route; the shortest route is planned in an indoor topological map by the mobile device based on destination information set by a user.
7. The method of claim 6, wherein the step of determining a camera pose at which the mobile device captures the indoor image comprises:
carrying out feature matching on the indoor image and a visual map in a pre-established visual map library to obtain a camera pose when the mobile equipment acquires the indoor image; wherein the visual map is characterized by a sparse point cloud model of an indoor scene.
8. The method of claim 7, wherein the process of establishing the visual map library comprises:
acquiring a plurality of scene images acquired by the mobile equipment in an indoor scene;
and performing three-dimensional reconstruction on the plurality of scene images based on an SFM algorithm to obtain a visual map library containing sparse point cloud models corresponding to the plurality of scene images.
9. The method of claim 7, further comprising:
aligning the visual map with a pre-imported indoor floor plan.
10. The method of claim 6, further comprising:
acquiring a current preview image acquired by the mobile equipment in a navigation process at regular time, and determining the current camera pose when the mobile equipment acquires the current preview image;
and issuing the current camera pose to the mobile equipment so that the mobile equipment corrects the AR coordinate system based on the current camera pose.
11. An indoor visual navigation apparatus, characterized in that the apparatus is provided on a mobile device side, the apparatus comprising:
the image uploading module is used for uploading the indoor image to a server if the indoor image to be positioned is acquired, so that the server determines the camera pose of the mobile equipment when the indoor image is acquired;
the coordinate system establishing module is used for receiving the camera pose corresponding to the indoor image returned by the server and establishing an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image;
the route planning module is used for planning the shortest route in a pre-imported indoor topological map based on destination information set by a user;
and the navigation display module is used for displaying the current preview image acquired by the mobile equipment on an interface of the mobile equipment and superposing a three-dimensional identifier for indicating the route traveling direction on the current preview image based on the AR coordinate system and the shortest route.
12. An indoor visual navigation apparatus, characterized in that the apparatus is provided on a server side, the apparatus comprising:
the pose determining module is used for determining the camera pose when the mobile equipment acquires the indoor image if the indoor image to be positioned uploaded by the mobile equipment is received;
the device navigation module is used for issuing the camera pose corresponding to the indoor image to the mobile device so that the mobile device establishes an AR coordinate system aligned with a world coordinate system based on the camera pose corresponding to the indoor image, and superimposes a three-dimensional identifier for indicating a route traveling direction on a current preview image acquired by the mobile device based on the AR coordinate system and a shortest route; the shortest route is obtained by planning the mobile equipment in an indoor topological map according to destination information set by a user.
13. An indoor visual navigation system, the system comprising a mobile device and a server communicatively coupled; wherein the mobile device is configured to perform the method of any of claims 1 to 5 and the server is configured to perform the method of any of claims 6 to 10.
14. An electronic device, comprising: a processor and a storage device;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any of claims 1 to 5, or the method of any of claims 6 to 10.
15. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method of any of the preceding claims 1 to 5 or the steps of the method of any of the preceding claims 6 to 10.
CN202010292954.XA 2020-04-14 2020-04-14 Indoor visual navigation method, device and system and electronic equipment Pending CN111627114A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010292954.XA CN111627114A (en) 2020-04-14 2020-04-14 Indoor visual navigation method, device and system and electronic equipment
PCT/CN2020/119479 WO2021208372A1 (en) 2020-04-14 2020-09-30 Indoor visual navigation method, apparatus, and system, and electronic device
JP2022566506A JP2023509099A (en) 2020-04-14 2020-09-30 INDOOR VISUAL NAVIGATION METHOD, APPARATUS, SYSTEM AND ELECTRONIC DEVICE

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010292954.XA CN111627114A (en) 2020-04-14 2020-04-14 Indoor visual navigation method, device and system and electronic equipment

Publications (1)

Publication Number Publication Date
CN111627114A true CN111627114A (en) 2020-09-04

Family

ID=72273170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010292954.XA Pending CN111627114A (en) 2020-04-14 2020-04-14 Indoor visual navigation method, device and system and electronic equipment

Country Status (3)

Country Link
JP (1) JP2023509099A (en)
CN (1) CN111627114A (en)
WO (1) WO2021208372A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112577492A (en) * 2020-12-15 2021-03-30 国科易讯(北京)科技有限公司 Path planning method and system
CN112729327A (en) * 2020-12-24 2021-04-30 浙江商汤科技开发有限公司 Navigation method, navigation device, computer equipment and storage medium
CN113063424A (en) * 2021-03-29 2021-07-02 湖南国科微电子股份有限公司 Method, device, equipment and storage medium for intra-market navigation
CN113159433A (en) * 2021-04-28 2021-07-23 中国科学院沈阳应用生态研究所 Dynamic navigation path searching method for integrated indoor mixed three-dimensional road network
CN113240816A (en) * 2021-03-29 2021-08-10 泰瑞数创科技(北京)有限公司 AR and semantic model based city accurate navigation method and device
WO2021208372A1 (en) * 2020-04-14 2021-10-21 北京迈格威科技有限公司 Indoor visual navigation method, apparatus, and system, and electronic device
CN113587928A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Navigation method, navigation device, electronic equipment, storage medium and computer program product
CN113865593A (en) * 2021-09-14 2021-12-31 山东新一代信息产业技术研究院有限公司 Indoor navigation method, equipment and medium
CN115578539A (en) * 2022-12-07 2023-01-06 深圳大学 Indoor space high-precision visual position positioning method, terminal and storage medium
CN117128959A (en) * 2023-04-18 2023-11-28 荣耀终端有限公司 Car searching navigation method, electronic equipment, server and system
EP4215874A4 (en) * 2020-10-28 2023-11-29 Huawei Technologies Co., Ltd. Positioning method and apparatus, and electronic device and storage medium
WO2023246530A1 (en) * 2022-06-20 2023-12-28 中兴通讯股份有限公司 Ar navigation method, and terminal and storage medium
CN112729327B (en) * 2020-12-24 2024-06-07 浙江商汤科技开发有限公司 Navigation method, navigation device, computer equipment and storage medium

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN114071392B (en) * 2021-12-28 2023-07-25 智小途(上海)数字科技有限公司 UWB indoor high-precision three-dimensional live-action data construction method and system
CN114111803B (en) * 2022-01-26 2022-04-19 中国人民解放军战略支援部队航天工程大学 Visual navigation method of indoor satellite platform
CN115454055B (en) * 2022-08-22 2023-09-19 中国电子科技南湖研究院 Multi-layer fusion map representation method for indoor autonomous navigation and operation
CN115290110A (en) * 2022-08-25 2022-11-04 广东车卫士信息科技有限公司 AR navigation method, system and computer readable storage medium
CN117152245A (en) * 2023-01-31 2023-12-01 荣耀终端有限公司 Pose calculation method and device
CN116313020B (en) * 2023-05-22 2023-08-18 合肥工业大学 Intelligent processing method and system for medical service
CN116402826B (en) * 2023-06-09 2023-09-26 深圳市天趣星空科技有限公司 Visual coordinate system correction method, device, equipment and storage medium
CN116858215B (en) * 2023-09-05 2023-12-05 武汉大学 AR navigation map generation method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
US20110288763A1 (en) * 2010-05-18 2011-11-24 Alpine Electronics, Inc. Method and apparatus for displaying three-dimensional route guidance
CN106679668A (en) * 2016-12-30 2017-05-17 百度在线网络技术(北京)有限公司 Navigation method and device
CN110017841A (en) * 2019-05-13 2019-07-16 大有智能科技(嘉兴)有限公司 Vision positioning method and its air navigation aid
CN110019580A (en) * 2017-08-25 2019-07-16 腾讯科技(深圳)有限公司 Map-indication method, device, storage medium and terminal
CN110260867A (en) * 2019-07-29 2019-09-20 浙江大华技术股份有限公司 Method, equipment and the device that pose is determining in a kind of robot navigation, corrects

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8938257B2 (en) * 2011-08-19 2015-01-20 Qualcomm, Incorporated Logo detection for indoor positioning
CN106791784B (en) * 2016-12-26 2019-06-25 深圳增强现实技术有限公司 A kind of the augmented reality display methods and device of actual situation coincidence
US10331244B2 (en) * 2017-06-23 2019-06-25 Pixart Imaging Inc. Navagation device with fast frame rate upshift and operating method thereof
CN107782314B (en) * 2017-10-24 2020-02-11 张志奇 Code scanning-based augmented reality technology indoor positioning navigation method
CN109272454B (en) * 2018-07-27 2020-07-03 阿里巴巴集团控股有限公司 Coordinate system calibration method and device of augmented reality equipment
JP7146542B2 (en) * 2018-09-18 2022-10-04 株式会社Screenホールディングス Route guidance program, route guidance device, and route guidance system
CN111627114A (en) * 2020-04-14 2020-09-04 北京迈格威科技有限公司 Indoor visual navigation method, device and system and electronic equipment

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021208372A1 (en) * 2020-04-14 2021-10-21 北京迈格威科技有限公司 Indoor visual navigation method, apparatus, and system, and electronic device
EP4215874A4 (en) * 2020-10-28 2023-11-29 Huawei Technologies Co., Ltd. Positioning method and apparatus, and electronic device and storage medium
CN112577492A (en) * 2020-12-15 2021-03-30 国科易讯(北京)科技有限公司 Path planning method and system
CN112577492B (en) * 2020-12-15 2023-06-13 国科易讯(北京)科技有限公司 Path planning method and system
CN112729327A (en) * 2020-12-24 2021-04-30 浙江商汤科技开发有限公司 Navigation method, navigation device, computer equipment and storage medium
CN112729327B (en) * 2020-12-24 2024-06-07 浙江商汤科技开发有限公司 Navigation method, navigation device, computer equipment and storage medium
CN113240816B (en) * 2021-03-29 2022-01-25 泰瑞数创科技(北京)有限公司 AR and semantic model based city accurate navigation method and device
CN113240816A (en) * 2021-03-29 2021-08-10 泰瑞数创科技(北京)有限公司 AR and semantic model based city accurate navigation method and device
CN113063424A (en) * 2021-03-29 2021-07-02 湖南国科微电子股份有限公司 Method, device, equipment and storage medium for intra-market navigation
CN113159433B (en) * 2021-04-28 2022-02-22 中国科学院沈阳应用生态研究所 Dynamic navigation path searching method for integrated indoor mixed three-dimensional road network
CN113159433A (en) * 2021-04-28 2021-07-23 中国科学院沈阳应用生态研究所 Dynamic navigation path searching method for integrated indoor mixed three-dimensional road network
CN113587928A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Navigation method, navigation device, electronic equipment, storage medium and computer program product
CN113865593A (en) * 2021-09-14 2021-12-31 山东新一代信息产业技术研究院有限公司 Indoor navigation method, equipment and medium
WO2023246530A1 (en) * 2022-06-20 2023-12-28 中兴通讯股份有限公司 Ar navigation method, and terminal and storage medium
CN115578539A (en) * 2022-12-07 2023-01-06 深圳大学 Indoor space high-precision visual position positioning method, terminal and storage medium
CN115578539B (en) * 2022-12-07 2023-09-19 深圳大学 Indoor space high-precision visual position positioning method, terminal and storage medium
CN117128959A (en) * 2023-04-18 2023-11-28 荣耀终端有限公司 Car searching navigation method, electronic equipment, server and system

Also Published As

Publication number Publication date
WO2021208372A1 (en) 2021-10-21
JP2023509099A (en) 2023-03-06

Similar Documents

Publication Publication Date Title
CN111627114A (en) Indoor visual navigation method, device and system and electronic equipment
US10134196B2 (en) Mobile augmented reality system
US9699375B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
US9646406B2 (en) Position searching method and apparatus based on electronic map
US9558559B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
US11557083B2 (en) Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
JP6353175B1 (en) Automatically combine images using visual features
TW201229962A (en) Augmenting image data based on related 3D point cloud data
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN112750203A (en) Model reconstruction method, device, equipment and storage medium
CN112733641A (en) Object size measuring method, device, equipment and storage medium
CN116086411A (en) Digital topography generation method, device, equipment and readable storage medium
JP2016136439A (en) Line tracking with automatic model initialization by graph matching and cycle detection
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN116858215B (en) AR navigation map generation method and device
CN112634366A (en) Position information generation method, related device and computer program product
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
US9811889B2 (en) Method, apparatus and computer program product for generating unobstructed object views
CN110796706A (en) Visual positioning method and system
KR20160000842U (en) Apparatus for constructing indoor map
CN113763561B (en) POI data generation method and device, storage medium and electronic equipment
CA3102860C (en) Photography-based 3d modeling system and method, and automatic 3d modeling apparatus and method
WO2024001847A1 (en) 2d marker, and indoor positioning method and apparatus
CN117911498A (en) Pose determination method and device, electronic equipment and storage medium
CN116416382A (en) Modeling method and device for road scene, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination