WO2020006685A1 - Method for building a map, terminal, and computer-readable storage medium - Google Patents

Method for building a map, terminal, and computer-readable storage medium

Info

Publication number
WO2020006685A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
description information
information
maps
points
Prior art date
Application number
PCT/CN2018/094329
Other languages
English (en)
French (fr)
Inventor
韩立明
林义闽
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2018/094329 priority Critical patent/WO2020006685A1/zh
Priority to CN201880001181.6A priority patent/CN109074757B/zh
Publication of WO2020006685A1 publication Critical patent/WO2020006685A1/zh

Links

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 - Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/003 - Maps
    • G09B29/006 - Representation of non-cartographic information on maps, e.g. population distribution, wind direction, radiation levels, air and sea routes
    • G09B29/007 - Representation of non-cartographic information on maps, e.g. population distribution, wind direction, radiation levels, air and sea routes using computer methods

Definitions

  • the present application relates to the field of detection, and in particular, to a method, a terminal, and a computer-readable storage medium for establishing a map.
  • VSLAM: Visual Simultaneous Localization and Mapping
  • the matching of the feature points in the image captured by the terminal and the feature points in the map is the core issue of the VSLAM technology.
  • The feature points extracted from an image have a certain invariance over a short time and in a small scene, so the current VSLAM technical solution can be applied in such situations; in long-duration, large-scene situations, however, changes in illumination and shooting angle are likely to change the feature points, making it difficult to match the feature points in the image correctly against the map.
  • To address this, the existing solution is to establish multiple maps of the same space for terminal positioning: multiple sets of images are collected within a short period at a fixed angle, VSLAM technology is used to establish a feature point map for each set of images, and the multiple maps are labeled and then managed separately.
  • The above mapping scheme has at least the following disadvantage: because multiple maps exist, the terminal needs to expand, update, and delete each map separately, and the management efficiency is low. It can be seen that how to improve the management efficiency of maps is a problem that needs to be solved.
  • a technical problem to be solved in some embodiments of the present application is how to improve map management efficiency.
  • An embodiment of the present application provides a method for building a map, including: obtaining N maps describing the same space, where N is an integer greater than 1; extracting the spatially invariant features of each of the N maps; and merging the information of the map points in the N maps according to the spatially invariant features of the N maps to obtain a merged map.
  • An embodiment of the present application further provides a terminal including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for building a map mentioned in the foregoing embodiment.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the method for building a map mentioned in the foregoing embodiment is implemented.
  • Compared with the prior art, because the information of the map points in multiple maps is stored in the merged map, the embodiments of the present application only need to operate on the merged map when expanding, deleting, or updating map point information, which improves the management efficiency of the map.
  • In addition, when the terminal uses the merged map for positioning, it is not necessary to switch the map used for matching, which improves the positioning efficiency.
  • FIG. 1 is a flowchart of a method for establishing a map according to a first embodiment of the present application
  • FIG. 2 is a flowchart of a method of combining information of map points of N-1 maps into a reference map according to a second embodiment of the present application;
  • FIG. 3 is a schematic diagram showing a relationship between a map establishing method and a positioning method according to a second embodiment of the present application.
  • FIG. 4 is a flowchart of a method for positioning using a merged map according to a second embodiment of the present application
  • FIG. 5 is a schematic structural diagram of a terminal according to a third embodiment of the present application.
  • the first embodiment of the present application relates to a method for establishing a map, which is applied to a terminal.
  • the method for establishing a map includes the following steps:
  • Step 101 Obtain N maps describing the same space.
  • N is an integer greater than 1.
  • Specifically, the terminal collects multiple image sequences of the same space through a visual sensor, extracts map point information and spatially invariant features from each image sequence, and builds the map corresponding to each image sequence based on the extracted map point information and spatially invariant features.
  • the map may be a map established by the terminal according to the image sequence collected by the visual sensor, or a map transmitted to the terminal by the cloud or other terminals, and the source of the map is not limited in this embodiment.
  • Step 102 Extract the spatially invariant features of each of the N maps.
  • the spatially invariant feature may be any one or any combination of line features, semantic features, and tag information.
  • the line feature refers to the characteristics of the line segments in the map, including the length, angle, and intersection of the line segments.
  • Semantic features refer to features assigned to the same map points in different maps.
  • the terminal recognizes the same map points in different maps by using an image recognition method, and assigns the same features to the same map points.
  • Marker information refers to the information on the map of the location markers in space.
  • The positioning mark is a mark arranged at one of a plurality of fixed positions in the space, and the mark may be a Quick Response (QR) code or a Data Matrix (DM) code.
  • QR: Quick Response
  • DM: Data Matrix
  • Step 103 Combine the information of the map points in the N maps according to the space invariant features of the N maps to obtain a merged map.
  • The method of merging the information of the map points in the N maps according to the spatially invariant features of the N maps to obtain the merged map is exemplified below.
  • Method 1: the terminal creates a new map as the reference map and determines the relative pose relationship between the reference map and map A among the N maps.
  • map A is any one of the N maps.
  • the terminal adds the information of the spatially invariant features and map points in the map A to the reference map according to the relative pose relationship between the reference map and the map A.
  • The terminal matches the spatially invariant features in the N-1 maps other than map A against the spatially invariant features in the reference map, and determines the relative pose relationship between each of the N-1 maps and the reference map according to the respective matching results.
  • the terminal combines the information of the map points of the N-1 maps into the reference map according to the relative pose relationship between the N-1 maps and the reference map, to obtain a merged map.
  • Method 2: the terminal selects one map from the N maps as the reference map, matches the spatially invariant features in the N-1 maps other than the reference map against the spatially invariant features in the reference map, and determines the relative pose relationship between each of the N-1 maps and the reference map according to the respective matching results. The terminal then merges the information of the map points of the N-1 maps into the reference map according to the relative pose relationships between the N-1 maps and the reference map, to obtain a merged map.
  • For clarity, the determination of the relative pose relationship between map B and map C is taken as an example to illustrate how, in methods 1 and 2, the terminal determines the relative pose relationship between each of the N-1 maps and the reference map.
  • The terminal matches the spatially invariant features in map B against the spatially invariant features in map C and, based on the matching results, uses a Perspective-n-Point (PnP) pose measurement algorithm to solve the relative pose relationship between map B and map C.
  • Optionally, during the PnP solution, the relative pose relationship is optimized by a bundle adjustment (BA) algorithm.
  • BA: Bundle Adjustment
  • It should be noted that, during the merging, identical map points may fail to be merged into a single map point; to handle this, after obtaining the merged map, the terminal may merge map points in the merged map whose distance is less than a preset value.
  • The position information of the merged map point is determined according to the position information of the map points before the merge; the preset value can be determined according to actual needs.
  • For example, the spatial coordinates of map point a in the merged map are (xa, ya, za) and the spatial coordinates of map point b are (xb, yb, zb), where xa, ya, and za are the abscissa, ordinate, and vertical coordinate of map point a, and xb, yb, and zb are the abscissa, ordinate, and vertical coordinate of map point b.
  • The terminal calculates the distance between map point a and map point b, and judges whether the distance between map point a and map point b is less than the preset value.
  • If the distance is determined to be smaller, map point a and map point b are merged to obtain map point c.
  • The spatial coordinates of map point c are ((xa + xb) / 2, (ya + yb) / 2, (za + zb) / 2).
  • The description information set of map point c includes the description information set of map point a and the description information set of map point b.
  • It is worth mentioning that determining the position information of the merged map point based on the position information of the map points before the merge is equivalent to collecting the position information of the same map point multiple times, which improves the accuracy of the position information of the map points.
  • It is worth mentioning that establishing a merged map based on the map point information of the N maps improves the management efficiency of the map.
  • After the merged map is established by the method for establishing a map mentioned in this embodiment, when the terminal needs to expand the map, it determines the position in the merged map of the information that needs to be added and adds that information at that position; there is no need to expand the map information of each of the N maps.
  • When the terminal needs to delete or update map point information, it only needs to delete or update the map point information in the merged map, and does not need to delete or update the map point information in each of the N maps.
  • the second embodiment of the present application relates to a method for establishing a map.
  • This embodiment is a further refinement of the first embodiment and specifically illustrates the process of merging the information of the map points of the N-1 maps into the reference map according to the relative pose relationship between each of the N-1 maps and the reference map.
  • As shown in FIG. 2, the method of combining the information of the map points of the N-1 maps into the reference map includes the following steps:
  • Step 201 Determine the correspondence between the map points in the N-1 maps and the map points in the reference map according to the relative pose relationship between each of the N-1 maps and the reference map.
  • Step 202 Determine the description information sets of the map points in the merged map according to the correspondence between the map points in the N-1 maps and the map points in the reference map, and the description information of the map points in the N-1 maps.
  • It should be noted that, after obtaining the description information sets of the map points, the terminal may cluster the description information in each map point's description information set by a clustering algorithm to obtain the clustered description information set, and determine the merged map according to the clustered description information sets.
  • Specifically, the terminal clusters the description information in the description information set of a map point, classifies the description information of similar map points into one class, and records the center point of the class as the clustered description information. For example, after determining that there is description information of unclassified map points, the terminal randomly selects one piece of description information as the center point from the description information of the unclassified map points.
  • For this center point, the terminal performs the following operations: find all description information within a first preset value of the center point and record it as set M; determine the vector from the center point to each element in set M and add all the vectors to obtain the offset vector; move the center point in the direction of the offset vector by half the modulus of the offset vector; judge whether the modulus of the offset vector is less than a second preset value, and if so record the center point; otherwise, determine the vector from the current center point to each element in set M, add all the vectors to obtain the offset vector, and move the center point in the direction of the offset vector by half its modulus, and so on, until the modulus of the offset vector is less than the second preset value. After the terminal determines that all description information of the map points has been classified, the clustered description information is determined according to the recorded center points.
  • the map point information further includes shooting information corresponding to the description information of the map point, where the shooting information includes shooting brightness and shooting angle.
  • Specifically, the terminal clusters the description information in the description information set of the map point through a clustering algorithm to obtain the clustered description information set. If the clustering algorithm aggregates L pieces of description information into one class, the shooting brightness corresponding to the clustered description information is determined according to the shooting brightness corresponding to each of the L pieces of description information, and the shooting angle corresponding to the clustered description information is determined according to the shooting angle corresponding to each of the L pieces of description information.
  • the terminal calculates the average value of the shooting brightness corresponding to the L description information as the first average value, and uses the first average value as the shooting brightness corresponding to the description information clustered by the L description information.
  • the terminal calculates an average value of the shooting angles corresponding to the L description information as the second average value, and uses the second average value as the shooting angle corresponding to the description information clustered by the L description information.
  • the following describes the process of determining the shooting information corresponding to the clustered description information in combination with the actual scene.
  • For example, the description information corresponding to map point P includes first description information (ka), second description information (kb), third description information (kc), fourth description information (kd), fifth description information (ke), and so on. The shooting brightness of ka is the first shooting brightness (hka), the shooting brightness of kb is the second shooting brightness (hkb), the shooting brightness of kc is the third shooting brightness (hkc), ...; the shooting angle of ka is the first shooting angle (tka), the shooting angle of kb is the second shooting angle (tkb), the shooting angle of kc is the third shooting angle (tkc), ....
  • Through the clustering algorithm, ka, kb, and kc are grouped into one class; the shooting brightness of the clustered description information is (hka + hkb + hkc) / 3, and the shooting angle of the clustered description information is (tka + tkb + tkc) / 3.
  • merging N maps reduces the storage space of the maps.
  • In the unmerged N maps, the storage format of the description information of a map point is: (shooting information 1, map point id, position information, map point description information 1); (shooting information 2, map point id, position information, map point description information 2); (shooting information 3, map point id, position information, map point description information 3) ... (shooting information p, map point id, position information, map point description information p), where p represents the number of pieces of description information of the map point and the map point id is the number the terminal assigns to the map point in order to identify the same map point in different maps.
  • After the merged map is obtained, the description information of a map point in the merged map is stored in the form of a description information set, with the storage format: {map point id, position information, (shooting information 1, map point description information 1), (shooting information 2, map point description information 2), (shooting information 3, map point description information 3) ... (shooting information q, map point description information q)}. Compared with the unmerged maps, after the description information in the description information sets of the merged map is clustered, q is less than or equal to p.
  • Suppose the data type of the shooting information is unsigned character data (unsigned char) with a data size of 1 byte, the data type of the map point id is unsigned integer data (unsigned int) with a data size of 4 bytes, the position information is the spatial coordinates of the map point with each coordinate stored as floating point data (float) of 4 bytes, and the data type of the map point description information is unsigned char with a data size of 8 bytes.
  • the terminal may perform positioning based on the merged map.
  • the relationship between the method of establishing a map and the positioning method is shown in FIG. 3.
  • map merging refers to the merging of maps corresponding to different shooting information.
  • Single state positioning refers to positioning according to the shooting information when the terminal initiates a positioning request.
  • the method for positioning a terminal using a merged map is shown in FIG. 4 and includes the following steps:
  • Step 301 Acquire an image for positioning.
  • Step 302 Extract description information of the map points in the image.
  • Step 303 Determine description information in the merged map that matches description information of map points in the image.
  • the terminal matches the description information of the map points in the image with all the description information in the merged map, and determines the matching description information.
  • In another implementation, when the terminal extracts the description information of the map points in the image, the terminal determines the shooting information of the image by analyzing the image or by using a sensor (such as a light sensor) on the terminal.
  • The terminal determines the shooting information corresponding to the description information of the map points in the image according to the shooting information of the image.
  • According to the shooting information corresponding to the description information of the map points in the image, and the shooting information corresponding to each piece of description information in the description information sets of the map points in the merged map, the terminal filters the description information sets of the map points in the merged map to obtain the description information used for matching.
  • the terminal determines a temporary map according to the description information used for matching, and performs positioning according to the temporary map.
  • the terminal determines the description information that matches the description information of the map point in the image according to the description information for matching in the temporary map.
  • When the shooting information includes shooting brightness and shooting angle, the specific process for the terminal to determine the description information used for matching is as follows: the terminal calculates a first difference between the shooting brightness of the image and the shooting brightness corresponding to each piece of description information, and a second difference between the shooting angle of the image and the shooting angle corresponding to each piece of description information; according to the first difference and the second difference, the terminal selects M pieces of description information from the description information set of the map point as the description information used for matching, where M is a positive integer.
  • the following describes the process by which the terminal obtains the descriptive information for matching from the feature information set of the map points in the map in combination with the actual scenario.
  • Suppose the terminal acquires, for a map point in the second image, a shooting brightness of H and a shooting angle of L, and the feature information set of a map point in the map is {(description information a1, shooting brightness h1, shooting angle l1), (description information a2, shooting brightness h2, shooting angle l2), (description information a3, shooting brightness h3, shooting angle l3) ...}; that is, the shooting brightness corresponding to description information a1 is h1 and the corresponding shooting angle is l1, the shooting brightness corresponding to description information a2 is h2 and the corresponding shooting angle is l2, and so on.
  • The terminal calculates the difference between h1 and H and the difference between l1 and L, and sets different weights for the two differences according to actual needs, so as to determine the distance dt between the shooting information of the map point in the second image and the shooting information of the map point in the map, i.e., dt = a*(H-h1) + b*(L-l1), where a is the weight of the shooting brightness difference and b is the weight of the shooting angle difference.
  • By analogy, the terminal calculates the distance between the shooting information of the map points of the second image and the shooting information corresponding to each piece of description information in the feature information sets of the map points in the map.
  • The terminal sorts the description information in the feature information sets of the map points in the map in ascending order of that distance, and selects the first M pieces of description information as the description information used for matching.
  • It should be noted that, in practice, a preset distance value can also be used to filter the feature information set: the description information whose shooting information is at a distance from the shooting information corresponding to the description information of the map point in the second image smaller than the preset distance value is used as the description information for matching.
  • Step 304 Obtain position information of the map points corresponding to the matching description information from the merged map, and determine a positioning result according to the obtained position information.
  • the terminal determines a positioning result by using a pose estimation algorithm, for example, a PnP pose measurement algorithm, according to a matching result between a map point in the map and a map point in the second image.
  • the positioning result includes the pose information of the terminal.
  • Before obtaining the position information of the matching map points from the merged map and determining the positioning result, the terminal may also first determine that at least T map points in the image have been successfully matched against the map.
  • T is a positive integer; for example, T is equal to 10.
  • It is worth mentioning that calculating the pose information of the terminal only after the number of successfully matched map points reaches T avoids the waste of resources that would be caused by determining a positioning result when there are not enough successful matches for positioning to succeed.
  • Compared with the prior art, since the information of map points in multiple maps is stored in the merged map, when the terminal performs operations such as expansion, deletion, and update of map point information, it only needs to operate on the merged map, which improves the management efficiency of the map.
  • the terminal uses the merged map for positioning, it is not necessary to switch the map for matching, which improves the positioning efficiency.
  • the terminal merges the description information of similar map points through a clustering algorithm, reducing the amount of map data.
  • a third embodiment of the present application relates to a terminal.
  • the terminal includes at least one processor 401; and a memory 402 that is communicatively connected to the at least one processor 401.
  • the memory 402 stores instructions that can be executed by the at least one processor 401, and the instructions are executed by the at least one processor 401, so that the at least one processor 401 can execute the method for establishing a map described above.
  • In this embodiment, the processor 401 is exemplified by a central processing unit (CPU), and the memory 402 is exemplified by a random access memory (RAM).
  • the processor 401 and the memory 402 may be connected through a bus or other methods. In FIG. 5, the connection through the bus is taken as an example.
  • The memory 402, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; for example, the description information of the map points in the embodiments of the present application is stored in the memory 402.
  • the processor 401 executes various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory 402, that is, the above method for establishing a map is implemented.
  • the memory 402 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function; the storage data area may store a list of options and the like.
  • the memory 402 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 402 may optionally include a memory remotely set relative to the processor, and these remote memories may be connected to an external device through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • One or more modules are stored in the memory, and when executed by one or more processors, execute the method for establishing a map in any of the foregoing method embodiments.
  • the above product can execute the method provided in the embodiment of the present application, and has the corresponding functional modules and beneficial effects of the execution method.
  • a fourth embodiment of the present application relates to a computer-readable storage medium storing a computer program.
  • When the computer program is executed by the processor, the method for building a map described in any of the above method embodiments is implemented.
  • The program is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, or the like) or a processor execute all or part of the steps of the method described in each embodiment of the present application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Ecology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Some embodiments of the present application provide a method for building a map, a terminal, and a computer-readable storage medium. The method for building a map includes: obtaining N maps describing the same space, where N is an integer greater than 1; extracting the spatially invariant features of each of the N maps; and merging the information of the map points in the N maps according to the spatially invariant features of the N maps to obtain a merged map.

Description

Method for building a map, terminal, and computer-readable storage medium
Technical Field
This application relates to the field of detection, and in particular to a method for building a map, a terminal, and a computer-readable storage medium.
Background
Current visual simultaneous localization and mapping (VSLAM) technology uses a visual sensor to collect a set of images, extracts feature points characterizing the spatial environment from the collected images through a feature extraction method, and calculates the position of the feature points in space by methods such as binocular disparity or monocular motion parallax, or directly obtains the position information corresponding to the feature points using a camera with depth information. When a terminal captures a set of images, the feature points in these images can form the map used for positioning in VSLAM. While the terminal moves, the position and attitude of the terminal can be obtained by matching the feature points in the image captured by the sensor at the current moment against the feature points in the map.
In the VSLAM technical solutions disclosed above, matching the feature points in the image captured by the terminal against the feature points in the map is the core problem of VSLAM technology. Feature points extracted from images have a certain invariance over a short time and in a small scene, so the current VSLAM technical solutions can be applied in such situations. However, in long-duration, large-scene situations, the feature points are likely to change with the illumination conditions and shooting angle of the captured images, making it difficult to match the feature points in an image correctly against the map. To solve this problem, the existing solution is to establish multiple maps of the same space for terminal positioning: multiple sets of images are collected within a short time and at a fixed angle, a feature point map is established for each set of images using VSLAM technology, and the multiple maps are labeled and then managed separately.
Technical Problem
While studying the prior art, the inventors found that the above mapping scheme has at least the following disadvantage: because multiple maps exist, the terminal needs to expand, update, and delete each map separately, and the management efficiency is low. It can be seen that how to improve the management efficiency of maps is a problem that needs to be solved.
Technical Solution
A technical problem to be solved by some embodiments of the present application is how to improve the management efficiency of maps.
An embodiment of the present application provides a method for building a map, including: obtaining N maps describing the same space, where N is an integer greater than 1; extracting the spatially invariant features of each of the N maps; and merging the information of the map points in the N maps according to the spatially invariant features of the N maps to obtain a merged map.
An embodiment of the present application further provides a terminal, including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for building a map mentioned in the above embodiment.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the method for building a map mentioned in the above embodiment is implemented.
Beneficial Effects
Compared with the prior art, in the embodiments of the present application the information of the map points in multiple maps is stored in the merged map, so when the terminal expands, deletes, or updates map point information it only needs to operate on the merged map, which improves the management efficiency of the map. In addition, when the terminal uses the merged map for positioning, it does not need to switch the map used for matching, which improves the positioning efficiency.
Brief Description of the Drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not constitute a limitation of the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
FIG. 1 is a flowchart of a method for building a map according to a first embodiment of the present application;
FIG. 2 is a flowchart of a method for merging the map point information of N-1 maps into a reference map according to a second embodiment of the present application;
FIG. 3 is a schematic diagram of the relationship between the map building method and the positioning method according to the second embodiment of the present application;
FIG. 4 is a flowchart of a method for positioning using a merged map according to the second embodiment of the present application;
FIG. 5 is a schematic structural diagram of a terminal according to a third embodiment of the present application.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
The first embodiment of the present application relates to a method for building a map, applied to a terminal. As shown in FIG. 1, the method for building a map includes the following steps:
Step 101: obtain N maps describing the same space, where N is an integer greater than 1.
Specifically, the terminal collects multiple image sequences of the same space through a visual sensor, extracts map point information and spatially invariant features from each image sequence, and builds the map corresponding to each image sequence based on the extracted map point information and spatially invariant features.
It should be noted that a map may be one built by the terminal from an image sequence collected by its visual sensor, or one transmitted to the terminal by the cloud or another terminal; this embodiment does not limit the source of the maps.
Step 102: extract the spatially invariant features of each of the N maps.
Specifically, the spatially invariant features may be any one or any combination of line features, semantic features, and marker information. Line features are the features of the line segments in the map, including information such as the length, angle, and intersections of the line segments. Semantic features are features assigned to the same map points in different maps: the terminal recognizes the same map points in different maps through an image recognition method and assigns the same feature to the same map points. Marker information is the information in the map about the positioning markers in the space, where a positioning marker is a marker arranged at one of a plurality of fixed positions in the space and may be a Quick Response (QR) code or a Data Matrix (DM) code.
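As an illustration of how marker information might be read in practice, the following minimal Python sketch detects and decodes a QR positioning marker in a camera frame with OpenCV; the file name is a placeholder and the use of OpenCV's QRCodeDetector is an assumption made for the example, not part of the patented method.

    import cv2

    frame = cv2.imread("frame.png")              # placeholder path to a captured image
    detector = cv2.QRCodeDetector()
    payload, corners, _ = detector.detectAndDecode(frame)
    if corners is not None:
        print("decoded marker payload:", payload)
        print("marker corners in the image:", corners.reshape(-1, 2))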
Step 103: merge the information of the map points in the N maps according to the spatially invariant features of the N maps to obtain a merged map.
Methods of merging the information of the map points in the N maps according to their spatially invariant features to obtain a merged map are illustrated below with examples.
Method 1: the terminal creates a new map as the reference map and determines the relative pose relationship between the reference map and map A among the N maps, where map A is any one of the N maps. According to the relative pose relationship between the reference map and map A, the terminal adds the spatially invariant features and map point information of map A to the reference map. The terminal then matches the spatially invariant features of the N-1 maps other than map A against the spatially invariant features of the reference map, determines the relative pose relationship between each of the N-1 maps and the reference map according to the corresponding matching results, and merges the map point information of the N-1 maps into the reference map according to those relative pose relationships to obtain the merged map.
Method 2: the terminal selects one of the N maps as the reference map, matches the spatially invariant features of the N-1 maps other than the reference map against the spatially invariant features of the reference map, determines the relative pose relationship between each of the N-1 maps and the reference map according to the corresponding matching results, and merges the map point information of the N-1 maps into the reference map according to those relative pose relationships to obtain the merged map.
For clarity, this embodiment takes the determination of the relative pose relationship between map B and map C as an example to explain how, in methods 1 and 2, the terminal determines the relative pose relationship between each of the N-1 maps and the reference map. The terminal matches the spatially invariant features of map B against the spatially invariant features of map C and, based on the matching result, solves the relative pose relationship between map B and map C using a Perspective-n-Point (PnP) pose measurement algorithm. Optionally, during the PnP solution, the relative pose relationship is optimized with a bundle adjustment (BA) algorithm.
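The PnP step just described can be sketched in Python with OpenCV as follows. Here pts3d_B stands for the 3D positions of matched spatially invariant features taken from map B and pts2d_C for their 2D observations in a keyframe of map C with camera matrix K; the random placeholder data, the variable names, and the choice of cv2.solvePnP are assumptions used only to show the shape of the computation, and in practice the result would be refined with bundle adjustment.

    import cv2
    import numpy as np

    pts3d_B = np.random.rand(20, 3).astype(np.float32)         # placeholder matched 3D points from map B
    pts2d_C = (np.random.rand(20, 2) * 640).astype(np.float32) # placeholder 2D observations in map C
    K = np.array([[525.0, 0.0, 320.0],
                  [0.0, 525.0, 240.0],
                  [0.0, 0.0, 1.0]])                            # assumed camera intrinsics

    ok, rvec, tvec = cv2.solvePnP(pts3d_B, pts2d_C, K, None)
    R, _ = cv2.Rodrigues(rvec)                                 # rotation part of the relative pose
    print("relative pose estimate:\n", R, "\n", tvec.ravel())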
It should be noted that, while merging the map point information of the N-1 maps into the reference map, identical map points may fail to be merged into a single map point. To handle this, after obtaining the merged map the terminal may merge map points in the merged map whose distance is less than a preset value; the position information of the merged map point is determined from the position information of the map points before merging, and the preset value can be set according to actual needs. For example, if map point a in the merged map has spatial coordinates (xa, ya, za) and map point b has spatial coordinates (xb, yb, zb), where xa, ya, and za are the x, y, and z coordinates of map point a and xb, yb, and zb are the x, y, and z coordinates of map point b, the terminal calculates the distance between map point a and map point b and judges whether it is less than the preset value. If so, map point a and map point b are merged to obtain map point c, whose spatial coordinates are ((xa+xb)/2, (ya+yb)/2, (za+zb)/2) and whose description information set includes the description information set of map point a and the description information set of map point b.
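The merging of nearby map points just described can be sketched in Python as follows; the MapPoint structure, the 0.05 threshold, and the greedy pairwise scan are illustrative assumptions, while the coordinate averaging and the union of the description information sets follow the rule given above.

    import numpy as np

    class MapPoint:
        def __init__(self, xyz, descriptions):
            self.xyz = np.asarray(xyz, dtype=float)       # spatial coordinates (x, y, z)
            self.descriptions = list(descriptions)        # description information set

    def merge_close_points(points, threshold=0.05):
        merged, used = [], [False] * len(points)
        for i, a in enumerate(points):
            if used[i]:
                continue
            for j in range(i + 1, len(points)):
                b = points[j]
                if not used[j] and np.linalg.norm(a.xyz - b.xyz) < threshold:
                    # average the coordinates and keep both description sets
                    a = MapPoint((a.xyz + b.xyz) / 2, a.descriptions + b.descriptions)
                    used[j] = True
            merged.append(a)
            used[i] = True
        return merged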
It is worth mentioning that determining the position information of the merged map point from the position information of the map points before merging is equivalent to collecting the position information of the same map point multiple times, which improves the accuracy of the position information of the map points.
It is also worth mentioning that building a merged map from the map point information of the N maps improves the management efficiency of the map. After the merged map has been built with the method for building a map mentioned in this embodiment, when the terminal needs to expand the map it determines the position in the merged map of the information to be added and adds that information at that position, without expanding the map information of each of the N maps. When the terminal needs to delete or update map point information, it only needs to delete or update the map point information in the merged map, rather than deleting or updating the map point information in each of the N maps.
Compared with the prior art, in the method for building a map provided in this embodiment the information of the map points in multiple maps is stored in the merged map, so when the terminal expands, deletes, or updates map point information it only needs to operate on the merged map, which improves the management efficiency of the map. In addition, when the terminal uses the merged map for positioning, it does not need to switch the map used for matching, which improves the positioning efficiency.
The second embodiment of the present application relates to a method for building a map. This embodiment further refines the first embodiment and specifically describes the process of merging the map point information of the N-1 maps into the reference map according to the relative pose relationship between each of the N-1 maps and the reference map.
Specifically, a flowchart of the method for merging the map point information of the N-1 maps into the reference map is shown in FIG. 2 and includes the following steps:
Step 201: determine the correspondence between the map points in the N-1 maps and the map points in the reference map according to the relative pose relationship between each of the N-1 maps and the reference map.
Step 202: determine the description information sets of the map points in the merged map according to the correspondence between the map points in the N-1 maps and the map points in the reference map, and the description information of the map points in the N-1 maps.
It should be noted that, after obtaining the description information sets of the map points, the terminal may cluster the description information in each map point's description information set with a clustering algorithm to obtain the clustered description information set, and determine the merged map according to the clustered description information sets of the map points.
Specifically, the terminal clusters the description information in a map point's description information set, groups the description information of similar map points into one class, and records the center point of the class as the clustered description information. For example, after determining that there is description information of unclassified map points, the terminal randomly selects one piece of description information from the unclassified ones as the center point. For this center point, the terminal performs the following operations: find all description information within a first preset value of the center point and record it as set M; determine the vector from the center point to each element of set M and add all the vectors to obtain the offset vector; move the center point in the direction of the offset vector by half the modulus of the offset vector; judge whether the modulus of the offset vector is less than a second preset value, and if so record the center point; otherwise, determine the vector from the current center point to each element of set M, add all the vectors to obtain the offset vector, and move the center point in the direction of the offset vector by half its modulus, and so on, until the modulus of the offset vector is less than the second preset value. After the terminal determines that all description information of the map points has been classified, the clustered description information is determined from the recorded center points.
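A minimal Python sketch of this clustering procedure is given below, treating each piece of description information as a numeric vector. The radius and convergence thresholds stand for the first and second preset values and are chosen arbitrarily here; unlike the text, the sketch shifts the center toward the mean of the offsets (rather than their sum) so that the iteration converges, which is the standard mean-shift style update.

    import numpy as np

    def cluster_descriptions(descs, radius=0.3, eps=1e-3, max_iter=100):
        descs = np.asarray(descs, dtype=float)
        unclassified = set(range(len(descs)))
        centers = []
        while unclassified:
            seed = np.random.choice(list(unclassified))
            center = descs[seed].copy()
            # set M: all unclassified descriptions within the first preset value of the center
            members = [i for i in unclassified if np.linalg.norm(descs[i] - center) <= radius]
            for _ in range(max_iter):
                offset = (descs[members] - center).mean(axis=0)   # mean offset toward set M
                center = center + 0.5 * offset                    # move by half the offset
                if np.linalg.norm(offset) < eps:                  # second preset value reached
                    break
            centers.append(center)                                # recorded cluster center
            unclassified -= set(members)
        return centers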
It is worth mentioning that merging the description information of similar map points with a clustering algorithm reduces the amount of map data.
In another specific implementation, the map point information further includes the shooting information corresponding to each piece of description information of the map point, where the shooting information includes shooting brightness and shooting angle. After determining a map point's description information set, the terminal clusters the description information in the set with a clustering algorithm to obtain the clustered description information set. If the clustering algorithm groups L pieces of description information into one class, the shooting brightness corresponding to the clustered description information is determined from the shooting brightness of each of the L pieces, the shooting angle corresponding to the clustered description information is determined from the shooting angle of each of the L pieces, and the shooting information corresponding to each piece of description information in the clustered set is determined from the clustered shooting brightness and shooting angle. In a specific implementation, the terminal calculates the average of the shooting brightness values of the L pieces of description information as a first average and uses it as the shooting brightness corresponding to the clustered description information, and calculates the average of the shooting angles of the L pieces as a second average and uses it as the shooting angle corresponding to the clustered description information.
The process of determining the shooting information corresponding to the clustered description information is described below in combination with an actual scenario.
For example, the description information corresponding to map point P includes first description information (ka), second description information (kb), third description information (kc), fourth description information (kd), fifth description information (ke), and so on, where the shooting brightness of ka is a first shooting brightness (hka), that of kb is a second shooting brightness (hkb), that of kc is a third shooting brightness (hkc), ..., and the shooting angle of ka is a first shooting angle (tka), that of kb is a second shooting angle (tkb), that of kc is a third shooting angle (tkc), .... If the clustering algorithm groups ka, kb, and kc into one class, the shooting brightness of the clustered description information is (hka+hkb+hkc)/3 and its shooting angle is (tka+tkb+tkc)/3.
It is worth mentioning that merging the N maps reduces the storage space of the maps.
The following example explains why merging the N maps reduces storage space. In the N unmerged maps, the description information of a map point is stored in the format: (shooting information 1, map point id, position information, map point description information 1); (shooting information 2, map point id, position information, map point description information 2); (shooting information 3, map point id, position information, map point description information 3) ... (shooting information p, map point id, position information, map point description information p), where p is the number of pieces of description information of the map point and the map point id is the number the terminal assigns to a map point so that the same map point can be identified in different maps. After the merged map is obtained with the method for building a map mentioned in this embodiment, the description information of a map point in the merged map is stored as a description information set in the format: {map point id, position information, (shooting information 1, map point description information 1), (shooting information 2, map point description information 2), (shooting information 3, map point description information 3) ... (shooting information q, map point description information q)}. Compared with the unmerged maps, after the description information in the merged map's description information sets is clustered, q is less than or equal to p. Suppose the data type of the shooting information is unsigned character data (unsigned char) with a size of 1 byte, the data type of the map point id is unsigned integer data (unsigned int) with a size of 4 bytes, the position information is the spatial coordinates of the map point with each coordinate stored as floating point data (float) of 4 bytes, so the position information occupies 4*3 = 12 bytes, and the data type of the map point description information is unsigned char with a size of 8 bytes. Before merging, the N maps occupy roughly p + p*(4+12+8)*(number of map points) = p + 24*p*(number of map points) bytes, while the merged map occupies [4+12+q*(1+8)]*(number of map points) = (16+9*q)*(number of map points) bytes. Since the number of map points is usually very large (for a large scene it generally reaches the order of one hundred thousand or more), the ratio k of the merged map's data to the N maps' data is approximately (16+9*q)/(24*p); when q = p > 1, k is about 37.5%-70.8%, so the merged map takes less storage space.
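The stated range can be checked with a few lines of Python using the approximate per-map-point formulas given above (roughly 24*p bytes per map point before merging and 16+9*q bytes afterwards); the chosen values of p are arbitrary examples.

    def ratio(p, q):
        unmerged = 24 * p          # approx. bytes per map point across the unmerged maps
        merged = 16 + 9 * q        # bytes per map point in the merged map
        return merged / unmerged

    for p in (2, 5, 50):
        print(p, round(ratio(p, p), 3))   # 0.708, 0.508, 0.388 -> approaches 0.375 as p grows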
It should be noted that, after the merged map has been built with the method for building a map mentioned in the first or second embodiment, the terminal can perform positioning based on the merged map. The relationship between the map building method and the positioning method is shown in FIG. 3; in the figure, multi-state map merging refers to merging maps corresponding to different shooting information, and single-state positioning refers to positioning according to the shooting information at the time the terminal initiates a positioning request. The method by which the terminal performs positioning using the merged map is shown in FIG. 4 and includes the following steps:
Step 301: acquire an image for positioning.
Step 302: extract the description information of the map points in the image.
Step 303: determine the description information in the merged map that matches the description information of the map points in the image.
In a specific implementation, the terminal matches the description information of the map points in the image against all description information in the merged map and determines the matching description information.
In another specific implementation, when extracting the description information of the map points in the image, the terminal determines the shooting information of the image by analyzing the image or through a sensor on the terminal (such as a light sensor). From the image's shooting information, the terminal determines the shooting information corresponding to the description information of the map points in the image. According to the shooting information corresponding to the description information of the map points in the image and the shooting information corresponding to each piece of description information in the description information sets of the map points in the merged map, the terminal filters the description information sets of the map points in the merged map to obtain the description information used for matching. The terminal determines a temporary map from the description information used for matching and performs positioning based on the temporary map: it determines, from the description information used for matching in the temporary map, the description information that matches the description information of the map points in the image. When the shooting information includes shooting brightness and shooting angle, the specific procedure by which the terminal determines the description information used for matching is as follows: the terminal calculates a first difference between the shooting brightness of the image and the shooting brightness corresponding to each piece of description information, and a second difference between the shooting angle of the image and the shooting angle corresponding to each piece of description information; according to the first difference and the second difference, it selects M pieces of description information from the map point's description information set as the description information used for matching, where M is a positive integer.
The process by which the terminal filters the feature information sets of the map points in the map to obtain the description information used for matching is described below in combination with an actual scenario.
Suppose the terminal acquires, for a map point in the second image, a shooting brightness of H and a shooting angle of L, and the feature information set of a map point in the map is {(description information a1, shooting brightness h1, shooting angle l1), (description information a2, shooting brightness h2, shooting angle l2), (description information a3, shooting brightness h3, shooting angle l3) ...}; that is, the shooting brightness corresponding to description information a1 is h1 and the corresponding shooting angle is l1, the shooting brightness corresponding to description information a2 is h2 and the corresponding shooting angle is l2, and so on. The terminal calculates the difference between h1 and H and the difference between l1 and L, and sets different weights for the two differences according to actual needs, thereby determining the distance dt between the shooting information of the map point in the second image and the shooting information of the map point in the map, i.e., dt = a*(H-h1) + b*(L-l1), where a is the weight of the shooting brightness difference and b is the weight of the shooting angle difference. By analogy, the terminal calculates the distance between the shooting information of the map points in the second image and the shooting information corresponding to each piece of description information in the feature information sets of the map points in the map, sorts the description information in each feature information set in ascending order of that distance, and selects the first M pieces as the description information used for matching.
It should be noted that, as those skilled in the art can understand, in practice the feature information set can also be filtered by setting a preset distance value, i.e., the description information whose shooting information is at a distance smaller than the preset distance value from the shooting information corresponding to the description information of the map point in the second image is used as the description information for matching.
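The filtering of description information by shooting information described above can be sketched in Python as follows; the weights a and b, the use of absolute differences, and the data layout are assumptions, while the weighted distance and the selection of the M closest entries follow the scheme just described.

    def select_for_matching(H, L, feature_set, M, a=1.0, b=0.5):
        # feature_set: list of (description, shooting_brightness, shooting_angle) tuples
        scored = [(a * abs(H - h) + b * abs(L - l), desc) for desc, h, l in feature_set]
        scored.sort(key=lambda item: item[0])          # ascending distance dt
        return [desc for _, desc in scored[:M]]        # M descriptions used for matching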
Step 304: obtain from the merged map the position information of the map points corresponding to the matching description information, and determine the positioning result according to the obtained position information.
Specifically, the terminal determines the positioning result from the matching result between the map points in the map and the map points in the second image using a pose estimation algorithm, for example a PnP pose measurement algorithm; the positioning result includes the pose information of the terminal.
In a specific implementation, before obtaining from the merged map the position information of the map points corresponding to the matching description information and determining the positioning result, the terminal may first determine that at least T map points in the image have been successfully matched against the map, where T is a positive integer, for example T is equal to 10.
It is worth mentioning that calculating the pose information of the terminal only after the number of successfully matched map points reaches T avoids the waste of resources that would be caused by determining a positioning result when there are too few successful matches for positioning to succeed.
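The positioning flow of steps 301 to 304 can be sketched in Python as follows. ORB descriptors stand in for the map point description information, a brute-force matcher stands in for the matching against the merged map, and cv2.solvePnP gives the terminal pose once at least T map points have been matched; all of these tool choices, the placeholder data, and the variable names are assumptions rather than the patented implementation.

    import cv2
    import numpy as np

    T = 10
    K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
    map_descs = np.random.randint(0, 256, (500, 32), dtype=np.uint8)  # placeholder merged-map descriptions
    map_xyz = np.random.rand(500, 3).astype(np.float32)               # placeholder map point positions

    image = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)             # step 301: image for positioning (placeholder path)
    kps, descs = cv2.ORB_create().detectAndCompute(image, None)       # step 302: extract description information

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descs, map_descs)                         # step 303: match against the merged map
    if len(matches) >= T:                                             # require at least T successful matches
        obj = np.float32([map_xyz[m.trainIdx] for m in matches])
        img = np.float32([kps[m.queryIdx].pt for m in matches])
        ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)              # step 304: pose information of the terminal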
Compared with the prior art, in the method for building a map provided in this embodiment the information of the map points in multiple maps is stored in the merged map, so when the terminal expands, deletes, or updates map point information it only needs to operate on the merged map, which improves the management efficiency of the map. In addition, when the terminal uses the merged map for positioning, it does not need to switch the map used for matching, which improves the positioning efficiency. Moreover, the terminal merges the description information of similar map points with a clustering algorithm, reducing the amount of map data.
The third embodiment of the present application relates to a terminal which, as shown in FIG. 5, includes at least one processor 401 and a memory 402 communicatively connected to the at least one processor 401. The memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401 so that the at least one processor 401 can perform the above method for building a map.
In this embodiment, the processor 401 is exemplified by a central processing unit (CPU) and the memory 402 by a random access memory (RAM). The processor 401 and the memory 402 may be connected via a bus or in other ways; connection via a bus is taken as an example in FIG. 5. As a non-volatile computer-readable storage medium, the memory 402 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; for example, the description information of the map points in the embodiments of the present application is stored in the memory 402. By running the non-volatile software programs, instructions, and modules stored in the memory 402, the processor 401 executes the various functional applications and data processing of the device, i.e., implements the above method for building a map.
The memory 402 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required for at least one function, and the data storage area may store a list of options and the like. In addition, the memory 402 may include a high-speed random access memory and may further include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 402 may optionally include memory located remotely from the processor, and such remote memory may be connected to an external device through a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
One or more modules are stored in the memory and, when executed by one or more processors, perform the method for building a map in any of the above method embodiments.
The above product can perform the method provided in the embodiments of the present application and has the functional modules and beneficial effects corresponding to performing the method; for technical details not described in detail in this embodiment, refer to the method provided in the embodiments of the present application.
The fourth embodiment of the present application relates to a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the method for building a map described in any of the above method embodiments is implemented.
That is, those skilled in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by instructing the relevant hardware through a program. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and that in practical applications various changes in form and detail may be made to them without departing from the spirit and scope of the present application.

Claims (14)

  1. A method for building a map, comprising:
    obtaining N maps describing the same space, wherein N is an integer greater than 1;
    extracting spatially invariant features of each of the N maps;
    merging information of map points in the N maps according to the spatially invariant features of the N maps to obtain a merged map.
  2. The method for building a map according to claim 1, wherein the spatially invariant features comprise any one or any combination of line features, semantic features, and marker information;
    wherein the line features are features of line segments in a map; the semantic features are features assigned to the same map points in different maps; and the marker information is information in a map about positioning markers in the space.
  3. The method for building a map according to claim 1 or 2, wherein the merging information of map points in the N maps according to the spatially invariant features of the N maps to obtain a merged map specifically comprises:
    selecting one map from the N maps as a reference map;
    matching the spatially invariant features of the N-1 maps in the N maps other than the reference map respectively against the spatially invariant features of the reference map;
    determining the relative pose relationship between each of the N-1 maps and the reference map according to the matching results respectively corresponding to the N-1 maps;
    merging the map point information of the N-1 maps into the reference map according to the relative pose relationships between the N-1 maps and the reference map to obtain the merged map.
  4. The method for building a map according to any one of claims 1 to 3, wherein after merging the information of the map points in the N maps according to the spatially invariant features of the N maps to obtain the merged map, the method for building a map further comprises:
    merging map points in the merged map whose distance is less than a preset value, wherein position information of the merged map point is determined according to the position information of the map points before the merging.
  5. The method for building a map according to claim 3, wherein the map point information comprises description information of the map points;
    the merging the map point information of the N-1 maps into the reference map according to the relative pose relationships between the N-1 maps and the reference map to obtain the merged map specifically comprises:
    determining the correspondence between the map points in the N-1 maps and the map points in the reference map according to the relative pose relationships between the N-1 maps and the reference map;
    determining the description information sets of the map points in the merged map according to the correspondence between the map points in the N-1 maps and the map points in the reference map and the description information of the map points in the N-1 maps.
  6. The method for building a map according to claim 5, wherein after determining the description information sets of the map points in the merged map according to the correspondence between the map points in the N-1 maps and the map points in the reference map and the description information of the map points in the N-1 maps, the method for building a map further comprises:
    clustering the description information in the description information sets of the map points through a clustering algorithm to obtain clustered description information sets.
  7. The method for building a map according to claim 6, wherein the map point information further comprises: shooting information corresponding to the description information of the map points.
  8. The method for building a map according to claim 7, wherein the shooting information comprises shooting brightness and shooting angle;
    after clustering the description information in the description information sets of the map points through the clustering algorithm to obtain the clustered description information sets, the method for building a map further comprises:
    if the clustering algorithm groups L pieces of description information into one class, determining the shooting brightness corresponding to the clustered description information according to the shooting brightness respectively corresponding to the L pieces of description information, and determining the shooting angle corresponding to the clustered description information according to the shooting angle respectively corresponding to the L pieces of description information;
    determining the shooting information corresponding to each piece of clustered description information in the clustered description information sets according to the shooting brightness corresponding to the clustered description information and the shooting angle corresponding to the clustered description information.
  9. The method for building a map according to claim 8, wherein after determining the clustered description information sets according to the shooting brightness corresponding to the clustered description information and the shooting angle corresponding to the clustered description information, the method for building a map further comprises:
    acquiring an image for positioning;
    extracting description information of map points in the image;
    determining description information in the merged map that matches the description information of the map points in the image;
    obtaining, from the merged map, position information of the map points corresponding to the matching description information, and determining a positioning result according to the obtained position information.
  10. The method for building a map according to claim 9, wherein the determining description information in the merged map that matches the description information of the map points in the image specifically comprises:
    determining shooting information of the image;
    filtering the description information sets of the map points in the merged map according to the shooting information of the image and the shooting information corresponding to each piece of description information in the description information sets of the map points in the merged map, to obtain description information used for matching;
    determining, from the description information used for matching, the description information that matches the description information of the map points in the image.
  11. The method for building a map according to claim 10, wherein the shooting information comprises shooting brightness and shooting angle;
    the filtering the description information sets of the map points in the merged map according to the shooting information of the image and the shooting information corresponding to each piece of description information in the description information sets of the map points in the map, to obtain the description information used for matching, specifically comprises:
    calculating a first difference between the shooting brightness of the image and the shooting brightness corresponding to each piece of description information, and a second difference between the shooting angle of the image and the shooting angle corresponding to each piece of description information;
    selecting, according to the first difference and the second difference, M pieces of description information from the description information sets of the map points as the description information used for matching, wherein M is a positive integer.
  12. The method for building a map according to any one of claims 9 to 11, wherein after determining the description information in the map that matches the description information of the map points in the image and before determining the positioning result according to the obtained position information, the positioning method further comprises:
    determining that at least T map points in the image are successfully matched against the map, wherein T is a positive integer.
  13. A terminal, comprising at least one processor; and
    a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for building a map according to any one of claims 1 to 12.
  14. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method for building a map according to any one of claims 1 to 12 is implemented.
PCT/CN2018/094329 2018-07-03 2018-07-03 一种建立地图的方法、终端和计算机可读存储介质 WO2020006685A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/094329 WO2020006685A1 (zh) 2018-07-03 2018-07-03 一种建立地图的方法、终端和计算机可读存储介质
CN201880001181.6A CN109074757B (zh) 2018-07-03 2018-07-03 一种建立地图的方法、终端和计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/094329 WO2020006685A1 (zh) 2018-07-03 2018-07-03 一种建立地图的方法、终端和计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2020006685A1 true WO2020006685A1 (zh) 2020-01-09

Family

ID=64789292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094329 WO2020006685A1 (zh) 2018-07-03 2018-07-03 一种建立地图的方法、终端和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN109074757B (zh)
WO (1) WO2020006685A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652934A (zh) * 2020-05-12 2020-09-11 Oppo广东移动通信有限公司 定位方法及地图构建方法、装置、设备、存储介质
CN115001835A (zh) * 2022-06-15 2022-09-02 覃惠玲 一种基于物联网终端的数据加密***
CN115705670A (zh) * 2021-08-06 2023-02-17 北京小米移动软件有限公司 地图管理方法及其装置

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020223974A1 (zh) * 2019-05-09 2020-11-12 珊口(深圳)智能科技有限公司 更新地图的方法及移动机器人
CN112148742A (zh) * 2019-06-28 2020-12-29 Oppo广东移动通信有限公司 地图更新方法及装置、终端、存储介质
CN118095809A (zh) * 2024-04-28 2024-05-28 炬星科技(深圳)有限公司 机器人多任务处理方法及装置、机器人

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503011A (zh) * 2015-09-07 2017-03-15 高德软件有限公司 一种地图数据处理方法及装置
JP2018050149A (ja) * 2016-09-21 2018-03-29 キヤノン株式会社 画像処理装置
CN107885867A (zh) * 2017-11-22 2018-04-06 苏州联讯图创软件有限责任公司 地图切片数据的合成方法和合成***

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL2008690C2 (en) * 2011-04-25 2014-07-15 Google Inc Dynamic highlighting of geographic entities on electronic maps.
KR101692652B1 (ko) * 2012-10-24 2017-01-03 가부시키가이샤 모르포 화상 처리 장치, 화상 처리 방법, 및 기록 매체
US9177404B2 (en) * 2012-10-31 2015-11-03 Qualcomm Incorporated Systems and methods of merging multiple maps for computer vision based tracking
CN103136782B (zh) * 2013-02-22 2016-05-18 广东威创视讯科技股份有限公司 一种三维模型地图动态渲染方法及装置
US9201019B2 (en) * 2013-05-30 2015-12-01 Seagate Technology Llc Article edge inspection
US9723109B2 (en) * 2014-05-28 2017-08-01 Alexander Hertel Platform for constructing and consuming realm and object feature clouds
CN106156245B (zh) * 2015-04-28 2020-02-14 高德软件有限公司 一种电子地图中的线要素合并方法及装置
CN105096386B (zh) * 2015-07-21 2017-11-17 中国民航大学 大范围复杂城市环境几何地图自动生成方法
CN105674993A (zh) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 基于双目相机的高精度视觉定位地图生成***及方法
US20190333239A1 (en) * 2016-12-02 2019-10-31 Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Positioning method and device
CN107223269B (zh) * 2016-12-29 2021-09-28 达闼机器人有限公司 三维场景定位方法和装置
CN106802954B (zh) * 2017-01-18 2021-03-26 中国科学院合肥物质科学研究院 无人车语义地图模型构建方法及其在无人车上的应用方法
CN107680135B (zh) * 2017-11-16 2019-07-23 珊口(上海)智能科技有限公司 定位方法、***及所适用的机器人
CN107885871A (zh) * 2017-11-24 2018-04-06 南京华捷艾米软件科技有限公司 基于云计算的同步定位与地图构建方法、***、交互***
CN108108748A (zh) * 2017-12-08 2018-06-01 联想(北京)有限公司 一种信息处理方法及电子设备

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503011A (zh) * 2015-09-07 2017-03-15 高德软件有限公司 一种地图数据处理方法及装置
JP2018050149A (ja) * 2016-09-21 2018-03-29 キヤノン株式会社 画像処理装置
CN107885867A (zh) * 2017-11-22 2018-04-06 苏州联讯图创软件有限责任公司 地图切片数据的合成方法和合成***

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652934A (zh) * 2020-05-12 2020-09-11 Oppo广东移动通信有限公司 定位方法及地图构建方法、装置、设备、存储介质
CN115705670A (zh) * 2021-08-06 2023-02-17 北京小米移动软件有限公司 地图管理方法及其装置
CN115705670B (zh) * 2021-08-06 2024-06-04 北京小米移动软件有限公司 地图管理方法及其装置
CN115001835A (zh) * 2022-06-15 2022-09-02 覃惠玲 一种基于物联网终端的数据加密***

Also Published As

Publication number Publication date
CN109074757A (zh) 2018-12-21
CN109074757B (zh) 2021-11-09

Similar Documents

Publication Publication Date Title
WO2020006685A1 (zh) 一种建立地图的方法、终端和计算机可读存储介质
WO2020259481A1 (zh) 定位方法及装置、电子设备、可读存储介质
CN108921874B (zh) 人体跟踪处理方法、装置及***
CN109740004B (zh) 一种归档方法及装置
Sheng et al. Unsupervised collaborative learning of keyframe detection and visual odometry towards monocular deep slam
CN110399789B (zh) 行人重识别方法、模型构建方法、装置、设备和存储介质
KR20180035869A (ko) 정경 재구성 방법, 장치, 단말기 장치 및 저장 매체
CN110132242B (zh) 多摄像机即时定位与地图构建的三角化方法及其运动体
CN113935358A (zh) 一种行人追踪方法、设备和存储介质
CN111105459B (zh) 描述子地图生成方法、定位方法、装置、设备和存储介质
KR20160109761A (ko) 건설현장 맞춤형 이미지 분석기술을 활용한 중장비/근로자 인식 및 추적 방법 및 시스템
WO2024077935A1 (zh) 一种基于视觉slam的车辆定位方法及装置
Wilson et al. Visual and object geo-localization: A comprehensive survey
JP6976731B2 (ja) 情報処理装置、情報処理方法、及びプログラム
CN113808269A (zh) 地图生成方法、定位方法、***及计算机可读存储介质
CN111401482B (zh) 特征点匹配方法及装置、设备、存储介质
CN109074676B (zh) 建立地图的方法、定位方法、终端及计算机可读存储介质
CN111767839A (zh) 一种车辆行驶轨迹确定方法、装置、设备及介质
CN111079535A (zh) 一种人体骨架动作识别方法、装置及终端
CN110827340B (zh) 地图的更新方法、装置及存储介质
CN114443914B (zh) 元宇宙空间服务器的数据索引、查询方法及***
CN113361392B (zh) 无监督的基于相机和无线定位的多模态行人重识别方法
Huang et al. Image-based localization for indoor environment using mobile phone
Wu et al. Vehicle re-id for surround-view camera system
CN113763468B (zh) 一种定位方法、装置、***及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18925276

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/05/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18925276

Country of ref document: EP

Kind code of ref document: A1