WO2022007385A1 - Method and device for fusing laser and visual positioning - Google Patents

Method and device for fusing laser and visual positioning Download PDF

Info

Publication number
WO2022007385A1
WO2022007385A1 (PCT/CN2021/072510, CN2021072510W)
Authority
WO
WIPO (PCT)
Prior art keywords
visual
map
positioning
mapping
laser
Prior art date
Application number
PCT/CN2021/072510
Other languages
English (en)
French (fr)
Inventor
王小挺
白静
程伟
谷桐
张晓凤
陈士凯
Original Assignee
上海思岚科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海思岚科技有限公司
Publication of WO2022007385A1 publication Critical patent/WO2022007385A1/zh

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations

Definitions

  • the present application relates to the field of computer technology, and in particular, to a method and device for fusion of laser and visual positioning.
  • laser simultaneous localization and mapping (SLAM) started early and is relatively mature in theory, technology, and productization; it is currently the most stable and mature positioning and navigation approach, and the grid map it builds can be used directly for path planning and navigation, but it struggles to keep positioning stable when the map differs greatly from the actual environment or when many glass walls are present.
  • VSLAM has the advantages of low deployment cost, rich information, and wide applicability, and is a mainstream direction of future development. However, it also has disadvantages: it is strongly affected by lighting, and the maps it builds cannot be used for path planning.
  • the principle of visual label SLAM (TagSLAM) is basically the same as that of conventional VSLAM, but it uses decodable visual labels in place of image feature points, exploiting the geometric constraints of the labels and the uniqueness of their encoded values for precise positioning.
  • TagSLAM can attach labels to areas that do not change easily, such as ceilings, so that positioning remains stable even when the environment changes greatly.
  • the disadvantage of TagSLAM is that it requires intrusive deployment that modifies the environment, so it cannot be applied to all scenarios; even in places where labels can be deployed, it is difficult to cover the entire environment.
  • An object of the present application is to provide a method and device for fusing laser and visual positioning, which solve the problems in the prior art that existing positioning methods have difficulty maintaining stable positioning and cannot be applied to all scenarios.
  • a method for fusing laser and visual positioning, comprising: acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps; and calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map;
  • when the visual mapping engine is in positioning mode, all visual label sub-maps are screened, and the positioning quality is determined according to the screened visual label sub-maps and the three-dimensional transformation relationship;
  • the visual positioning observation input information is determined according to the positioning quality and fused with the laser positioning information and the odometry information.
  • calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map includes: recording, according to timestamps, the correspondence between the visual mapping key frames in the second map and the laser mapping key frames in the first map;
  • the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map is then calculated according to that correspondence.
  • calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map according to the correspondence includes: calculating the position of each visual label sub-map, and calculating the three-dimensional transformation from that position to the corresponding first map to obtain the three-dimensional transformation relationship.
  • determining the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship includes: determining the average distance between transformed corresponding key frames, and normalizing it to obtain the mapping quality of the visual mapping;
  • the positioning quality is then determined according to the mapping quality of the visual mapping.
  • determining the positioning quality according to the mapping quality of the visual mapping includes:
  • determining the positioning quality according to the mapping quality of the visual mapping, the number of visual labels, and a visual label count threshold.
  • the average distance between each transformed visual mapping key frame and the corresponding laser mapping key frame is calculated from the key frame poses and the three-dimensional transformation relationship.
  • determining the visual positioning observation input information according to the positioning quality includes: determining the current pose of the device where the visual mapping engine is located;
  • with laser mapping as the main system, the current pose of the device where the visual mapping engine is located is polled through inter-process communication to determine whether it is a valid pose;
  • when the current pose is valid, it is preprocessed according to the positioning quality, and the visual positioning observation input information is determined from the preprocessing result.
  • a laser and visual positioning fusion device comprising:
  • an acquiring device for acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps;
  • a computing device for computing the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map;
  • a determination device used for screening all the visual label sub-maps when the visual mapping engine is in the positioning mode, and determining the positioning quality according to the screened effective visual label sub-maps and the three-dimensional transformation relationship;
  • the fusion device is configured to determine the visual positioning observation input information according to the positioning quality, and fuse the visual positioning observation input information with the laser positioning information and the odometer information.
  • a computer-readable medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a processor to implement the method as described above.
  • the present application acquires a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps; calculates the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map; and, when the visual mapping engine is in positioning mode, screens all visual label sub-maps and determines the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship;
  • the visual positioning observation input information is determined according to the positioning quality and fused with the laser positioning information and the odometry information. Thus, through the collaborative work of the laser mapping engine and the visual mapping engine, positioning stability can be maintained and the range of applicable application scenarios is expanded.
  • FIG. 1 shows a schematic flowchart of a method for fusion of laser and visual positioning provided according to an aspect of the present application
  • FIG. 2 shows a schematic flowchart of mapping in a specific embodiment of the present application
  • FIG. 3 shows a schematic flowchart of a method for fusion of laser and visual positioning based on an extended Kalman filter in a specific embodiment of the present application
  • FIG. 4 shows a schematic structural diagram of a laser and visual positioning fusion device provided by another aspect of the present application.
  • the terminal, the device serving the network, and the trusted party each include one or more processors (for example, a central processing unit (CPU)), an input/output interface, a network interface, and memory.
  • Memory may include non-persistent storage in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • as defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
  • FIG. 1 shows a schematic flowchart of a method for laser and visual positioning fusion provided according to an aspect of the present application.
  • the method includes: steps S11 to S14,
  • in step S11, the first map established by the laser mapping engine and the second map established by the visual mapping engine are obtained, wherein the second map includes a plurality of visual label sub-maps. Here, the laser mapping engine performs laser SLAM mapping, and the map it builds is the first map, a grid map; the visual mapping engine performs TagSLAM mapping, and the map it builds is the second map. When building the second map, the map is built only in areas that contain labels, each labeled area forming a sub-map; after the overall mapping is completed, several visual label sub-maps are obtained. Sub-maps can greatly expand the application range of visual label SLAM.
  • in step S12, the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map is calculated. Here, TagSLAM builds the map in the form of sub-maps; when each visual label sub-map is completed, its three-dimensional transformation relationship with the corresponding first map is calculated, the transformation consisting of a rotation matrix r and a translation vector t.
  • in step S13, when the visual mapping engine is in positioning mode, all visual label sub-maps are screened, and the positioning quality is determined according to the screened visual label sub-maps and the three-dimensional transformation relationship. Here, the positioning mode of TagSLAM is enabled; while TagSLAM is in positioning mode, all observed visual labels are screened to filter out valid ones. That is, if a valid visual label is observed, the positioning quality is determined according to the sub-map where the valid label is located and the three-dimensional transformation relationship calculated above. The positioning quality describes the reliability of the positioning; the visual positioning quality lies between 0 and 1, with larger values indicating higher reliability.
  • in step S14, visual positioning observation input information is determined according to the positioning quality and fused with laser positioning information and odometry information. Based on the positioning quality, it is determined whether the visual positioning observation input is the current pose, whether relocalization is needed to obtain a new pose, or whether the current pose should not be fused. The determined visual positioning observation input is fused with the laser positioning and the odometry, for example using an extended Kalman filter (EKF); other positioning information, such as GPS or third-party VSLAM positioning, can also be incorporated. The EKF can handle nonlinear systems, so positioning information from different sources can be fused well.
  • in step S12, the correspondence between the visual mapping key frames in the second map and the laser mapping key frames in the first map is recorded according to timestamps, and the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map is calculated according to that correspondence.
  • laser SLAM and TagSLAM build maps independently,
  • while laser SLAM pushes key frames to TagSLAM in real time,
  • and TagSLAM records the one-to-one correspondence between the key frames of the two maps according to timestamps while mapping.
  • when laser mapping produces a key frame, the first visual frame after it becomes a visual key frame; separately, if the position of an observed label differs from the previous frame by more than a preset pixel distance (for example, 18 pixels), the selected key frame participates in mapping but has no corresponding laser key frame and does not take part in the subsequent calculation of the sub-map's three-dimensional transformation.
  • when mapping ends, the three-dimensional transformation from the TagSLAM map to the laser SLAM map is calculated from the matched key frame coordinates via the ICP (Iterative Closest Point) algorithm.
  • the visual label sub-map refers to a local map established by the visual mapping engine, and the several visual label sub-maps in the second map are independent of one another; by calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map, the pose of each visual label sub-map in the world coordinate system is obtained.
  • specifically, the position of each visual label sub-map can be calculated according to the correspondence, and the three-dimensional transformation from that position to the corresponding first map is calculated to obtain the three-dimensional transformation relationship;
  • based on the one-to-one key frame correspondence, a point-cloud matching algorithm is used to obtain the three-dimensional transformation from each visual label sub-map to the corresponding first map.
  • in step S13, all visual labels observed while the visual mapping engine is in positioning mode are obtained; all the visual labels are decoded to obtain legal encoded values, and it is determined whether any visual label sub-map contains the encoded value; if so, the sub-map containing the encoded value is taken as the screened visual label sub-map.
  • a visual label belongs to only one sub-map, and a sub-map contains multiple visual labels; when TagSLAM is in positioning mode, it screens all observed visual labels to filter out valid ones, and if a valid visual label is observed, the current pose is calculated according to the corresponding visual label sub-map and the three-dimensional transformation relationship.
  • a valid visual label is determined as follows: if decoding a visual label yields a legal encoded value and that value is contained in some visual label sub-map, the label is valid and participates in positioning, which proceeds from the sub-map where the valid label is located and the three-dimensional transformation relationship.
  • in step S13, the average distance between transformed corresponding key frames is determined according to the screened visual label sub-maps and the three-dimensional transformation relationship; the average distance is normalized to obtain the mapping quality of the visual mapping, and the positioning quality is determined according to that mapping quality.
  • the screened visual label sub-maps are valid visual label sub-maps; the average distance between the transformed corresponding key frames is calculated and then normalized to evaluate the mapping quality, from which the positioning quality is obtained.
  • the poses of the visual mapping key frames in the valid visual label sub-maps and the poses of the corresponding laser mapping key frames in the first map can be obtained; from the poses of the visual mapping key frames, the poses of the laser mapping key frames, and the three-dimensional transformation relationship, the average distance between each transformed visual mapping key frame and its corresponding laser mapping key frame is calculated.
  • the mean absolute error (mae) between the transformed corresponding key frames is calculated by averaging, over all key frame pairs, the distance between the transformed pose TKi of the i-th TagSLAM key frame and the pose LKi of the i-th laser SLAM key frame.
  • the mae is then normalized into the mapping quality q; the closer q is to 1, the smaller the error.
  • the positioning quality may be determined according to the mapping quality of the visual mapping, the number of observed visual labels, and a configurable visual label count threshold, as positioning quality = mapping quality × number of observed labels / label count threshold.
  • the positioning quality is used to describe the reliability of the positioning; the visual positioning quality lies between 0 and 1, with larger values indicating a more reliable positioning.
  • in step S14, the current pose of the device where the visual mapping engine is located is determined; with laser mapping as the main system, that pose is polled through inter-process communication and checked for validity; when the current pose is valid, it is preprocessed according to the positioning quality, and the visual positioning observation input information is determined from the preprocessing result.
  • laser SLAM performs positioning through an algorithm, and both its positioning result and the odometry value are input into the extended Kalman filter as observations; TagSLAM performs positioning independently from its own observations, where an observation is obtained by decoding the image captured by the camera into several visual labels, finding the corresponding (valid) visual label sub-map from the decoded labels, obtaining the labels' poses in the world coordinate system, computing the current camera pose via the PnP algorithm, and deriving the current machine pose from the camera extrinsics and the sub-map pose.
  • when the TagSLAM positioning quality is high, the laser SLAM positioning quality is low, and the distance to the current pose is large, the visual positioning of TagSLAM prevails, and the current pose is directly corrected to obtain the final positioning.
  • the default value of the visual positioning quality threshold can be 0.6; when the quality exceeds 0.6, the positioning quality is considered high, and the pose participates in the fusion process of the extended Kalman filter.
  • as shown in FIG. 2, the present application uses TagSLAM mapping and laser SLAM mapping; laser SLAM pushes the laser key frames of the resulting map to the TagSLAM mapping, and the TagSLAM key frames are matched one-to-one with the laser key frames by timestamp.
  • when a sub-map is finished or mapping is paused, a global BundleAdjustment (bundle adjustment) is performed.
  • in robot navigation, 2D image feature points reprojected back into the 3D domain deviate from the positions of the true 3D points; BundleAdjustment minimizes this deviation through algorithms such as least squares, thereby obtaining a precise value of the robot pose.
  • the global BundleAdjustment optimizes a visual label sub-map as a whole; during mapping, a local BundleAdjustment is also applied over a sliding window.
  • after a sub-map has been bundle-adjusted, the three-dimensional transformation relationship from the visual label sub-map to the laser SLAM map is calculated according to the key frame correspondence.
  • FIG. 3 is a schematic flowchart of a method for fusing laser and visual positioning based on the extended Kalman filter: TagSLAM acquires an image and checks whether a valid label is observed; if not, the pose is cleared; if so, the current pose is calculated and the timestamp is saved.
  • laser SLAM polls the TagSLAM pose through inter-process communication and checks whether the polled pose is a valid positioning; if not, it is ignored and no subsequent fusion is performed; if so, it checks whether the positioning quality is above the threshold; if not, a low-positioning alarm is raised and the current pose is not fused; if so, it checks whether the distance from the current pose is large and the laser positioning quality is low, to decide whether the final positioning must be corrected directly; if not, the pose serves as the observation input to the extended Kalman filter and is fused there with the positioning result obtained by SLAM positioning and the odometry value to obtain the final positioning result.
  • an embodiment of the present application further provides a computer-readable medium on which computer-readable instructions are stored, and the computer-readable instructions can be executed by a processor to implement the foregoing method for laser and visual positioning fusion.
  • the present application also provides a terminal, which includes modules or units capable of executing the method steps described in FIG. 1, FIG. 2, or FIG. 3 or the various embodiments; these can be implemented by means of hardware, software, or a combination of software and hardware, which is not limited in this application.
  • a device for fusion of laser and visual positioning is also provided, and the device includes:
  • a memory storing computer readable instructions which, when executed, cause the processor to perform the operations of the method as previously described.
  • the computer-readable instructions, when executed, cause the one or more processors to perform:
  • a method for fusing laser and visual positioning, characterized in that the method comprises: acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps; and calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map;
  • when the visual mapping engine is in positioning mode, all visual label sub-maps are screened, and the positioning quality is determined according to the screened visual label sub-maps and the three-dimensional transformation relationship;
  • the visual positioning observation input information is determined according to the positioning quality and fused with the laser positioning information and the odometry information.
  • FIG. 4 shows a schematic structural diagram of a laser and visual positioning fusion device provided by another aspect of the present application.
  • the device includes: an acquisition device 11, a computing device 12, a determination device 13, and a fusion device 14.
  • the acquisition device 11 is used to obtain the first map established by the laser mapping engine and the second map established by the visual mapping engine, wherein the second map includes a plurality of visual label sub-maps; the computing device 12 is used to calculate the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map; the determination device 13 screens all visual label sub-maps when the visual mapping engine is in positioning mode and determines the positioning quality according to the screened sub-maps and the three-dimensional transformation relationship; the fusion device 14 determines the visual positioning observation input information according to the positioning quality and fuses it with the laser positioning information and the odometry information.
  • the content executed by the acquisition device 11, the computing device 12, the determination device 13, and the fusion device 14 is the same as or corresponds to the content of steps S11, S12, S13, and S14 above; for brevity, it is not repeated here.
  • the present application may be implemented in software and/or a combination of software and hardware, for example, may be implemented using an application specific integrated circuit (ASIC), a general purpose computer, or any other similar hardware device.
  • the software program of the present application may be executed by a processor to implement the steps or functions described above.
  • the software programs of the present application (including associated data structures) may be stored on a computer-readable recording medium, such as RAM memory, magnetic or optical drives or floppy disks, and the like.
  • some steps or functions of the present application may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform various steps or functions.
  • a part of the present application can be applied as a computer program product, such as computer program instructions, which when executed by a computer, through the operation of the computer, can invoke or provide methods and/or technical solutions according to the present application.
  • the program instructions for invoking the methods of the present application may be stored in fixed or removable recording media, and/or transmitted via data streams in broadcast or other signal-bearing media, and/or stored in the working memory of a computer device that runs according to the program instructions.
  • an embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to operate based on the aforementioned methods and/or technical solutions according to the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

A method and device for fusing laser and visual positioning. The method includes: acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps (S11); calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map (S12); determining valid visual label sub-maps while the visual mapping engine is in positioning mode, and determining the positioning quality according to the valid visual label sub-maps and the three-dimensional transformation relationship (S13); and determining visual positioning observation input information according to the positioning quality, and fusing the visual positioning observation input information with laser positioning information and odometry information (S14). Through the collaborative work of the laser mapping engine and the visual mapping engine, positioning stability can be maintained and the range of applicable application scenarios is expanded.

Description

Method and device for fusing laser and visual positioning
Technical Field
The present application relates to the field of computer technology, and in particular to a method and device for fusing laser and visual positioning.
Background
Laser simultaneous localization and mapping (SLAM) started early and is relatively mature in theory, technology, and productization; it is currently the most stable and mature positioning and navigation approach. The grid map obtained by laser SLAM mapping can be used directly for path planning and navigation. However, when the map differs greatly from the actual environment, or when many glass walls are present, laser SLAM has difficulty maintaining stable positioning.
VSLAM has the advantages of low deployment cost, rich information, and wide applicability, and is a mainstream direction of future development; however, it is strongly affected by lighting, and the maps it builds cannot be used for path planning. Visual label SLAM (TagSLAM) works on basically the same principle as conventional VSLAM, but uses decodable visual labels instead of image feature points, exploiting the geometric constraints of the labels and the uniqueness of their encoded values for precise positioning. TagSLAM can attach labels to areas that do not change easily, such as ceilings, so that positioning remains stable even when the environment changes greatly. The drawback of TagSLAM is that it requires intrusive deployment that modifies the environment, so it cannot be applied to all scenarios; even in places where labels can be deployed, it is difficult to cover the entire environment.
Summary
An object of the present application is to provide a method and device for fusing laser and visual positioning, which solve the problems in the prior art that existing positioning methods have difficulty maintaining stable positioning and cannot be applied to all scenarios.
According to one aspect of the present application, a method for fusing laser and visual positioning is provided, the method comprising:
acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps;
calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map;
when the visual mapping engine is in positioning mode, screening all visual label sub-maps, and determining the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship;
determining visual positioning observation input information according to the positioning quality, and fusing the visual positioning observation input information with laser positioning information and odometry information.
Further, calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map includes:
recording, according to timestamps, the correspondence between the visual mapping key frames in the second map and the laser mapping key frames in the first map;
calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map according to the correspondence.
Further, when the visual mapping engine is in positioning mode, screening all visual label sub-maps includes:
acquiring all visual labels observed while the visual mapping engine is in positioning mode;
decoding all the visual labels to obtain legal encoded values, and determining whether any visual label sub-map contains the encoded value; if so, taking the visual label sub-map containing the encoded value as the screened visual label sub-map.
Further, calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map according to the correspondence includes:
calculating the position of each visual label sub-map according to the correspondence;
calculating the three-dimensional transformation from the position of each visual label sub-map to the corresponding first map to obtain the three-dimensional transformation relationship.
Further, determining the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship includes:
determining the average distance between transformed corresponding key frames according to the screened visual label sub-maps and the three-dimensional transformation relationship;
normalizing the average distance to obtain the mapping quality of the visual mapping;
determining the positioning quality according to the mapping quality of the visual mapping.
Further, determining the positioning quality according to the mapping quality of the visual mapping includes:
determining the positioning quality according to the mapping quality of the visual mapping, the number of visual labels, and a visual label count threshold.
Further, determining the average distance between transformed corresponding key frames according to the screened visual label sub-maps and the three-dimensional transformation relationship includes:
acquiring the poses of the visual mapping key frames in the screened visual label sub-maps, and the poses of the laser mapping key frames in the first map corresponding to the screened visual label sub-maps;
calculating the average distance between each transformed visual mapping key frame and the corresponding laser mapping key frame according to the poses of the visual mapping key frames, the poses of the laser mapping key frames, and the three-dimensional transformation relationship.
Further, determining the visual positioning observation input information according to the positioning quality includes:
determining the current pose of the device where the visual mapping engine is located;
using the laser mapping as the main system, polling the current pose of the device where the visual mapping engine is located through inter-process communication, and determining whether the current pose is a valid pose;
when the current pose is a valid pose, preprocessing the current pose according to the positioning quality, and determining the visual positioning observation input information according to the preprocessing result.
According to another aspect of the present application, a device for fusing laser and visual positioning is also provided, the device comprising:
an acquisition device for acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps;
a computing device for computing the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map;
a determination device for screening all visual label sub-maps when the visual mapping engine is in positioning mode, and determining the positioning quality according to the screened valid visual label sub-maps and the three-dimensional transformation relationship;
a fusion device for determining visual positioning observation input information according to the positioning quality, and fusing the visual positioning observation input information with laser positioning information and odometry information.
According to yet another aspect of the present application, a computer-readable medium is also provided, on which computer-readable instructions are stored, the computer-readable instructions being executable by a processor to implement the method described above.
Compared with the prior art, the present application acquires a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps; calculates the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map; when the visual mapping engine is in positioning mode, screens all visual label sub-maps and determines the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship; and determines visual positioning observation input information according to the positioning quality and fuses it with laser positioning information and odometry information. Thus, through the collaborative work of the laser mapping engine and the visual mapping engine, positioning stability can be maintained and the range of applicable application scenarios is expanded.
Brief Description of the Drawings
Other features, objects, and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
FIG. 1 is a schematic flowchart of a method for fusing laser and visual positioning provided according to one aspect of the present application;
FIG. 2 is a schematic flowchart of mapping in a specific embodiment of the present application;
FIG. 3 is a schematic flowchart of a method for fusing laser and visual positioning based on an extended Kalman filter in a specific embodiment of the present application;
FIG. 4 is a schematic structural diagram of a device for fusing laser and visual positioning provided by another aspect of the present application.
The same or similar reference numerals in the drawings denote the same or similar components.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (for example, a central processing unit (CPU)), an input/output interface, a network interface, and memory.
The memory may include non-persistent storage in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. Information may be computer-readable instructions, data structures, modules of programs, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
FIG. 1 is a schematic flowchart of a method for fusing laser and visual positioning provided according to one aspect of the present application; the method includes steps S11 to S14.
In step S11, a first map established by a laser mapping engine and a second map established by a visual mapping engine are acquired, wherein the second map includes a plurality of visual label sub-maps. Here, the laser mapping engine performs laser SLAM mapping, and the map it builds is the first map, a grid map; the visual mapping engine performs TagSLAM mapping, and the map it builds is the second map. When building the second map, the map is built only in areas that contain labels, each labeled area forming a sub-map; after the overall mapping is completed, several visual label sub-maps are obtained. Sub-maps can greatly expand the application range of visual label SLAM.
In step S12, the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map is calculated. Here, TagSLAM builds the map in the form of sub-maps; when each visual label sub-map is completed, its three-dimensional transformation relationship with the corresponding first map is calculated, the transformation consisting of a rotation matrix r and a translation vector t.
In step S13, when the visual mapping engine is in positioning mode, all visual label sub-maps are screened, and the positioning quality is determined according to the screened visual label sub-maps and the three-dimensional transformation relationship. Here, the positioning mode of TagSLAM is enabled; while TagSLAM is in positioning mode, all observed visual labels are screened to filter out valid visual labels. That is, if a valid visual label is observed, the positioning quality is determined according to the sub-map where the valid visual label is located and the three-dimensional transformation relationship calculated above. The positioning quality describes the reliability of the positioning; the visual positioning quality lies between 0 and 1, with larger values indicating higher reliability.
In step S14, visual positioning observation input information is determined according to the positioning quality, and the visual positioning observation input information is fused with laser positioning information and odometry information. Here, based on the positioning quality, it is determined whether the visual positioning observation input is the current pose, whether relocalization is needed to obtain a new pose, or whether the current pose should not be fused. The determined visual positioning observation input is fused with the laser positioning and the odometry; the fusion may use an extended Kalman filter (EKF), and other positioning information, such as GPS or third-party VSLAM positioning, may also be incorporated. The EKF can handle nonlinear systems, so positioning information from different sources can be fused well.
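To make the fusion step concrete, the following is a minimal sketch of an EKF over a planar pose (x, y, theta) in which odometry drives the prediction and pose estimates from laser SLAM or TagSLAM arrive as direct observations. The class name, the identity observation model, and the noise values are illustrative assumptions; the patent does not specify the filter's internal models.

```python
import numpy as np

def wrap_angle(a):
    """Normalize an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

class PoseEKF:
    """Minimal EKF over a planar pose (x, y, theta).

    predict() integrates an odometry increment; correct() fuses a pose
    observation from any source (laser SLAM, TagSLAM, GPS, ...), each
    with its own covariance R.
    """

    def __init__(self):
        self.x = np.zeros(3)          # state: x, y, theta
        self.P = np.eye(3) * 1e-3     # state covariance

    def predict(self, dx, dy, dtheta, q=1e-4):
        # Rotate the body-frame odometry increment into the world frame.
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x += np.array([c * dx - s * dy, s * dx + c * dy, dtheta])
        self.x[2] = wrap_angle(self.x[2])
        self.P += np.eye(3) * q       # crude additive process noise

    def correct(self, z, R):
        # Direct pose observation, i.e. H = I; a nonlinear sensor model
        # would be linearized here instead.
        y = z - self.x
        y[2] = wrap_angle(y[2])
        S = self.P + R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.x[2] = wrap_angle(self.x[2])
        self.P = (np.eye(3) - K) @ self.P
```

One plausible way to use the visual positioning quality here is to scale the observation covariance, e.g. ekf.correct(tag_pose, R_tag / quality), so that more reliable TagSLAM poses pull the estimate harder; the patent itself only states that the quality gates whether and how the pose participates in the fusion.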
In an embodiment of the present application, in step S12, the correspondence between the visual mapping key frames in the second map and the laser mapping key frames in the first map is recorded according to timestamps, and the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map is calculated according to the correspondence. Here, laser SLAM and TagSLAM build maps independently, while laser SLAM pushes key frames to TagSLAM in real time, and TagSLAM records the one-to-one correspondence between the key frames of the two maps according to timestamps during mapping. When laser mapping produces a key frame, the first visual frame after it becomes a visual key frame; separately, if the position of an observed label differs from the previous frame by more than a preset pixel distance (for example, 18 pixels), the selected key frame participates in mapping but has no corresponding laser key frame and does not take part in the subsequent calculation of the sub-map's three-dimensional transformation. When mapping ends, the three-dimensional transformation from the TagSLAM map to the laser SLAM map is calculated from the matched key frame coordinates via the ICP (Iterative Closest Point) algorithm. In embodiments of the present application, a visual label sub-map is a local map established by the visual mapping engine, and the several visual label sub-maps in the second map are independent of one another; by calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map, the pose of each visual label sub-map in the world coordinate system is obtained.
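Because the timestamp pairing already yields one-to-one key frame correspondences, the core of the ICP computation reduces to the closed-form rigid alignment used inside each ICP iteration. A sketch of that SVD-based step, with hypothetical variable names:

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form best-fit R, t with R @ src[i] + t ~= dst[i].

    src, dst: (n, 3) arrays of corresponded key frame positions,
    e.g. TagSLAM key frames and their timestamp-matched laser key frames.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Full ICP would alternate nearest-neighbour matching with this step; here the known correspondences make a single solve sufficient.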
Specifically, the position of each visual label sub-map may be calculated according to the correspondence, and the three-dimensional transformation from the position of each visual label sub-map to the corresponding first map is calculated to obtain the three-dimensional transformation relationship. Here, based on the one-to-one key frame correspondence, a point-cloud matching algorithm is used to obtain the three-dimensional transformation from each visual label sub-map to the corresponding first map.
In an embodiment of the present application, in step S13, all visual labels observed while the visual mapping engine is in positioning mode are acquired; all the visual labels are decoded to obtain legal encoded values, and it is determined whether any visual label sub-map contains the encoded value; if so, the visual label sub-map containing the encoded value is taken as the screened visual label sub-map. Here, a visual label belongs to only one sub-map, and a sub-map contains multiple visual labels. In positioning mode, TagSLAM screens all observed visual labels to filter out valid ones; that is, if a valid visual label is observed, the current pose is calculated according to the corresponding visual label sub-map and the three-dimensional transformation relationship. A valid visual label is determined as follows: if decoding a visual label yields a legal encoded value and that encoded value is contained in some visual label sub-map, the visual label is valid and participates in positioning, which proceeds from the sub-map where the valid visual label is located and the three-dimensional transformation relationship.
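The screening rule stated above (a decodable code that appears in exactly one sub-map) can be written as a small lookup. The tag and sub-map attributes below are assumed data layouts, not structures given by the patent:

```python
def screen_tags(observed_tags, submaps):
    """Return (tag, submap) pairs for the valid observed tags.

    observed_tags: detections with a .code attribute (None if the
                   decoding failed), an assumed layout.
    submaps: sub-maps, each with a .tag_codes set, also assumed.
    """
    # Each tag code belongs to exactly one sub-map, so a flat index works.
    code_to_submap = {}
    for sm in submaps:
        for code in sm.tag_codes:
            code_to_submap[code] = sm
    return [(tag, code_to_submap[tag.code])
            for tag in observed_tags
            if tag.code is not None and tag.code in code_to_submap]
```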
In an embodiment of the present application, in step S13, the average distance between transformed corresponding key frames is determined according to the screened visual label sub-maps and the three-dimensional transformation relationship; the average distance is normalized to obtain the mapping quality of the visual mapping; and the positioning quality is determined according to the mapping quality of the visual mapping. Here, the screened visual label sub-maps are valid visual label sub-maps; the average distance between the transformed corresponding key frames is calculated and then normalized to evaluate the mapping quality, from which the positioning quality is obtained. Specifically, the poses of the visual mapping key frames in the valid visual label sub-maps and the poses of the corresponding laser mapping key frames in the first map may be acquired; from the poses of the visual mapping key frames, the poses of the laser mapping key frames, and the three-dimensional transformation relationship, the average distance between each transformed visual mapping key frame and its corresponding laser mapping key frame is calculated. Here, the mean absolute error (mae) between the transformed corresponding key frames is calculated according to the following formula:
[formula image: PCTCN2021072510-appb-000001]
where TKi is the pose of the i-th key frame of TagSLAM and LKi is the pose of the i-th key frame of laser SLAM. The mae is then normalized by the following formula:
[formula image: PCTCN2021072510-appb-000002]
where q is the mapping quality; the closer its value is to 1, the smaller the error.
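Since the formula images are not reproduced in this text, the sketch below reconstructs the computation from the surrounding definitions: average the distances between the transformed TagSLAM key frames TKi and the laser key frames LKi, then map the result into (0, 1]. Using only key frame positions and the normalization q = 1 / (1 + mae) are assumptions; the patent's exact formulas are not shown here:

```python
import numpy as np

def mapping_quality(tag_kf, laser_kf, R, t):
    """tag_kf, laser_kf: (n, 3) corresponded key frame positions;
    R, t: the sub-map's transformation into the laser map frame."""
    transformed = tag_kf @ R.T + t
    mae = np.linalg.norm(transformed - laser_kf, axis=1).mean()
    return 1.0 / (1.0 + mae)   # assumed normalization: q -> 1 as mae -> 0
```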
Specifically, the positioning quality may be determined according to the mapping quality of the visual mapping, the number of visual labels, and a visual label count threshold. Here, the positioning quality describes the reliability of positioning; the visual positioning quality lies between 0 and 1, with larger values indicating higher reliability. The positioning quality is determined from the mapping quality, the number of observed visual labels, and the label count threshold according to the following formula: positioning quality = mapping quality × number of observed labels / label count threshold, where the label count threshold is configurable.
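That formula translates directly into code; the clamp to [0, 1] is an added assumption so that the stated 0-to-1 range holds even when more labels than the threshold are observed:

```python
def positioning_quality(mapping_quality, n_observed, tag_threshold):
    """positioning quality = mapping quality * observed tags / threshold;
    the clamp to [0, 1] is an assumption to keep the documented range."""
    q = mapping_quality * n_observed / tag_threshold
    return max(0.0, min(1.0, q))
```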
In an embodiment of the present application, in step S14, the current pose of the device where the visual mapping engine is located is determined; with the laser mapping as the main system, the current pose of the device where the visual mapping engine is located is polled through inter-process communication, and it is determined whether the current pose is a valid pose; when the current pose is valid, it is preprocessed according to the positioning quality, and the visual positioning observation input information is determined according to the preprocessing result. Here, laser SLAM performs positioning through an algorithm, and both the positioning result and the odometry value are input into the extended Kalman filter as observations. TagSLAM performs positioning independently according to its observations, where an observation is obtained by decoding the image captured by the camera into several visual labels; the corresponding visual label sub-map is found from the decoded labels, the found sub-map being a valid visual label sub-map; the poses of the visual labels in the world coordinate system are acquired, the current camera pose is obtained through the PnP algorithm, and the current machine pose is derived from the camera extrinsics and the pose of the sub-map, this machine pose being a valid pose. As the main system, laser SLAM polls the TagSLAM positioning result through inter-process communication; if polling fails, the result is ignored and the individual positioning results are fused through the extended Kalman filter algorithm to output the final positioning; if it succeeds, the TagSLAM positioning information is preprocessed and input into the extended Kalman filter as an observation.
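The pose chain described here (tag corners in the world frame, then a PnP camera pose, then the machine pose via extrinsics) can be sketched with OpenCV's solvePnP. The 4x4 matrix conventions and the helper layout are assumptions:

```python
import cv2
import numpy as np

def robot_pose_from_tag(corners_world, corners_px, K, dist, T_base_cam):
    """Machine pose in the world frame from one valid tag.

    corners_world: (4, 3) tag corner positions in the world frame, known
        from the valid sub-map and its 3D transformation relationship.
    corners_px: (4, 2) detected corner pixels.
    K, dist: camera intrinsics and distortion coefficients.
    T_base_cam: assumed 4x4 extrinsics mapping camera coords to base coords.
    """
    ok, rvec, tvec = cv2.solvePnP(corners_world.astype(np.float64),
                                  corners_px.astype(np.float64), K, dist)
    if not ok:
        return None
    R_cw, _ = cv2.Rodrigues(rvec)               # world -> camera rotation
    T_cam_world = np.eye(4)
    T_cam_world[:3, :3] = R_cw
    T_cam_world[:3, 3] = tvec.ravel()
    T_world_cam = np.linalg.inv(T_cam_world)    # camera pose in the world
    # Base pose in the world: world <- camera <- base.
    return T_world_cam @ np.linalg.inv(T_base_cam)
```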
Specifically, when both the TagSLAM positioning quality and the laser SLAM positioning quality are below the quality threshold, since the former still provides trustworthy coarse positioning information, the user is prompted that large environmental changes have caused low positioning quality. When the TagSLAM positioning quality is high, the laser SLAM positioning quality is low, and the distance from the current pose is large, the visual positioning of TagSLAM prevails, and the current pose is directly corrected to obtain the final positioning. The default value of the visual positioning quality threshold may be 0.6; when the quality is greater than 0.6, the positioning quality is considered high and the pose participates in the fusion process of the extended Kalman filter.
In a specific embodiment of the present application, as shown in the mapping flowchart of FIG. 2, the present application uses TagSLAM mapping and laser SLAM mapping; laser SLAM pushes the laser key frames of the resulting map to the TagSLAM mapping, and the TagSLAM key frames are matched one-to-one with the laser key frames according to timestamps. When a sub-map is finished or mapping is paused, a global BundleAdjustment (bundle adjustment) is performed. In robot navigation, 2D image feature points reprojected back into the 3D domain deviate from the positions of the true 3D points; BundleAdjustment minimizes this deviation through algorithms such as least squares, thereby obtaining precise values of the robot pose. The global BundleAdjustment optimizes a visual label sub-map as a whole; during mapping, a local BundleAdjustment is also applied over a sliding window. After BundleAdjustment has been applied to a sub-map, the three-dimensional transformation relationship from the visual label sub-map to the laser SLAM map is calculated according to the key frame correspondence.
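As a rough illustration of the optimization BundleAdjustment performs, the sketch below does only the pose-only special case: it fixes the 3D points and minimizes the total squared reprojection error over a single camera pose by least squares. Full bundle adjustment additionally optimizes the 3D points and many poses jointly; a zero-skew, undistorted camera is assumed:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reproject(params, pts3d, K):
    """Project world points through pose params = (rotvec(3), t(3))."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    cam = pts3d @ R.T + params[3:]       # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]        # perspective division
    return uv * np.array([K[0, 0], K[1, 1]]) + K[:2, 2]

def refine_pose(pts3d, pts2d, K, init):
    """Pose-only adjustment: least-squares over the reprojection error.

    init must be a coarse pose (e.g. from PnP); pts3d are held fixed,
    unlike full bundle adjustment, which also refines them.
    """
    res = least_squares(lambda p: (reproject(p, pts3d, K) - pts2d).ravel(),
                        init)
    return res.x
```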
In a specific embodiment of the present application, FIG. 3 is a schematic flowchart of a method for fusing laser and visual positioning based on the extended Kalman filter. TagSLAM acquires an image and determines whether a valid label is observed; if not, the pose is cleared; if so, the current pose is calculated and the timestamp is saved. Laser SLAM polls the TagSLAM pose through inter-process communication and determines whether the polled pose is a valid positioning; if not, it is ignored and no subsequent fusion is performed; if so, it is determined whether the positioning quality is above the threshold; if not, a low-positioning alarm is raised and the current pose is not fused; if so, it is determined whether the distance from the current pose is large and the laser positioning quality is low, to decide whether the final positioning must be corrected directly; if not, the pose serves as the observation input to the extended Kalman filter and is fused there together with the positioning result obtained by SLAM positioning and the odometry value to obtain the final positioning result.
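The branching in FIG. 3 amounts to a small decision function over the polled TagSLAM pose. The 0.6 quality threshold comes from the text above; the distance threshold and the (x, y, theta) pose layout are assumptions:

```python
import numpy as np

def handle_tag_pose(tag_pose, tag_quality, laser_pose, laser_quality,
                    quality_threshold=0.6, distance_threshold=0.5):
    """Decide what to do with a polled TagSLAM pose (x, y, theta).

    Returns one of:
      ("ignore", None)    invalid pose, no fusion
      ("alarm", None)     quality below threshold, raise a low-positioning alarm
      ("correct", pose)   directly correct the final positioning
      ("observe", pose)   feed the pose to the EKF as an observation
    distance_threshold (in metres) is an assumed value; the patent only
    says "the distance from the current pose is large".
    """
    if tag_pose is None:
        return ("ignore", None)
    if tag_quality < quality_threshold:
        return ("alarm", None)
    far = np.linalg.norm(tag_pose[:2] - laser_pose[:2]) > distance_threshold
    if far and laser_quality < quality_threshold:
        return ("correct", tag_pose)
    return ("observe", tag_pose)
```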
In addition, an embodiment of the present application further provides a computer-readable medium on which computer-readable instructions are stored, the computer-readable instructions being executable by a processor to implement the aforementioned method for fusing laser and visual positioning.
Corresponding to the method described above, the present application also provides a terminal, which includes modules or units capable of executing the method steps described in FIG. 1, FIG. 2, or FIG. 3 or the various embodiments; these modules or units may be implemented by hardware, software, or a combination of software and hardware, which is not limited in this application. For example, in an embodiment of the present application, a device for fusing laser and visual positioning is also provided, the device comprising:
one or more processors; and
a memory storing computer-readable instructions which, when executed, cause the processors to perform the operations of the method described above.
For example, the computer-readable instructions, when executed, cause the one or more processors to perform:
a method for fusing laser and visual positioning, characterized in that the method comprises:
acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps;
calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map;
when the visual mapping engine is in positioning mode, screening all visual label sub-maps, and determining the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship;
determining visual positioning observation input information according to the positioning quality, and fusing the visual positioning observation input information with laser positioning information and odometry information.
FIG. 4 is a schematic structural diagram of a device for fusing laser and visual positioning provided by another aspect of the present application. The device includes an acquisition device 11, a computing device 12, a determination device 13, and a fusion device 14. The acquisition device 11 is used to obtain a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps; the computing device 12 is used to calculate the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map; the determination device 13 is used to screen all visual label sub-maps when the visual mapping engine is in positioning mode, and to determine the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship; the fusion device 14 is used to determine visual positioning observation input information according to the positioning quality, and to fuse the visual positioning observation input information with laser positioning information and odometry information.
It should be noted that the content executed by the acquisition device 11, the computing device 12, the determination device 13, and the fusion device 14 is the same as or corresponds to the content of steps S11, S12, S13, and S14 above; for brevity, it is not repeated here.
Obviously, those skilled in the art can make various changes and modifications to the present application without departing from its spirit and scope. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass these changes and variations.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware; for example, it may be implemented using an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored in a computer-readable recording medium, for example, RAM memory, magnetic or optical drives, floppy disks, and similar devices. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuits that cooperate with a processor to perform the various steps or functions.
In addition, a part of the present application may be applied as a computer program product, for example computer program instructions which, when executed by a computer, may invoke or provide methods and/or technical solutions according to the present application through the operation of the computer. The program instructions that invoke the methods of the present application may be stored in fixed or removable recording media, and/or transmitted via data streams in broadcast or other signal-bearing media, and/or stored in the working memory of a computer device that runs according to the program instructions. Here, an embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to run the methods and/or technical solutions based on the aforementioned embodiments of the present application.
To those skilled in the art, it is obvious that the present application is not limited to the details of the above exemplary embodiments and can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. The embodiments should therefore be regarded in all respects as exemplary and non-limiting, and the scope of the present application is defined by the appended claims rather than by the above description; all changes falling within the meaning and range of equivalents of the claims are therefore intended to be embraced in the present application. No reference numeral in the claims should be construed as limiting the claim concerned. Moreover, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. The words "first", "second", and the like are used to denote names and do not denote any particular order.

Claims (10)

  1. A method for fusing laser and visual positioning, characterized in that the method comprises:
    acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps;
    calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map;
    when the visual mapping engine is in positioning mode, screening all visual label sub-maps, and determining the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship;
    determining visual positioning observation input information according to the positioning quality, and fusing the visual positioning observation input information with laser positioning information and odometry information.
  2. The method according to claim 1, characterized in that calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map comprises:
    recording, according to timestamps, the correspondence between the visual mapping key frames in the second map and the laser mapping key frames in the first map;
    calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map according to the correspondence.
  3. The method according to claim 1, characterized in that, when the visual mapping engine is in positioning mode, screening all visual label sub-maps comprises:
    acquiring all visual labels observed while the visual mapping engine is in positioning mode;
    decoding all the visual labels to obtain legal encoded values, and determining whether any visual label sub-map contains the encoded value; if so, taking the visual label sub-map containing the encoded value as the screened visual label sub-map.
  4. The method according to claim 2, characterized in that calculating the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map according to the correspondence comprises:
    calculating the position of each visual label sub-map according to the correspondence;
    calculating the three-dimensional transformation from the position of each visual label sub-map to the corresponding first map to obtain the three-dimensional transformation relationship.
  5. The method according to claim 3, characterized in that determining the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship comprises:
    determining the average distance between transformed corresponding key frames according to the screened visual label sub-maps and the three-dimensional transformation relationship;
    normalizing the average distance to obtain the mapping quality of the visual mapping;
    determining the positioning quality according to the mapping quality of the visual mapping.
  6. The method according to claim 5, characterized in that determining the positioning quality according to the mapping quality of the visual mapping comprises:
    determining the positioning quality according to the mapping quality of the visual mapping, the number of visual labels, and a visual label count threshold.
  7. The method according to claim 5, characterized in that determining the average distance between transformed corresponding key frames according to the screened visual label sub-maps and the three-dimensional transformation relationship comprises:
    acquiring the poses of the visual mapping key frames in the screened visual label sub-maps, and the poses of the laser mapping key frames in the first map corresponding to the screened visual label sub-maps;
    calculating the average distance between each transformed visual mapping key frame and the corresponding laser mapping key frame according to the poses of the visual mapping key frames, the poses of the laser mapping key frames, and the three-dimensional transformation relationship.
  8. The method according to claim 1, characterized in that determining the visual positioning observation input information according to the positioning quality comprises:
    determining the current pose of the device where the visual mapping engine is located;
    using the laser mapping as the main system, polling the current pose of the device where the visual mapping engine is located through inter-process communication, and determining whether the current pose is a valid pose;
    when the current pose is a valid pose, preprocessing the current pose according to the positioning quality, and determining the visual positioning observation input information according to the preprocessing result.
  9. A device for fusing laser and visual positioning, characterized in that the device comprises:
    an acquisition device for acquiring a first map established by a laser mapping engine and a second map established by a visual mapping engine, wherein the second map includes a plurality of visual label sub-maps;
    a computing device for computing the three-dimensional transformation relationship between each visual label sub-map and the corresponding first map;
    a determination device for screening all visual label sub-maps when the visual mapping engine is in positioning mode, and determining the positioning quality according to the screened visual label sub-maps and the three-dimensional transformation relationship;
    a fusion device for determining visual positioning observation input information according to the positioning quality, and fusing the visual positioning observation input information with laser positioning information and odometry information.
  10. A computer-readable medium on which computer-readable instructions are stored, the computer-readable instructions being executable by a processor to implement the method according to any one of claims 1 to 8.
PCT/CN2021/072510 2020-07-09 2021-01-18 Method and device for fusing laser and visual positioning WO2022007385A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010656372.5A CN111735446B (zh) 2020-07-09 2020-07-09 Method and device for fusing laser and visual positioning
CN202010656372.5 2020-07-09

Publications (1)

Publication Number Publication Date
WO2022007385A1 true WO2022007385A1 (zh) 2022-01-13

Family

ID=72655844

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/072510 WO2022007385A1 (zh) 2020-07-09 2021-01-18 Method and device for fusing laser and visual positioning

Country Status (2)

Country Link
CN (1) CN111735446B (zh)
WO (1) WO2022007385A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111735446B (zh) * 2020-07-09 2020-11-13 上海思岚科技有限公司 Method and device for fusing laser and visual positioning
CN112596070B (zh) * 2020-12-29 2024-04-19 四叶草(苏州)智能科技有限公司 Robot positioning method based on laser and vision fusion
CN113777615B (zh) * 2021-07-19 2024-03-29 派特纳(上海)机器人科技有限公司 Positioning method and system for indoor robots, and cleaning robot
CN113932814B (zh) * 2021-09-30 2024-04-02 杭州电子科技大学 Collaborative positioning method based on multimodal maps
CN114279434B (zh) * 2021-12-27 2024-06-14 驭势科技(北京)有限公司 Mapping method and apparatus, electronic device, and storage medium


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319976B (zh) * 2018-01-25 2019-06-07 北京三快在线科技有限公司 Mapping method and apparatus
CN108919811A (zh) * 2018-07-27 2018-11-30 东北大学 Indoor mobile robot SLAM method based on tags
CN109186606B (zh) * 2018-09-07 2022-03-08 南京理工大学 Robot mapping and navigation method based on SLAM and image information
CN109345588B (zh) * 2018-09-20 2021-10-15 浙江工业大学 Tag-based six-degree-of-freedom pose estimation method
CN110458863B (zh) * 2019-06-25 2023-12-01 广东工业大学 Dynamic SLAM system based on RGBD and encoder fusion
CN110187375A (zh) * 2019-06-27 2019-08-30 武汉中海庭数据技术有限公司 Method and apparatus for improving positioning accuracy based on SLAM positioning results
CN110726409B (zh) * 2019-09-09 2021-06-22 杭州电子科技大学 Map fusion method based on laser SLAM and visual SLAM
CN111045017B (zh) * 2019-12-20 2023-03-31 成都理工大学 Substation map construction method for inspection robots fusing laser and vision
CN111242996B (zh) * 2020-01-08 2021-03-16 郭轩 SLAM method based on AprilTag labels and factor graphs
CN111380535A (zh) * 2020-05-13 2020-07-07 广东星舆科技有限公司 Navigation method and apparatus based on visual labels, mobile machine, and readable medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110098924A1 (en) * 2009-10-28 2011-04-28 Callaway Golf Company Method and device for determining a distance
CN106352870A (zh) * 2016-08-26 2017-01-25 深圳微服机器人科技有限公司 Target positioning method and apparatus
CN107478214A (zh) * 2017-07-24 2017-12-15 杨华军 Indoor positioning method and system based on multi-sensor fusion
WO2019202806A1 (ja) * 2018-04-20 2019-10-24 本田技研工業株式会社 Self-position estimation method
CN110243358A (zh) * 2019-04-29 2019-09-17 武汉理工大学 Multi-source fusion indoor and outdoor positioning method and system for unmanned vehicles
CN111044069A (zh) * 2019-12-16 2020-04-21 驭势科技(北京)有限公司 Vehicle positioning method, on-board device, and storage medium
CN111735446A (zh) * 2020-07-09 2020-10-02 上海思岚科技有限公司 Method and device for fusing laser and visual positioning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU Yu-Feng, ZHAO Hui-Jing: "Off-road Localization Using Monocular Camera and Nodding LiDAR", Acta Automatica Sinica, vol. 45, no. 9, 1 January 2019, pp. 1791-1798, XP055885941, ISSN: 0254-4156, DOI: 10.16383/j.aas.2018.c170281 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115267796A (zh) * 2022-08-17 2022-11-01 深圳市普渡科技有限公司 Positioning method and apparatus, robot, and storage medium
CN115267796B (zh) * 2022-08-17 2024-04-09 深圳市普渡科技有限公司 Positioning method and apparatus, robot, and storage medium
CN115546348A (zh) * 2022-11-24 2022-12-30 上海擎朗智能科技有限公司 Robot mapping method and apparatus, robot, and storage medium
CN115546348B (zh) * 2022-11-24 2023-03-24 上海擎朗智能科技有限公司 Robot mapping method and apparatus, robot, and storage medium
CN117044478A (zh) * 2023-08-31 2023-11-14 未岚大陆(北京)科技有限公司 Mower control method and apparatus, mower, electronic device, and storage medium
CN117044478B (zh) * 2023-08-31 2024-03-19 未岚大陆(北京)科技有限公司 Mower control method and apparatus, mower, electronic device, and storage medium
CN117392347A (zh) * 2023-10-13 2024-01-12 苏州煋海图科技有限公司 Map construction method and apparatus, computer device, and readable storage medium
CN117392347B (zh) * 2023-10-13 2024-04-30 苏州煋海图科技有限公司 Map construction method and apparatus, computer device, and readable storage medium

Also Published As

Publication number Publication date
CN111735446B (zh) 2020-11-13
CN111735446A (zh) 2020-10-02

Similar Documents

Publication Publication Date Title
WO2022007385A1 (zh) 2022-01-13 Method and device for fusing laser and visual positioning
Wang et al. Vision-based framework for automatic progress monitoring of precast walls by using surveillance videos during the construction phase
US11328401B2 (en) Stationary object detecting method, apparatus and electronic device
CN110568447B (zh) 视觉定位的方法、装置及计算机可读介质
CN111951397B (zh) 一种多机协同构建三维点云地图的方法、装置和存储介质
US8355565B1 (en) Producing high quality depth maps
US9684962B2 (en) Method and system for calibrating surveillance cameras
US9886746B2 (en) System and method for image inpainting
WO2018205803A1 (zh) 位姿估计方法和装置
US11182954B2 (en) Generating three-dimensional geo-registered maps from image data
WO2022183685A1 (zh) 目标检测方法、电子介质和计算机存储介质
CN115376109B (zh) 障碍物检测方法、障碍物检测装置以及存储介质
US20180101981A1 (en) Smoothing 3d models of objects to mitigate artifacts
CN115638787B (zh) 一种数字地图生成方法、计算机可读存储介质及电子设备
CN115375870B (zh) 回环检测优化方法、电子设备及计算机可读存储装置
CN114972490B (zh) 一种数据自动标注方法、装置、设备及存储介质
CN113240734A (zh) 一种基于鸟瞰图的车辆跨位判断方法、装置、设备及介质
CN112270748B (zh) 基于图像的三维重建方法及装置
WO2020135326A1 (zh) 一种基于图片的方向标注方法及装置
CN116823966A (zh) 相机的内参标定方法、装置、计算机设备和存储介质
CN116563352A (zh) 融合深度视觉信息的单线激光雷达回环检测方法及***
CN113808142B (zh) 一种地面标识的识别方法、装置、电子设备
CN115830073A (zh) 地图要素重建方法、装置、计算机设备和存储介质
Hong et al. The use of CCTV in the emergency response: A 3D GIS perspective
CN112262411B (zh) 图像关联方法、***和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21837539

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21837539

Country of ref document: EP

Kind code of ref document: A1