CN115540867A - Robot, map construction method, map construction device and readable storage medium - Google Patents

Robot, map construction method, map construction device and readable storage medium

Info

Publication number
CN115540867A
Authority
CN
China
Prior art keywords
pose
nodes
probability grid
local probability
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111664358.0A
Other languages
Chinese (zh)
Inventor
何科君
刘勇
周阳
陈美文
刘运航
武金龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202111664358.0A priority Critical patent/CN115540867A/en
Publication of CN115540867A publication Critical patent/CN115540867A/en
Pending legal-status Critical Current


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/206 — Instruments for performing navigational calculations specially adapted for indoor navigation
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 — Map- or contour-matching
    • G01C21/32 — Structuring or formatting of map data
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 — Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 — Creation or updating of map data
    • G01C21/3807 — Creation or updating of map data characterised by the type of data
    • G01C21/383 — Indoor data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

A map construction method, comprising: acquiring multi-frame sensing data collected by a sensing device while the robot moves along a mapping path; obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot from the multi-frame sensing data, and from these obtaining a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the local probability grid maps; acquiring multi-frame image data of identification codes collected by an image acquisition device; associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values of the identification codes at the pose nodes; optimizing the pose nodes, the local probability grid maps and the observed values with the first relative pose relationship, the second relative pose relationship and the observed values as constraints; and constructing a map from the optimized pose nodes, local probability grid maps, observed values and multi-frame sensing data. The method can establish a stable map in scenes where observation is limited.

Description

Robot, map construction method, map construction device and readable storage medium
Technical Field
The invention relates to the field of robots, in particular to a robot, a map construction method, a map construction device and a readable storage medium.
Background
Currently, robots performing indoor delivery tasks are generally localized either by two-dimensional (2D) laser or by visual cooperative identification. Each positioning mode has advantages and disadvantages. Positioning by 2D laser requires building a positioning map through simultaneous localization and mapping (SLAM); such a positioning map does not depend on identifiers, or needs only a small number of them. However, the observations and constraints available to a 2D laser in scenes such as long corridors and glass walls are limited, which easily causes mapping errors or incomplete maps and seriously affects the later positioning and normal use of the robot.
Disclosure of Invention
The application provides a robot, a map construction method, a map construction device and a readable storage medium, so that a stable map can be established through laser and visual cooperation identification under a scene with limited identification or limited observation constraint.
In one aspect, the present application provides a robot, comprising:
the device comprises a memory, a processor, a sensing device and an image acquisition device;
the memory stores executable program code;
the processor, coupled to the memory, invokes the executable program code stored in the memory to perform a map construction method comprising:
acquiring multi-frame sensing data acquired by the sensing device when the robot moves along the mapping path;
obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data;
based on the multi-frame sensing data, a plurality of pose nodes and a plurality of local probability grid maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local probability grid maps;
acquiring multi-frame image data acquired by the image acquisition device for the identification code;
associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values of the pose nodes and the identification codes;
optimizing the pose nodes, the local probability grid maps and the observed values by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints;
and constructing a positioning map by using the plurality of optimized pose nodes, the plurality of local probability grid maps, the observation values and the multi-frame sensing data.
In another aspect, the present application provides a map construction apparatus applied to a robot including a sensing device and an image capturing device, the apparatus including:
the first acquisition module is used for acquiring multi-frame sensing data collected by the sensing device when the robot moves along the mapping path;
the second acquisition module is used for acquiring a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data;
the third acquisition module is used for acquiring a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local probability grid maps based on the multi-frame sensing data, the pose nodes and the local probability grid maps; the fourth acquisition module is used for acquiring multi-frame image data acquired by the image acquisition device for the identification code;
the association module is used for associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values of the pose nodes and the identification codes;
the optimization module is used for optimizing the pose nodes, the local probability grid maps and the observed value by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and the map construction module is used for constructing a positioning map by utilizing the plurality of optimized pose nodes, the plurality of local probability grid maps, the observation values and the multi-frame sensing data.
In a third aspect, the present application provides a map building method, including:
acquiring multi-frame sensing data acquired by the sensing device when the robot moves along the mapping path;
obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data;
based on the multi-frame sensing data, a plurality of pose nodes and a plurality of local probability grid maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local probability grid maps;
acquiring multi-frame image data acquired by the image acquisition device for the identification code;
associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values of the pose nodes and the identification codes;
optimizing the pose nodes, the local probability grid maps and the observation value by taking the first relative pose relationship, the second relative pose relationship and the observation value as constraints;
and constructing a positioning map by using the plurality of optimized pose nodes, the plurality of local probability grid maps, the observation values and the multi-frame sensing data.
In a fourth aspect, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the map construction method performed by the robot of the first aspect.
According to the technical solution provided by the application, when the latest pose node of the robot is optimized, a constraint is added from the robot's observations of identifiers in the environment. In other words, only a limited number of identifiers need to be pasted in environments that are unsuitable for two-dimensional laser radar positioning. Combining these observations with the latest pose node of the robot, the local probability grid map, and the laser point cloud data associated with the latest pose node acquired by the robot's two-dimensional laser radar, a global probability grid map and an identifier map that can be stably output can be established once the latest pose node of the robot and the pose data associated with the identifiers in the environment are optimized, breaking through the limitations on two-dimensional laser radar positioning in some scenes.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a robot provided in an embodiment of the present application;
FIG. 2 is a flowchart of a mapping method provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a mapping apparatus provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this specification, adjectives such as first and second may only be used to distinguish one element or action from another, without necessarily requiring or implying any actual such relationship or order. References to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step but rather to one or more of the element, component, or step, etc., where the context permits.
In the present specification, the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
Referring to fig. 1, a schematic structural diagram of a robot according to an embodiment of the present application is shown. For convenience of explanation, only portions related to the embodiments of the present application are shown. The robot may include:
the robot comprises a memory 10, a processor 20, a sensing device and an image acquisition device (not shown in the figure), wherein the processor 20 is a core of operation and control of the robot and is a final execution unit for information processing and program operation. The memory 10 is, for example, a hard disk drive memory, a non-volatile memory (e.g., a flash memory or other electronically programmable erase-limited memory used to form a solid state drive, etc.), a volatile memory (e.g., a static or dynamic random access memory, etc.), and the like, and the embodiments of the present application are not limited thereto.
The memory 10 has stored therein executable program code; a processor 20 coupled to the memory 10 calls the executable program code stored in the memory 10 to perform the following mapping method: acquiring multi-frame sensing data acquired by the sensing device when the robot moves along the mapping path; obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data; based on the multi-frame sensing data, the pose nodes and the local probability grid map, obtaining a first relative pose relationship of adjacent pose nodes and a second relative pose relationship of the pose nodes and the local probability grid map; acquiring multi-frame image data acquired by the image acquisition device for the identification code; associating the multi-frame image data with the corresponding pose nodes to obtain an observed value of the identification code; optimizing the pose nodes, the local probability grid maps and the observation value by taking the first relative pose relationship, the second relative pose relationship and the observation value as constraints; and constructing a positioning map by using the plurality of optimized pose nodes, the plurality of local probability grid maps, the observation values and the multi-frame sensing data.
Referring to fig. 2, a map construction method provided in the embodiment of the present application mainly includes steps S201 to S207, which are described as follows:
step S201: and acquiring multi-frame sensing data acquired by a sensing device when the robot moves along the mapping path.
In the embodiment of the application, the mapping path is a route of the robot when the robot moves for mapping, and the sensing devices include a two-dimensional laser radar, a wheel type odometer and the like, so that the multi-frame sensing data collected by the sensing devices includes laser point cloud data and wheel type odometer data, that is, information such as position and attitude.
Step S202: obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot from the multi-frame sensing data.
In the embodiment of the application, as mentioned above, the sensing devices carried by the robot include a wheel odometer and a two-dimensional laser radar, and a plurality of accurate pose nodes of the robot can be obtained from the multi-frame sensing data collected by these two sensors. It should be noted that the plurality of pose nodes includes historical pose nodes and a latest pose node; the latest pose node is also called the current pose node. Because a pose node carries a time variable, pose nodes are obtained at different times: the node obtained at the current time is the latest (or current) pose node, and nodes obtained before the current time are historical pose nodes. Furthermore, a local probability grid map is defined relative to the global probability grid map, meaning that the grid map is limited to a certain range. As an embodiment of the present application, obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot from the multi-frame sensing data may be implemented through the following steps S2021 to S2025:
step S2021: and creating an initial pose node and an initial local probability grid map.
The initial pose node is the pose node whose pose value is 0, and the initial local probability grid map is the local map whose pose is likewise 0.
Step S2022: creating a new pose node based on the odometry data and/or the laser point cloud data.
Generally, creating more pose nodes improves the accuracy of subsequent mapping. However, more pose nodes come at the cost of computational effort or real-time performance, so a trade-off must be made when creating new pose nodes. Specifically, creating a new pose node based on the odometry data and/or the laser point cloud data may be: if the odometry data and/or the laser point cloud data indicate that the robot's translation relative to the previous pose node is greater than or equal to a first translation threshold, and/or its rotation is greater than or equal to a first rotation threshold, a new pose node is created.
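The node-creation test above can be sketched as follows; the concrete thresholds (0.5 m, 30°) and the function name are illustrative assumptions, not values from the patent:

```python
import math

def should_create_pose_node(delta_translation, delta_rotation,
                            trans_threshold=0.5,
                            rot_threshold=math.radians(30)):
    """Decide whether the motion since the previous pose node warrants a new node.

    A new node is created when the robot has translated at least
    `trans_threshold` metres or rotated at least `rot_threshold` radians
    (the 'first translation threshold' and 'first rotation threshold').
    """
    return (delta_translation >= trans_threshold
            or abs(delta_rotation) >= rot_threshold)
```

Gating node creation this way keeps the pose graph sparse while still bounding the drift accumulated between consecutive nodes.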
Step S2023: acquiring a pose predicted value of the new pose node based on the odometry data corresponding to the new pose node and the previous pose node.
In the embodiment of the present application, the previous pose node is the pose node obtained in the previous iteration of steps S2021 to S2024.
Step S2024: correcting the pose predicted value by using the laser point cloud data associated with the new pose node and the local probability grid map, to obtain the latest pose node.
Since the pose predicted value obtained for the new pose node from the odometry data and the previous pose node carries a certain error or uncertainty, it needs to be corrected with the laser point cloud data associated with the new pose node and the local probability grid map to obtain the latest pose node.
Step S2025: creating a new local probability grid map based on the latest pose node and the last local probability grid map.
Specifically, creating a new local probability grid map based on the latest pose node and the last local probability grid map may be: if the translation distance between the latest pose node and the last local probability grid map is greater than or equal to a second translation threshold, a new local probability grid map is created, and the pose value of the latest pose node is taken as the pose value of the new local probability grid map. That is, the pose value of the first pose node in a local probability grid map serves as the pose value of that map.
For example, for an initial local probability grid map, the first pose node is the initial pose node, so the pose value of the local probability grid map is the same as the pose value of the initial pose node (e.g., (0, 0)).
After step S2025, the step of creating a new pose node and/or a new local probability grid map based on the odometry data and/or the laser point cloud data is performed to obtain a plurality of pose nodes and a plurality of local probability grid maps, i.e., steps S2023 to S2025 are repeatedly performed.
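The submap-creation trigger of step S2025 can be sketched in the same spirit; the 3 m value for the "second translation threshold", the (x, y, theta) pose representation, and all names are illustrative assumptions:

```python
import math

def maybe_create_submap(latest_node_pose, submap_origin_pose,
                        trans_threshold=3.0):
    """Return the origin pose of a new local probability grid map, or None.

    Poses are (x, y, theta) tuples. A new local map is created when the
    latest pose node has translated at least `trans_threshold` metres from
    the current local map's origin; the new map adopts the pose value of
    the latest pose node (step S2025).
    """
    dx = latest_node_pose[0] - submap_origin_pose[0]
    dy = latest_node_pose[1] - submap_origin_pose[1]
    if math.hypot(dx, dy) >= trans_threshold:
        return latest_node_pose  # new local map anchored at the latest node
    return None
```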
Step S203: obtaining, based on the multi-frame sensing data, the plurality of pose nodes and the plurality of local probability grid maps, a first relative pose relationship between adjacent pose nodes and a second relative pose relationship between each pose node and the local probability grid maps.
The second relative pose relationship in step S203 includes a map relative relationship and a closed-loop constraint relationship. Specifically, the implementation of step S203 can be obtained through steps S2031 to S2033, which are explained as follows:
step S2031: and acquiring a first relative pose relationship of every two adjacent pose nodes based on the plurality of pose nodes.
Specifically, for every two adjacent pose nodes among the plurality of pose nodes, their first relative pose relationship is calculated, so that a plurality of first relative pose relationships are obtained. Optionally, the first relative pose relationship between two adjacent pose nodes specifically refers to the relative relationship between the pose values of those two nodes.
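For SE(2) poses written as (x, y, theta), the relative relationship between two adjacent nodes is conventionally the between-pose transform; a minimal sketch (the exact parameterization used in the patent is not specified, so this is an assumption):

```python
import math

def relative_pose(pose_i, pose_j):
    """Express pose node j in the frame of pose node i (T_i^{-1} * T_j).

    Poses are (x, y, theta) in the world frame; the result is the SE(2)
    between-factor used as a 'first relative pose relationship'.
    """
    xi, yi, ti = pose_i
    xj, yj, tj = pose_j
    dx, dy = xj - xi, yj - yi
    c, s = math.cos(ti), math.sin(ti)
    # Rotate the world-frame displacement into node i's frame,
    # and wrap the heading difference to [-pi, pi).
    return (c * dx + s * dy,
            -s * dx + c * dy,
            (tj - ti + math.pi) % (2 * math.pi) - math.pi)
```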
Step S2032: obtaining the map relative relationship based on each pose node and its corresponding local probability grid map.
Specifically, the laser point cloud data associated with the latest pose node among the plurality of pose nodes of the robot is matched with the corresponding local probability grid map. If the match succeeds, the relative pose relationship C_ij between the latest pose node of the robot and the successfully matched local probability grid map is acquired, and this relative relationship is taken as the map relative relationship between the pose node and the corresponding local probability grid map.
In the process of continuously creating poses, each latest pose node is matched with the local probability grid map in which it lies (its corresponding local probability grid map), so that the map relative relationship between each pose node and its local probability grid map is obtained; a plurality of map relative relationships are thus accumulated. Like the first relative pose relationship, the map relative relationship is a relative relationship between pose values.
Step S2033: acquiring a closed-loop constraint relationship through loop detection, based on the laser point cloud data corresponding to the latest pose node and the other local probability grid maps, where the other local probability grid maps are those that do not correspond to the latest pose node.
In the process of continuously creating poses, closed-loop detection is performed between each latest pose node and the other local probability grid maps to determine whether a closed-loop constraint relationship exists. If it does, the closed-loop constraint relationship is acquired; if not, it is determined that no closed-loop constraint relationship exists.
Loop detection, also called closed-loop detection, is prior art and is not described in detail here. Once a closed loop is detected, a closed-loop constraint relationship, i.e., an edge between two poses having a loop relationship in graph optimization, can be obtained based on the laser point cloud data corresponding to the latest pose node and the other local probability grid maps.
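One plausible shape for this loop-detection step, assuming a scan matcher that returns a match score and a relative pose (the matcher interface, the 0.6 score threshold, and all names are hypothetical, not from the patent):

```python
def find_loop_closures(latest_node_id, latest_node_submap_id, submaps,
                       match_fn, score_threshold=0.6):
    """Search every *other* local probability grid map for a loop closure.

    `submaps` maps submap_id -> submap data; `match_fn(submap)` is assumed
    to return (score, relative_pose) from matching the latest node's laser
    point cloud against that submap. Matches scoring at or above
    `score_threshold` become closed-loop constraints, i.e. extra edges in
    the pose graph.
    """
    constraints = []
    for submap_id, submap in submaps.items():
        if submap_id == latest_node_submap_id:
            continue  # the node's own submap yields the map relative relation, not a loop
        score, rel_pose = match_fn(submap)
        if score >= score_threshold:
            constraints.append((latest_node_id, submap_id, rel_pose))
    return constraints
```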
Step S204: acquiring multi-frame image data of the identification codes collected by the image acquisition device.
In the embodiment of the application, the identification code may be a label pasted in an environment where observation constraints are limited for the two-dimensional laser radar positioning mode, such as a long corridor or a glass wall. The image acquisition device carried by the robot may be a monocular, binocular or trinocular camera; by photographing the identification codes with such a camera, multi-frame image data can be collected.
Step S205: associating the multi-frame image data collected by the image acquisition device with the corresponding pose nodes, to obtain a plurality of observed values of the identification codes at the pose nodes.
Because the technical solution of the present application is aimed at scenes unsuitable for two-dimensional laser radar positioning, in the embodiment of the application identifiers are pasted in environments, such as long corridors and glass walls, where the observation constraints available to laser radar positioning are limited. The image acquisition device carried by the robot may be a monocular, binocular or trinocular camera. By capturing the identification codes in the environment, the pose of each identification code can be determined; this pose is the robot's observed value of the identification code. Furthermore, the multi-frame image data can be associated with the robot's pose nodes, so that when the pose of an identification code cannot be acquired directly by the image acquisition device, it can still be obtained through the association relationship. As for the specific association method, time association may be used to associate observed values of the same identification code at different times with different pose nodes of the robot, or ID association may be used to associate observed values of different identification codes with the same pose node.
Optionally, multiple identification codes may be observed from the same pose node, and likewise the same identification code may be observed from multiple pose nodes; therefore, associating the multi-frame image data collected by the image acquisition device with the corresponding pose nodes yields multiple observed values of the identification codes.
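Time association, one of the two association methods mentioned, can be sketched as nearest-timestamp matching; the data layout and the 50 ms matching window are assumptions for illustration:

```python
def associate_observations(pose_nodes, marker_frames, max_dt=0.05):
    """Associate each marker image frame with the pose node closest in time.

    pose_nodes: list of (node_id, timestamp);
    marker_frames: list of (marker_id, timestamp, observed_pose).
    Returns (node_id, marker_id, observed_pose) triples; frames with no
    pose node within `max_dt` seconds are dropped.
    """
    observations = []
    for marker_id, t_frame, obs_pose in marker_frames:
        node_id, t_node = min(pose_nodes, key=lambda n: abs(n[1] - t_frame))
        if abs(t_node - t_frame) <= max_dt:
            observations.append((node_id, marker_id, obs_pose))
    return observations
```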
In some cases there may be a single observed value, or at least two or more.
Step S206: optimizing the pose nodes, the local probability grid maps and the observed values by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints.
Specifically, step S206 may be implemented as follows: a nonlinear least-squares problem is constructed with the first relative pose relationship, the second relative pose relationship and the observed values as constraint conditions, and the plurality of pose nodes, the plurality of local probability grid maps and the observed values as optimization targets; the problem is then solved with a graph-optimization algorithm, and its optimal solution is taken as the optimized pose nodes, local probability grid maps and observed values. The graph-optimization algorithm may be the Levenberg-Marquardt algorithm or the Gauss-Newton method, which is not limited in this application.
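The structure of this least-squares problem can be illustrated with a deliberately tiny 1-D analogue, where the odometry and marker-observation constraints are linear, so the least-squares compromise can be computed directly (the real problem is nonlinear over 2-D poses and is solved iteratively; all numbers here are illustrative):

```python
import numpy as np

def optimize_pose_graph_1d():
    """Toy 1-D illustration of step S206: pose nodes and a marker optimized jointly.

    Node x0 is fixed at 0. The unknowns are v = [x1, x2, m] (two later
    pose nodes and one marker position). Each row of A, b encodes one
    residual A @ v - b: two odometry constraints (first relative pose
    relationships) and two marker observations. The constraints conflict
    slightly, so the solver distributes the error among all variables.
    """
    A = np.array([[1.0, 0.0, 0.0],    # x1 - x0 = 1.0  (odometry)
                  [-1.0, 1.0, 0.0],   # x2 - x1 = 1.1  (odometry)
                  [0.0, -1.0, 1.0],   # m  - x2 = 0.5  (marker seen from x2)
                  [0.0, 0.0, 1.0]])   # m  - x0 = 2.4  (marker seen from x0)
    b = np.array([1.0, 1.1, 0.5, 2.4])
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # least-squares compromise between the conflicting constraints
```

Note how the chained odometry alone would place the marker at 2.6 while the direct observation says 2.4; the optimizer settles in between, which is exactly the effect the added identifier constraints have on the pose graph.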
In an optional embodiment, several or all of the first relative pose relationships, second relative pose relationships, and observed values are used as constraints and optimized jointly, so that the optimized pose nodes, local probability grid maps, and observed values are obtained in a single pass.
Step S207: constructing a positioning map from the optimized pose nodes, the local probability grid maps, the observed values and the multi-frame sensing data.
Specifically, step S207 may be implemented as follows: a global probability grid map is established from the plurality of optimized pose nodes, the multi-frame sensing data associated with those pose nodes, and the plurality of local probability grid maps; an identifier map is established from the optimized observed values; and the identifier map is associated with the global probability grid map to form the positioning map. Unlike the prior art, in which a global probability grid map is formed by splicing local probability grid maps together, in the embodiment of the application the global probability grid map is established from the optimized pose nodes, the multi-frame sensing data associated with them, and the local probability grid maps. Because this approach no longer depends on the local probability grid maps alone but strictly follows the SLAM mapping process, and because the mapping parameters are optimized data, building the global probability grid map from these optimized inputs reduces the mapping error; similarly, writing the optimized observed values into the files of the identifier map reduces the error of the identifier map.
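Building the global grid from the optimized node poses plus their associated laser scans, rather than by splicing submaps, can be sketched as a simple rasterisation (occupancy probabilities are reduced to a set of hit cells for brevity; the data layout and the 5 cm resolution are illustrative assumptions):

```python
import math

def build_global_grid(optimized_nodes, scans, resolution=0.05):
    """Rasterise each node's laser returns into one global set of occupied cells.

    optimized_nodes: {node_id: (x, y, theta)} world-frame poses after
    step S206; scans: {node_id: [(r, phi), ...]} laser returns in each
    node's frame. Using the optimized poses here is what makes the global
    map consistent with the loop-closure and identifier constraints.
    """
    occupied = set()
    for node_id, (x, y, theta) in optimized_nodes.items():
        for r, phi in scans.get(node_id, []):
            # Transform the polar laser return into the world frame,
            # then snap it to the nearest grid cell index.
            wx = x + r * math.cos(theta + phi)
            wy = y + r * math.sin(theta + phi)
            occupied.add((int(round(wx / resolution)),
                          int(round(wy / resolution))))
    return occupied
```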
As can be seen from the map construction method illustrated in fig. 2, when the latest pose node of the robot is optimized, a constraint is added from the robot's observed values of identifiers in the environment. In other words, only a limited number of identifiers need to be pasted in environments unsuitable for two-dimensional laser radar positioning. Combined with the latest pose node of the robot, the local probability grid map, and the laser point cloud data associated with the latest pose node acquired by the robot's two-dimensional laser radar, a global probability grid map and an identifier map that can be stably output can be established after the latest pose node and its associated pose data are optimized, breaking through the limitations on two-dimensional laser radar positioning in some scenes.
Referring to fig. 3, a map building apparatus provided in an embodiment of the present application, which may be a central processing unit of a robot or a functional module thereof, may include a first obtaining module 301, a second obtaining module 302, a third obtaining module 303, a fourth obtaining module 304, an association module 305, an optimization module 306, and a map building module 307, which are detailed as follows:
the first acquisition module 301 is configured to acquire multi-frame sensing data acquired by a sensing device when the robot moves along the mapping path;
the second obtaining module 302 is configured to obtain multiple pose nodes and multiple local probability grid maps of the robot based on multiple frames of sensing data;
a third obtaining module 303, configured to obtain, based on the multiple frames of sensing data, multiple pose nodes, and multiple local probability grid maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the multiple local probability grid maps;
the fourth obtaining module 304 is configured to obtain multi-frame image data acquired by the image acquisition device for the identification code;
an association module 305, configured to associate multiple frames of image data with corresponding pose nodes to obtain multiple observed values of the pose nodes and the identification codes;
the optimization module 306 is configured to optimize the pose nodes, the local probability grid maps and the observed values by using the first relative pose relationship, the second relative pose relationship and the observed values as constraints;
and a map building module 307, configured to build a positioning map by using the optimized pose nodes, the local probability grid maps, the observation values, and the multi-frame sensing data.
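The association performed by module 305 can be sketched as pairing each identification-code image frame with a pose node. The patent does not specify the association rule, so nearest-timestamp matching is assumed here, and the data layout (`(t, tag_id, rel_pose)` tuples) is hypothetical:

```python
def associate(image_frames, pose_nodes):
    """Pair each tag detection with the pose node closest in time.

    image_frames: list of (timestamp, tag_id, measured relative pose)
    pose_nodes:   list of (timestamp, node_id)
    Returns observations as (node_id, tag_id, relative pose) tuples.
    """
    observations = []
    for t_img, tag_id, rel_pose in image_frames:
        # Nearest pose node in time (assumed association rule).
        _, node_id = min(pose_nodes, key=lambda n: abs(n[0] - t_img))
        observations.append((node_id, tag_id, rel_pose))
    return observations

nodes = [(0.0, 0), (1.0, 1), (2.0, 2)]
frames = [(0.9, "tag7", (0.5, 0.0, 0.0)), (2.1, "tag7", (0.4, 0.1, 0.0))]
obs = associate(frames, nodes)
print(obs[0][0], obs[1][0])   # frames land on nodes 1 and 2
```

Each resulting observation ties a tag pose measurement to one pose node, which is what later enters the optimization as a constraint.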
In an embodiment of the present application, the second obtaining module 302 illustrated in fig. 3 may include an initializing unit, a first creating unit, a predicted value obtaining unit, a correcting unit, and a second creating unit, where:
the system comprises an initialization unit, a local probability grid map generation unit and a local probability grid map generation unit, wherein the initialization unit is used for creating an initial pose node and an initial local probability grid map;
the first creating unit is used for creating a new pose node based on the odometer data and/or the laser point cloud data;
a predicted value acquisition unit for acquiring a pose predicted value of the new pose node based on the odometer data corresponding to the new pose node and the previous pose node;
the correcting unit is configured to correct the pose predicted value by using the laser point cloud data corresponding to the new pose node and the local probability grid map, so as to obtain the latest pose node;
and the second creating unit is used for creating a new local probability grid map based on the latest pose node and the last local probability grid map.
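The predicted-value step can be sketched as composing the previous node pose with the odometry increment in SE(2). This is the standard construction for planar odometry prediction, assumed here rather than quoted from the patent:

```python
import math

def se2_compose(p, d):
    """Apply an SE(2) increment d (in the frame of pose p) to pose p."""
    x, y, th = p
    dx, dy, dth = d
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

prev = (1.0, 0.0, math.pi / 2)   # previous node pose: at (1, 0), facing +y
delta = (0.5, 0.0, 0.0)          # odometry increment: 0.5 m straight ahead
pred = se2_compose(prev, delta)  # pose predicted value for the new node
print(round(pred[0], 6), round(pred[1], 6))   # 1.0 0.5
```

The correcting unit would then refine this prediction by matching the node's laser scan against the current local probability grid map.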
Optionally, in another embodiment of the present application, the first creating unit may include a new pose node creating unit, and the second creating unit may include a new map creating unit, where:
the new pose node creating unit is used for creating a new pose node if the odometer data and/or the laser point cloud data indicate that the translation distance of the robot compared with the previous pose node is greater than or equal to a first translation threshold value and/or the rotation distance of the robot is greater than or equal to a first rotation threshold value;
and the new map creating unit is used for creating a new local probability grid map and taking the latest pose node as the pose of the local probability grid map if the translation distance between the latest pose node and the last local probability grid map is greater than or equal to a second translation threshold.
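The node-creation rule above reduces to a threshold test on the motion since the previous node. The numeric thresholds below are illustrative, not values from the patent:

```python
import math

# Assumed thresholds for illustration only.
T_TRANS = 0.2                  # first translation threshold, metres
T_ROT = math.radians(10)       # first rotation threshold, radians

def should_create_node(prev_pose, cur_pose):
    """Create a new pose node once translation or rotation is large enough."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    dth = abs(cur_pose[2] - prev_pose[2])
    return math.hypot(dx, dy) >= T_TRANS or dth >= T_ROT

print(should_create_node((0, 0, 0), (0.05, 0, 0)))   # False: moved only 5 cm
print(should_create_node((0, 0, 0), (0.25, 0, 0)))   # True: past the 0.2 m threshold
```

The second translation threshold for spawning a new local probability grid map would be checked the same way, against the pose of the last submap instead of the last node.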
Optionally, in another embodiment of the present application, the third obtaining module 303 illustrated in fig. 3 may include a first relative pose relationship obtaining unit, a map relative relationship obtaining unit, and a closed-loop constraint relationship obtaining unit, where:
a first relative pose relationship acquisition unit configured to acquire a first relative pose relationship between every two adjacent pose nodes based on the plurality of pose nodes;
the map relative relation acquisition unit is used for acquiring a map relative relation based on each pose node and the corresponding local probability grid map;
and the closed-loop constraint relation acquisition unit is configured to acquire a closed-loop constraint relation through loop detection based on the laser point cloud data corresponding to the latest pose node and other local probability grid maps, where the other local probability grid maps refer to the local probability grid maps that do not correspond to the latest pose node.
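The first relative pose relationship between two adjacent nodes can be sketched as the second pose expressed in the frame of the first, i.e. inv(T_i) composed with T_j in SE(2). This is a standard pose-graph construction, assumed here as an illustration:

```python
import math

def relative_pose(a, b):
    """Express pose b in the frame of pose a (SE(2): inv(T_a) * T_b)."""
    ax, ay, ath = a
    bx, by, bth = b
    dx, dy = bx - ax, by - ay
    c, s = math.cos(-ath), math.sin(-ath)   # rotate the offset into a's frame
    return (c * dx - s * dy, s * dx + c * dy, bth - ath)

a = (1.0, 1.0, math.pi / 2)   # node i: facing +y
b = (1.0, 2.0, math.pi / 2)   # node j: 1 m further along +y
rel = relative_pose(a, b)
print(round(rel[0], 6), round(rel[1], 6), rel[2])   # 1.0 0.0 0.0
```

The same operation, applied between a node pose and the pose of its submap, yields the map relative relationship; a loop-closure match supplies an additional such constraint between a node and a non-corresponding submap.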
Optionally, in another embodiment of the present application, the optimization module 306 illustrated in fig. 3 above may include a problem construction unit and a problem solving unit, where:
the problem construction unit is used for constructing a nonlinear least square problem by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraint conditions and taking a plurality of pose nodes, a plurality of local probability grid maps and the observed value as optimization targets;
and the problem solving unit is used for solving the nonlinear least square problem by using a graph optimization solving algorithm, and taking the optimal solution of the nonlinear least square problem as the optimized pose nodes, local probability grid maps and observed values.
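The nonlinear least-squares optimization can be illustrated on a toy one-dimensional pose graph, where the relative constraints and a loop closure are linear so a single solve is exact. The real SE(2) problem is nonlinear and is typically handled iteratively by a graph optimization library; this sketch only shows how conflicting constraints are reconciled in a least-squares sense:

```python
import numpy as np

# Three node positions x0, x1, x2. Residual rows (A @ x - b):
#   x1 - x0 = 1.0   (odometry constraint)
#   x2 - x1 = 1.0   (odometry constraint)
#   x2 - x0 = 1.9   (loop-closure constraint, slightly inconsistent)
#   x0      = 0.0   (anchor fixing the gauge freedom)
A = np.array([[-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0],
              [-1.0,  0.0, 1.0],
              [ 1.0,  0.0, 0.0]])
b = np.array([1.0, 1.0, 1.9, 0.0])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))   # optimized node positions, spreading the 0.1 m conflict
```

The loop closure pulls x2 from 2.0 to about 1.933, with the error distributed over both odometry edges, which is exactly the behaviour the graph optimization solving algorithm provides for the full pose/submap/observation problem.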
Optionally, in another embodiment of the present application, the map building module 307 illustrated in fig. 3 may include a first map generating unit, a second map generating unit, and an associating unit, where:
the first graph generation unit is used for establishing a global probability grid map according to the plurality of optimized pose nodes, multi-frame sensing data associated with the pose nodes and a plurality of local probability grid maps;
the second map generation unit is used for establishing an identification map based on the optimized observation value;
and the association unit is used for associating the identification map with the global probability grid map to form a positioning map.
It can be seen from the apparatus illustrated in fig. 3 that, when the latest pose node of the robot is optimized, a constraint derived from the robot's observations of the identifiers in the environment is added. In other words, only a limited number of identifiers need to be placed in an environment that is otherwise unsuitable for two-dimensional lidar positioning. By combining the latest pose node of the robot, the local probability grid maps, and the laser point cloud data associated with the latest pose node acquired by the two-dimensional lidar carried by the robot, data such as the latest pose node and the poses of the identifiers in the environment are optimized, and a global probability grid map and an identification map that support stable positioning output can be established, thereby overcoming the limitations of two-dimensional lidar positioning in such scenes.
Fig. 4 is a schematic structural diagram of an apparatus provided in an embodiment of the present application. As shown in fig. 4, the apparatus 4 of this embodiment may be a robot or a module thereof, and mainly includes: a processor 40, a memory 41 and a computer program 42, such as a program of a mapping method, stored in the memory 41 and executable on the processor 40. The steps in the above-described embodiment of the map construction method, such as steps S201 to S207 shown in fig. 2, are implemented when the processor 40 executes the computer program 42. Alternatively, the processor 40 executes the computer program 42 to implement the functions of the modules/units in the device embodiments, such as the functions of the first obtaining module 301, the second obtaining module 302, the third obtaining module 303, the fourth obtaining module 304, the associating module 305, the optimizing module 306 and the map building module 307 shown in fig. 3.
Illustratively, the computer program 42 of the mapping method essentially comprises: acquiring multi-frame sensing data acquired by a sensing device when the robot moves along the mapping path; obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data; obtaining, based on the multi-frame sensing data, the plurality of pose nodes and the plurality of local probability grid maps, a first relative pose relationship of adjacent pose nodes and a second relative pose relationship of the pose nodes and the local probability grid maps; acquiring multi-frame image data acquired by an image acquisition device for the identification code; associating the multi-frame image data with the corresponding pose nodes to obtain observed values of the identification code; optimizing the plurality of pose nodes, the plurality of local probability grid maps and the observed values by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints; and constructing a map by using the plurality of optimized pose nodes, the plurality of local probability grid maps, the observed values and the multi-frame sensing data. The computer program 42 may be partitioned into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 42 in the device 4.
For example, the computer program 42 may be divided into the functions of a first acquisition module 301, a second acquisition module 302, a third acquisition module 303, a fourth acquisition module 304, an association module 305, an optimization module 306, and a mapping module 307 (modules in a virtual device), each module having the following specific functions: the first acquisition module 301 is configured to acquire multi-frame sensing data acquired by a sensing device when the robot moves along the mapping path; the second obtaining module 302 is configured to obtain multiple pose nodes and multiple local probability grid maps of the robot based on multiple frames of sensing data; a third obtaining module 303, configured to obtain, based on the multiple frames of sensing data, the multiple pose nodes, and the multiple local probability grid maps, a first relative pose relationship between adjacent pose nodes and a second relative pose relationship between the pose nodes and the local probability grid maps; the fourth obtaining module 304 is configured to obtain multi-frame image data acquired by the image acquisition device for the identification code; the association module 305 is configured to associate multi-frame image data with corresponding pose nodes to obtain an observed value of an identification code; the optimization module 306 is configured to optimize the pose nodes, the local probability grid maps and the observed values by using the first relative pose relationship, the second relative pose relationship and the observed values as constraints; and a map building module 307, configured to build a map by using the optimized multiple pose nodes, multiple local probability grid maps, the observation value, and the multiple frames of sensing data.
The device 4 may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of the device 4 and does not constitute a limitation of the device 4, which may include more or fewer components than shown, a combination of some components, or different components; for example, a computing device may also include input/output devices, network access devices, buses, and the like.
The processor 40 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the device 4, such as a hard disk or a memory of the device 4. The memory 41 may also be an external storage device of the device 4, such as a plug-in hard disk provided on the device 4, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 41 may also include both an internal storage unit of the device 4 and an external storage device. The memory 41 is used for storing computer programs and other programs and data required by the device. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the device is divided into different functional units or modules so as to complete all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the above-mentioned apparatus, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logic function, and may be implemented in other ways, for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a non-transitory computer readable storage medium. Based on such understanding, all or part of the processes in the method of the embodiments may also be implemented by instructing related hardware through a computer program, where the computer program of the map building method may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the embodiments of the methods may be implemented, that is, acquiring multiple frames of sensing data collected by a sensing device when the robot moves along the map building path; obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot based on multi-frame sensing data; based on multi-frame sensing data, a plurality of pose nodes and a plurality of local probability grid maps, obtaining a first relative pose relationship of adjacent pose nodes and a second relative pose relationship of the pose nodes and the local probability grid maps; acquiring multi-frame image data acquired by an image acquisition device for the identification code; associating multi-frame image data with corresponding pose nodes to obtain an observed value of the identification code; optimizing a plurality of pose nodes, a plurality of local probability grid maps and an observation value by taking the first relative pose relationship, the second relative pose relationship and the observation value as constraints; and constructing a map by using the plurality of optimized pose nodes, the plurality of local probability grid maps, the observation value and the multi-frame sensing data. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. 
The non-transitory computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the non-transitory computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the non-transitory computer-readable medium does not include electrical carrier signals and telecommunications signals. The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A robot, characterized in that the robot comprises:
the device comprises a memory, a processor, a sensing device and an image acquisition device;
the memory stores executable program code;
the processor, coupled with the memory, invokes executable program code stored in the memory to perform a mapping method comprising:
acquiring multi-frame sensing data acquired by the sensing device when the robot moves along the mapping path;
obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data;
based on the multi-frame sensing data, a plurality of pose nodes and a plurality of local probability grid maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local probability grid maps;
acquiring multi-frame image data acquired by the image acquisition device for the identification code;
associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values of the pose nodes and the identification codes;
optimizing the pose nodes, the local probability grid maps and the observation value by taking the first relative pose relationship, the second relative pose relationship and the observation value as constraints;
and constructing a positioning map by using the plurality of optimized pose nodes, the plurality of local probability grid maps, the observation values and the multi-frame sensing data.
2. The robot of claim 1, wherein the sensing devices comprise wheel odometers and two-dimensional lidar, the sensing data comprises odometer data and laser point cloud data, the processor invokes executable program code stored in the memory to perform a mapping method wherein the step of deriving a plurality of pose nodes and a plurality of local probability grid maps for the robot based on the plurality of frames of sensing data comprises:
creating an initial pose node and an initial local probability grid map;
creating a new pose node based on the odometer data and/or the laser point cloud data;
acquiring a pose prediction value of the new pose node based on the odometry data corresponding to the new pose node and the last pose node;
correcting the pose predicted value by using the laser point cloud data corresponding to the new pose node and the local probability grid map to obtain the latest pose node;
creating a new local probability grid map based on the latest pose node and the last local probability grid map;
and returning to the step of creating a new pose node and/or a new local probability grid map based on the odometry data and/or the laser point cloud data to obtain a plurality of pose nodes and a plurality of local probability grid maps.
3. The robot of claim 2, wherein the processor invokes executable program code stored in the memory to perform the step of creating a new pose node based on the odometry data and/or the laser point cloud data in a mapping method comprising:
if the odometry data and/or the laser point cloud data indicate that the translation distance of the robot compared with the previous pose node is greater than or equal to a first translation threshold value and/or the rotation distance of the robot is greater than or equal to a first rotation threshold value, creating a new pose node;
the processor calls the executable program code stored in the memory to execute the step of creating a new local probability grid map based on the latest pose node and the last local probability grid map in the map building method, and the step of creating a new local probability grid map based on the latest pose node and the last local probability grid map comprises the following steps:
and if the translation distance between the latest pose node and the last local probability grid map is greater than or equal to a second translation threshold value, creating a new local probability grid map, and taking the latest pose node as the pose of the local probability grid map.
4. The robot of claim 1, wherein the second relative pose relationship comprises a map relative relationship and a closed-loop constraint relationship, the processor invokes executable program code stored in the memory to perform the step of obtaining the first relative pose relationship for each two adjacent pose nodes and the second relative pose relationship for each pose node and the plurality of local probability grid maps based on the plurality of frames of sensing data, the plurality of pose nodes, and the plurality of local probability grid maps in the mapping method comprising:
acquiring a first relative pose relationship of every two adjacent pose nodes based on the plurality of pose nodes;
acquiring the relative relation of the map based on each pose node and the corresponding local probability grid map;
and acquiring the closed-loop constraint relation through loop detection based on the laser point cloud data corresponding to the latest pose node and other local probability grid maps, wherein the other local probability grid maps are the local probability grid maps that do not correspond to the latest pose node.
5. The robot of claim 1, wherein the processor invokes executable program code stored in the memory to perform a method of mapping wherein optimizing the plurality of pose nodes, the plurality of local probability grid maps, and the observations with the first relative pose relationship, the second relative pose relationship, and the observations as constraints comprises:
constructing a nonlinear least square problem by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraint conditions and taking the pose nodes, the local probability grid maps and the observed value as optimization targets;
and solving the nonlinear least square problem by using a graph optimization solving algorithm, and taking the optimal solution of the nonlinear least square problem as the optimized pose nodes, the local probability grid maps and the observed value.
6. The robot of claim 1, wherein the processor invokes executable program code stored in the memory to perform the step of constructing a location map using the optimized plurality of pose nodes, the plurality of local probability grid maps, the observations, and the plurality of frames of sensory data in a mapping method comprising:
establishing a global probability grid map according to the plurality of optimized pose nodes, multi-frame sensing data associated with the pose nodes and a plurality of local probability grid maps;
establishing an identification map based on the optimized observed value;
associating the identification map with the global probability grid map to form the positioning map.
7. A map construction apparatus, applied to a robot including a sensing device and an image acquisition device, the apparatus comprising:
the first acquisition module is used for acquiring multi-frame sensing data acquired by the sensing device when the robot moves along the mapping path;
the second acquisition module is used for acquiring a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data;
a third obtaining module, configured to obtain, based on the multiple frames of sensing data, multiple pose nodes, and multiple local probability grid maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the multiple local probability grid maps;
the fourth acquisition module is used for acquiring multi-frame image data acquired by the image acquisition device for the identification code;
the association module is used for associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values of the pose nodes and the identification codes;
the optimization module is used for optimizing the pose nodes, the local probability grid maps and the observed value by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and the map construction module is used for constructing a positioning map by utilizing the plurality of optimized pose nodes, the plurality of local probability grid maps, the observation values and the multi-frame sensing data.
8. A map construction method, applied to a robot comprising a sensing device and an image acquisition device, the method comprising:
acquiring multi-frame sensing data acquired by the sensing device when the robot moves along the mapping path;
obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data;
based on the multi-frame sensing data, a plurality of pose nodes and a plurality of local probability grid maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local probability grid maps;
acquiring multi-frame image data acquired by the image acquisition device for the identification code;
associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values of the pose nodes and the identification codes;
optimizing the pose nodes, the local probability grid maps and the observed values by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints;
and constructing a positioning map by using the plurality of optimized pose nodes, the plurality of local probability grid maps, the observation values and the multi-frame sensing data.
9. The method of claim 8, wherein the sensing device comprises a wheel odometer and a two-dimensional lidar, the sensing data comprises odometer data and laser point cloud data, and the step of obtaining a plurality of pose nodes and a plurality of local probability grid maps of the robot based on the multi-frame sensing data comprises:
creating an initial pose node and an initial local probability grid map;
creating a new pose node based on the odometer data and/or the laser point cloud data;
acquiring a pose prediction value of the new pose node based on the odometry data corresponding to the new pose node and the last pose node;
correcting the pose predicted value by using the laser point cloud data corresponding to the new pose node and the local probability grid map to obtain the latest pose node;
creating a new local probability grid map based on the latest pose node and the last local probability grid map;
and returning to the step of creating a new pose node and/or a new local probability grid map based on the odometry data and/or the laser point cloud data to obtain a plurality of pose nodes and a plurality of local probability grid maps.
10. A readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the map construction method performed by the robot according to any one of claims 1 to 6.
CN202111664358.0A 2021-12-31 2021-12-31 Robot, map construction method, map construction device and readable storage medium Pending CN115540867A (en)

Publications (1)

Publication Number Publication Date
CN115540867A 2022-12-30

Family

ID=84723111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111664358.0A Pending CN115540867A (en) 2021-12-31 2021-12-31 Robot, map construction method, map construction device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115540867A (en)

Similar Documents

Publication Publication Date Title
JP7179110B2 (en) Positioning method, device, computing device, computer-readable storage medium and computer program
CN110587597B (en) SLAM closed loop detection method and detection system based on laser radar
CN111209978A (en) Three-dimensional visual repositioning method and device, computing equipment and storage medium
CN114255323A (en) Robot, map construction method, map construction device and readable storage medium
CN113587934A (en) Robot, indoor positioning method and device and readable storage medium
CN112967340A (en) Simultaneous positioning and map construction method and device, electronic equipment and storage medium
CN111380515A (en) Positioning method and device, storage medium and electronic device
CN113298910A (en) Method, apparatus and storage medium for generating traffic sign line map
CN109997123B (en) Methods, systems, and apparatus for improved space-time data management
CN115494533A (en) Vehicle positioning method, device, storage medium and positioning system
CN111368860B (en) Repositioning method and terminal equipment
CN115249266A (en) Method, system, device and storage medium for predicting position of waypoint
CN112097772B (en) Robot and map construction method and device thereof
CN111461980B (en) Performance estimation method and device of point cloud stitching algorithm
CN116958452A (en) Three-dimensional reconstruction method and system
CN110887490B (en) Key frame selection method, medium, terminal and device for laser positioning navigation
CN113177980A (en) Target object speed determination method and device for automatic driving and electronic equipment
CN115540867A (en) Robot, map construction method, map construction device and readable storage medium
US20240011792A1 (en) Method and apparatus for updating confidence of high-precision map
CN115147561A (en) Pose graph generation method, high-precision map generation method and device
CN115597585A (en) Robot positioning method, device, equipment and storage medium
CN112669196B (en) Method and equipment for optimizing data by factor graph in hardware acceleration engine
CN112668371B (en) Method and device for outputting information
CN113034538B (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
CN114295138B (en) Robot, map extension method, apparatus, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination