CN114266821A - Online positioning method and device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN114266821A
CN114266821A (application CN202111435639.9A)
Authority
CN
China
Prior art keywords
frame
current
point cloud
pose
positioning
Prior art date
Legal status
Pending
Application number
CN202111435639.9A
Other languages
Chinese (zh)
Inventor
宋朝忠
周洋
Current Assignee
Shenzhen Echiev Autonomous Driving Technology Co ltd
Original Assignee
Shenzhen Echiev Autonomous Driving Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Echiev Autonomous Driving Technology Co., Ltd.
Priority to CN202111435639.9A
Publication of CN114266821A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an online positioning method, an online positioning device, terminal equipment and a storage medium. The online positioning method comprises the following steps: selecting one frame as a current frame at intervals of a preset distance in a map area where a mobile intelligent terminal is located; acquiring an initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data; calculating the accurate pose of each current frame to obtain each key frame, and constructing a global key frame point cloud map based on each key frame; performing positioning initialization on the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame; selecting a key frame corresponding to the current positioning frame from the global key frame point cloud map according to the pose of the previous frame of the current positioning frame; and positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame. The invention can improve the accuracy of online positioning.

Description

Online positioning method and device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of positioning technologies, and in particular, to an online positioning method, an online positioning device, a terminal device, and a storage medium.
Background
At present, autonomous driving and Automated Guided Vehicle (AGV) technologies are developing rapidly and various robots are widely applied, so online positioning is very important in applications of autonomous driving, AGVs and robots. In the prior art, SLAM (Simultaneous Localization and Mapping) technology is usually adopted to enable an unmanned vehicle (or robot) to complete mapping, localization and path planning in an unknown environment.
Laser SLAM is currently a relatively mature positioning and navigation scheme. For 3D SLAM, its implementation is divided into two steps. First, offline mapping, which generally comprises three main parts: a front-end odometer, back-end optimization and closed-loop detection. Second, online positioning, which is realized mainly against the previously established offline global point cloud map and outputs the position of the unmanned vehicle (or robot) in real time. At present, most methods register the current frame point cloud data against the global point cloud map to realize online laser positioning. However, this approach is limited by the mapping accuracy of the global point cloud map, so the resulting positioning accuracy is not very high; moreover, because the current frame point cloud and the global point cloud map are not on the same order of magnitude, the matching accuracy and robustness are poor, and the method is therefore very limited in practical application.
Therefore, it is necessary to provide a method for improving the online positioning accuracy.
Disclosure of Invention
The invention mainly aims to provide an online positioning method, an online positioning device, terminal equipment and a storage medium, and aims to improve the accuracy of online positioning.
In order to achieve the above object, the present invention provides an online positioning method, including:
selecting one frame as a current frame at intervals of a preset distance in a map area where the mobile intelligent terminal is located, and acquiring the initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data;
calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame;
positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame;
acquiring point cloud data of a current positioning frame and a pose of a previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame;
obtaining an estimated pose of the current positioning frame according to the pose of the previous frame, and selecting a key frame corresponding to the current positioning frame from the global key frame point cloud map according to the estimated pose;
and positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame.
Optionally, the step of calculating an accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data, and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame includes:
registering each current frame point cloud data with each reference frame point cloud data through a point cloud matching algorithm to obtain a relative pose transformation matrix of each current frame;
calculating to obtain the accurate pose of each current frame according to the initial pose and the relative pose transformation matrix of each current frame;
storing the accurate pose and point cloud data of each current frame as each key frame;
and constructing the global key frame point cloud map based on each stored key frame in the map area.
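The map-construction steps above can be sketched in Python/NumPy. This is a minimal illustration, assuming poses are 4x4 homogeneous matrices and that registration against the reference frame has already produced a relative pose transformation matrix per frame; the function names, the dict layout, and the multiplication order are assumptions for illustration, not the patent's specification.

```python
import numpy as np

def compose_pose(initial_pose, relative_tf):
    # Accurate pose = relative transform (from registration) applied to the
    # initial pose estimate; the multiplication order depends on the frame
    # convention and is an assumption here.
    return relative_tf @ initial_pose

def build_keyframe_map(frames, preset_distance):
    """Store one key frame (accurate pose + point cloud) roughly every
    `preset_distance` along the trajectory.

    frames: iterable of dicts with 'initial_pose' (4x4), 'points' (Nx3)
    and 'relative_tf' (4x4, the registration result against the
    corresponding reference frame).
    """
    keyframes = []
    last_xy = None
    for f in frames:
        pose = compose_pose(f['initial_pose'], f['relative_tf'])
        xy = pose[:2, 3]  # planar position of this frame
        if last_xy is None or np.linalg.norm(xy - last_xy) >= preset_distance:
            keyframes.append({'pose': pose, 'points': f['points']})
            last_xy = xy
    return keyframes
```

The collection of stored key frames is the global key frame point cloud map; each entry keeps its own point cloud, so later registration is always frame-against-frame rather than frame-against-map.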
Optionally, the step of performing positioning initialization on the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of the initial positioning frame includes:
acquiring initial positioning frame point cloud data;
traversing key frames in the global key frame point cloud map according to the initial positioning frame point cloud data, and registering the initial positioning frame with each key frame to obtain a registration score of each key frame;
selecting a key frame corresponding to the highest score in the registration scores as a matching key frame of the initial positioning frame, and acquiring the pose and point cloud data of the matching key frame;
registering the initial positioning frame point cloud data and the point cloud data of the matched key frame to obtain a relative transformation matrix;
and calculating the pose of the initial positioning frame according to the pose of the matching key frame and the relative transformation matrix.
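The initialization steps above can be sketched as follows. The fitness score here is a toy stand-in (negative mean nearest-neighbour distance) for the registration score a real NDT/ICP implementation would report, and `register` is a hypothetical callback returning the relative transformation matrix; both are assumptions for illustration.

```python
import numpy as np

def registration_score(src, dst):
    # Toy fitness score: negative mean nearest-neighbour distance between
    # two (N, 3) point clouds.  A real system would use the fitness value
    # reported by the registration algorithm itself.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return -d.min(axis=1).mean()

def initialize_pose(init_points, keyframes, register):
    """Traverse all key frames, score each against the initial positioning
    frame, register against the best-scoring one, and compose the pose.

    register(src, dst) -> 4x4 relative transform (hypothetical callback).
    """
    scores = [registration_score(init_points, kf['points']) for kf in keyframes]
    best = keyframes[int(np.argmax(scores))]
    rel_tf = register(init_points, best['points'])
    return rel_tf @ best['pose']
```

Because every key frame is scored, this step needs no GPS or other external sensor: the best-matching key frame itself anchors the initial pose.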
Optionally, the step of obtaining the estimated pose of the current positioning frame according to the pose of the previous frame, and selecting the key frame corresponding to the current positioning frame from the global key frame point cloud map according to the estimated pose includes:
calculating the estimated pose of the current positioning frame according to the pose of the previous frame and the interval time and travel speed of the mobile intelligent terminal between the previous frame and the current positioning frame;
and selecting a key frame closest to the current positioning frame according to the estimated pose on the global key frame point cloud map as a corresponding key frame of the current positioning frame, and acquiring the pose and point cloud data of the corresponding key frame.
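The two steps above, dead reckoning from the previous pose and nearest-key-frame lookup, can be sketched as below. Taking the pose's x-axis as the forward direction is an assumption; the patent only states that interval time and travel speed are used.

```python
import numpy as np

def predict_pose(prev_pose, speed, dt):
    # Constant-velocity dead reckoning: advance the previous pose along its
    # forward axis by speed * dt.
    pred = prev_pose.copy()
    pred[:3, 3] = prev_pose[:3, 3] + speed * dt * prev_pose[:3, 0]
    return pred

def nearest_keyframe(est_pose, keyframes):
    # Pick the key frame whose stored pose lies closest to the estimated
    # pose of the current positioning frame.
    pos = est_pose[:3, 3]
    dists = [np.linalg.norm(kf['pose'][:3, 3] - pos) for kf in keyframes]
    return keyframes[int(np.argmin(dists))]
```

In a vehicle, `speed` would come from the CAN bus speed signal mentioned in the terminology section; here it is simply a parameter.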
Optionally, the step of locating the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame includes:
registering the point cloud data of the current positioning frame with the point cloud data of the corresponding key frame to obtain a relative pose transformation matrix of the current positioning frame;
and calculating to obtain the accurate pose of the current positioning frame according to the relative pose conversion matrix of the current positioning frame and the pose of the corresponding key frame.
Optionally, the step of acquiring the initial pose of each current frame, the current frame point cloud data, and the corresponding reference frame point cloud data includes:
acquiring an initial pose of each current frame;
obtaining each current frame point cloud data through the first sensor;
and intercepting point cloud data in a preset area taking the initial pose of each current frame as the center on the global point cloud map to serve as reference frame point cloud data corresponding to each current frame.
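The reference-frame interception step above can be sketched as a simple crop. The square window shape and its size are assumptions; the patent only specifies "a preset area centred on the initial pose".

```python
import numpy as np

def crop_reference_frame(map_points, center_pose, half_extent):
    # Keep the points of the global point cloud map that fall inside a
    # square window of side 2 * half_extent centred on the current frame's
    # initial pose (z is left unconstrained here).
    center = center_pose[:2, 3]
    mask = np.all(np.abs(map_points[:, :2] - center) <= half_extent, axis=1)
    return map_points[mask]
```

The cropped cloud serves as the reference frame for registering the current frame during offline map construction, keeping the two clouds on a comparable order of magnitude.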
Optionally, the step of acquiring an initial pose of each current frame includes:
judging whether the current frame is a first frame;
if the current frame is a first frame, obtaining the initial pose of the current frame through manual uploading or a second sensor;
and if the current frame is not the first frame, taking the pose of the previous frame of the current frame as the initial pose of the current frame.
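The first-frame branching above amounts to a small dispatch; the parameter names are hypothetical, and GPS as the second sensor is one possibility the patent mentions elsewhere rather than a fixed choice.

```python
def get_initial_pose(frame_index, prev_pose=None, manual_pose=None, sensor_pose=None):
    # First frame: take the pose from a manual upload or from a second
    # sensor.  Later frames: reuse the previous frame's pose as the
    # initial estimate for registration.
    if frame_index == 0:
        return manual_pose if manual_pose is not None else sensor_pose
    return prev_pose
```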
In addition, to achieve the above object, the present invention also provides an online positioning device, including:
the mobile intelligent terminal comprises an initial acquisition module, a reference frame point cloud data acquisition module and a processing module, wherein the initial acquisition module is used for selecting one frame as a current frame at intervals of a preset distance in a map area where the mobile intelligent terminal is located, and acquiring the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data;
the construction output module is used for calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame;
the positioning initialization module is used for positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame;
the secondary acquisition module is used for acquiring point cloud data of a current positioning frame and the pose of the previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame;
the estimation selection module is used for obtaining the estimation pose of the current positioning frame according to the pose of the previous frame and selecting the key frame corresponding to the current positioning frame in the global key frame point cloud map according to the estimation pose;
and the online positioning module is used for positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame.
In addition, in order to achieve the above object, the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and an online positioning program stored in the memory and executable on the processor, and the online positioning program, when executed by the processor, implements the steps of the online positioning method as described above.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, having an online positioning program stored thereon, where the online positioning program, when executed by a processor, implements the steps of the online positioning method as described above.
According to the online positioning method, the online positioning device, the terminal equipment and the storage medium, one frame is selected as a current frame at intervals of a preset distance in a map area where the mobile intelligent terminal is located, and the initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data are obtained; the accurate pose of each current frame is calculated according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and a global key frame point cloud map is constructed based on each key frame; positioning initialization is performed on the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame; point cloud data of a current positioning frame and the pose of the previous frame of the current positioning frame are acquired, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame; an estimated pose of the current positioning frame is obtained according to the pose of the previous frame, and a key frame corresponding to the current positioning frame is selected from the global key frame point cloud map according to the estimated pose; and the current positioning frame is positioned according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame. The accuracy of online positioning is improved by constructing a global key frame point cloud map and obtaining the accurate pose of the current positioning frame according to the global key frame point cloud map.
Drawings
FIG. 1 is a functional block diagram of a terminal device to which an online positioning device of the present invention belongs;
FIG. 2 is a schematic flow chart diagram illustrating an exemplary embodiment of an online location method of the present invention;
fig. 3 is a schematic diagram of the refined process, in an embodiment of the present invention, of calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to obtain each key frame, and constructing a global key frame point cloud map based on each key frame;
FIG. 4 is a schematic flow chart of constructing a global key frame point cloud map according to an embodiment of the present invention;
fig. 5 is a schematic diagram of the refined process, in an embodiment of the present invention, of positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame;
fig. 6 is a schematic flow chart of online positioning according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: selecting a frame as a current frame at preset intervals in a map area where the mobile intelligent terminal is located, and acquiring the initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data; calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame; positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame; acquiring point cloud data of a current positioning frame and a pose of a previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame; obtaining an estimated pose of the current positioning frame according to the pose of the previous frame, and selecting a key frame corresponding to the current positioning frame from the global key frame point cloud map according to the estimated pose; and positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame. The accuracy of online positioning is improved by constructing a global key frame point cloud map and obtaining the accurate pose of the current positioning frame according to the global key frame point cloud map.
The technical terms related to the embodiment of the invention are as follows:
Simultaneous Localization and Mapping (SLAM): the robot starts to move from an unknown position in an unknown environment, performs self-localization according to its pose estimate and the map during motion, and simultaneously builds an incremental map on that basis, thereby realizing autonomous localization and navigation of the robot.
Automated Guided Vehicle (AGV): a transport vehicle equipped with an electromagnetic or optical automatic guidance device, capable of traveling along a prescribed guidance route, with safety protection and various transfer functions.
Region of interest (ROI): in machine vision and image processing, a region to be processed that is outlined from the image under processing in the form of a box, circle, ellipse, irregular polygon or the like.
Normal Distributions Transform (NDT): a core and commonly used point cloud registration algorithm in the field of autonomous driving; it represents the reference point cloud as a set of local normal distributions and aligns a scan by maximizing the likelihood of its points under those distributions.
Lidar: a radar system that detects characteristic quantities of a target, such as position and velocity, by emitting a laser beam.
Global Positioning System (GPS): a high-precision radio navigation and positioning system based on artificial earth satellites, which can provide accurate geographic position, vehicle speed and precise time information anywhere on Earth and in near-Earth space.
Controller Area Network (CAN): a vehicle bus standard; in this invention it is used to acquire the vehicle speed signal.
At present there are two mainstream SLAM approaches: laser SLAM (mapping and positioning based on lidar) and visual SLAM (mapping and positioning based on monocular/binocular camera vision). Laser SLAM is currently the more mature positioning and navigation scheme, while visual SLAM is a mainstream direction of current research.
Laser-based positioning mainly relies on laser SLAM technology. Depending on the sensor (single-line or multi-line lidar), laser SLAM can be divided into 2D SLAM and 3D SLAM; the present invention mainly targets 3D SLAM. The concrete implementation is divided into two steps. First, offline mapping. A classical SLAM system generally comprises three main parts: a front-end odometer, back-end optimization and closed-loop detection. For example, LOAM, LeGO-LOAM, LOAM-livox and the like basically consist of these three links and differ only slightly, mainly in the feature extraction part of the front-end odometer. Second, online positioning. This module realizes online positioning of the unmanned vehicle (or robot) against the established offline global point cloud map and outputs its position in real time. Currently, most systems register the current frame point cloud data against the global point cloud map to realize online laser positioning; for example, the laser positioning in Baidu Apollo and Autoware adopts this method. However, this approach is limited by the mapping accuracy of the global point cloud map, so the resulting positioning accuracy is not very high; and because the current frame point cloud and the global point cloud map are not on the same order of magnitude, the matching accuracy and robustness are poor. In practical application the method is therefore very limited and needs to be fused with other sensors and positioning methods to improve accuracy and robustness.
It is difficult for current positioning technology based on multi-line lidar to achieve high accuracy and robustness, for the following two reasons:
firstly, the mapping accuracy is not high. The laser SLAM map building method mainly depends on some obvious line and surface characteristics, particularly on the wall surfaces of some high-rise buildings, a map with high quality can be built in the environment, however, in some scenic spots, most of scenes of trees are faced, the built point cloud map is poor in quality, and the accuracy of online positioning depends on the quality of the point cloud map.
Second, the registration accuracy is not high and the robustness is poor. During online positioning, the current frame data must be registered against the global point cloud map data. Because the two differ by orders of magnitude (even with a region of interest (ROI) defined, the data volume of the global point cloud map is generally 5 to 10 times that of the current frame), the registration error is large and mismatches may even occur, causing the localization to jump away and failing to provide accurate positioning for the unmanned vehicle (or robot).
In addition, current positioning technology based on multi-line lidar cannot obtain the initial pose of the vehicle by itself, and requires other sensors (such as GPS) or other methods to initialize the position.
The keyframe-based laser positioning method of the present invention constructs a global key frame point cloud map on the basis of conventional SLAM technology and calculates an accurate position from the key frame and the current frame (which are on the same order of magnitude). This method effectively overcomes the above drawbacks: it provides its own positioning initialization without relying on other sensors, achieves higher accuracy and robustness, and is applicable to various complex scenes.
Specifically, referring to fig. 1, fig. 1 is a functional module schematic diagram of a terminal device to which the online positioning device of the present invention belongs. The online positioning device may be a device which is independent from the terminal device and can perform online positioning, and it may be carried on the terminal device in the form of hardware or software. The terminal device can be an intelligent mobile terminal with a data processing function, such as a mobile phone and a tablet personal computer, and can also be a fixed terminal device or a server with a data processing function.
In this embodiment, the terminal device to which the online positioning apparatus belongs at least includes an output module 110, a processor 120, a memory 130, and a communication module 140.
The memory 130 stores an operating system and an online positioning program. The online positioning apparatus may store in the memory 130: the acquired initial pose of each current frame together with the current frame point cloud data and corresponding reference frame point cloud data; the accurate pose of each current frame calculated from these; each key frame and the global key frame point cloud map constructed from the key frames; the pose of the initial positioning frame obtained by performing positioning initialization on the current position of the mobile intelligent terminal against the global key frame point cloud map; the acquired point cloud data of the current positioning frame and the pose of its previous frame; the estimated pose of the current positioning frame obtained from the pose of the previous frame, and the key frame selected from the global key frame point cloud map according to the estimated pose; and the accurate pose of the current positioning frame obtained by positioning it against the point cloud data of the corresponding key frame. The output module 110 may be a display screen or the like. The communication module 140 may include a Wi-Fi module, a mobile communication module, a Bluetooth module and the like, and communicates with external devices or servers through the communication module 140.
Wherein the online positioning program in the memory 130, when executed by the processor, implements the steps of:
selecting one frame as a current frame at intervals of a preset distance in a map area where the mobile intelligent terminal is located, and acquiring the initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data;
calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame;
positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame;
acquiring point cloud data of a current positioning frame and a pose of a previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame;
obtaining an estimated pose of the current positioning frame according to the pose of the previous frame, and selecting a key frame corresponding to the current positioning frame from the global key frame point cloud map according to the estimated pose;
and positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame.
Further, the online location program in the memory 130, when executed by the processor, further implements the steps of:
registering each current frame point cloud data with each reference frame point cloud data through a point cloud matching algorithm to obtain a relative pose transformation matrix of each current frame;
calculating to obtain the accurate pose of each current frame according to the initial pose and the relative pose transformation matrix of each current frame;
storing the accurate pose and point cloud data of each current frame as each key frame;
and constructing the global key frame point cloud map based on each stored key frame in the map area.
Further, the online location program in the memory 130, when executed by the processor, further implements the steps of:
acquiring initial positioning frame point cloud data;
traversing key frames in the global key frame point cloud map according to the initial positioning frame point cloud data, and registering the initial positioning frame with each key frame to obtain a registration score of each key frame;
selecting a key frame corresponding to the highest score in the registration scores as a matching key frame of the initial positioning frame, and acquiring the pose and point cloud data of the matching key frame;
registering the initial positioning frame point cloud data and the point cloud data of the matched key frame to obtain a relative transformation matrix;
and calculating the pose of the initial positioning frame according to the pose of the matching key frame and the relative transformation matrix.
Further, the online location program in the memory 130, when executed by the processor, further implements the steps of:
calculating the estimated pose of the current positioning frame according to the pose of the previous frame and the interval time and travel speed of the mobile intelligent terminal between the previous frame and the current positioning frame;
and selecting a key frame closest to the current positioning frame according to the estimated pose on the global key frame point cloud map as a corresponding key frame of the current positioning frame, and acquiring the pose and point cloud data of the corresponding key frame.
Further, the online location program in the memory 130, when executed by the processor, further implements the steps of:
registering the point cloud data of the current positioning frame with the point cloud data of the corresponding key frame to obtain a relative pose transformation matrix of the current positioning frame;
and calculating to obtain the accurate pose of the current positioning frame according to the relative pose conversion matrix of the current positioning frame and the pose of the corresponding key frame.
Further, the online location program in the memory 130, when executed by the processor, further implements the steps of:
acquiring an initial pose of each current frame;
obtaining each current frame point cloud data through the first sensor;
and intercepting point cloud data in a preset area taking the initial pose of each current frame as the center on the global point cloud map to serve as reference frame point cloud data corresponding to each current frame.
Further, the online location program in the memory 130, when executed by the processor, further implements the steps of:
judging whether the current frame is a first frame;
if the current frame is a first frame, obtaining the initial pose of the current frame through manual uploading or a second sensor;
and if the current frame is not the first frame, taking the pose of the previous frame of the current frame as the initial pose of the current frame.
According to the scheme, one frame is selected as the current frame at intervals of a preset distance in the map area where the mobile intelligent terminal is located, and the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data are obtained; calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame; positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame; acquiring point cloud data of a current positioning frame and a pose of a previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame; obtaining an estimated pose of the current positioning frame according to the pose of the previous frame, and selecting a key frame corresponding to the current positioning frame from the global key frame point cloud map according to the estimated pose; and positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame. The accuracy of online positioning is improved by constructing a global key frame point cloud map and obtaining the accurate pose of the current positioning frame according to the global key frame point cloud map.
Based on the above terminal device architecture but not limited to the above architecture, embodiments of the method of the present invention are presented.
The execution subject of the method of this embodiment may be an online positioning device or a terminal device, and the online positioning device is used for example in this embodiment.
Referring to fig. 2, fig. 2 is a flowchart illustrating an exemplary embodiment of an online location method according to the present invention. The online positioning method comprises the following steps:
step S10, selecting one frame as a current frame at intervals of a preset distance in a map area where the mobile intelligent terminal is located, and acquiring the initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data;
specifically, the step of acquiring the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data includes:
acquiring an initial pose of each current frame;
obtaining each current frame point cloud data through the first sensor;
and intercepting point cloud data in a preset area taking the initial pose of each current frame as the center on the global point cloud map to serve as reference frame point cloud data corresponding to each current frame.
The step of obtaining the initial pose of each current frame comprises the following steps:
judging whether the current frame is a first frame;
if the current frame is a first frame, obtaining the initial pose of the current frame through manual uploading or a second sensor;
and if the current frame is not the first frame, taking the pose of the previous frame of the current frame as the initial pose of the current frame.
The mobile intelligent terminal can be intelligent equipment such as unmanned vehicles, unmanned aerial vehicles and robots, and the map area where the mobile intelligent terminal is located can be areas such as indoor environment, outdoor environment, underwater environment and air environment. The pose refers to a position coordinate and a heading angle of the mobile intelligent terminal in a map area, when the mobile intelligent terminal moves on the map area, each passing position point can be regarded as a current frame, and one current frame is taken at a preset distance (in the embodiment of the invention, the preset distance is 1 meter). The point cloud is a collection of massive points, the point cloud obtained according to the laser measurement principle comprises three-dimensional coordinates and laser reflection intensity, and the pose of the current frame can be further obtained according to the point cloud data.
To obtain the initial pose of the current frame, it is first judged whether the current frame is the first frame. If it is the first frame, positioning initialization is required, that is, the pose of the first frame must be given manually or by another sensor (such as a GPS global positioning system). If it is not the first frame, the pose of the previous frame can be used directly: the frequency of laser positioning is 10 Hz, i.e., frames are about 100 milliseconds apart, so when the moving speed of the mobile intelligent terminal is not high, the distance moved between two adjacent frames is small, and the pose of the previous frame can serve as the initial pose of the current frame.
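The first-frame/later-frame branching above can be summarized in a short sketch. The helper name, argument names, and the idea of passing the sensor pose in explicitly are hypothetical, introduced only for illustration.

```python
def initial_pose(frame_index, previous_pose=None, manual_pose=None, sensor_pose=None):
    """Select the initial pose of a frame: the first frame needs a pose from
    manual upload or a second sensor (e.g. GPS); every later frame reuses the
    previous frame's pose, since at 10 Hz the terminal barely moves per frame."""
    if frame_index == 0:
        pose = manual_pose if manual_pose is not None else sensor_pose
        if pose is None:
            raise ValueError("first frame requires a manual or sensor-supplied pose")
        return pose
    return previous_pose
```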
The current frame point cloud data is provided in real time by a Lidar sensor of the mobile intelligent terminal. To improve computational efficiency, a region of interest (ROI) is selected in this embodiment, and point cloud data outside the ROI is deleted. The reference frame point cloud data is obtained from the initial pose of the current frame and the global point cloud map, that is, the point cloud data within the ROI centered on the current position is cut out of the global point cloud map and used as the reference frame. The ROI in embodiments of the present invention is typically a square 60 m x 60 m area centered on the current position. The global point cloud map is an offline map constructed with the traditional laser SLAM technology, stitched together from multiple frames of point cloud data.
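The square ROI crop described above is a simple box filter. A minimal sketch follows, assuming 2D/3D points as tuples and filtering on x and y only; the function name and the `half_size` parameter are assumptions.

```python
def crop_roi(points, center, half_size=30.0):
    """Keep only points inside a square ROI (60 m x 60 m by default, i.e.
    half_size = 30 m on each side) centered on the current position."""
    cx, cy = center
    return [p for p in points
            if abs(p[0] - cx) <= half_size and abs(p[1] - cy) <= half_size]

# Usage: two of the four points fall inside the 60 m x 60 m box around (0, 0).
inside = crop_roi([(0, 0), (10, 10), (40, 0), (-31, 5)], (0, 0))
```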
Step S20, calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame;
A point cloud matching algorithm minimizes the spatial position difference between point clouds so that they are fully registered. The embodiment of the invention adopts the normal distributions transform (NDT) algorithm: the acquired current frame point cloud data is registered against the reference frame point cloud data, yielding the relative pose transformation matrix of the current frame. The accurate pose of the current frame can then be calculated from this relative pose transformation matrix and the initial pose of the current frame estimated in step S10. Each current frame is stored as a key frame consisting of an accurate pose and point cloud data; once the whole map area has been traversed and all key frames stored, the global key frame point cloud map can be output.
Step S30, the current position of the mobile intelligent terminal is positioned and initialized according to the global key frame point cloud map, and the pose of the initial positioning frame is obtained, and the method specifically comprises the following steps:
acquiring initial positioning frame point cloud data;
traversing key frames in the global key frame point cloud map according to the initial positioning frame point cloud data, and registering the initial positioning frame with each key frame to obtain a registration score of each key frame;
selecting a key frame corresponding to the highest score in the registration scores as a matching key frame of the initial positioning frame, and acquiring the pose of the matching key frame;
and calculating the pose of the initial positioning frame according to the pose of the matching key frame.
After the mobile intelligent terminal is first powered on or restarted after a power-off, positioning initialization is required to obtain its initial position. After the initial positioning frame point cloud data is acquired, the global key frame point cloud map is traversed, and the initial positioning frame is registered once against every key frame through the point cloud matching algorithm, yielding a registration score for each key frame. The key frame with the highest score is the best match, called the matching key frame, namely the key frame that differs least from the initial positioning frame. Its pose is obtained from the global key frame point cloud map, and the pose of the initial positioning frame is calculated from it, thereby completing the positioning initialization.
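The exhaustive search over key frames can be sketched as below. The `register` callable stands in for an NDT registration call returning a score and a relative transform; its signature, and all other names here, are assumptions made for illustration, not the patent's API.

```python
def initialize_localization(first_cloud, keyframes, register):
    """Traverse all key frames, register the initial positioning frame against
    each one, and return the best-scoring key frame (the 'matching key frame'),
    its registration score, and the relative transform.

    register(cloud_a, cloud_b) -> (score, relative_transform) is assumed to
    wrap a point cloud registration routine; higher score = better alignment."""
    best_kf, best_score, best_T = None, float("-inf"), None
    for kf in keyframes:
        score, T = register(first_cloud, kf["cloud"])
        if score > best_score:
            best_kf, best_score, best_T = kf, score, T
    return best_kf, best_score, best_T

# Usage with a stub register: 'clouds' are plain numbers and the score is just
# negative distance, so the key frame at 7.0 best matches the frame at 6.5.
kfs = [{"cloud": c, "pose": (c, 0.0, 0.0)} for c in [0.0, 5.0, 7.0, 9.0]]
stub_register = lambda cur, kf: (-abs(kf - cur), kf - cur)
best_kf, best_score, best_T = initialize_localization(6.5, kfs, stub_register)
```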
Compared with the traditional SLAM positioning technology, no initial point needs to be given manually or by another sensor during positioning initialization. In addition, the point cloud data of the initial positioning frame and of the matching key frame are of the same order of magnitude; compared with the traditional matching of the current frame against the whole global point cloud map (whose point cloud data differ greatly in magnitude), the confidence of the matching score is much higher, so whether positioning initialization has succeeded can be judged reliably.
Step S40, point cloud data of a current positioning frame and a pose of a previous frame of the current positioning frame are obtained, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame;
The current positioning frame point cloud data is obtained through a first sensor, a Lidar sensor configured in the mobile intelligent terminal that acquires the current frame point cloud data in real time. To improve computational efficiency, a region of interest (ROI) is selected and point cloud data outside the ROI is deleted, that is, a square 60 m x 60 m area centered on the current position is cut out, and the valid point cloud within this area is the current positioning frame point cloud data. If the previous frame of the current positioning frame is the initial positioning frame, the obtained pose of the previous frame is the pose of the initial positioning frame.
Step S50, obtaining the estimated pose of the current positioning frame according to the pose of the previous frame, and selecting a key frame corresponding to the current positioning frame in the global key frame point cloud map according to the estimated pose;
The estimated pose of the current positioning frame is calculated from the interval time and running speed of the mobile intelligent terminal between the previous frame and the current positioning frame, together with the pose of the previous frame. When the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame; when the previous frame is the second frame, the pose of the current positioning frame is estimated from the pose of the second frame, and so on.
After positioning initialization is completed, the time interval between two adjacent frames is short (about 100 milliseconds), so the mobile intelligent terminal can be regarded as following a uniform linear motion model. Its running speed is known from the vehicle speed signal acquired over the CAN bus, and the estimated pose of the current positioning frame can be extrapolated from the running speed and the interval time.
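The uniform-linear-motion prediction can be sketched as follows, assuming a `(x, y, heading)` pose and that the terminal advances along its current heading; the patent states only the constant-velocity model, so this particular parameterization is an assumption.

```python
import math

def predict_pose(last_pose, speed, dt):
    """Constant-velocity motion model: advance the previous pose along its
    heading by speed * dt (dt is ~0.1 s at 10 Hz; speed comes from the CAN bus)."""
    x, y, heading = last_pose
    return (x + speed * dt * math.cos(heading),
            y + speed * dt * math.sin(heading),
            heading)

# Usage: 10 m/s for 0.1 s moves the pose 1 m forward along heading 0.
predicted = predict_pose((0.0, 0.0, 0.0), 10.0, 0.1)
```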
And step S60, positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame.
After the positioning initialization is completed, online positioning can be carried out. Online positioning mainly registers the current positioning frame point cloud data against the point cloud of the nearest key frame to obtain the relative pose transformation matrix of the current positioning frame, and calculates the accurate pose of the current positioning frame, namely the current position and heading, on the basis of the key frame pose. As in positioning initialization, the current positioning frame point cloud data and the key frame point cloud data are of the same order of magnitude and highly similar (because the selected key frame is the one closest to the current positioning frame on the global key frame point cloud map), so a good match can be obtained and positioning accuracy is improved. In addition, the confidence of the registration score obtained during matching is also high, so the quality of the match is known and positioning robustness is improved.
In the embodiment, one frame is selected as a current frame at intervals of a preset distance in a map area where the mobile intelligent terminal is located, and the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data are obtained; calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame; positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame; and positioning the current positioning frame according to the corresponding key frame point cloud data and pose of the current positioning frame in the global key frame point cloud map and the point cloud data of the current positioning frame to obtain the accurate pose of the current positioning frame. The constructed global key frame point cloud map has higher precision, the pose of the initial positioning frame can be calculated according to the pose of the key frame in the global key frame point cloud map, so that the initial point is not required to be given by manpower or other sensors, and the key frame and the initial positioning frame have the same order of magnitude, so that whether the positioning initialization is successful can be well judged. The accurate pose of the current positioning frame is obtained according to the global key frame point cloud map, and the accuracy of online positioning can be improved.
Further, referring to fig. 3, fig. 3 is a schematic diagram of a refining process of calculating an accurate pose of each current frame according to the initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data, and constructing a global key frame point cloud map based on each key frame, where the accurate pose is used as each key frame.
In this embodiment, based on the embodiment shown in fig. 2, in this example, in step S20, the step of calculating an accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data, and the corresponding reference frame point cloud data to serve as each key frame, and the step of constructing a global key frame point cloud map based on each key frame includes:
step S201, registering each current frame point cloud data and each reference frame point cloud data through a point cloud matching algorithm to obtain a relative pose transformation matrix of each current frame;
After each current frame point cloud data and each corresponding reference frame point cloud data are acquired, they are registered through the NDT algorithm, a grid-map-based point cloud matching algorithm. Its main idea, in a 2D scene, is to divide the reference frame point cloud map into small blocks (cells), transform the point cloud obtained by the current frame at the next moment into the reference coordinate system, and compute the probability of each point falling into the corresponding reference frame cell. Detections of the same environment point at different moments are thus transformed into the same coordinate system and fall into the same cell; minimizing the spatial position difference between the point clouds yields the relative pose transformation matrix of each current frame, denoted T. The specific steps are as follows:
1. Divide the space occupied by the reference frame point cloud data into grids or voxels of a specified size, and calculate the multi-dimensional normal distribution parameters of each grid:

Mean value of the point set in the grid:

$q = \frac{1}{n}\sum_{i=1}^{n} x_i$

Covariance matrix of the point set in the grid:

$\Sigma = \frac{1}{n}\sum_{i=1}^{n}(x_i - q)(x_i - q)^T$

2. Initialize the transformation parameters $p = (t_x, t_y, \phi)^T$ (assigned zero values or assigned using odometer data);

3. For the point cloud to be registered, convert each point into the grid of the reference frame point cloud through the transformation T:

$x'_i = T(x_i, p)$, with

$T(x, p) = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix} x + \begin{pmatrix} t_x \\ t_y \end{pmatrix}$

4. Calculate the probability of each converted point falling in its corresponding cell according to the normal distribution parameters:

$p(x'_i) \sim \exp\left(-\frac{(x'_i - q_i)^T \Sigma_i^{-1} (x'_i - q_i)}{2}\right)$

5. NDT registration score (score): calculate the sum of the probabilities of the converted points falling in their corresponding grid cells:

$\mathrm{score}(p) = \sum_i \exp\left(-\frac{(x'_i - q_i)^T \Sigma_i^{-1} (x'_i - q_i)}{2}\right)$

6. Optimize the objective function score according to the Newton optimization algorithm, i.e., find the transformation parameters p such that the value of score is maximal. The key steps of the optimization are calculating the gradient of the objective function and the Hessian matrix.

The objective function score is obtained by adding the value s of each grid. Writing $q = x'_i - q_i$, each grid contributes

$s = -\exp\left(-\frac{q^T \Sigma^{-1} q}{2}\right)$

According to the chain rule and the formulas for vector and matrix derivatives, the gradient direction of s is:

$\frac{\partial s}{\partial p_i} = q^T \Sigma^{-1} \frac{\partial q}{\partial p_i} \exp\left(-\frac{q^T \Sigma^{-1} q}{2}\right)$

where $\frac{\partial q}{\partial p_i}$, the partial derivative of q with respect to the transformation parameter $p_i$, is the i-th column of the Jacobian matrix of the transformation T:

$J_T = \begin{pmatrix} 1 & 0 & -x\sin\phi - y\cos\phi \\ 0 & 1 & x\cos\phi - y\sin\phi \end{pmatrix}$

On the basis of the above gradient calculation, the second-order partial derivatives of s with respect to the variables $p_i$, $p_j$ are solved:

$H_{ij} = -\exp\left(-\frac{q^T \Sigma^{-1} q}{2}\right)\left[\left(-q^T \Sigma^{-1} \frac{\partial q}{\partial p_i}\right)\left(-q^T \Sigma^{-1} \frac{\partial q}{\partial p_j}\right) + \left(-q^T \Sigma^{-1} \frac{\partial^2 q}{\partial p_i \partial p_j}\right) + \left(-\left(\frac{\partial q}{\partial p_j}\right)^T \Sigma^{-1} \frac{\partial q}{\partial p_i}\right)\right]$

According to the transformation equation, the second derivative of the vector q with respect to the transformation parameters p is:

$\frac{\partial^2 q}{\partial p_i \partial p_j} = \begin{cases} \begin{pmatrix} -x\cos\phi + y\sin\phi \\ -x\sin\phi - y\cos\phi \end{pmatrix} & i = j = 3 \\ \begin{pmatrix} 0 \\ 0 \end{pmatrix} & \text{otherwise} \end{cases}$

where $q_i$ is the mean value of the point set in grid i, $p(x'_i)$ is the probability of a converted point falling in its corresponding cell, x and y are the abscissa and ordinate of the point, $J_T$ is the Jacobian matrix of the transformation, s is the value contributed by each grid, and H is the matrix of second-order partial derivatives of s with respect to $p_i$ and $p_j$.

7. Jump back to step 3 and continue until the convergence condition is reached, obtaining the relative pose transformation matrix of each current frame.
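As an illustrative aside (not part of the patent), the per-cell score of step 5 can be evaluated in a few lines of Python. This sketch assumes 2D points, a single cell, and plain tuples for the mean and covariance; it computes only the score, not the Newton update.

```python
import math

def ndt_score(points, cell_mean, cell_cov, p):
    """NDT score for one cell: transform each point by T(x, p) = R(phi) x + t
    and sum exp(-0.5 * d^T Sigma^{-1} d) with d = x' - q (steps 3-5 above)."""
    tx, ty, phi = p
    (a, b), (c, d) = cell_cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # 2x2 inverse of Sigma
    score = 0.0
    for x, y in points:
        xp = math.cos(phi) * x - math.sin(phi) * y + tx  # transformed point x'
        yp = math.sin(phi) * x + math.cos(phi) * y + ty
        dx, dy = xp - cell_mean[0], yp - cell_mean[1]
        m = dx * (inv[0][0] * dx + inv[0][1] * dy) \
            + dy * (inv[1][0] * dx + inv[1][1] * dy)
        score += math.exp(-0.5 * m)
    return score

# A point lying exactly on the cell mean under the identity transform scores 1.
s = ndt_score([(1.0, 2.0)], (1.0, 2.0), ((1.0, 0.0), (0.0, 1.0)), (0.0, 0.0, 0.0))
```

In a full implementation this score would be maximized over p with the Newton iteration of step 6.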
Step S202, calculating to obtain the accurate pose of each current frame according to the initial pose and the relative pose transformation matrix of each current frame;
According to the initial pose of each current frame obtained in step S10 and the relative pose transformation matrix T of each current frame obtained in step S201, where the initial pose of the current frame comprises the position $X_{last}$ and heading angle $Heading_{last}$, the accurate pose of the current frame is:

A. Position of the current frame: $X_{cur} = T \cdot X_{last}$;

B. The heading angle error $Heading_{err}$ can be obtained from the transformation matrix, so the heading of the current frame is: $Heading_{cur} = Heading_{last} + Heading_{err}$.
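The pose update above can be sketched as follows. The patent does not fix a convention for how T is parameterized, so this sketch assumes a relative transform `(dx, dy, dheading)` expressed in the previous frame's coordinates; treat it as one plausible reading, not the definitive formula.

```python
import math

def compose_pose(last_pose, rel_transform):
    """Apply a relative transform (dx, dy, dheading) from registration to the
    initial pose: the position is moved through the transform and the heading
    error is added, mirroring X_cur = T * X_last and
    Heading_cur = Heading_last + Heading_err."""
    dx, dy, dheading = rel_transform
    x, y, heading = last_pose
    xn = x + math.cos(heading) * dx - math.sin(heading) * dy
    yn = y + math.sin(heading) * dx + math.cos(heading) * dy
    return (xn, yn, heading + dheading)

# Usage: a 1 m forward correction plus a 0.1 rad heading error.
corrected = compose_pose((0.0, 0.0, 0.0), (1.0, 0.0, 0.1))
```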
Step S203, storing the accurate pose and point cloud data of each current frame as each key frame;
in a map area where the mobile intelligent terminal is located, one current frame is stored as a key frame at intervals of 1 meter, each current frame does not need to be stored as a key frame, and each key frame is composed of the accurate pose of the current frame and point cloud data.
Step S204, the global key frame point cloud map is constructed based on all the key frames stored in the map area.
When the whole map area is traversed, all key frames are stored and constructed into a global key frame point cloud map, and the global key frame point cloud map can be output.
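The keyframe-spacing rule (store one key frame per meter traveled, discard the rest) can be sketched as below; the function and dictionary field names are assumptions for illustration.

```python
import math

def build_keyframe_map(trajectory, clouds, spacing=1.0):
    """Store one key frame (accurate pose + its point cloud) every `spacing`
    meters along the traversed trajectory; intermediate frames are discarded."""
    keyframes, last_xy = [], None
    for pose, cloud in zip(trajectory, clouds):
        x, y = pose[0], pose[1]
        if last_xy is None or math.hypot(x - last_xy[0], y - last_xy[1]) >= spacing:
            keyframes.append({"pose": pose, "cloud": cloud})
            last_xy = (x, y)
    return keyframes

# Usage: frames every 0.25 m over 2 m collapse to key frames at 0 m, 1 m, 2 m.
traj = [(i * 0.25, 0.0, 0.0) for i in range(9)]
kf_map = build_keyframe_map(traj, [[] for _ in traj])
```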
Specifically, referring to fig. 4, fig. 4 is a schematic flowchart of a process of constructing a global keyframe point cloud map according to an embodiment of the present invention. By utilizing the traditional SLAM technology, an SLAM global point cloud map is relied on, and on the premise of giving an initial position, the pose and point cloud data of the current position are obtained by utilizing an NDT point cloud matching algorithm. When the global key frame point cloud map is constructed, each frame of point cloud is not required to be stored, and one frame of point cloud data can be taken as a key frame at a certain distance. After all the key frames are stored, the whole global key frame point cloud map can be output.
Compared with the traditional global point cloud map, the global key frame point cloud map obtained on its basis consists of the poses of a series of position points together with one frame of point cloud data scanned by the laser radar at each position point; the key frame point cloud data are discrete single frames, not stitched together as in the SLAM global point cloud map. The reference frame point cloud data cut from the global point cloud map is generally 5 to 10 times the order of magnitude of a current frame, so when current frame data must be registered against global point cloud map data during online positioning, the large difference in magnitude causes large registration error and inaccurate positioning. The global key frame point cloud map solves this problem and provides a higher-quality point cloud map.
In this embodiment, the accurate pose of each current frame is calculated according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data, the accurate pose is used as each key frame, a global key frame point cloud map is constructed based on each key frame, and the obtained map has higher precision compared with the traditional global point cloud map, wherein the key frame is consistent with the current frame in order of magnitude, and the positioning precision of the mobile intelligent terminal in the online positioning process is improved.
Referring to fig. 5, fig. 5 is a schematic view of a refining process of positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain an accurate pose of the current positioning frame in the embodiment of the present invention.
In this embodiment, based on the embodiment shown in fig. 2, in this embodiment, in step S60, the step of locating the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain an accurate pose of the current positioning frame includes:
step S601, registering the point cloud data of the current positioning frame and the point cloud data of the corresponding key frame to obtain a relative pose transformation matrix of the current positioning frame;
After the global key frame point cloud map is traversed and the key frame closest to the current positioning frame is found on the map, the pose of the key frame corresponding to the current positioning frame is obtained from the key frame point cloud data and poses stored in the global key frame point cloud map; the pose of the key frame comprises the position $X_{kf}$ and heading angle $Heading_{kf}$.
The registration of the current positioning frame point cloud data with the corresponding key frame point cloud data likewise uses the grid-map-based NDT (normal distributions transform) algorithm: the global key frame point cloud map where the key frame is located is divided into small blocks (cells), the point cloud obtained by the target frame at the next moment is transformed into the reference coordinate system, and the probability of falling into the corresponding key frame cell is determined. Detections of the same environment point at different moments are transformed into the same coordinate system and fall into the same cell. Minimizing the spatial position difference between the point clouds yields the relative pose transformation matrix of the current positioning frame.
Step S602, calculating to obtain the accurate pose of the current positioning frame according to the relative pose transformation matrix of the current positioning frame and the pose of the corresponding key frame.
The accurate pose of the current positioning frame is calculated from the relative pose transformation matrix of the current positioning frame and the pose of the corresponding key frame as follows:

A. Position of the current positioning frame: $X_{cur} = T \cdot X_{kf}$;

B. The heading angle error $Heading_{err}$ can be obtained from the transformation matrix, so the heading of the current positioning frame is: $Heading_{cur} = Heading_{kf} + Heading_{err}$.
Specifically, referring to fig. 6, fig. 6 is a schematic flow chart of online positioning in the embodiment of the present invention. As shown in fig. 6, the on-line positioning can be performed after the positioning initialization is completed. The online positioning is mainly to match current positioning frame point cloud data with nearest key frame point cloud data to obtain a relative pose transformation matrix, and calculate the pose of the current positioning frame, namely the current position and the current course of the mobile intelligent terminal, on the basis of the corresponding key frame pose. As with the positioning initialization, the current positioning frame point cloud data and the corresponding key frame point cloud data have the same order of magnitude, and the corresponding key frame is the key frame closest to the current positioning frame, so that the current positioning frame point cloud data and the corresponding key frame point cloud data have high similarity, and therefore, the current positioning frame point cloud data and the corresponding key frame point cloud data can be well matched, the positioning precision is improved, and the positioning robustness is improved.
In this embodiment, a key frame closest to the current positioning frame is obtained by matching key frame point cloud data corresponding to current positioning frame point cloud data from the global key frame point cloud map, and a pose of the corresponding key frame is obtained, then, the key frame point cloud data corresponding to the current positioning frame point cloud data is registered to obtain a relative pose transformation matrix of the current positioning frame, and an accurate pose, that is, a current position and a heading, of the current positioning frame is obtained by calculating on the basis of the pose of the key frame. Because the similarity degree of the current positioning frame and the corresponding key frame is high, and the order of magnitude of the point cloud data of the current positioning frame is consistent with that of the point cloud data of the corresponding key frame, the matching can be well achieved, and the positioning precision is improved.
In addition, an embodiment of the present invention further provides an online positioning device, where the online positioning device includes:
the mobile intelligent terminal comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for selecting one frame as a current frame at intervals of a preset distance in a map area where the mobile intelligent terminal is located, and acquiring the initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data;
the construction output module is used for calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame;
the positioning initialization module is used for positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame;
the secondary acquisition module is used for acquiring point cloud data of a current positioning frame and the pose of the previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame;
the estimation selection module is used for obtaining the estimation pose of the current positioning frame according to the pose of the previous frame and selecting the key frame corresponding to the current positioning frame in the global key frame point cloud map according to the estimation pose;
and the online positioning module is used for positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame.
The implementation process of the embodiment mainly comprises two parts of drawing construction and positioning.
The mapping refers to the construction of a global key frame point cloud map, which is different from a traditional SLAM global point cloud map and consists of the poses of a series of position points and a frame of point cloud data scanned by a laser radar at the position points, wherein the key frame point cloud data is a discrete frame of point cloud, and is not spliced as the SLAM global point cloud map. The method for constructing the global key frame point cloud map mainly comprises the steps of utilizing a traditional SLAM technology, relying on the SLAM global point cloud map, and obtaining the pose and point cloud data of the current position by utilizing an NDT (normal distribution transformation) point cloud matching algorithm on the premise of giving the initial position. When a global key frame point cloud map is constructed, each frame of point cloud is not required to be stored, and one frame of point cloud data can be taken as a key frame at a certain distance (1 meter in the invention). After all the key frames are stored, the whole global key frame point cloud map can be output. The detailed process is as follows:
First, obtain the current approximate pose. The pose here refers to the position coordinates and heading angle of the unmanned vehicle (or robot) in the map. If this is the first frame, positioning initialization is required, i.e., an initial point is supplied manually or by another sensor (such as GPS). If it is not the first frame, the pose of the previous frame can be used directly: lidar positioning runs at 10 Hz, i.e., about 100 ms between frames, so at low vehicle speed the displacement between consecutive frames is small and the pose of the previous frame can serve as the current initial pose.
Second, acquire the current frame and reference frame point cloud data. The current frame point cloud is provided by the lidar sensor in real time; to improve computational efficiency, an ROI is selected and point cloud data outside the ROI is discarded. The reference frame point cloud is obtained from the current initial pose estimated in the previous step and the SLAM global point cloud map, i.e., the point cloud within the ROI centered on the current position is cut out of the global point cloud map and used as the reference frame. The ROI in the present invention is typically a 60 m x 60 m square area centered on the current position.
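The ROI filtering in this step can be sketched as follows — a minimal illustration, assuming the point cloud is an (N, 3) numpy array of x, y, z coordinates; the function name and array layout are illustrative, not taken from the patent:

```python
import numpy as np

def crop_roi(points, center, half_side=30.0):
    """Keep only points inside a square ROI centered on `center`.
    A 60 m x 60 m square (as in the patent) corresponds to half_side=30."""
    dx = np.abs(points[:, 0] - center[0])
    dy = np.abs(points[:, 1] - center[1])
    mask = (dx <= half_side) & (dy <= half_side)
    return points[mask]

cloud = np.array([[1.0, 2.0, 0.5],
                  [45.0, 0.0, 0.2],    # outside the 60 m square, discarded
                  [-10.0, 25.0, 1.0]])
roi = crop_roi(cloud, center=(0.0, 0.0))
```

The same crop is applied both to the live lidar frame and to the region cut out of the global map, so the two clouds cover a comparable area before registration.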
Third, point cloud registration. The current frame point cloud and the reference frame point cloud are registered with the NDT algorithm to obtain a relative pose transformation matrix.
Fourth, the lidar positioning result. The current accurate pose is computed from the relative pose transformation matrix obtained in the previous step and the initial pose estimated in the first step.
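The pose composition in this step — combining the estimated pose with the relative transform returned by registration — can be sketched with 2D homogeneous transforms. This is a generic sketch of the math, not code from the patent; the numeric values are illustrative:

```python
import numpy as np

def pose_to_matrix(x, y, yaw):
    """2D pose (x, y, heading angle) -> 3x3 homogeneous transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def matrix_to_pose(T):
    """3x3 homogeneous transform -> 2D pose (x, y, yaw)."""
    return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])

# Estimated (initial) pose and the relative transform produced by registration.
T_est = pose_to_matrix(10.0, 5.0, 0.0)
T_rel = pose_to_matrix(1.0, 0.0, np.pi / 2)   # illustrative NDT correction
x, y, yaw = matrix_to_pose(T_est @ T_rel)      # corrected (accurate) pose
```

The same composition is used again during online positioning, with the key frame pose in place of the estimated pose.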
Fifth, key frame output. To construct the global key frame point cloud map, not every frame needs to be stored as a key frame; in the present invention one key frame is stored every 1 meter. Each key frame consists of a pose and point cloud data.
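The 1-meter key frame spacing described above can be sketched as a simple distance gate — a minimal illustration with made-up pose tuples and cloud placeholders, not the patent's implementation:

```python
import math

def select_keyframes(poses, clouds, spacing=1.0):
    """Store one (pose, cloud) pair as a key frame whenever the vehicle has
    moved at least `spacing` meters since the last stored key frame."""
    keyframes = []
    last = None
    for pose, cloud in zip(poses, clouds):
        if last is None or math.dist(pose[:2], last[:2]) >= spacing:
            keyframes.append((pose, cloud))
            last = pose
    return keyframes

# Poses are (x, y, yaw); only frames at >= 1 m spacing become key frames.
poses = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (1.1, 0.0, 0.0),
         (1.5, 0.0, 0.0), (2.2, 0.0, 0.0)]
clouds = [f"cloud{i}" for i in range(5)]
kf = select_keyframes(poses, clouds)
```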
Sixth, global key frame point cloud map output. Once the whole map area has been traversed and all key frames stored, the global key frame point cloud map can be output.
Positioning refers to online positioning, i.e., obtaining the current pose of the unmanned vehicle (or robot) in the map; it includes a positioning initialization process.
1) Positioning initialization
When the first frame of point cloud data is obtained, the global key frame point cloud map is traversed and the current frame is registered once against every key frame with the NDT matching algorithm, yielding a registration score for each key frame; the highest score is the best match. Combining the known pose of that key frame, the current pose of the unmanned vehicle (or robot) can be computed, which completes positioning initialization. Compared with traditional SLAM positioning, no initial point needs to be supplied manually or by another sensor (such as GPS). In addition, because the current frame and a key frame contain point clouds of similar size and scale, the registration score has high confidence compared with traditionally matching the current frame against a whole global map (whose point cloud size differs greatly), so whether positioning initialization succeeded can be judged reliably.
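The exhaustive search over key frames can be sketched as below. The `register` callable stands in for a real NDT implementation (e.g. PCL's `NormalDistributionsTransform`); the toy scoring function used here, based on 1D centroid distance, exists only so the sketch runs end to end and is not how NDT scores a match:

```python
def initialize(frame_cloud, keyframes, register):
    """Register the first frame against every key frame; the key frame with
    the highest registration score is the best match.  `register` must return
    (score, relative_transform)."""
    best = max(
        ((kf, *register(frame_cloud, kf["cloud"])) for kf in keyframes),
        key=lambda t: t[1],
    )
    best_kf, score, t_rel = best
    return best_kf, score, t_rel

# Toy stand-in for NDT: score is negative centroid distance (illustrative only).
def toy_register(a, b):
    ca = sum(a) / len(a)
    cb = sum(b) / len(b)
    return -abs(ca - cb), cb - ca

keyframes = [{"pose": (0, 0, 0), "cloud": [0.0, 1.0]},
             {"pose": (5, 0, 0), "cloud": [5.0, 6.0]}]
best_kf, score, t_rel = initialize([4.8, 5.9], keyframes, toy_register)
```

Because every candidate comes back with a comparable score, a threshold on `score` can serve as the success check the text describes.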
2) On-line positioning
After positioning initialization is completed, online positioning can proceed. Online positioning mainly matches the current positioning frame point cloud against the nearest key frame point cloud to obtain a relative pose, and then computes the current pose, i.e., the current position and heading, on the basis of the key frame pose. As in positioning initialization, the current positioning frame point cloud and the key frame point cloud are similar in size and content (because the nearest key frame is used), so a good match can be obtained and positioning accuracy is improved; moreover, the registration score produced during matching has high confidence, so the quality of the match is known and positioning robustness is improved. The detailed process is as follows:
First, estimate the current approximate pose from the previous frame. After positioning initialization, because the interval between consecutive frames is short (about 100 ms), the unmanned vehicle (or robot) can be treated as a constant-velocity, straight-line model, and the pose of the current frame is estimated from the CAN bus vehicle speed, the pose of the previous frame, and the time interval between the two frames.
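The constant-velocity prediction in this step can be sketched as follows — a minimal illustration with assumed units (meters, radians, seconds); the function name is not from the patent:

```python
import math

def predict_pose(prev_pose, speed, dt):
    """Constant-velocity, straight-line prediction of the current pose from
    the previous frame's pose, the CAN bus speed, and the frame interval."""
    x, y, yaw = prev_pose
    return (x + speed * dt * math.cos(yaw),
            y + speed * dt * math.sin(yaw),
            yaw)

# 10 Hz lidar -> dt = 0.1 s; at 5 m/s the vehicle moves 0.5 m per frame.
est = predict_pose((10.0, 0.0, 0.0), speed=5.0, dt=0.1)
```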
Second, acquire the current positioning frame point cloud and the key frame point cloud with its pose. The current positioning frame point cloud is provided by the lidar sensor in real time; to improve computational efficiency, an ROI is selected and point cloud data outside the ROI is discarded. The key frame is obtained from the pose estimated in the previous step and the global key frame point cloud map, i.e., the map is traversed and the key frame closest to the estimated current pose is found. The ROI here is generally a 60 m x 60 m square area centered on the current position, i.e., both the current positioning frame point cloud and the key frame point cloud are valid point clouds within a 60 m range.
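The nearest-key-frame lookup can be sketched as a linear scan over stored poses — illustrative only; with many key frames a spatial index (e.g. a k-d tree) would be the natural optimization:

```python
import math

def nearest_keyframe(est_pose, keyframes):
    """Traverse the global key frame map and return the key frame whose
    stored pose is closest (Euclidean, in the map plane) to the estimate."""
    return min(keyframes,
               key=lambda kf: math.dist(kf["pose"][:2], est_pose[:2]))

keyframes = [{"pose": (0.0, 0.0, 0.0)},
             {"pose": (1.0, 0.0, 0.0)},
             {"pose": (2.0, 0.0, 0.0)}]
kf = nearest_keyframe((1.2, 0.1, 0.0), keyframes)
```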
Third, point cloud registration. The current positioning frame point cloud and the key frame point cloud are registered with the NDT algorithm to obtain the relative pose transformation matrix of the current positioning frame.
Fourth, the lidar positioning result. The accurate pose of the current positioning frame is computed from the relative pose transformation matrix obtained in the previous step and the pose of the key frame.
In the embodiment of the invention, a global key frame point cloud map is constructed on the basis of traditional SLAM technology, storing key frame point cloud data and the corresponding poses. During online positioning, the key frame is matched against the current positioning frame on the basis of the known key frame pose, and an accurate position is computed. This method effectively overcomes the shortcomings of traditional SLAM methods, has higher precision and robustness, and is suitable for a variety of complex scenes.
In addition, the present invention also provides a terminal device, which includes a memory, a processor and an online positioning program stored in the memory and capable of running on the processor, wherein the online positioning program, when executed by the processor, implements the steps of the online positioning method as described above.
Since the online positioning program is executed by the processor, it adopts all the technical solutions of the foregoing embodiments and therefore achieves at least all of their beneficial effects; details are not repeated here.
Furthermore, the present invention also provides a computer readable storage medium, on which an online positioning program is stored, which when executed by a processor implements the steps of the online positioning method as described above.
Since the online positioning program is executed by the processor, it adopts all the technical solutions of the foregoing embodiments and therefore achieves at least all of their beneficial effects; details are not repeated here.
Compared with the prior art, the online positioning method, the online positioning device, the terminal equipment and the storage medium provided by the embodiment of the invention select one frame as the current frame at a preset distance in the map area where the mobile intelligent terminal is located, and acquire the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data; calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame; positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame; acquiring point cloud data of a current positioning frame and a pose of a previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame; obtaining an estimated pose of the current positioning frame according to the pose of the previous frame, and selecting a key frame corresponding to the current positioning frame from the global key frame point cloud map according to the estimated pose; and positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame. On the basis of the traditional SLAM technology, a global key frame point cloud map is constructed, and key frame point cloud data and corresponding poses thereof are stored in the map. 
And during online positioning, matching the key frame with the current positioning frame on the basis of the known pose of the key frame, and calculating to obtain accurate positioning. The method can effectively overcome the defects of the traditional SLAM method, has higher precision and robustness, and can be suitable for various complex scenes.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An online positioning method, characterized in that the online positioning method comprises the following steps:
selecting a frame as a current frame at a preset distance in a map area where the mobile intelligent terminal is located, and acquiring the initial pose of each current frame, current frame point cloud data and corresponding reference frame point cloud data;
calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame;
positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame;
acquiring point cloud data of a current positioning frame and a pose of a previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame;
obtaining an estimated pose of the current positioning frame according to the pose of the previous frame, and selecting a key frame corresponding to the current positioning frame from the global key frame point cloud map according to the estimated pose;
and positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame.
2. The online positioning method as claimed in claim 1, wherein the step of calculating the accurate pose of each current frame as each key frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data, and constructing a global key frame point cloud map based on each key frame comprises:
registering each current frame point cloud data with each reference frame point cloud data through a point cloud matching algorithm to obtain a relative pose transformation matrix of each current frame;
calculating to obtain the accurate pose of each current frame according to the initial pose and the relative pose transformation matrix of each current frame;
storing the accurate pose and point cloud data of each current frame as each key frame;
and constructing the global key frame point cloud map based on each stored key frame in the map area.
3. The online positioning method according to claim 1, wherein the step of performing positioning initialization on the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame comprises:
acquiring initial positioning frame point cloud data;
traversing key frames in the global key frame point cloud map according to the initial positioning frame point cloud data, and registering the initial positioning frame with each key frame to obtain a registration score of each key frame;
selecting a key frame corresponding to the highest score in the registration scores as a matching key frame of the initial positioning frame, and acquiring the pose and point cloud data of the matching key frame;
registering the initial positioning frame point cloud data and the point cloud data of the matched key frame to obtain a relative transformation matrix;
and calculating the pose of the initial positioning frame according to the pose of the matching key frame and the relative transformation matrix.
4. The online positioning method of claim 1, wherein the step of obtaining the estimated pose of the current positioning frame according to the pose of the previous frame, and selecting the keyframe corresponding to the current positioning frame from the global keyframe point cloud map according to the estimated pose comprises:
calculating the estimated pose of the current positioning frame according to the interval time and the running speed of the mobile intelligent terminal from the last frame of the current positioning frame to the current positioning frame and the pose of the last frame;
and selecting a key frame closest to the current positioning frame according to the estimated pose on the global key frame point cloud map as a corresponding key frame of the current positioning frame, and acquiring the pose and point cloud data of the corresponding key frame.
5. The online positioning method of claim 1, wherein the step of positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame comprises:
registering the point cloud data of the current positioning frame with the point cloud data of the corresponding key frame to obtain a relative pose transformation matrix of the current positioning frame;
and calculating to obtain the accurate pose of the current positioning frame according to the relative pose conversion matrix of the current positioning frame and the pose of the corresponding key frame.
6. The online positioning method of claim 1, wherein the step of obtaining the initial pose of each current frame, the current frame point cloud data, and the corresponding reference frame point cloud data comprises:
acquiring an initial pose of each current frame;
obtaining point cloud data of each current frame through a first sensor;
and intercepting point cloud data in a preset area taking the initial pose of each current frame as the center on the global point cloud map to serve as reference frame point cloud data corresponding to each current frame.
7. The online positioning method according to claim 6, wherein the step of obtaining the initial pose of each of the current frames comprises:
judging whether the current frame is a first frame;
if the current frame is a first frame, obtaining the initial pose of the current frame through manual uploading or a second sensor;
and if the current frame is not the first frame, taking the pose of the previous frame of the current frame as the initial pose of the current frame.
8. An online positioning device, comprising:
the initial acquisition module is used for selecting one frame as a current frame every preset distance in the map area where the mobile intelligent terminal is located, and acquiring the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data;
the construction output module is used for calculating the accurate pose of each current frame according to the initial pose of each current frame, the current frame point cloud data and the corresponding reference frame point cloud data to serve as each key frame, and constructing a global key frame point cloud map based on each key frame;
the positioning initialization module is used for positioning and initializing the current position of the mobile intelligent terminal according to the global key frame point cloud map to obtain the pose of an initial positioning frame;
the secondary acquisition module is used for acquiring point cloud data of a current positioning frame and the pose of the previous frame of the current positioning frame, wherein when the previous frame of the current positioning frame is the initial positioning frame, the pose of the previous frame is the pose of the initial positioning frame;
the estimation selection module is used for obtaining the estimation pose of the current positioning frame according to the pose of the previous frame and selecting the key frame corresponding to the current positioning frame in the global key frame point cloud map according to the estimation pose;
and the online positioning module is used for positioning the current positioning frame according to the point cloud data of the current positioning frame and the corresponding key frame to obtain the accurate pose of the current positioning frame.
9. A terminal device, characterized in that the terminal device comprises a memory, a processor and an online positioning program stored on the memory and executable on the processor, the online positioning program when executed by the processor implementing the steps of the online positioning method according to any one of claims 1-7.
10. A computer-readable storage medium, having stored thereon an online positioning program, which when executed by a processor, implements the steps of the online positioning method according to any one of claims 1-7.
CN202111435639.9A 2021-11-26 2021-11-26 Online positioning method and device, terminal equipment and storage medium Pending CN114266821A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111435639.9A CN114266821A (en) 2021-11-26 2021-11-26 Online positioning method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111435639.9A CN114266821A (en) 2021-11-26 2021-11-26 Online positioning method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114266821A true CN114266821A (en) 2022-04-01

Family

ID=80825839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111435639.9A Pending CN114266821A (en) 2021-11-26 2021-11-26 Online positioning method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114266821A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229119A (en) * 2022-08-30 2023-06-06 智瞰深鉴(北京)科技有限公司 Substation inspection robot and repositioning method thereof
CN115375870A (en) * 2022-10-25 2022-11-22 杭州华橙软件技术有限公司 Loop detection optimization method, electronic equipment and computer readable storage device
CN115375870B (en) * 2022-10-25 2023-02-10 杭州华橙软件技术有限公司 Loop detection optimization method, electronic equipment and computer readable storage device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination