CN116804551A - Mobile robot navigation map generation method, equipment, medium and mobile robot - Google Patents

Mobile robot navigation map generation method, equipment, medium and mobile robot

Info

Publication number
CN116804551A
Authority
CN
China
Prior art keywords
point cloud
map
track
intermediate state
cloud map
Prior art date
Legal status
Pending
Application number
CN202310467315.6A
Other languages
Chinese (zh)
Inventor
刘学成
王琳
方思涵
Current Assignee
Wanxun Technology Shenzhen Co ltd
Original Assignee
Wanxun Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Wanxun Technology Shenzhen Co ltd filed Critical Wanxun Technology Shenzhen Co ltd
Priority to CN202310467315.6A priority Critical patent/CN116804551A/en
Publication of CN116804551A publication Critical patent/CN116804551A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiment of the application belongs to the technical field of mobile robots and provides a mobile robot navigation map generation method, equipment, medium and mobile robot. The method comprises the following steps: acquiring an original point cloud map and an original point cloud track, both of which are obtained by an acquisition device through an instant positioning and map construction module; performing parameter processing on the original point cloud map and the original point cloud track to obtain an intermediate state point cloud map and an intermediate state point cloud track, which carry characteristic parameters used for filtering ghosts; and performing ghost filtering on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule associated with the characteristic parameters, so as to obtain a target point cloud map and/or a target grid map. By this method, ghosts in the point cloud map can be removed, thereby improving the accuracy of the map.

Description

Mobile robot navigation map generation method, equipment, medium and mobile robot
Technical Field
The application belongs to the technical field of mobile robots, and particularly relates to a mobile robot navigation map generation method, mobile robot navigation map generation equipment, mobile robot navigation map generation medium and a mobile robot.
Background
With the development of automatic driving technology, three-dimensional laser simultaneous localization and mapping (SLAM) technology is becoming more widespread. 3D point cloud maps serve as key input data for SLAM positioning modules, so their accuracy and noise level play a key role in positioning accuracy.
At present, in the process of constructing a three-dimensional map through SLAM, a person is generally required to remotely control the movement of the vehicle body to record the map. During map recording, because the recording personnel are not specialized, they may stay at a certain position on the road for a long time while remotely controlling the vehicle, which leaves residual shadows (ghosts) of the personnel in the map. It is difficult for the map building module to filter out these personnel ghosts through a probability method, so ghosts of the recording personnel remain in the finally generated map, and their presence reduces the map accuracy.
In relatively narrow scenes such as office building corridors, long straight corridors, office interiors, factory passageways and indoor passageways of substations, a person remotely controlling the vehicle to record the map can largely block the cross section of the passage, which makes it even easier for the mapping algorithm to build the personnel ghost into the point cloud map. If a ghost appears at a position in the map that should belong to the road area, that position may be identified as an obstacle in the generated map, and the positioning accuracy of the map is reduced.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a method, an apparatus, a medium, and a mobile robot for generating a navigation map of a mobile robot, which are used for removing ghost points of a point cloud map, thereby improving the accuracy of the map.
A first aspect of an embodiment of the present application provides a method for generating a navigation map of a mobile robot, including:
acquiring an original point cloud map and an original point cloud track, wherein the original point cloud map and the original point cloud track are acquired by acquisition equipment through an instant positioning and map construction module;
performing parameter processing on the original point cloud map and the original point cloud track to obtain an intermediate state point cloud map and an intermediate state point cloud track, wherein the intermediate state point cloud map and the intermediate state point cloud track have characteristic parameters for filtering ghost images;
and performing ghost filtering processing on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule, so as to obtain a target point cloud map and/or a target grid map, wherein the preset ghost filtering rule is associated with the characteristic parameters.
A second aspect of an embodiment of the present application provides a mobile robot navigation map generating apparatus, including:
The acquisition module is used for acquiring an original point cloud map and an original point cloud track, wherein the original point cloud map and the original point cloud track are acquired by the acquisition equipment through the instant positioning and map construction module;
the processing module is used for carrying out parameter processing on the original point cloud map and the original point cloud track to obtain an intermediate state point cloud map and an intermediate state point cloud track, wherein the intermediate state point cloud map and the intermediate state point cloud track have characteristic parameters for filtering ghost images;
and the filtering module is used for carrying out ghost filtering processing on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule to obtain a target point cloud map and/or a target grid map, wherein the preset ghost filtering rule is associated with the characteristic parameter.
A third aspect of an embodiment of the present application provides a mobile robot for generating a target point cloud map and/or a target grid map by a method as described in the first aspect above.
A fourth aspect of an embodiment of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the first aspect as described above when executing the computer program.
A fifth aspect of an embodiment of the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements a method as described in the first aspect above.
A sixth aspect of an embodiment of the application provides a computer program product for, when run on a terminal device, causing the terminal device to perform the method of the first aspect described above.
Compared with the prior art, the embodiment of the application has the following advantages:
in the embodiment of the application, the acquisition device can acquire the original point cloud map and the original point cloud track in a preset area through the instant positioning and map construction module, and the original point cloud map may contain ghosts that affect the map precision. In order to remove the ghosts from the original point cloud map, parameter processing can be performed on the original point cloud map and the original point cloud track to obtain an intermediate state point cloud map and an intermediate state point cloud track that carry characteristic parameters for filtering ghosts. Based on these characteristic parameters and a preset ghost filtering rule, ghosts can be filtered out of the intermediate state point cloud map, so that a target point cloud map and/or a target grid map with higher precision can be obtained from the ghost-filtered point cloud map. According to the embodiment of the application, ghosts in the point cloud map can be removed automatically based on the point cloud track without manually editing them out; the method is simple, a clearer and more accurate point cloud map or grid map can be obtained from the ghost-filtered point cloud map, and the positioning precision is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following will briefly introduce the drawings that are required to be used in the embodiments or the description of the prior art.
Fig. 1 is a schematic flow chart of steps of a mobile robot navigation map generating method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a point cloud track according to an embodiment of the present application;
FIG. 3 is a two-dimensional grid map generated using the prior art, provided by an embodiment of the present application;
FIG. 4 is a two-dimensional grid map generated using the scheme of the present application, provided by an embodiment of the present application;
FIG. 5 is a flowchart illustrating steps of another method for generating a navigation map of a mobile robot according to an embodiment of the present application;
fig. 6 is a schematic diagram of an oblique point cloud map according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a point cloud map after inclination correction according to an embodiment of the present application;
fig. 8 is a view of an oblique point cloud-based map according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a point cloud map after inclination correction according to an embodiment of the present application;
FIG. 10 is a flowchart of ghost filtering according to an embodiment of the present application;
FIG. 11 is a flowchart of generating a grid map according to an embodiment of the present application;
Fig. 12 is a schematic diagram of a mobile robot navigation map generating apparatus according to an embodiment of the present application;
fig. 13 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
If the point cloud map has a ghost, when the point cloud map is used as an input of the positioning module, a large matching error may occur in a region where the ghost of personnel exists, so that final positioning accuracy is affected.
Based on the method, the application provides a mobile robot navigation map generation method for removing the ghost shadows in the point cloud map.
The technical scheme of the application is described below through specific examples.
The embodiment of the application can filter the ghost in the point cloud map based on the track, thereby improving the accuracy of the map. When the embodiment is applied, for an area where a navigation map needs to be determined, a point cloud map and a point cloud track may be acquired using an acquisition device. Based on the acquired point cloud map and the point cloud track, the terminal equipment can generate a navigation map through the method in the embodiment of the application.
The mobile robot navigation map generating method provided by the embodiment of the application can be executed by a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA) and other terminal devices, and the embodiment of the application does not limit the specific types of the terminal devices.
The mobile robot navigation map generation method in the embodiment of the application can be applied to a vehicle-mounted terminal, the vehicle-mounted terminal can obtain an original point cloud map and an original point cloud track through an instant positioning and map construction module of a vehicle, then the mobile robot navigation map generation method in the embodiment of the application is executed based on the original point cloud map and the original point cloud track to obtain a map, and a positioning module of the vehicle can perform positioning or navigation based on the obtained map.
The mobile robot navigation map generation method in the embodiment of the application can be applied to a mobile robot, the mobile robot can acquire an original point cloud map and an original point cloud track in a target area, and then the mobile robot navigation map generation method in the embodiment of the application is executed based on the original point cloud map and the original point cloud track to obtain a navigation map, and the navigation map can be used for positioning of the mobile robot.
In the following description of the scheme, the scheme is described by taking the terminal equipment as a mobile robot.
Referring to fig. 1, a schematic step flow diagram of a mobile robot navigation map generating method according to an embodiment of the present application may specifically include the following steps:
s101, an original point cloud map and an original point cloud track are obtained, and the original point cloud map and the original point cloud track are obtained by acquisition equipment through an instant positioning and map construction module.
The method in the present embodiment may be applied to a mobile robot, which may generate a navigation map based on the method in the present embodiment.
The acquisition equipment can be mobile robots or vehicles and other equipment, the acquisition equipment can walk in a preset area to acquire data, and the acquired data can be output through the instant positioning and map construction module. The real-time positioning and map construction module can output an original point cloud map of a preset area, the original point cloud map can comprise a plurality of point cloud points, and different point cloud points can mutually outline the forms of buildings or other objects in the preset area. The acquisition equipment can travel along a certain track when acquiring data in a preset area, the acquisition equipment can record the coordinates of the central position of the acquisition equipment at intervals of preset time in the traveling process, each coordinate can be used as a track point, each track point recorded by the acquisition equipment can form an original point cloud track, and the original point cloud track is used for representing the traveling track of the acquisition equipment in the data acquisition process.
In one possible implementation manner, the collecting device may include an immediate positioning and map building module, the collecting device may receive the control signal and walk in a preset area in response to the control signal, and during the walk, the collecting device may collect data through other data collecting devices such as a radar device, a camera device, a sensing device, and the like, and input the data to the immediate positioning and map building module, and the immediate positioning and map building module may output a point cloud map and a point cloud track based on the input data and a stored algorithm.
In one possible implementation manner, the mobile robot may acquire the original point cloud map data and the original point cloud track data through its own instant positioning and map construction module, or receive the original point cloud map data and the original point cloud track data acquired by other devices.
It should be noted that, the original point cloud map and the original point cloud track are generally acquired by the same acquisition device under the same space-time condition, and the original point cloud track may be inside the original point cloud map, that is, the original point cloud track and the original point cloud map may be in the same coordinate system.
S102, performing parameter processing on the original point cloud map and the original point cloud track to obtain an intermediate state point cloud map and an intermediate state point cloud track, wherein the intermediate state point cloud map and the intermediate state point cloud track have characteristic parameters for filtering ghost images.
The characteristic parameters are used for carrying out ghost filtering on the point cloud map. The characteristic parameter of the intermediate state point cloud map may include a local volume density value of each point cloud point in the intermediate state point cloud map. The characteristic parameters of the intermediate state point cloud trajectory may include three-dimensional poses of respective trajectory points in the intermediate state point cloud trajectory.
In the original point cloud map, a plurality of point cloud points can form buildings or other objects in a preset area, and the ghost shadows in the point cloud map also exist in the form of the point cloud points. For personnel ghost, the density of the point cloud points is generally small, while the density of buildings or other fixing devices in the area is generally large, so that ghost filtering can be performed based on the local volume density of the point cloud points. Based on the above, in the embodiment of the application, the mobile robot can perform parameter processing on the original point cloud points, calculate the local volume density of each point cloud point, and add the local volume density value as a characteristic parameter to the original point cloud map to obtain the intermediate point cloud map.
The local volumetric density of a point cloud point may be determined using the number of point clouds within the vicinity of the point cloud point. For example, a fixed obstacle such as a wall is generally static and may be scanned multiple times, so that the local volume density value of the corresponding point cloud point is larger, while a person belongs to dynamic and is scanned fewer times, and the local volume density value of the point cloud point corresponding to the afterimage of the person is smaller.
In one possible implementation, the mobile robot may calculate the local volume density value corresponding to each point cloud point in the original point cloud map by the following formula:
ρ = N / (4/3 * π * R³)
R = (W + L) / 2
wherein ρ is the local volume density value of the point cloud point, R is the radius of the sphere centered on the point cloud point, W is the length of the acquisition device, L is the width of the acquisition device, and N is the number of point cloud points in the sphere centered on the point cloud point.
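As an illustrative sketch only (not part of the patent text), the local volume density defined above could be computed with a KD-tree radius search; the use of NumPy/SciPy and the parameter names veh_width and veh_length are assumptions made here for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_volume_density(points: np.ndarray, veh_width: float, veh_length: float) -> np.ndarray:
    """Compute rho = N / (4/3 * pi * R^3) for every point in an (M, 3) point array."""
    radius = (veh_width + veh_length) / 2.0            # R = (W + L) / 2
    tree = cKDTree(points)
    # Number of points (including the query point itself) inside the sphere of radius R.
    counts = np.array([len(idx) for idx in tree.query_ball_point(points, r=radius)])
    volume = (4.0 / 3.0) * np.pi * radius ** 3
    return counts / volume
```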
The original point cloud map is acquired by a worker controlling the acquisition device, and the worker may stand on the travel track of the acquisition device while the point cloud map is being acquired, so that personnel ghosts appear in the point cloud map. However, any area that the acquisition device can pass through should be free of point cloud points. Thus, the personnel ghosts in the point cloud map can be determined based on the positions in the point cloud map that the acquisition device may have passed through. The position of the acquisition device in the point cloud map can be determined from the three-dimensional coordinates of the corresponding track points. Therefore, the mobile robot can perform parameter processing on the original point cloud track and add a three-dimensional pose to each track point in the original point cloud track as a characteristic parameter, thereby obtaining the intermediate state point cloud track.
It should be noted that the step in which the mobile robot processes the parameters of the original point cloud map and the step in which it processes the original point cloud track may be performed in parallel. The intermediate state point cloud track represents the travel track of the acquisition device, and therefore the area that the acquisition device has passed through can be determined based on the intermediate state point cloud track.
When ghost filtering is performed based on the areas that the acquisition device can pass through in the point cloud map, all such areas need to be determined, and they are determined based on the track points in the intermediate state point cloud track. If the distance between two adjacent track points is smaller than or equal to the length of the acquisition device, the acquisition-device regions determined from these two track points are connected, so that the whole area the acquisition device passes through is covered. However, if the distance between two adjacent track points is greater than the length of the acquisition device, the acquisition-device regions determined from these two track points cannot be connected, so some areas that the acquisition device passed through may go unrecognized.
Therefore, whether interpolation processing is needed for the intermediate state point cloud track can be determined based on the distance between adjacent track points in the intermediate state point cloud track. The mobile robot can determine the distance between two adjacent track points in the intermediate state point cloud track; if this distance is larger than the length of the acquisition device, interpolation processing can be carried out on the intermediate state point cloud track, that is, new track points are inserted between the two track points whose distance is larger than the length of the acquisition device. The purpose of the interpolation is to make the distance between adjacent track points of the intermediate state point cloud track smaller than or equal to the length of the acquisition device, so that the area the acquisition device can pass through in the point cloud map can be identified based on the track points. The distance between two connected track points in the intermediate state point cloud track can be determined based on the three-dimensional poses of the two track points.
Fig. 2 is a schematic diagram of a point cloud track provided by an embodiment of the present application, where the black points in fig. 2, such as a, b, and d, are track points acquired by the acquisition device. Since the distance between track point a and track point b is short, smaller than the length of the acquisition device, no interpolation is required between them. The distance between track point b and track point c is longer than the length of the acquisition device, so interpolation between track point b and track point c is required. The open triangles in fig. 2 are the newly inserted track points.
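A minimal sketch of this interpolation step is given below, assuming each track point is stored as a NumPy row whose first three entries are its (x, y, z) coordinates; plain linear interpolation is used, which is an assumption for illustration rather than the patent's prescribed method:

```python
import numpy as np

def densify_trajectory(track: np.ndarray, max_gap: float) -> np.ndarray:
    """Insert extra track points so that consecutive points are no farther apart
    than max_gap (for example, the length of the acquisition device)."""
    out = [track[0]]
    for prev, curr in zip(track[:-1], track[1:]):
        gap = np.linalg.norm(curr[:3] - prev[:3])
        if gap > max_gap:
            n_insert = int(np.ceil(gap / max_gap)) - 1
            for k in range(1, n_insert + 1):
                # Linearly interpolate all attributes between the two original points.
                out.append(prev + (curr - prev) * k / (n_insert + 1))
        out.append(curr)
    return np.asarray(out)
```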
And S103, performing ghost filtering processing on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule, so as to obtain a target point cloud map and/or a target grid map, wherein the preset ghost filtering rule is associated with the characteristic parameter.
Based on the characteristic parameters and a preset ghost filtering rule, ghost filtering can be performed on the intermediate state point cloud map.
The track points in the intermediate state point cloud track have three-dimensional poses, and the position corresponding to each track point is the central position of the acquisition device, so the stereoscopic frame corresponding to the acquisition device can be determined based on the three-dimensional pose of the track point and the model parameters of the acquisition device. For example, the length, width and height of the acquisition device can be determined based on its model parameters, and a stereoscopic frame with the same length, width and height as the acquisition device is determined according to the three-dimensional pose of the track point, taking the track point as the center.
Referring to fig. 2, a corresponding stereoscopic frame may be determined for each original track point and each newly inserted track point. Fig. 2 only shows the case where the stereoscopic frames determined from two adjacent track points touch edge to edge; the stereoscopic frames determined from two adjacent track points may also partially overlap.
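The sketch below illustrates how such a stereoscopic frame could be constructed, under the assumption that each track point carries an (x, y, z, roll, pitch, yaw) pose and that the device dimensions come from its model parameters; this is an illustration, not the patent's implementation:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def vehicle_box_corners(track_point, length, width, height):
    """Return the 8 corners of the cuboid frame centred on a track point.
    track_point = (x, y, z, roll, pitch, yaw); the dimensions are those of
    the acquisition device."""
    x, y, z, roll, pitch, yaw = track_point
    half = np.array([length, width, height]) / 2.0
    # Corner offsets in the box's own frame, before rotation.
    signs = np.array([[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    rot = R.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    return np.array([x, y, z]) + (signs * half) @ rot.T
```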
The mobile robot can carry out ghost filtering on the intermediate state point cloud map according to the three-dimensional frame and the local volume density value, so as to obtain the target point cloud map.
And the region corresponding to the stereoscopic frame is the region through which the acquisition equipment can pass in the point cloud map. The area through which the acquisition equipment can pass is a road in the map and is not recognized as an obstacle, so that the point cloud point in the area corresponding to the stereoscopic frame can be determined as a ghost, and the point cloud of the area corresponding to the stereoscopic frame can be filtered from the intermediate point cloud map.
In addition, ghost points may also exist in areas within a certain range around the acquisition device. The vicinity of the acquisition device's travel area is in general also a road area, but there are also cases where the acquisition device travels close to a building; therefore, ghost points near the travel area of the acquisition device should be determined based on the local volume density value. The mobile robot can determine the point cloud points whose local volume density value is smaller than a preset threshold value within the adjacent range of the stereoscopic frame as ghost points.
For example, the mobile robot may determine each target point cloud point having a distance from the outer surface of the stereoscopic frame less than a preset length, and then determine a point cloud point having a local volume density value less than a preset threshold value among the target point cloud points as the ghost point.
The mobile robot may further determine a circumscribed sphere corresponding to the stereoscopic frame, then determine the target point cloud points that lie inside the circumscribed sphere but not inside the stereoscopic frame, and determine the point cloud points among them whose local volume density value is smaller than a preset threshold value as ghost points. Namely:
the mobile robot may determine, as ghost points, point cloud points in the intermediate state point cloud map that satisfy the following conditions:
where ρ_n is the local volume density of the point cloud point, ρ_lmin is the volume density threshold, R_n is the distance between the point cloud point and the center of the circumscribed sphere corresponding to the stereoscopic frame, R_sbox is a preset radius threshold, and R_scan is the radius of the circumscribed sphere corresponding to the stereoscopic frame.
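As an illustrative sketch only, one plausible reading of this filtering removes every point inside the oriented vehicle box and every low-density point in its neighbourhood; the neighbourhood radius r_near and the box representation are assumptions made here, and the exact inequality is the one given by the patent's formula:

```python
import numpy as np

def filter_ghosts(points, densities, box_center, box_rot, box_half, rho_min, r_near):
    """Drop points inside an oriented vehicle box, plus low-density points near it.
    box_rot is the 3x3 rotation of the box (columns = box axes in world frame),
    box_half its half-extents; r_near is an assumed neighbourhood radius."""
    local = (points - box_center) @ box_rot            # express points in the box frame
    inside_box = np.all(np.abs(local) <= box_half, axis=1)
    r_scan = np.linalg.norm(box_half)                  # radius of the circumscribed sphere
    dist = np.linalg.norm(points - box_center, axis=1)
    near_box = (dist <= r_scan + r_near) & ~inside_box
    ghost = inside_box | (near_box & (densities < rho_min))
    return points[~ghost], densities[~ghost]
```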
After filtering out the ghost points in and around the stereoscopic frame, the mobile robot obtains a target point cloud map with the ghosts filtered out. Based on the target point cloud map, the mobile robot can perform positioning, navigation and the like.
In one possible implementation, the mobile robot may also determine a target grid map based on the target point cloud map. The target grid map may be a two-dimensional map.
The mobile robot can intercept the target point cloud within a preset height range from the target point cloud map. The preset height can be a passing height required by a general mobile robot or a vehicle and the like. For example, where the navigation map is used for vehicle location navigation, the intercepted height range may be the highest height from the ground to the vehicle. The target point cloud within the preset height range is intercepted from the target point cloud map, so that the influence of the point cloud at a higher position in the point cloud map on the accuracy of the grid map can be avoided. For example, there is a case where buildings at high positions on both sides of a road are protruded, that is, the distance between the buildings at high positions is short, and the distance between the buildings at low positions is long, and if a point cloud map is projected directly onto a two-dimensional plane, the identified road is narrowed. Therefore, in order to improve the accuracy of the two-dimensional grid map, the point cloud within the height range through which the mobile robot or the vehicle can pass may be intercepted from the target point cloud map.
Based on the three-dimensional pose of each track point in the intermediate state point cloud track, the height value corresponding to each track point can be determined, and the average height value of the track points can then be calculated. The height corresponding to a track point is the height of the central position of the acquisition device above the ground, from which the height of the ground itself can be determined.
If the horizontal plane in the target point cloud is parallel to the actual horizontal plane, that is, the point cloud is not inclined, the height range to intercept can be determined directly from the target point cloud according to the average height of the track points. For example, if the height of the central position of the acquisition device above the ground is h, the average height of the track points is x, and the passable height limit in the current area is y, the point cloud to be intercepted is the height range (x-h, x-h+y).
If the horizontal plane in the target point cloud is not parallel to the actual horizontal plane, that is, the point cloud has a certain inclination, the minimum height value and the maximum height value required to be cut can be determined according to the height range and the radar calibration parameter required to be cut, and then the point cloud in the minimum height value and the maximum height value range is cut to be used as the target point cloud. The radar calibration parameters can be used for calibrating the corresponding relation between the corresponding height in the point cloud map and the actual scene height.
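The height-range interception described above could look like the following sketch, where sensor_height plays the role of h and pass_height the role of y in the example; the function and parameter names are assumptions made for illustration:

```python
import numpy as np

def clip_by_height(points, track_z_mean, sensor_height, pass_height):
    """Keep only points between the estimated ground level and the passable height.
    The ground level is taken as the average track z minus the sensor mounting
    height, matching the (x - h, x - h + y) range in the text."""
    ground_z = track_z_mean - sensor_height
    mask = (points[:, 2] >= ground_z) & (points[:, 2] <= ground_z + pass_height)
    return points[mask]
```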
After determining the target point cloud, the mobile robot may project the target point cloud onto a two-dimensional plane, resulting in a projected planar point cloud in which each point of the target point cloud has a corresponding point. In the two-dimensional plane, the mobile robot may divide the projected planar point cloud into a plurality of square grids at a preset resolution. Based on whether there are point cloud points in each grid, the mobile robot determines whether to mark the grid as an obstacle: if there are point cloud points in a grid, the mobile robot may mark the grid as an obstacle; if there are none, the grid is not marked. After marking the grids, the mobile robot may perform an image closing operation on the marked grids, thereby generating the grid map.
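A sketch of the projection-and-marking step follows, assuming a NumPy point array and a user-chosen resolution; the origin handling and data types are illustrative choices rather than the patent's specification:

```python
import numpy as np

def project_to_grid(points: np.ndarray, resolution: float):
    """Project a height-clipped point cloud onto the XOY plane and mark occupied
    cells. Returns a boolean occupancy grid and the grid origin (min x, min y)."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / resolution).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True                  # any point in a cell marks an obstacle
    return grid, origin
```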
According to the application, the ghost in the point cloud map can be filtered, so that the accuracy of the point cloud map is improved. Based on the point cloud map after the ghost is filtered, a more accurate two-dimensional grid map can be obtained.
In order to intuitively exhibit the technical effects in the present embodiment, the present embodiment provides a grid map established using the prior art and a grid map established using the scheme of the present application, referring to fig. 3 and 4. Fig. 3 is a diagram of generating a two-dimensional grid map using the prior art according to an embodiment of the present application, and fig. 4 is a diagram of generating a two-dimensional grid map using the scheme according to the present application according to an embodiment of the present application. As can be seen from comparing fig. 3 and fig. 4, the two-dimensional grid map generated by using the scheme of the present application is clearer and more accurate.
Referring to fig. 5, a schematic step flow diagram of another mobile robot navigation map generating method according to an embodiment of the present application is shown, which may specifically include the following steps:
s501, an original point cloud map and an original point cloud track are obtained, and the original point cloud map and the original point cloud track are obtained by acquisition equipment through an instant positioning and map construction module.
S502, performing principal component analysis and calculation on the original point cloud map to obtain a pose matrix of the original point cloud map, wherein the pose matrix is used for representing the pose of the original point cloud map relative to the ground.
There may be some tilt in the original point cloud map. For example, if the ground of the preset area is a horizontal ground, the heights of the objects at the same height in the actual scene and the corresponding point cloud points in the original point cloud map should be the same, however, the heights of the objects at the same height in the actual scene and the corresponding point cloud points in the original point cloud map are different due to errors in the acquisition or data processing process, which indicates that the original point cloud map has a certain inclination. Due to certain errors of the instant positioning and constructing module of the acquisition device or certain flaws of the instant positioning and constructing scheme provided by the supplier, the original point cloud map output by the instant positioning and constructing module may have certain inclination. The inclination of the original point cloud map may cause a large attitude error of an object in the point cloud map, so that the calculation of the position of an obstacle is wrong or inaccurate, and the position error exists during positioning, thereby affecting the positioning precision and the path tracking effect.
For example, if the ground of the area where the navigation map is to be generated is a horizontal plane and the ground in the point cloud map is an inclined plane, this may cause the building in the point cloud map to be in an inclined state, and the point cloud map is directly used for navigation, due to the existence of the inclined angle, a certain deviation exists between the road area in the map and the road area in the actual situation, which may cause a navigation error. Generating a two-dimensional grid map based on the tilted point cloud map is also necessarily inaccurate. Thus, the mobile robot can perform tilt correction on the original point cloud map before generating the map based on the original point cloud map.
When inclination correction is to be performed, it is first necessary to determine whether the original point cloud map is inclined. The mobile robot may determine whether the original point cloud map is tilted based on the pose of the original point cloud map.
For example, the mobile robot may perform principal component analysis calculation based on the original point cloud map, and then obtain a pose matrix of the original point cloud map, where the pose matrix is used to represent a pose of the original point cloud map relative to the ground.
S503, determining whether the original point cloud map is inclined or not based on the pose matrix.
The pose matrix may include inclination angle information, which may characterize an angle of the original point cloud map with respect to a horizontal plane of the corresponding region. The mobile robot can determine whether the original point cloud map is tilted with respect to the ground through the tilt angle information.
And S504, if the original point cloud map is inclined, performing inclination correction on the original point cloud map and the original point cloud track based on the pose matrix.
The mobile robot can correct the inclination of the original point cloud map when the inclination angle exists between the original point cloud map and the ground.
The original point cloud track and the original point cloud map are matched with each other, and the original point cloud track can be arranged in the original point cloud map, so that when the original point cloud map is subjected to inclination correction, the mobile robot can also carry out inclination correction on the original point cloud track, and the original point cloud map subjected to inclination correction is still matched with the original point cloud track.
In an exemplary embodiment, after the mobile robot determines the inclination angle of the original point cloud map relative to the ground according to the pose matrix, the original point cloud map and the original point cloud track may be rotated by the corresponding angle with the coordinate system of the pose matrix as reference, so that the ground in the original point cloud map and original point cloud track becomes parallel to the ground in the actual scene.
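One common way to realize such a PCA-based tilt correction is sketched below, under the assumption that the ground plane dominates the point cloud so that the smallest principal component approximates the ground normal, and that the track is an (M, 3) array of positions; this is illustrative and not the patent's exact algorithm:

```python
import numpy as np

def tilt_correction(points: np.ndarray, track: np.ndarray):
    """Rotate the map and the track so that the estimated ground normal aligns
    with the Z axis (the ground becomes horizontal)."""
    centre = points.mean(axis=0)
    # Right singular vector with the smallest singular value ~ ground normal.
    _, _, vt = np.linalg.svd(points - centre, full_matrices=False)
    normal = vt[-1] if vt[-1][2] >= 0 else -vt[-1]
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    s, c = np.linalg.norm(v), float(np.dot(normal, z))
    if s < 1e-9:
        rot = np.eye(3)                                # already level
    else:
        k = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]]) / s
        rot = np.eye(3) + s * k + (1 - c) * (k @ k)    # Rodrigues' rotation formula
    correct = lambda p: (p - centre) @ rot.T + centre
    return correct(points), correct(track)
```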
To better illustrate the technical effects of S502-S504, embodiments of the present application show the map before and after tilt correction through FIGS. 6-9. Fig. 6 is a schematic diagram of an oblique point cloud map according to an embodiment of the present application; FIG. 7 is a schematic diagram of a point cloud map after inclination correction according to an embodiment of the present application; fig. 8 is a view of an oblique point cloud-based map according to an embodiment of the present application; fig. 9 is a schematic diagram of a point cloud map after inclination correction according to an embodiment of the present application. In fig. 6-9, the black point at the center of the circle can be used to represent the origin of coordinates in the coordinate system, and there are straight lines in fig. 6-9, a first broken line and a second broken line, wherein the first broken line is a line composed of uniform small line segments, and the second broken line is a line in which two points exist between two continuous small line segments. That is, in fig. 6 and 8, the horizontal line is a first broken line, and the vertical line is a second broken line; in fig. 7 and 9, the circular curve is a first broken line and the vertical line is a second broken line. There may be planes in the coordinate system represented by X, Y and Z-axes, respectively, with straight lines, first dashed lines and second dashed lines each representing one of the planes. In one possible implementation, the plane formed by the straight line, the second dashed line and the origin may be used to represent the plane in which the ground lies, i.e. the XOY plane.
The point clouds in fig. 6 and 8 correspond to the point clouds seen in longitudinal section. As can be seen from fig. 6, the point cloud has a certain tilt with respect to the XOY plane. The map corresponding to the point cloud in fig. 6 is fig. 7, which corresponds to the ground as seen from above. Because the point cloud in fig. 6 is tilted, mapping it onto the XOY plane necessarily causes a deviation in the point cloud map of fig. 7. Fig. 8 is the point cloud after tilt correction. The point cloud in fig. 8 appears longer than in fig. 6 because the point cloud in fig. 6 was originally tilted, and after tilt correction the point cloud parallel to the horizontal plane naturally appears relatively longer. As can be seen from fig. 8, after tilt correction the point cloud is substantially parallel to the XOY plane. The map obtained from the point cloud in fig. 8 is fig. 9. Comparing fig. 9 and fig. 7, it can be seen that tilting of the point cloud leads to a deviation of the map: the map in fig. 7 has a significant positional deviation from the origin of the coordinate system relative to the map in fig. 9. Therefore, in the embodiment of the application, before the target map is determined, the point cloud can be tilt-corrected according to the steps in S502-S504, so that the generated map is more accurate.
S505, performing parameter processing on the original point cloud map after the inclination correction and the original point cloud track after the inclination correction to obtain an intermediate state point cloud map and an intermediate state point cloud track, wherein the intermediate state point cloud map and the intermediate state point cloud track have characteristic parameters for filtering residual shadows.
It should be noted that, if there is no inclination between the original point cloud map and the original point cloud track, this step may be: and carrying out parameter processing on the original point cloud map and the original point cloud track to obtain an intermediate state point cloud map and an intermediate state point cloud track, wherein the intermediate state point cloud map and the intermediate state point cloud track have characteristic parameters for filtering ghost images.
S506, performing ghost filtering processing on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule, so as to obtain a target point cloud map, wherein the preset ghost filtering rule is associated with the characteristic parameter.
S505-S506 in this embodiment are similar to S102-S103 in the previous embodiment, and reference may be made to each other, which is not repeated here.
S507, intercepting target point cloud in a preset height range from the target point cloud map.
Because the point cloud map is subjected to inclination correction when the ghost in the original point cloud map is filtered, when the mobile robot generates the grid map based on the point cloud map after the ghost is filtered, the target point cloud in the preset height range can be directly intercepted from the target point cloud map, so that the grid map is generated based on the target point cloud.
And S508, generating a grid map of the area corresponding to the original point cloud map based on the target point cloud.
In this embodiment, before filtering the ghost in the original point cloud map, the mobile robot may perform tilt correction on the original point cloud map and the original point cloud track with tilt, so that the original point cloud map and the original point cloud track may be matched with the actual ground, so that the generated target point cloud and the target grid map are matched with the actual ground, and the positioning accuracy of the map is improved.
It should be noted that, the sequence number of each step in the above embodiment does not mean the sequence of execution sequence, and the execution sequence of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiment of the present application in any way.
According to the method for generating the navigation map of the mobile robot, the ghosts in the point cloud map can be filtered out based on the point cloud track, so that a ghost-filtered point cloud map can be obtained, and a grid map can then be obtained from the ghost-filtered point cloud map. In order to better explain the scheme of the application, it is described below as two flows: filtering the personnel ghosts out of the point cloud map, and generating a grid map based on the ghost-filtered point cloud map.
Fig. 10 is a schematic flow chart of filtering personnel ghosts out of a point cloud map according to an embodiment of the present application. As shown in fig. 10, filtering personnel ghosts out of the point cloud map may include the following steps:
S1001, calculating the overall PCA of the point cloud map to obtain the pose matrix of the point cloud map.
The point cloud map and the point cloud track can be acquired in a preset area by personnel controlling the vehicle, and can be obtained through an instant positioning and constructing module, and each point cloud point and track point in the acquired point cloud map can comprise position coordinates (x, y, z). Ideally, the point clouds collected should all be horizontal point clouds, but there may be some tilt of the point clouds with respect to the horizontal plane in practice. For the original point cloud map obtained by the instant positioning and constructing module, whether the point cloud map is inclined or not can be determined first. The mobile robot can calculate the overall PCA of the point cloud map first, so as to calculate the 4x4 pose matrix M of the point cloud map.
Because the point cloud track is included in the point cloud map and their positional relation matches, whether the point cloud track is inclined can be determined simply by determining whether the point cloud map is inclined.
S1002, correcting the point cloud map and the point cloud track using the pose matrix.
If the point cloud map is determined to be tilted based on the pose matrix, the mobile robot can use the pose matrix M to perform tilt correction on the point cloud map and the point cloud track, thereby obtaining a point cloud map and a point cloud track with the tilt eliminated.
Of course, if it is determined that the point cloud map is not tilted based on the pose matrix, the steps S1001 and S1002 do not need to be performed, and the subsequent steps may be directly performed.
S1003, calculating the volume density of the point cloud map, and adding a local volume density value item for each point cloud point in the point cloud map.
The local volume density value of each point cloud point may be used to characterize the number of point cloud points within a preset range of that point. For a real obstacle, because the object is fixed, many measuring points on its surface are obtained during the mapping scan, so the density of the corresponding point cloud is high; for false obstacles such as a moving human body, the position is not fixed and the temporarily occupied areas receive few sampling points during the mapping scan, so the density of the corresponding point cloud is low. Therefore, in the embodiment of the application, the volume density of the point cloud map can be calculated, and a local volume density value item can be added to each point cloud point in the point cloud map, so that ghost points can be identified based on the volume density.
The calculation formula of the local volume density of the point cloud point is already described in the foregoing embodiments, and is not described in detail herein.
After adding the local volume density value term, each point cloud point has 4 parameter values in total (x, y, z, ρ). Subsequent ghost filtering can be performed based on this local point cloud density term. Illustratively, the volume density of a wall point cloud is generally greater than the point cloud density of a ghost, because a person is dynamic whereas a wall is static; the static wall is scanned many times, so the density of its point cloud is relatively high.
S1004, calculating the pose value of each track point from its front and rear adjacent track points, and adding the pose value to the stored data structure.
The point cloud track reflects the travel track of the acquisition device. The three-dimensional pose value of each track point can be calculated from its adjacent track points in the point cloud track and added to its stored data structure, so that each track point has 10 attribute values in total. The three-dimensional pose value of a track point may include coordinate position information and angle information, where roll, pitch and yaw characterize the angle information. Based on the completed three-dimensional pose, the vehicle body space occupied by the acquisition device at that point in the point cloud map can be calculated.
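A simplified sketch of deriving a track point's pose from its neighbours is given below; taking the heading along the direction from the previous to the next track point and setting roll to zero is a simplification assumed here, not the patent's exact computation:

```python
import numpy as np

def track_point_angles(prev_pt, next_pt):
    """Approximate roll, pitch and yaw of a track point from its front and rear
    neighbours (positions as 3-vectors)."""
    d = np.asarray(next_pt, dtype=float) - np.asarray(prev_pt, dtype=float)
    yaw = np.arctan2(d[1], d[0])
    pitch = np.arctan2(d[2], np.linalg.norm(d[:2]))
    roll = 0.0                                         # assumed zero in this sketch
    return roll, pitch, yaw
```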
S1005, interpolating the track points with larger spacing so that the distance between two adjacent track points is smaller than the length of the vehicle body.
In this embodiment, ghost filtering is performed based on the vehicle body space, so all the vehicle body spaces in the point cloud map need to be determined, and in the point cloud map the corresponding vehicle body spaces should be continuous. However, when the distance between track points is too long, the determined vehicle body spaces are not continuous and gaps without vehicle body space appear, and whether the point cloud in such a gap area is a ghost cannot be identified. Therefore, the track points can be interpolated so that the distance between adjacent track points in the point cloud track is smaller than or equal to the length of the acquisition device.
The mobile robot can interpolate the track points with larger distance, so that the cuboid of the vehicle body between every two track points can be connected.
S1006, traversing the corrected track points, and eliminating point cloud points in the cuboid frame of the vehicle body corresponding to the track points.
The vehicle body space is a position the vehicle can pass through, that is, a position that should not be recognized as an obstacle in the area, so all ghost points within the vehicle body space can be eliminated.
For the point cloud track after interpolation processing, each track point can be traversed, and the cuboid frame of the space occupied by the acquisition device at that track point can be calculated from the three-dimensional pose of the track point combined with the model of the acquisition device. For each track point, the corresponding cuboid frame is determined and the ghosts inside it are removed. The vehicle body space may include the vehicle body surface.
S1007, traversing the corrected track points, and removing the point cloud points in the adjacent range of the vehicle body corresponding to each track point whose local volume density is smaller than a threshold value.
In addition, personnel ghosts may exist in the vicinity of the cuboid frame, and the ghost points near the cuboid frame can be removed based on the local point cloud density value. The steps for determining the ghost points are described in the foregoing embodiments and are not repeated here.
After eliminating the residual shadows of personnel in and near the cuboid frame corresponding to the acquisition equipment, the processed point cloud map and the processed point cloud track can be output. The positioning module can perform positioning based on the processed point cloud map and the point cloud track.
According to the embodiment of the application, the space corresponding to the acquisition device in the point cloud map is determined through the point cloud track, so that the personnel ghosts in the three-dimensional point cloud map are filtered out, the probability of mismatching in the personnel areas of the original point cloud map is reduced, and the positioning precision of a positioning module that uses the three-dimensional point cloud map as input is improved.
Fig. 11 is a schematic flow chart of generating a grid map based on a point cloud map after filtering point residues according to an embodiment of the present application. Referring to fig. 11, the process of generating a grid map based on the point cloud map after filtering out the point residues may include:
S1101, calculating a track point cloud average z value using the corrected track point cloud.
A track point corresponds to the central position of the acquisition device, and the average z value of the corrected track point cloud is therefore approximately the height of that central position. The average z value reflects the average height of the acquisition device as it travels within the area. The acquired point cloud map may extend very high, but the higher point cloud is not needed for map generation; only the point cloud within the passable height range is needed, so the average z value of the track point cloud is calculated from the track points.
S1102, using the point cloud map with personnel ghosts filtered out, calculating the maximum and minimum cutting Z values according to the average Z value and the radar calibration parameters, and cutting the point cloud within that Z value range.
After the average z value is determined, the point cloud can be intercepted from the point cloud map after the ghost is filtered according to the average z value.
If inclination correction was performed when the ghosts were filtered out, so that the inclined point cloud map has been corrected by the transformation matrix to substantially coincide with the XoY plane (or the map was not inclined in the first place), the corresponding interception height Z value can be determined directly from the average z value, and the point cloud below that height can then be intercepted from the ghost-filtered point cloud map.
If the point cloud map is inclined but inclination correction has not been performed, the maximum Z value and the minimum Z value can be calculated according to the average z value and the radar calibration parameters, and the point cloud within this Z value range is cut out, so that a travel-height sub-point cloud is generated.
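As a non-authoritative sketch of the height interception, assuming the ghost-filtered map and the corrected track are numpy arrays with z in the third column; the fixed offsets stand in for values that would in practice be derived from the radar calibration parameters:

```python
import numpy as np

def crop_travel_height(points, track_points, z_below=0.2, z_above=0.5):
    """Keep only the point cloud inside a passable height band around the average track height."""
    z_mean = track_points[:, 2].mean()                      # average z of the corrected track
    z_min, z_max = z_mean - z_below, z_mean + z_above       # cutting minimum / maximum Z values
    mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    return points[mask]
```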
S1103, projecting the cut point cloud onto the XOY plane, and marking each grid cell that contains point cloud points as an obstacle according to the grid map resolution.
The intercepted point cloud may be projected onto a two-dimensional plane, i.e., the XOY plane. The plane may then be partitioned into a grid at the grid map resolution, and every grid cell that contains point cloud points is marked as an obstacle. For example, an obstacle may be a wall or the like.
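A minimal sketch of the projection and marking step, assuming a numpy point array; the 0.05 m resolution and the origin handling are illustrative choices of this sketch:

```python
import numpy as np

def rasterize_to_grid(points, resolution=0.05):
    """Project points onto the XOY plane and mark every occupied grid cell as an obstacle."""
    xy = points[:, :2]
    origin = xy.min(axis=0)                                  # grid origin at the minimum x/y corner
    idx = np.floor((xy - origin) / resolution).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1]] = 1                           # 1 = obstacle, 0 = free
    return grid, origin
```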
S1104, performing a closing operation on the preliminarily determined grid map, and outputting the final grid map.
The marked grid map can be subjected to an image closing operation, and the final grid map is then output.
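The closing operation can be realised with a standard morphological image closing, for example via scipy; the 3×3 structuring element is an assumption of this sketch:

```python
import numpy as np
from scipy.ndimage import binary_closing

def close_grid(grid, size=3):
    """Image closing (dilation followed by erosion) to fill small gaps in obstacle outlines."""
    structure = np.ones((size, size), dtype=bool)
    return binary_closing(grid.astype(bool), structure=structure).astype(np.uint8)
```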
In this embodiment, the two-dimensional grid map is generated from the point cloud map after ghost filtering, so the pseudo obstacles caused by personnel ghosts are filtered out directly, the process of manually editing the two-dimensional grid map to remove such obstacles is omitted, and the learning cost and operation difficulty for the user are reduced.
The embodiment of the application also provides a mobile robot, which can perform ghost filtering on the point cloud map through the steps in the above method embodiments and generate a target point cloud map or a target grid map based on the ghost-filtered point cloud map. The mobile robot may include a positioning module, based on which the mobile robot may navigate using the target point cloud map or the target grid map.
In one possible implementation manner, the mobile robot may include an instant positioning and map construction module. The mobile robot may receive a control signal and walk in a preset area in response to the control signal. During the walking process, the mobile robot may collect data through a radar device, a camera device, a sensing device or other data collecting devices, and input the data to the instant positioning and map construction module, which may output a point cloud map and a point cloud track based on the input data and a stored algorithm. Based on the point cloud track and the point cloud map, the mobile robot may obtain the ghost-filtered point cloud map and/or the grid map through the steps in the above method embodiments.
Referring to fig. 12, a schematic diagram of a mobile robot navigation map generating apparatus according to an embodiment of the present application may specifically include an obtaining module 1201, a processing module 1202, and a filtering module 1203, where:
the acquisition module 1201 is configured to acquire an original point cloud map and an original point cloud track, where the original point cloud map and the original point cloud track are acquired by the acquisition device through the instant positioning and map construction module;
The processing module 1202 is configured to perform parameter processing on the original point cloud map and the original point cloud track to obtain an intermediate point cloud map and an intermediate point cloud track, where the intermediate point cloud map and the intermediate point cloud track have characteristic parameters for filtering ghost images;
and the filtering module 1203 is configured to perform ghost filtering processing on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule, so as to obtain a target point cloud map and/or a target grid map, where the preset ghost filtering rule is associated with the feature parameter.
In one possible implementation, the processing module 1202 includes:
the characteristic parameter calculation sub-module is used for calculating local volume density values of all the point cloud points in the original point cloud map, and adding the local volume density values serving as the characteristic parameters into the original point cloud map to obtain the intermediate state point cloud map; and calculating the three-dimensional pose of each track point in the original point cloud track, and adding the three-dimensional pose as the characteristic parameter into the original point cloud track to obtain the intermediate state point cloud track.
In one possible implementation manner, the feature parameter calculation sub-module includes:
The local volume density value calculation unit is used for calculating local volume density values corresponding to all the point cloud points in the original point cloud map through the following formula:
ρ = N / ((4/3) * π * R³)
R = (W + L) / 2
wherein ρ is the local volume density value of the point cloud point, R is the radius of a sphere centered on the point cloud point, W is the length of the acquisition equipment, L is the width of the acquisition equipment, and N is the number of point cloud points within the sphere centered on the point cloud point.
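Purely as an illustration of the formula above (the KD-tree neighbour search and the scipy dependency are implementation choices of this sketch, not part of the application), the local volume density could be computed as:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_volume_density(points, device_w, device_l):
    """rho = N / ((4/3) * pi * R^3) for every point, with R = (W + L) / 2."""
    radius = (device_w + device_l) / 2.0
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(p, radius)) for p in points])
    return counts / ((4.0 / 3.0) * np.pi * radius ** 3)
```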
In one possible implementation manner, the processing module further includes:
the distance determining submodule is used for determining the distance between two adjacent track points of the intermediate state point cloud track;
and the interpolation processing sub-module is used for carrying out interpolation processing on the intermediate state point cloud track if the distance is greater than the length of the acquisition equipment, so that the distance between two adjacent track points is smaller than the length of the acquisition equipment.
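A minimal sketch of such interpolation processing, assuming the track is an (M, k) array whose first three columns are x, y, z; the linear blending of any remaining pose columns is a simplification for illustration:

```python
import numpy as np

def densify_track(track, max_step):
    """Insert linearly interpolated points wherever two adjacent track points are
    farther apart than max_step (e.g. the length of the acquisition equipment)."""
    out = [track[0]]
    for a, b in zip(track[:-1], track[1:]):
        gap = np.linalg.norm(b[:3] - a[:3])                  # distance between adjacent positions
        if gap > max_step:
            n_extra = int(np.ceil(gap / max_step)) - 1
            for t in np.linspace(0.0, 1.0, n_extra + 2)[1:-1]:
                out.append(a + t * (b - a))                  # naive linear blend of the whole row
        out.append(b)
    return np.array(out)
```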
In one possible implementation manner, the filtering module 1203 includes:
the three-dimensional frame determining submodule is used for determining a three-dimensional frame corresponding to the acquisition equipment according to the three-dimensional pose of each track point in the intermediate state point cloud track;
and the filtering sub-module is used for performing ghost filtering on the intermediate state point cloud map according to the stereoscopic frame and the local volume density value to obtain a target point cloud map.
In one possible implementation manner, the filtering submodule includes:
the filtering unit is used for filtering all the point cloud points in the area corresponding to the stereoscopic frame in the intermediate state point cloud map; and determining a ghost point from the point cloud points within a preset range from the outer surface of the stereoscopic frame according to the local volume density value, and filtering the ghost point from the intermediate state point cloud map to obtain the target point cloud map.
In one possible implementation manner, the filtering unit further includes:
the ghost point determining subunit is configured to determine, as the ghost point, a point cloud point in the intermediate state point cloud map that satisfies the following condition:
ρ_n < ρ_lmin and R_scan < R_n ≤ R_sbox
wherein ρ_n is the local volume density value of the point cloud point, ρ_lmin is the volume density threshold, R_n is the distance between the center of the circumscribed sphere corresponding to the three-dimensional frame and the point cloud point, R_sbox is a preset radius threshold, and R_scan is the radius of the circumscribed sphere corresponding to the three-dimensional frame.
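For illustration only, the condition can be evaluated in vectorized form as below; the variable names mirror the symbols above and all thresholds are assumed to be supplied by the caller:

```python
import numpy as np

def is_ghost_point(rho_n, dist_to_center, rho_lmin, r_scan, r_sbox):
    """True where a point has low local density and lies in the shell between the
    circumscribed sphere of the body frame (R_scan) and the preset radius (R_sbox)."""
    rho_n = np.asarray(rho_n)
    dist_to_center = np.asarray(dist_to_center)
    return (rho_n < rho_lmin) & (dist_to_center > r_scan) & (dist_to_center <= r_sbox)
```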
In one possible implementation manner, the filtering module 1203 includes:
the three-dimensional frame determining submodule is used for determining a three-dimensional frame corresponding to the acquisition equipment according to the three-dimensional pose of each track point in the intermediate state point cloud track;
The target point cloud map determining submodule is used for carrying out ghost filtering on the intermediate state point cloud map according to the three-dimensional frame to obtain a target point cloud map;
and the target grid map generation sub-module is used for generating a target grid map based on the target point cloud map.
In one possible implementation manner, the target grid map generating submodule includes:
the target point cloud intercepting unit is used for intercepting target point clouds in a preset height range from the target point cloud map;
and the target grid map generation unit is used for generating a grid map of the area corresponding to the original point cloud map based on the target point cloud.
In one possible implementation manner, the target grid map generating submodule includes:
the intercepting distance determining unit is used for determining an intercepting distance range of the point cloud map according to the target point cloud map and the intermediate state point cloud track, wherein the distance range comprises a highest value and a lowest value;
the intercepting unit is used for intercepting the target point cloud from the target point cloud map based on the intercepting distance range;
and the target grid map generation unit is used for generating a grid map of the area corresponding to the original point cloud map based on the target point cloud.
In one possible implementation manner, the target grid map generation unit includes:
a projection subunit, configured to project the target point cloud onto a two-dimensional plane parallel to the ground of the area, to obtain a projected plane point cloud;
the grid dividing subunit is used for dividing the projected plane point cloud into a plurality of square grids according to a preset resolution;
a marking subunit, configured to determine whether to mark each grid as an obstacle according to whether a point cloud point exists in the grid;
and the generation subunit is used for performing image closing operation on each marked grid to generate the grid map.
In one possible implementation manner, the apparatus further includes:
the pose matrix calculation module is used for carrying out principal component analysis and calculation on the point cloud map to obtain a pose matrix of the point cloud map, and the pose matrix is used for representing the pose of the point cloud map relative to the ground;
the inclination judging module is used for determining whether the point cloud map is inclined or not based on the pose matrix;
and the correction module is used for performing inclination correction on the point cloud map and the point cloud track based on the pose matrix if the point cloud map is inclined.
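A hedged sketch of the principal component analysis based tilt check and correction described by these modules; the 2-degree tolerance and the Rodrigues construction are choices of this sketch, not stated in the application:

```python
import numpy as np

def correct_tilt(points, track, angle_tol_deg=2.0):
    """Estimate the ground normal of the map with PCA and, if it is tilted away from
    the z axis by more than angle_tol_deg, rotate map and track to level them."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]                                   # smallest-variance axis ~ ground normal
    if normal[2] < 0:
        normal = -normal
    c = np.clip(normal[2], -1.0, 1.0)                        # cos(tilt) against the z axis
    if np.degrees(np.arccos(c)) <= angle_tol_deg:
        return points, track                                 # not inclined: nothing to correct
    axis = np.cross(normal, [0.0, 0.0, 1.0])
    s = np.linalg.norm(axis)                                 # sin(tilt)
    k = axis / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + s * K + (1.0 - c) * (K @ K)              # Rodrigues rotation: normal -> z
    # Positions are rotated; pose columns beyond x, y, z (if any) pass through unchanged here.
    leveled_track = np.column_stack([track[:, :3] @ R.T, track[:, 3:]])
    return points @ R.T, leveled_track
```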
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference should be made to the description of the method embodiments.
Fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 13, the terminal device 130 of this embodiment includes: at least one processor 1300 (only one processor is shown in fig. 13), a memory 1301, and a computer program 1302 stored in the memory 1301 and executable on the at least one processor 1300, the processor 1300 implementing the steps in any of the various method embodiments described above when executing the computer program 1302.
The terminal device 130 may be a computing device such as a desktop computer, a notebook computer, a palm computer, and a cloud terminal device. The terminal device may include, but is not limited to, a processor 1300, a memory 1301. It will be appreciated by those skilled in the art that fig. 13 is merely an example of terminal device 130 and is not intended to limit terminal device 130, and may include more or fewer components than shown, or may combine certain components, or may include different components, such as input-output devices, network access devices, etc.
The processor 1300 may be a central processing unit (CPU), and the processor 1300 may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 1301 may in some embodiments be an internal storage unit of the terminal device 130, such as a hard disk or a memory of the terminal device 130. The memory 1301 may in other embodiments also be an external storage device of the terminal device 130, such as a plug-in hard disk provided on the terminal device 130, a smart media card (SMC), a secure digital (SD) card, a flash card, etc. Further, the memory 1301 may include both an internal storage unit and an external storage device of the terminal device 130. The memory 1301 is used to store an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program. The memory 1301 can also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product enabling a terminal device to carry out the steps of the method embodiments described above when the computer program product is run on the terminal device.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical schemes described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (15)

1. A mobile robot navigation map generation method, characterized by comprising:
acquiring an original point cloud map and an original point cloud track, wherein the original point cloud map and the original point cloud track are acquired by acquisition equipment through an instant positioning and map construction module;
Performing parameter processing on the original point cloud map and the original point cloud track to obtain an intermediate state point cloud map and an intermediate state point cloud track, wherein the intermediate state point cloud map and the intermediate state point cloud track have characteristic parameters for filtering ghost images;
and performing ghost filtering processing on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule, so as to obtain a target point cloud map and/or a target grid map, wherein the preset ghost filtering rule is associated with the characteristic parameters.
2. The method of claim 1, wherein the performing parameter processing on the original point cloud map and the original point cloud trajectory to obtain an intermediate point cloud map and an intermediate point cloud trajectory comprises:
calculating local volume density values of all the point cloud points in the original point cloud map, and adding the local volume density values serving as the characteristic parameters into the original point cloud map to obtain the intermediate point cloud map; and calculating the three-dimensional pose of each track point in the original point cloud track, and adding the three-dimensional pose as the characteristic parameter into the original point cloud track to obtain the intermediate state point cloud track.
3. The method of claim 2, wherein the calculating the local volumetric density values for each point cloud point in the original point cloud map comprises:
calculating local volume density values corresponding to all the point cloud points in the original point cloud map through the following formula:
ρ = N / ((4/3) * π * R³)
R = (W + L) / 2
wherein ρ is the local volume density value of the point cloud point, R is the radius of a sphere centered on the point cloud point, W is the length of the acquisition equipment, L is the width of the acquisition equipment, and N is the number of point cloud points within the sphere centered on the point cloud point.
4. The method of claim 2, wherein after adding the three-dimensional pose as the characteristic parameter to the original point cloud track to obtain the intermediate state point cloud track, the method further comprises:
determining the distance between two adjacent track points of the intermediate state point cloud track;
if the distance is greater than the length of the acquisition equipment, interpolation processing is carried out on the intermediate state point cloud track, so that the distance between two adjacent track points is smaller than the length of the acquisition equipment.
5. The method of any one of claims 2-4, wherein performing the ghost filtering process on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule to obtain a target point cloud map comprises:
determining a three-dimensional frame corresponding to the acquisition equipment according to the three-dimensional pose of each track point in the intermediate state point cloud track;
and performing ghost filtering on the intermediate state point cloud map according to the stereoscopic frame and the local volume density value to obtain a target point cloud map.
6. The method of claim 5, wherein performing ghost filtering on the intermediate state point cloud map according to the stereo frame and the local volume density value to obtain a target point cloud map comprises:
filtering all point cloud points in the area corresponding to the stereoscopic frame in the intermediate state point cloud map; and determining a ghost point from the point cloud points within a preset range from the outer surface of the stereoscopic frame according to the local volume density value, and filtering the ghost point from the intermediate state point cloud map to obtain the target point cloud map.
7. The method of claim 6, wherein determining the ghost point from the point cloud points within a preset range from the outer surface of the stereoscopic frame according to the local volume density value comprises:
and determining point cloud points meeting the following conditions in the intermediate state point cloud map as the ghost points:
ρ_n < ρ_lmin and R_scan < R_n ≤ R_sbox
wherein ρ_n is the local volume density value of the point cloud point, ρ_lmin is the volume density threshold, R_n is the distance between the center of the circumscribed sphere corresponding to the three-dimensional frame and the point cloud point, R_sbox is a preset radius threshold, and R_scan is the radius of the circumscribed sphere corresponding to the three-dimensional frame.
8. The method of any one of claims 2 to 4, wherein performing the ghost filtering process on the intermediate state point cloud map according to the intermediate state point cloud track and a preset ghost filtering rule to obtain a target grid map includes:
determining a three-dimensional frame corresponding to the acquisition equipment according to the three-dimensional pose of each track point in the intermediate state point cloud track;
performing ghost filtering on the intermediate state point cloud map according to the stereoscopic frame to obtain a target point cloud map;
and generating a target grid map based on the target point cloud map.
9. The method of claim 8, wherein the generating a target grid map based on the target point cloud map comprises:
intercepting a target point cloud in a preset height range from the target point cloud map;
and generating a grid map of the area corresponding to the original point cloud map based on the target point cloud.
10. The method of claim 9, wherein the generating a target grid map based on the target point cloud map comprises:
determining an intercepting distance range of the point cloud map according to the target point cloud map and the intermediate state point cloud track, wherein the distance range comprises a highest value and a lowest value;
intercepting a target point cloud from the target point cloud map based on the intercepting distance range;
and generating a grid map of the area corresponding to the original point cloud map based on the target point cloud.
11. The method of claim 9 or 10, wherein the generating a grid map of the original point cloud map corresponding area based on the target point cloud comprises:
projecting the target point cloud onto a two-dimensional plane parallel to the ground of the area to obtain a projected plane point cloud;
dividing the projected plane point cloud into a plurality of square grids according to a preset resolution;
determining whether to mark each grid as an obstacle according to whether point cloud points exist in the grid;
and performing image closing operation on each marked grid to generate the grid map.
12. The method of claim 1, wherein after acquiring the original point cloud map and the point cloud trajectory, the method further comprises:
Performing principal component analysis and calculation on the original point cloud map to obtain a pose matrix of the original point cloud map, wherein the pose matrix is used for representing the pose of the original point cloud map relative to the ground;
determining whether the original point cloud map is inclined based on the pose matrix;
and if the original point cloud map is inclined, performing inclination correction on the original point cloud map and the original point cloud track based on the pose matrix.
13. A mobile robot, characterized in that it generates a target point cloud map and/or a target grid map by means of a method according to any of claims 1-12.
14. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1-12 when executing the computer program.
15. A computer readable storage medium storing a computer program, which when executed by a processor performs the method according to any one of claims 1-12.
CN202310467315.6A 2023-04-25 2023-04-25 Mobile robot navigation map generation method, equipment, medium and mobile robot Pending CN116804551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310467315.6A CN116804551A (en) 2023-04-25 2023-04-25 Mobile robot navigation map generation method, equipment, medium and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310467315.6A CN116804551A (en) 2023-04-25 2023-04-25 Mobile robot navigation map generation method, equipment, medium and mobile robot

Publications (1)

Publication Number Publication Date
CN116804551A true CN116804551A (en) 2023-09-26

Family

ID=88080083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310467315.6A Pending CN116804551A (en) 2023-04-25 2023-04-25 Mobile robot navigation map generation method, equipment, medium and mobile robot

Country Status (1)

Country Link
CN (1) CN116804551A (en)

Similar Documents

Publication Publication Date Title
JP6830139B2 (en) 3D data generation method, 3D data generation device, computer equipment and computer readable storage medium
CN108369743B (en) Mapping a space using a multi-directional camera
CN111121754A (en) Mobile robot positioning navigation method and device, mobile robot and storage medium
CN111815707B (en) Point cloud determining method, point cloud screening method, point cloud determining device, point cloud screening device and computer equipment
KR20220025028A (en) Method and device for building beacon map based on visual beacon
JP2013186816A (en) Moving image processor, moving image processing method and program for moving image processing
CN112789609A (en) Map updating method and device, movable platform and storage medium
CN111142514B (en) Robot and obstacle avoidance method and device thereof
CN108564604B (en) Binocular vision stereo matching method and device based on plane constraint and triangulation
CN112270642A (en) Method, system and device for constructing external rectangular frame of obstacle point cloud
CN111369680B (en) Method and device for generating three-dimensional image of building
CN114255323A (en) Robot, map construction method, map construction device and readable storage medium
CN114859938A (en) Robot, dynamic obstacle state estimation method and device and computer equipment
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
CN113776520B (en) Map construction, using method, device, robot and medium
CN116804551A (en) Mobile robot navigation map generation method, equipment, medium and mobile robot
KR102438490B1 (en) Heterogeneous sensors calibration method and apparatus using single checkerboard
CN114445415A (en) Method for dividing a drivable region and associated device
US11227166B2 (en) Method and device for evaluating images, operating assistance method, and operating device
Martinez et al. Map-based lane identification and prediction for autonomous vehicles
An et al. Tracking an RGB-D camera on mobile devices using an improved frame-to-frame pose estimation method
US11776148B1 (en) Multi-view height estimation from satellite images
CN115019167B (en) Fusion positioning method, system, equipment and storage medium based on mobile terminal
CN113091759B (en) Pose processing and map building method and device
CN116843846A (en) Point cloud data generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination