CN117191005B - Air-ground heterogeneous collaborative mapping method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117191005B
Authority
CN
China
Prior art keywords
point cloud
pose
cloud data
laser radar
platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311479552.0A
Other languages
Chinese (zh)
Other versions
CN117191005A
Inventor
孙振平
涂志明
付浩
刘伯凯
吴涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority claimed from CN202311479552.0A
Publication of CN117191005A
Application granted
Publication of CN117191005B
Legal status: Active
Anticipated expiration


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to an air-ground heterogeneous collaborative mapping method, device, equipment and storage medium. The method comprises the following steps: multi-platform, multi-source fusion of the sensor information carried by the unmanned ground vehicle platform and the unmanned aerial vehicle platform is performed through an air-ground collaborative pose factor graph; the unmanned ground vehicle and the unmanned aerial vehicle are associated by cross-platform, cross-view point cloud matching, and the matching result is added to the air-ground collaborative pose factor graph as an observed quantity to realize collaborative optimization; finally, a high-precision map is generated from the collaboratively optimized poses, so that a more comprehensive, detailed and accurate map of the surrounding environment in which the unmanned platforms are located is constructed.

Description

Air-ground heterogeneous collaborative mapping method, device, equipment and storage medium
Technical Field
The present application relates to the field of map generation technologies, and in particular, to an air-ground heterogeneous collaborative mapping method, apparatus, device and storage medium.
Background
Three-dimensional high-precision maps play an increasingly important role in the independent and autonomous operation of unmanned platforms. Compared with a traditional map, a high-precision map has higher accuracy and contains richer environmental features, which can significantly improve the situation awareness capability of an unmanned platform and provide greater assistance to modules such as navigation, positioning and decision making; with prior high-precision map information, man-machine collaborative work scenarios can also be enhanced.
At present, the technology of constructing a high-precision point cloud map offline from laser radar point cloud data on platforms such as ground unmanned vehicles is relatively mature, and it can be applied to various automatic driving platforms carrying three-dimensional rotating laser radars, with application scenes covering complex urban scenes, underground parking lots, open scenes with sparse features, off-road scenes and the like. However, a ground unmanned vehicle platform can only drive in certain specific areas, so only the vicinity of the passable area can be mapped with high precision, and detailed map construction of all the areas in which unmanned platforms operate cannot be achieved. For more complex urban and off-road environments in the future, a high-precision map covering only the vehicle-passable area is insufficient, and various unmanned platforms need to perform autonomous functions in a wider range of areas.
Disclosure of Invention
Based on the above, it is necessary to provide an air-ground heterogeneous collaborative mapping method, apparatus, device and storage medium, in which an unmanned aerial vehicle platform with stronger exploration capability cooperates with the unmanned ground vehicle platform to jointly perform point cloud data acquisition and point cloud map construction, thereby realizing the generation of an air-ground collaborative high-precision map.
An air-ground heterogeneous collaborative mapping method, comprising:
constructing an air-ground heterogeneous platform comprising an unmanned ground vehicle platform and an unmanned aerial vehicle platform, acquiring laser radar point cloud data from a laser radar carried on the air-ground heterogeneous platform, and preprocessing the laser radar point cloud data to obtain preprocessed point cloud data;
acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to sensors carried on the air-ground heterogeneous platform, taking the key frame poses as nodes and the pose constraints among the key frames as edges, and constructing an air-ground collaborative pose factor graph; the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
and performing air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splicing the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map.
In one embodiment, acquiring and preprocessing laser radar point cloud data from the laser radar carried on the air-ground heterogeneous platform includes:
acquiring unmanned ground vehicle laser radar point cloud data and unmanned aerial vehicle laser radar point cloud data from the laser radars respectively carried by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform;
performing interval sampling on the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to acquire key frames in the laser radar point cloud data;
and performing point cloud intra-frame compensation of the point cloud distortion caused by the motion of the air-ground heterogeneous platform in the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to obtain intra-frame compensated point cloud data.
In one embodiment, performing interval sampling on the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively to acquire key frames in the laser radar point cloud data includes:
performing interval sampling on the unmanned ground vehicle laser radar point cloud data according to a preset distance interval, to acquire key frames in the unmanned ground vehicle laser radar point cloud data;
and performing interval sampling on the unmanned aerial vehicle laser radar point cloud data according to a preset distance interval and a preset attitude angle interval, to acquire key frames in the unmanned aerial vehicle laser radar point cloud data.
In one embodiment, performing point cloud intra-frame compensation of the point cloud distortion caused by the motion of the air-ground heterogeneous platform in the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to obtain intra-frame compensated point cloud data, includes:
acquiring the pose of the air-ground heterogeneous platform according to the IMU and the GNSS carried on the air-ground heterogeneous platform, and performing point cloud intra-frame compensation on the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively according to the pose of the air-ground heterogeneous platform, to obtain the intra-frame compensated point cloud data, expressed as

$$\tilde{P}^{L} = T_{I}^{L}\,\Delta T_{I}^{t \rightarrow t+\Delta t}\,T_{L}^{I}\,P^{L}$$

where $\tilde{P}^{L}$ denotes the intra-frame compensated point cloud data in the laser radar coordinate system $L$, $T_{I}^{L}$ denotes the transformation between the carrier coordinate system $I$ in which the IMU and GNSS are located and the laser radar coordinate system $L$, $T_{L}^{I}$ denotes the transformation between the laser radar coordinate system $L$ and the carrier coordinate system $I$, $P^{L}$ denotes the laser radar point cloud data in the laser radar coordinate system $L$ before intra-frame compensation, $\Delta T_{I}^{t \rightarrow t+\Delta t}$ denotes the pose transformation matrix of the laser radar point cloud data in the carrier coordinate system $I$ from time $t$ to time $t+\Delta t$, and $\Delta t$ denotes the acquisition time interval of the laser radar point cloud data.
In one embodiment, acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to the sensors carried on the air-ground heterogeneous platform, taking the key frame poses as nodes and the pose constraints among the key frames as edges, and constructing the air-ground collaborative pose factor graph, includes:
acquiring the pose of each key frame in the preprocessed point cloud data according to the IMU and the GNSS carried on the air-ground heterogeneous platform, and taking the key frame poses as nodes in the air-ground collaborative pose factor graph;
taking the global observation constraints provided by the IMU and the GNSS as pose point observation constraints; taking the pose constraints calculated with a point cloud matching algorithm by the laser odometer carried on the air-ground heterogeneous platform as local adjacent frame constraints; taking the pose constraints obtained by point cloud matching of key frames in the preprocessed point cloud data whose time interval exceeds a set time threshold and whose spatial distance is below a set distance threshold as closed-loop matching constraints; taking the pose constraints obtained by cross-view matching of the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times as heterogeneous platform matching constraints; and combining the pose point observation constraints, local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints to form the pose constraints among the key frames, which are taken as edges in the air-ground collaborative pose factor graph.
In one embodiment, taking the pose constraints obtained by point cloud matching of key frames in the preprocessed point cloud data whose time interval exceeds a set time threshold and whose spatial distance is below a set distance threshold as closed-loop matching constraints includes:
searching the preprocessed point cloud data for key frames whose time interval exceeds the set time threshold and whose spatial distance is below the set distance threshold, acquiring the corresponding poses and performing local splicing to generate super-frame point clouds of the closed-loop key frames, performing point cloud matching on the super-frame point clouds of the closed-loop key frames according to a normal distributions transform algorithm, and taking the pose constraints obtained by the matching as closed-loop matching constraints.
In one embodiment, taking the pose constraints obtained by cross-view matching of the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times as heterogeneous platform matching constraints includes:
searching the preprocessed point cloud data for the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times, acquiring the corresponding poses and performing local splicing to generate platform-matching super-frame point clouds, performing cross-view matching on the platform-matching super-frame point clouds according to a normal distributions transform algorithm, and taking the pose constraints obtained by the matching as heterogeneous platform matching constraints.
An air-ground heterogeneous collaborative mapping device, the device comprising:
a data preprocessing module, configured to construct an air-ground heterogeneous platform comprising an unmanned ground vehicle platform and an unmanned aerial vehicle platform, acquire laser radar point cloud data from a laser radar carried on the air-ground heterogeneous platform, and preprocess the laser radar point cloud data to obtain preprocessed point cloud data;
a pose factor graph construction module, configured to acquire the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to sensors carried on the air-ground heterogeneous platform, take the key frame poses as nodes and the pose constraints among the key frames as edges, and construct an air-ground collaborative pose factor graph; the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
and a high-precision map generation module, configured to perform air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splice the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, performs the following steps:
constructing an air-ground heterogeneous platform comprising an unmanned ground vehicle platform and an unmanned aerial vehicle platform, acquiring laser radar point cloud data from a laser radar carried on the air-ground heterogeneous platform, and preprocessing the laser radar point cloud data to obtain preprocessed point cloud data;
acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to sensors carried on the air-ground heterogeneous platform, taking the key frame poses as nodes and the pose constraints among the key frames as edges, and constructing an air-ground collaborative pose factor graph; the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
and performing air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splicing the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
constructing an air-ground heterogeneous platform comprising an unmanned ground vehicle platform and an unmanned aerial vehicle platform, acquiring laser radar point cloud data from a laser radar carried on the air-ground heterogeneous platform, and preprocessing the laser radar point cloud data to obtain preprocessed point cloud data;
acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to sensors carried on the air-ground heterogeneous platform, taking the key frame poses as nodes and the pose constraints among the key frames as edges, and constructing an air-ground collaborative pose factor graph; the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
and performing air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splicing the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map.
Compared with the prior art, which considers only an unmanned ground vehicle platform for high-precision map construction, the present application has the following technical effects:
1. Multi-platform, multi-source fusion of the sensor information carried by the unmanned ground vehicle platform and the unmanned aerial vehicle platform is performed through the air-ground collaborative pose factor graph; the unmanned ground vehicle and the unmanned aerial vehicle are associated by cross-platform, cross-view point cloud matching, and the matching result is added to the air-ground collaborative pose factor graph as an observed quantity to realize collaborative optimization; finally, a high-precision map is generated from the collaboratively optimized poses, so that a more comprehensive, detailed and accurate map of the surrounding environment in which the unmanned platforms are located is constructed.
2. When performing cross-view matching, the present application searches for the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform when they travel to the same area at different times, acquires the corresponding poses and performs local splicing, so as to generate platform-matching super-frame point clouds. A super-frame point cloud compensates for the small field of view, small number of points and sparse features of a single-frame point cloud, and the platform-matching super-frame point clouds share certain overlapping environmental features, so that when they are cross-view matched according to the normal distributions transform algorithm, these environmental features can be matched accurately, which improves the calculation accuracy of the heterogeneous platform matching constraints and further improves the collaborative optimization accuracy of the air-ground collaborative pose factor graph.
Drawings
FIG. 1 is a schematic flow diagram of an air-ground heterogeneous collaborative mapping method in one embodiment;
FIG. 2 is a schematic illustration of unmanned ground vehicle and unmanned aerial vehicle key frame sampling in one embodiment; FIG. 2(a) is a schematic illustration of unmanned ground vehicle key frame sampling; FIG. 2(b) is a schematic illustration of unmanned aerial vehicle key frame sampling;
FIG. 3 is a schematic diagram of a specific structure of the air-ground collaborative pose factor graph according to an embodiment;
FIG. 4 is a schematic diagram of closed loop search in one embodiment;
fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in FIG. 1, an air-ground heterogeneous collaborative mapping method is provided, which includes the following steps:
Firstly, an air-ground heterogeneous platform comprising an unmanned ground vehicle platform (UGV) and an unmanned aerial vehicle platform (UAV) is constructed, and laser radar point cloud data is acquired from the laser radar carried on the air-ground heterogeneous platform and preprocessed to obtain preprocessed point cloud data. The preprocessing process comprises two parts: key frame selection and point cloud intra-frame compensation.
Then, the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames are acquired according to the sensors carried on the air-ground heterogeneous platform, and an air-ground collaborative pose factor graph is constructed by taking the key frame poses as nodes and the pose constraints among the key frames as edges; the sensors comprise a laser odometer, an IMU (inertial measurement unit) and a GNSS (global navigation satellite system), the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints.
Finally, air-ground collaborative optimization is performed on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and the preprocessed point cloud data is spliced according to the optimized key frame poses to generate a high-precision map. The high-precision map comprises an unmanned ground vehicle high-precision point cloud map generated by the unmanned ground vehicle platform, an unmanned aerial vehicle high-precision point cloud map generated by the unmanned aerial vehicle platform, and an air-ground collaborative high-precision point cloud map generated by the air-ground heterogeneous platform.
To distinguish it from a single-frame point cloud, a local sub-map formed by fusing several single-frame point clouds sampled on the basis of distance and angle thresholds is referred to in this application as a super-frame point cloud (MetaScan). A super-frame point cloud compensates for the small field of view, small number of points and sparse features of a single-frame point cloud, and is of great significance to the perception module of an unmanned system.
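By way of illustration only (not part of the patent), the sketch below shows how such a super-frame point cloud could be assembled, assuming key frame clouds are given as N x 3 numpy arrays in their sensor frames and poses as 4 x 4 world-from-sensor matrices; the function names, the voxel size and the down-sampling step are illustrative choices, and the same transform-and-concatenate routine can also serve to splice the final high-precision map from the optimized key frame poses.

```python
import numpy as np

def transform_cloud(points, pose):
    """Apply a 4x4 homogeneous pose to an (N, 3) point cloud."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ pose.T)[:, :3]

def build_metascan(keyframe_clouds, keyframe_poses, voxel_size=0.2):
    """Fuse several single-frame clouds into one super-frame point cloud (MetaScan).

    keyframe_clouds: list of (N_i, 3) arrays, each in its own sensor frame.
    keyframe_poses:  list of 4x4 world-from-sensor pose matrices.
    """
    fused = np.vstack([transform_cloud(cloud, pose)
                       for cloud, pose in zip(keyframe_clouds, keyframe_poses)])
    # Voxel down-sampling (one point kept per voxel) keeps the MetaScan tractable.
    voxel_keys = np.floor(fused / voxel_size).astype(np.int64)
    _, keep = np.unique(voxel_keys, axis=0, return_index=True)
    return fused[np.sort(keep)]
```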
Specifically, performing air-ground collaborative optimization on the air-ground collaborative pose factor graph with the pose graph optimizer to generate the optimized key frame poses comprises the following steps:
according to a Bayesian inference algorithm, the total probability relation between the nodes and the edges in the air-ground collaborative pose factor graph is calculated as

$$P(\mathcal{X} \mid \mathcal{Z}) \propto \phi_{0}(x_{0}) \prod_{k} \phi(x_{k-1}, x_{k}) \prod_{k} \psi(x_{k}, z_{k})$$

where $\mathcal{X}$ denotes all the nodes to be optimized in the air-ground collaborative pose factor graph, $z_{k}$ denotes a pose point observation constraint, and $z_{ij}$ denotes a pose edge observation constraint between the $i$-th node $x_{i}$ and the $j$-th node $x_{j}$; when $x_{i}$ and $x_{j}$ are locally adjacent key frames of the same platform, $z_{ij}$ is the local adjacent frame constraint between $x_{i}$ and $x_{j}$; when $x_{i}$ and $x_{j}$ are non-adjacent key frames of the same platform revisiting the same area, $z_{ij}$ is the closed-loop matching constraint between $x_{i}$ and $x_{j}$; when $x_{i}$ and $x_{j}$ belong to different platforms, $z_{ij}$ is the heterogeneous platform matching constraint between $x_{i}$ and $x_{j}$; $\phi_{0}$ denotes the prior pose factor, $\phi$ denotes the state transition factor, $\psi$ denotes the observation factor, and $x_{k}$ denotes the pose of the $k$-th node;
according to the state transition model between the nodes and the pose edge observation constraints and the observation model between the nodes and the pose point observation constraints, the air-ground collaborative pose factor graph optimization problem is converted into the following least-squares problem, whose solution gives the optimized key frame poses:

$$\mathcal{X}^{*} = \arg\min_{\mathcal{X}} \sum_{(i,j)} \lVert f(x_{i}) - x_{j} \rVert^{2}_{\Lambda_{ij}} + \sum_{k} \lVert h(x_{k}) - z_{k} \rVert^{2}_{\Gamma_{k}}$$

where $\lVert e \rVert^{2}_{\Sigma} = e^{\top} \Sigma\, e$ abbreviates the Mahalanobis norm; $\Lambda_{ij}$ and $\Gamma_{k}$ are the information matrices of the pose edge observation constraints and the pose point observation constraints respectively, and also express the confidence of those constraints; they are set as $6 \times 6$ diagonal matrices, i.e., the six pose degrees of freedom of a node to be optimized (lateral position, longitudinal position, height, roll angle, pitch angle and yaw angle) are treated as mutually independent; $f$ denotes the state transition model, and $h$ denotes the observation model.
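As a rough illustration of this least-squares formulation (not taken from the patent), the sketch below treats each node as a 6-vector [x, y, z, roll, pitch, yaw], approximates the relative-pose residual by a simple pose-vector difference (valid only for small angles, whereas a full implementation would compose poses on the SE(3) manifold), and collapses the information matrices into scalar weights; all function and parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose_graph(init_poses, point_obs, edge_obs, gamma=1.0, lam=1.0):
    """Simplified least-squares pose graph optimization.

    init_poses: (K, 6) array of initial key frame poses [x, y, z, roll, pitch, yaw].
    point_obs:  list of (k, z_k) absolute pose observations from GNSS/IMU.
    edge_obs:   list of (i, j, z_ij) relative pose constraints from the laser
                odometry, closed-loop matches and heterogeneous platform matches.
    gamma, lam: scalar stand-ins for the 6x6 diagonal information matrices.
    """
    K = init_poses.shape[0]

    def residuals(flat):
        x = flat.reshape(K, 6)
        res = []
        for k, z_k in point_obs:                 # pose point observation terms
            res.append(np.sqrt(gamma) * (x[k] - z_k))
        for i, j, z_ij in edge_obs:              # pose edge observation terms
            res.append(np.sqrt(lam) * ((x[j] - x[i]) - z_ij))
        return np.concatenate(res)

    result = least_squares(residuals, init_poses.ravel())
    return result.x.reshape(K, 6)
```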
In one embodiment, acquiring and preprocessing laser radar point cloud data from the laser radar carried on the air-ground heterogeneous platform includes:
acquiring unmanned ground vehicle laser radar point cloud data and unmanned aerial vehicle laser radar point cloud data from the laser radars respectively carried by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform;
performing interval sampling on the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to acquire key frames in the laser radar point cloud data;
and performing point cloud intra-frame compensation of the point cloud distortion caused by the motion of the air-ground heterogeneous platform in the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to obtain intra-frame compensated point cloud data.
It can be understood that, because laser radar point cloud data is acquired at a relatively high frequency, usually 10 Hz, the acquired data frames are very dense; in particular, when the unmanned platform stops moving, a large amount of data is still acquired, so that many laser points pile up densely. Since adjacent-frame pose matching and pose factor graph optimization must be performed on the point cloud data frames, key point cloud frames need to be selected for map construction, which ensures the accuracy of the calculation without losing environmental information, and also reduces the amount of computation in the subsequent map construction process.
In one embodiment, performing interval sampling on the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively to acquire key frames in the laser radar point cloud data includes:
as shown in FIG. 2(a), performing interval sampling on the unmanned ground vehicle laser radar point cloud data according to a preset distance interval, to acquire key frames in the unmanned ground vehicle laser radar point cloud data;
as shown in FIG. 2(b), performing interval sampling on the unmanned aerial vehicle laser radar point cloud data according to a preset distance interval and a preset attitude angle interval, to acquire key frames in the unmanned aerial vehicle laser radar point cloud data.
It can be understood that during the movement of the unmanned ground vehicle platform the attitude angles change slowly, the pitch angle and the roll angle vary within a very small range, and because the vehicle is subject to nonholonomic motion constraints the yaw angle does not change drastically either, so the unmanned ground vehicle platform performs interval sampling directly according to a preset distance interval. The unmanned aerial vehicle platform, in contrast, is more maneuverable than the unmanned ground vehicle platform, its attitude angles vary over a larger dynamic range and change more violently, and the angles also have a great influence on the sampling of unmanned aerial vehicle key frames, so the present application samples unmanned aerial vehicle key frames at both a preset distance interval and a preset attitude angle interval. Furthermore, compared with the unmanned ground vehicle, which has a minimum turning radius, the unmanned aerial vehicle has obviously higher freedom of movement and can rotate in place. When the unmanned aerial vehicle is flown along a preset route to collect point cloud data, it changes heading at the corners of the route simply by rotating its fuselage in place. In addition, in order to maintain attitude stability, the flight control algorithm of the unmanned aerial vehicle platform produces very large pitch and roll angular velocities while the heading angle changes rapidly during in-place rotation: the roll rate can reach 50°/s, the pitch rate can reach 100°/s, the yaw rate can reach 100°/s, and the dynamic range varies widely. Point cloud data acquired at such high maneuver angular velocities is very prone to mismatching by any point cloud pose matching algorithm, and robust accurate matching is difficult, so the in-place rotation frames of the unmanned aerial vehicle platform are also excluded from adjacent-frame processing; as shown in FIG. 2(b), the poses acquired while the unmanned aerial vehicle performs in-place rotation at route corners are all treated as non-key frames, and subsequent operations such as laser radar odometry and map construction are not performed on them.
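A minimal sketch of this interval sampling is given below; it assumes per-frame poses are available as [t, x, y, z, roll, pitch, yaw] rows (timestamps in seconds, angles in radians), and the specific thresholds, the yaw-rate test used to reject in-place rotation frames, and the function name are illustrative rather than taken from the patent (yaw wrap-around is also ignored for brevity).

```python
import numpy as np

def select_keyframes(poses, dist_thresh, angle_thresh=None, yaw_rate_max=None):
    """Interval sampling of laser radar frames into key frames.

    poses: (N, 7) array per frame: [t, x, y, z, roll, pitch, yaw].
    dist_thresh:  minimum travelled distance between key frames (UGV and UAV).
    angle_thresh: minimum attitude-angle change between key frames (UAV only).
    yaw_rate_max: frames whose yaw rate exceeds this value (in-place rotation
                  at route corners) are rejected as non-key frames (UAV only).
    """
    keyframes = [0]
    for n in range(1, poses.shape[0]):
        t, xyz, rpy = poses[n, 0], poses[n, 1:4], poses[n, 4:7]
        if yaw_rate_max is not None:
            dt = max(t - poses[n - 1, 0], 1e-6)
            yaw_rate = abs(poses[n, 6] - poses[n - 1, 6]) / dt
            if yaw_rate > yaw_rate_max:          # high-maneuver frame: skip it
                continue
        last = poses[keyframes[-1]]
        moved = np.linalg.norm(xyz - last[1:4]) >= dist_thresh
        turned = (angle_thresh is not None and
                  np.max(np.abs(rpy - last[4:7])) >= angle_thresh)
        if moved or turned:
            keyframes.append(n)
    return keyframes
```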
In one embodiment, performing point cloud intra-frame compensation of the point cloud distortion caused by the motion of the air-ground heterogeneous platform in the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to obtain intra-frame compensated point cloud data, includes:
acquiring the pose of the air-ground heterogeneous platform according to the IMU and the GNSS carried on the air-ground heterogeneous platform, and performing point cloud intra-frame compensation on the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively according to the pose of the air-ground heterogeneous platform, to obtain the intra-frame compensated point cloud data, expressed as

$$\tilde{P}^{L} = T_{I}^{L}\,\Delta T_{I}^{t \rightarrow t+\Delta t}\,T_{L}^{I}\,P^{L}$$

where $\tilde{P}^{L}$ denotes the intra-frame compensated point cloud data in the laser radar coordinate system $L$, $T_{I}^{L}$ denotes the transformation between the carrier coordinate system $I$ in which the IMU and GNSS are located and the laser radar coordinate system $L$, which is obtained by solving a hand-eye calibration equation, $T_{L}^{I}$ denotes the transformation between the laser radar coordinate system $L$ and the carrier coordinate system $I$, $P^{L}$ denotes the laser radar point cloud data in the laser radar coordinate system $L$ before intra-frame compensation, $\Delta T_{I}^{t \rightarrow t+\Delta t}$ denotes the pose transformation matrix of the laser radar point cloud data in the carrier coordinate system $I$ from time $t$ to time $t+\Delta t$, and $\Delta t$ denotes the acquisition time interval of the laser radar point cloud data.
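The sketch below shows one possible per-point realisation of this compensation, assuming each point carries a timestamp relative to the start of the sweep and that the carrier motion over the sweep is interpolated linearly in translation and by spherical interpolation in rotation; whether the distortion is referenced to the start or the end of the sweep, and all function and argument names, are assumptions made for illustration and are not prescribed by the patent.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_cloud(points, stamps, T_LI, T_IL, delta_T_I, dt):
    """Compensate in-frame motion distortion of one laser radar sweep.

    points:    (N, 3) raw points in the laser radar frame L.
    stamps:    (N,) per-point times relative to the sweep start, in [0, dt].
    T_LI:      4x4 transform from the laser radar frame L to the carrier frame I.
    T_IL:      4x4 transform from the carrier frame I back to the laser radar frame L.
    delta_T_I: 4x4 carrier-frame pose change over the sweep (time t to t + dt).
    dt:        acquisition time interval of the laser radar point cloud data.
    """
    # Interpolate the carrier motion at each point's timestamp.
    rotations = Rotation.from_matrix(np.stack([np.eye(3), delta_T_I[:3, :3]]))
    slerp = Slerp([0.0, dt], rotations)
    deskewed = np.empty_like(points)
    for n, (point, s) in enumerate(zip(points, stamps)):
        T_s = np.eye(4)
        T_s[:3, :3] = slerp([s]).as_matrix()[0]
        T_s[:3, 3] = (s / dt) * delta_T_I[:3, 3]
        # point -> carrier frame, apply interpolated motion, back to the lidar frame
        p_h = np.append(point, 1.0)
        deskewed[n] = (T_IL @ T_s @ T_LI @ p_h)[:3]
    return deskewed
```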
In one embodiment, as shown in FIG. 3, acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to the sensors carried on the air-ground heterogeneous platform, taking the key frame poses as nodes and the pose constraints among the key frames as edges, and constructing the air-ground collaborative pose factor graph, includes:
acquiring the pose of each key frame in the preprocessed point cloud data according to the IMU and the GNSS carried on the air-ground heterogeneous platform, and taking the key frame poses as nodes in the air-ground collaborative pose factor graph;
taking the global observation constraints provided by the IMU and the GNSS as pose point observation constraints; taking the pose constraints calculated with a point cloud matching algorithm by the laser odometer carried on the air-ground heterogeneous platform as local adjacent frame constraints; taking the pose constraints obtained by point cloud matching of key frames in the preprocessed point cloud data whose time interval exceeds a set time threshold and whose spatial distance is below a set distance threshold as closed-loop matching constraints; taking the pose constraints obtained by cross-view matching of the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times as heterogeneous platform matching constraints; and combining the pose point observation constraints, local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints to form the pose constraints among the key frames, which are taken as edges in the air-ground collaborative pose factor graph.
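For concreteness, the following sketch (not from the patent; all names are illustrative) gathers these elements into the node initial values, pose point observations and pose edge observations consumed by the optimize_pose_graph sketch above, with the three kinds of pose edge constraints simply pooled into one edge list.

```python
import numpy as np

def build_pose_factor_graph(keyframes, odom_edges, loop_edges, cross_edges):
    """Assemble the air-ground collaborative pose factor graph.

    keyframes:   list of dicts {"pose6": (6,) initial pose, "gnss_imu": (6,) prior}.
    odom_edges:  [(i, j, z_ij)] local adjacent frame constraints from the laser odometry.
    loop_edges:  [(i, j, z_ij)] closed-loop matching constraints (same platform).
    cross_edges: [(i, j, z_ij)] heterogeneous platform matching constraints (UGV-UAV).
    """
    init_poses = np.stack([kf["pose6"] for kf in keyframes])             # graph nodes
    point_obs = [(k, kf["gnss_imu"]) for k, kf in enumerate(keyframes)]  # pose point constraints
    edge_obs = list(odom_edges) + list(loop_edges) + list(cross_edges)   # pose edge constraints
    return init_poses, point_obs, edge_obs
```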
In one embodiment, taking the pose constraints obtained by point cloud matching of key frames in the preprocessed point cloud data whose time interval exceeds a set time threshold and whose spatial distance is below a set distance threshold as closed-loop matching constraints includes:
as shown in FIG. 4, searching the preprocessed point cloud data for key frames whose time interval exceeds the set time threshold and whose spatial distance is below the set distance threshold, acquiring the corresponding poses and performing local splicing to generate super-frame point clouds of the closed-loop key frames, performing point cloud matching on the super-frame point clouds of the closed-loop key frames according to a normal distributions transform algorithm, and taking the pose constraints obtained by the matching as closed-loop matching constraints. The nodes in FIG. 4 represent the key frame pose nodes at different times.
It can be understood that the pose data used throughout the closed-loop search are the prior values provided by the GNSS and the IMU; being absolute quantities, they carry no accumulated error. Even under a relatively large initial position deviation and the very limited radar field of view of the unmanned aerial vehicle platform, the super-frame point cloud matching solution can still resolve the closed loop effectively, which improves the consistency of the overall mapping and the accuracy of local details.
Further, a two-stage normal distributions transform (NDT) algorithm is used. The core idea of the NDT algorithm is to divide the target point cloud (Target) into many small cubes (cells), solve the multi-dimensional normal distribution of each grid cell according to the set parameters, and compute the probability distribution model of the cell; when a source point cloud (Source) in the same coordinate frame is brought in, the probability of each transformed point in its corresponding cell is computed from the normal distribution parameters, the probabilities of all cells are accumulated, and the maximum accumulated probability corresponds to the optimal matching pose. In the two-stage NDT algorithm, the first NDT matching step sets the grid resolution to 5 m and the second NDT matching step sets the grid resolution to 1 m. The matching prior of the first NDT step is the sensed data of the GNSS/IMU sensors carried by the platform; the coarse matching pose obtained in the first step is used as the initial value of the second-step matching, and fine registration is then performed to obtain the final closed-loop pose matching result.
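The coarse-to-fine cascade can be written as a small wrapper; in the sketch below, ndt_register stands for any NDT implementation (for example the one available in PCL) and is passed in as a callable taking a source cloud, a target cloud, a grid resolution in metres and a 4x4 initial guess and returning the refined 4x4 transform. This signature and the parameter names are assumptions made for illustration, not an API defined by the patent.

```python
def two_stage_ndt_match(ndt_register, source_cloud, target_cloud, prior_T,
                        coarse_resolution=5.0, fine_resolution=1.0):
    """Coarse-to-fine NDT matching between two super-frame point clouds.

    ndt_register: callable(source, target, resolution, init_guess) -> 4x4 transform.
    prior_T:      4x4 initial guess from the GNSS/IMU prior of the platforms.
    """
    # First stage: coarse matching on a 5 m grid, seeded by the GNSS/IMU prior.
    coarse_T = ndt_register(source_cloud, target_cloud, coarse_resolution, prior_T)
    # Second stage: fine registration on a 1 m grid, seeded by the coarse result.
    fine_T = ndt_register(source_cloud, target_cloud, fine_resolution, coarse_T)
    return fine_T
```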
In one embodiment, taking the pose constraints obtained by cross-view matching of the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times as heterogeneous platform matching constraints includes:
searching the preprocessed point cloud data for the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times, acquiring the corresponding poses and performing local splicing to generate platform-matching super-frame point clouds, performing cross-view matching on the platform-matching super-frame point clouds according to the normal distributions transform algorithm, and taking the pose constraints obtained by the matching as heterogeneous platform matching constraints.
It can be understood that, when searching for the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times, firstly, for each key frame of one platform, the key frame of the other platform closest to it is searched according to its GNSS position; the algorithm complexity is the product of the numbers of key frames of the two platforms. Then, according to the GNSS positions of the key frames of the two platforms, regions where the two move close to each other and roughly in parallel are searched, which ensures that the super-frame point clouds formed by the two share certain overlapping environmental features; during cross-view matching, the super-frame point clouds can then be matched accurately according to these environmental features, which improves the calculation accuracy of the heterogeneous platform matching constraints.
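A possible nearest-neighbour association over the GNSS positions is sketched below; the brute-force double loop reflects the complexity being the product of the two key frame counts, while the distance threshold, the search direction (here, from each unmanned aerial vehicle key frame to the nearest unmanned ground vehicle key frame) and the function name are illustrative assumptions.

```python
import numpy as np

def associate_cross_platform(ugv_gnss, uav_gnss, dist_thresh=20.0):
    """Find UGV/UAV key frame pairs that observed roughly the same area.

    ugv_gnss: (N, 3) GNSS positions of the unmanned ground vehicle key frames.
    uav_gnss: (M, 3) GNSS positions of the unmanned aerial vehicle key frames.
    Returns (ugv_index, uav_index, distance) tuples; complexity is O(N * M).
    """
    pairs = []
    for m, uav_pos in enumerate(uav_gnss):
        dists = np.linalg.norm(ugv_gnss - uav_pos, axis=1)  # distances to all UGV frames
        n = int(np.argmin(dists))
        if dists[n] < dist_thresh:       # close enough to share environmental features
            pairs.append((n, m, float(dists[n])))
    return pairs
```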
It should be understood that, although the steps in the flowchart of FIG. 1 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with at least part of the other steps or of the sub-steps or stages of the other steps.
In one embodiment, an air-ground heterogeneous collaborative mapping device is provided, comprising:
a data preprocessing module, configured to construct an air-ground heterogeneous platform comprising an unmanned ground vehicle platform and an unmanned aerial vehicle platform, acquire laser radar point cloud data from a laser radar carried on the air-ground heterogeneous platform, and preprocess the laser radar point cloud data to obtain preprocessed point cloud data;
a pose factor graph construction module, configured to acquire the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to sensors carried on the air-ground heterogeneous platform, take the key frame poses as nodes and the pose constraints among the key frames as edges, and construct an air-ground collaborative pose factor graph; the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
and a high-precision map generation module, configured to perform air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splice the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map.
For the specific limitations of the air-ground heterogeneous collaborative mapping device, reference may be made to the limitations of the air-ground heterogeneous collaborative mapping method above, which are not repeated here. All or part of the modules in the air-ground heterogeneous collaborative mapping device may be implemented by software, by hardware, or by a combination thereof. The above modules may be embedded in, or independent of, a processor of the computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in FIG. 5. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements an air-ground heterogeneous collaborative mapping method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, or keys, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor which, when executing the computer program, performs the following steps:
constructing an air-ground heterogeneous platform comprising an unmanned ground vehicle platform and an unmanned aerial vehicle platform, acquiring laser radar point cloud data from a laser radar carried on the air-ground heterogeneous platform, and preprocessing the laser radar point cloud data to obtain preprocessed point cloud data;
acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to sensors carried on the air-ground heterogeneous platform, taking the key frame poses as nodes and the pose constraints among the key frames as edges, and constructing an air-ground collaborative pose factor graph; the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
and performing air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splicing the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map.
In one embodiment, a computer readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the following steps:
constructing an air-ground heterogeneous platform comprising an unmanned ground vehicle platform and an unmanned aerial vehicle platform, acquiring laser radar point cloud data from a laser radar carried on the air-ground heterogeneous platform, and preprocessing the laser radar point cloud data to obtain preprocessed point cloud data;
acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to sensors carried on the air-ground heterogeneous platform, taking the key frame poses as nodes and the pose constraints among the key frames as edges, and constructing an air-ground collaborative pose factor graph; the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
and performing air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splicing the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to fall within the scope of this description.
The above examples only represent a few embodiments of the present application, which are described specifically and in detail, but they should not be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (6)

1. An air-ground heterogeneous collaborative mapping method, characterized by comprising the following steps:
constructing an air-ground heterogeneous platform comprising an unmanned ground vehicle platform and an unmanned aerial vehicle platform, acquiring laser radar point cloud data from a laser radar carried on the air-ground heterogeneous platform, and preprocessing the laser radar point cloud data to obtain preprocessed point cloud data;
acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to sensors carried on the air-ground heterogeneous platform, taking the key frame poses as nodes and the pose constraints among the key frames as edges, and constructing an air-ground collaborative pose factor graph; wherein the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
performing air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splicing the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map;
wherein the specific steps of constructing the air-ground collaborative pose factor graph comprise:
acquiring the pose of each key frame in the preprocessed point cloud data according to the IMU and the GNSS carried on the air-ground heterogeneous platform, and taking the key frame poses as nodes in the air-ground collaborative pose factor graph;
taking global observation constraints provided by the IMU and the GNSS as pose point observation constraints; taking pose constraints calculated with a point cloud matching algorithm by the laser odometer carried on the air-ground heterogeneous platform as local adjacent frame constraints; taking pose constraints obtained by point cloud matching of key frames in the preprocessed point cloud data whose time interval exceeds a set time threshold and whose spatial distance is below a set distance threshold as closed-loop matching constraints; taking pose constraints obtained by cross-view matching of the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times as heterogeneous platform matching constraints; and combining the pose point observation constraints, local adjacent frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints to form the pose constraints among the key frames, which are taken as edges in the air-ground collaborative pose factor graph;
wherein the specific steps of acquiring the heterogeneous platform matching constraints comprise:
searching for the key frames respectively acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform when they travel to the same area at different times, acquiring the corresponding poses and performing local splicing to generate platform-matching super-frame point clouds, performing cross-view matching on the platform-matching super-frame point clouds according to a normal distributions transform algorithm, and taking the pose constraints obtained by the matching as heterogeneous platform matching constraints; specifically, when searching for the key frames acquired by the unmanned ground vehicle platform and the unmanned aerial vehicle platform when they travel to the same area at different times, firstly, for each key frame of one platform, the key frame of the other platform closest to it is searched according to its GNSS position, the algorithm complexity being the product of the numbers of key frames of the two platforms; then, according to the GNSS positions of the key frames of the two platforms, regions where the two move close to each other and roughly in parallel are searched, so that the super-frame point clouds formed by the two have overlapping environmental features;
wherein acquiring and preprocessing the laser radar point cloud data from the laser radar carried on the air-ground heterogeneous platform comprises:
acquiring unmanned ground vehicle laser radar point cloud data and unmanned aerial vehicle laser radar point cloud data from the laser radars respectively carried by the unmanned ground vehicle platform and the unmanned aerial vehicle platform in the air-ground heterogeneous platform;
performing interval sampling on the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to acquire key frames in the laser radar point cloud data;
performing point cloud intra-frame compensation of the point cloud distortion caused by the motion of the air-ground heterogeneous platform in the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to obtain intra-frame compensated point cloud data;
wherein performing point cloud intra-frame compensation of the point cloud distortion caused by the motion of the air-ground heterogeneous platform in the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively, to obtain intra-frame compensated point cloud data, comprises:
acquiring the pose of the air-ground heterogeneous platform according to the IMU and the GNSS carried on the air-ground heterogeneous platform, and performing point cloud intra-frame compensation on the unmanned ground vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively according to the pose of the air-ground heterogeneous platform, to obtain the intra-frame compensated point cloud data, expressed as

$$\tilde{P}^{L} = T_{I}^{L}\,\Delta T_{I}^{t \rightarrow t+\Delta t}\,T_{L}^{I}\,P^{L}$$

where $\tilde{P}^{L}$ denotes the intra-frame compensated point cloud data in the laser radar coordinate system $L$, $T_{I}^{L}$ denotes the transformation between the carrier coordinate system $I$ in which the IMU and GNSS are located and the laser radar coordinate system $L$, $T_{L}^{I}$ denotes the transformation between the laser radar coordinate system $L$ and the carrier coordinate system $I$, $P^{L}$ denotes the laser radar point cloud data in the laser radar coordinate system $L$ before intra-frame compensation, $\Delta T_{I}^{t \rightarrow t+\Delta t}$ denotes the pose transformation matrix of the laser radar point cloud data in the carrier coordinate system $I$ from time $t$ to time $t+\Delta t$, and $\Delta t$ denotes the acquisition time interval of the laser radar point cloud data.
2. The method of claim 1, wherein the performing interval sampling on the unmanned vehicle laser radar point cloud data and the unmanned vehicle laser radar point cloud data to obtain a key frame in the laser radar point cloud data includes:
performing interval sampling on the unmanned vehicle laser radar point cloud data according to a preset distance interval to acquire key frames in the unmanned vehicle laser radar point cloud data;
and performing interval sampling on the unmanned aerial vehicle laser radar point cloud data according to a preset distance interval and a preset attitude angle interval to acquire key frames in the unmanned aerial vehicle laser radar point cloud data.
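An illustrative sketch of this interval sampling is shown below; the pose representation, the threshold values and the function name are assumptions:

```python
import numpy as np

def select_keyframes(poses, dist_interval, angle_interval=None):
    """Pick keyframe indices by travelled distance and, optionally, attitude change.

    poses is assumed to be a list of (position (3,), rotation matrix (3, 3)) pairs;
    dist_interval is in metres and angle_interval in radians.
    """
    keyframes = [0]
    for i in range(1, len(poses)):
        p_ref, R_ref = poses[keyframes[-1]]
        p_i, R_i = poses[i]
        moved_far = np.linalg.norm(p_i - p_ref) >= dist_interval
        turned_far = False
        if angle_interval is not None:
            # relative rotation angle between the last keyframe and the current pose
            cos_a = (np.trace(R_ref.T @ R_i) - 1.0) / 2.0
            turned_far = np.arccos(np.clip(cos_a, -1.0, 1.0)) >= angle_interval
        if moved_far or turned_far:
            keyframes.append(i)
    return keyframes
```

Under this sketch the unmanned vehicle platform would pass only a distance interval, e.g. select_keyframes(ugv_poses, 2.0), while the unmanned aerial vehicle platform would also pass an attitude-angle interval, e.g. select_keyframes(uav_poses, 2.0, np.deg2rad(10.0)); both threshold values are placeholders.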
3. The method according to claim 1, wherein taking, as the closed-loop matching constraint, the pose constraint obtained by performing point cloud matching on key frames in the preprocessed point cloud data whose time interval exceeds a set time threshold and whose spatial distance is below a set distance threshold comprises:
searching for key frames in the preprocessed point cloud data whose time interval exceeds the set time threshold and whose spatial distance is below the set distance threshold, acquiring the corresponding poses and performing local splicing to generate closed-loop key-frame super-frame point clouds, performing point cloud matching on the closed-loop key-frame super-frame point clouds according to a normal distribution transformation algorithm, and taking the pose constraint obtained by the matching as the closed-loop matching constraint.
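A brute-force sketch of this closed-loop candidate search follows; the stamp and pos field names, the pair-wise scan and the thresholds are assumptions:

```python
import numpy as np

def find_loop_candidates(keyframes, time_thresh, dist_thresh):
    """Return keyframe index pairs whose time gap exceeds time_thresh while
    their spatial distance stays below dist_thresh."""
    pairs = []
    for i in range(len(keyframes)):
        for j in range(i + 1, len(keyframes)):
            gap = abs(keyframes[j]["stamp"] - keyframes[i]["stamp"])
            dist = np.linalg.norm(keyframes[j]["pos"] - keyframes[i]["pos"])
            if gap > time_thresh and dist < dist_thresh:
                pairs.append((i, j))
    return pairs
```

Each selected pair would then be expanded into closed-loop super-frame point clouds and registered with the normal distribution transformation to produce the closed-loop matching constraint.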
4. An air-ground heterogeneous collaborative mapping device, characterized in that the device comprises:
the data preprocessing module is used for constructing an air-ground heterogeneous platform comprising an unmanned vehicle platform and an unmanned aerial vehicle platform, acquiring laser radar point cloud data according to the laser radars carried on the air-ground heterogeneous platform, and preprocessing the laser radar point cloud data to obtain preprocessed point cloud data;
the pose factor graph construction module is used for acquiring the pose of each key frame in the preprocessed point cloud data and the pose constraints among the key frames according to the sensors carried on the air-ground heterogeneous platform, and constructing an air-ground collaborative pose factor graph with the key frame poses as nodes and the pose constraints among the key frames as edges; the sensors comprise a laser odometer, an IMU and a GNSS, the pose constraints comprise pose point observation constraints and pose edge observation constraints, and the pose edge observation constraints comprise local adjacent-frame constraints, closed-loop matching constraints and heterogeneous platform matching constraints;
the high-precision map generation module is used for performing air-ground collaborative optimization on the air-ground collaborative pose factor graph with a pose graph optimizer to generate optimized key frame poses, and splicing the preprocessed point cloud data according to the optimized key frame poses to generate a high-precision map;
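The final splicing step could be sketched as follows, assuming each key frame cloud is an (N, 3) array in its own lidar frame and each optimized pose a 4x4 matrix in the map frame; the voxel-hash down-sampling is an added assumption to keep the map compact:

```python
import numpy as np

def splice_map(keyframe_clouds, optimized_poses, voxel=0.2):
    """Transform every keyframe cloud with its optimized pose and merge them,
    keeping one representative point per voxel."""
    voxels = {}
    for cloud, T in zip(keyframe_clouds, optimized_poses):
        pts = (T[:3, :3] @ cloud.T).T + T[:3, 3]
        for p in pts:
            voxels[tuple(np.floor(p / voxel).astype(int))] = p
    return np.array(list(voxels.values()))
```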
wherein the constructing of the air-ground collaborative pose factor graph comprises the following specific steps:
acquiring the pose of each key frame in the preprocessed point cloud data according to the IMU and the GNSS carried on the air-ground heterogeneous platform, and taking the key frame poses as nodes in the air-ground collaborative pose factor graph;
taking the global observation constraints provided by the IMU and the GNSS as the pose point observation constraints; taking the pose constraints calculated via a point cloud matching algorithm by the laser odometer carried on the air-ground heterogeneous platform as the local adjacent-frame constraints; taking the pose constraints obtained by performing point cloud matching on key frames in the preprocessed point cloud data whose time interval exceeds a set time threshold and whose spatial distance is below a set distance threshold as the closed-loop matching constraints; taking the pose constraints obtained by performing cross-view matching on key frames respectively acquired when the unmanned aerial vehicle platform and the unmanned vehicle platform of the air-ground heterogeneous platform travel to the same area at different times as the heterogeneous platform matching constraints; and forming the pose constraints among the key frames from the pose point observation constraints, the local adjacent-frame constraints, the closed-loop matching constraints and the heterogeneous platform matching constraints, and taking the pose constraints among the key frames as edges in the air-ground collaborative pose factor graph;
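The claims do not name a particular pose graph optimizer; as one possible realization, the factor graph described here, with key frame poses as nodes, GNSS/IMU priors as pose point observation constraints, and adjacent-frame, closed-loop and heterogeneous-platform relative poses as pose edge constraints, could be assembled with GTSAM roughly as follows; the library choice, noise sigmas and indexing scheme are assumptions:

```python
import numpy as np
import gtsam

def optimize_pose_graph(keyframe_poses, gnss_priors, odom_edges, loop_edges, cross_edges):
    """keyframe_poses : list of 4x4 initial poses, globally indexed over both platforms
    gnss_priors       : list of (index, 4x4 pose) absolute observations
    *_edges           : lists of (i, j, 4x4 relative pose) constraints"""
    graph = gtsam.NonlinearFactorGraph()
    initial = gtsam.Values()
    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1] * 3 + [0.5] * 3))
    edge_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 3 + [0.1] * 3))

    for k, T in enumerate(keyframe_poses):            # nodes: key frame poses
        initial.insert(k, gtsam.Pose3(T))
    for k, T in gnss_priors:                          # pose point observation constraints
        graph.add(gtsam.PriorFactorPose3(k, gtsam.Pose3(T), prior_noise))
    for i, j, T_ij in odom_edges + loop_edges + cross_edges:  # pose edge constraints
        graph.add(gtsam.BetweenFactorPose3(i, j, gtsam.Pose3(T_ij), edge_noise))

    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    return [result.atPose3(k).matrix() for k in range(len(keyframe_poses))]
```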
The specific steps of acquiring the heterogeneous platform matching constraint comprise:
searching for key frames respectively acquired by the unmanned aerial vehicle platform and the unmanned vehicle platform of the air-ground heterogeneous platform when they travel to the same area at different times, acquiring the corresponding poses and performing local splicing to generate super-frame point clouds for cross-platform matching, performing cross-view matching on the super-frame point clouds according to a normal distribution transformation algorithm, and taking the pose constraint obtained by the matching as the heterogeneous platform matching constraint; specifically, when searching for the unmanned aerial vehicle platform and the unmanned vehicle platform travelling to the same area at different times, firstly, for each key frame of one platform, the closest key frame of the other platform is searched according to their GNSS positions, with an algorithm complexity equal to the product of the numbers of key frames of the two platforms; then, according to the GNSS positions of the two sets of key frames, segments where the two platforms are close to each other and moving in parallel are searched, so that the super-frame point clouds formed from them contain overlapping environmental features;
wherein the acquiring of laser radar point cloud data according to the laser radars carried on the air-ground heterogeneous platform and the preprocessing thereof comprise the following steps:
respectively acquiring unmanned vehicle laser radar point cloud data and unmanned aerial vehicle laser radar point cloud data according to the laser radars respectively carried by the unmanned vehicle platform and the unmanned aerial vehicle platform of the air-ground heterogeneous platform;
respectively performing interval sampling on the unmanned vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data to acquire key frames in the laser radar point cloud data;
respectively performing point cloud intra-frame compensation on the unmanned vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data to correct the point cloud distortion caused by the motion of the air-ground heterogeneous platform, obtaining intra-frame compensated point cloud data;
wherein the respectively performing point cloud intra-frame compensation on the unmanned vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data to correct the point cloud distortion caused by the motion of the air-ground heterogeneous platform comprises the following steps:
acquiring the pose of the air-ground heterogeneous platform according to the IMU and the GNSS carried on the air-ground heterogeneous platform, and performing point cloud intra-frame compensation on the unmanned vehicle laser radar point cloud data and the unmanned aerial vehicle laser radar point cloud data respectively according to the pose of the air-ground heterogeneous platform, to obtain the intra-frame compensated point cloud data, expressed as

$$\tilde{\mathcal{P}}^{L} = T_{I}^{L}\,\Delta T_{t \rightarrow t+\Delta t}^{I}\,T_{L}^{I}\,\mathcal{P}^{L}$$

wherein $\tilde{\mathcal{P}}^{L}$ represents the intra-frame compensated point cloud data in the laser radar coordinate system $L$; $T_{I}^{L}$ represents the transformation from the carrier coordinate system $I$, in which the IMU and the GNSS are located, to the laser radar coordinate system $L$; $T_{L}^{I}$ represents the transformation from the laser radar coordinate system $L$ to the carrier coordinate system $I$; $\mathcal{P}^{L}$ represents the laser radar point cloud data in the laser radar coordinate system $L$ before intra-frame compensation; $\Delta T_{t \rightarrow t+\Delta t}^{I}$ represents the pose transformation matrix, expressed in the carrier coordinate system $I$, from the laser radar point cloud acquisition time $t$ to the time $t+\Delta t$; and $\Delta t$ represents the acquisition time interval of the laser radar point cloud data.
5. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 3 when the computer program is executed.
6. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 3.
CN202311479552.0A 2023-11-08 2023-11-08 Air-ground heterogeneous collaborative mapping method, device, equipment and storage medium Active CN117191005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311479552.0A CN117191005B (en) 2023-11-08 2023-11-08 Air-ground heterogeneous collaborative mapping method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117191005A CN117191005A (en) 2023-12-08
CN117191005B true CN117191005B (en) 2024-01-30

Family

ID=89005671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311479552.0A Active CN117191005B (en) 2023-11-08 2023-11-08 Air-ground heterogeneous collaborative mapping method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117191005B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019018315A1 (en) * 2017-07-17 2019-01-24 Kaarta, Inc. Aligning measured signal data with slam localization data and uses thereof
CN113470089A (en) * 2021-07-21 2021-10-01 中国人民解放军国防科技大学 Cross-domain cooperative positioning and mapping method and system based on three-dimensional point cloud
CN116989772A (en) * 2023-09-26 2023-11-03 北京理工大学 Air-ground multi-mode multi-agent cooperative positioning and mapping method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11313684B2 (en) * 2016-03-28 2022-04-26 Sri International Collaborative navigation and mapping
US20230136329A1 (en) * 2021-10-28 2023-05-04 Emesent Pty Ltd Target detection in a point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LiDAR-Based High-Precision Mapping and GNSS-Denied Localization for UAV; Zhiming Tu et al.; Proceedings of the 2022 International Conference on Autonomous Unmanned Systems (ICAUS 2022); 2977-2987 *
A laser SLAM *** based on ground-air viewpoint information fusion; Zhang Man et al.; Robot (机器人); Vol. 45, No. 5; 568-580 *

Also Published As

Publication number Publication date
CN117191005A (en) 2023-12-08

Similar Documents

Publication Publication Date Title
EP3779358B1 (en) Map element extraction method and apparatus
CN109211251B (en) Instant positioning and map construction method based on laser and two-dimensional code fusion
CN111402339B (en) Real-time positioning method, device, system and storage medium
CN113593017B (en) Method, device, equipment and storage medium for constructing surface three-dimensional model of strip mine
CN113654555A (en) Automatic driving vehicle high-precision positioning method based on multi-sensor data fusion
CN104992074A (en) Method and device for splicing strip of airborne laser scanning system
CN114964212B (en) Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration
CN113110455A (en) Multi-robot collaborative exploration method, device and system for unknown initial state
AU2020375559B2 (en) Systems and methods for generating annotations of structured, static objects in aerial imagery using geometric transfer learning and probabilistic localization
CN116429116A (en) Robot positioning method and equipment
CN115046542A (en) Map generation method, map generation device, terminal device and storage medium
CN113960614A (en) Elevation map construction method based on frame-map matching
CN116222579B (en) Unmanned aerial vehicle inspection method and system based on building construction
CN117191005B (en) Air-ground heterogeneous collaborative mapping method, device, equipment and storage medium
CN116466356A (en) Multi-laser global positioning method and system
Hanyu et al. Absolute pose estimation of UAV based on large-scale satellite image
CN112747752B (en) Vehicle positioning method, device, equipment and storage medium based on laser odometer
CN114839975A (en) Autonomous exploration type semantic map construction method and system
CN113403942A (en) Label-assisted bridge detection unmanned aerial vehicle visual navigation method
Vauchey et al. Particle filter meets hybrid octrees: an octree-based ground vehicle localization approach without learning
CN113379915A (en) Driving scene construction method based on point cloud fusion
Pang et al. FLAME: Feature-likelihood based mapping and localization for autonomous vehicles
CN114518108B (en) Positioning map construction method and device
CN114323038B (en) Outdoor positioning method integrating binocular vision and 2D laser radar
CN113758491B (en) Relative positioning method and system based on multi-sensor fusion unmanned vehicle and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant