CN116824080A - Method for realizing SLAM point cloud mapping of power transmission corridor based on multi-sensor fusion


Info

Publication number: CN116824080A
Application number: CN202310817914.6A
Authority: CN (China)
Prior art keywords: point cloud, frame, power transmission, laser radar, camera
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 叶润 (Ye Run), 闫斌 (Yan Bin), 周小佳 (Zhou Xiaojia), 何雪梅 (He Xuemei), 赵浩泽 (Zhao Haoze)
Current and original assignee: University of Electronic Science and Technology of China (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed 2023-07-05 by University of Electronic Science and Technology of China; published as CN116824080A on 2023-09-29

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a method for realizing SLAM point cloud mapping of a power transmission corridor based on multi-sensor fusion, belonging to the technical field of computer vision. As an algorithm based on the SLAM idea, it achieves real-time dense mapping with a low computational load, greatly reducing the computing power required for three-dimensional reconstruction, and the constructed point cloud map well represents scene information of the power transmission corridor, such as power transmission towers, power transmission lines and surrounding tree barriers. By fusing the information of three sensors (vision, solid-state laser radar and IMU inertial navigation unit), the visual images contain accurate scene depth data at the hundred-meter scale, and the IMU assists positioning when image information is scarce and the visual odometer fails.

Description

Method for realizing SLAM point cloud mapping of power transmission corridor based on multi-sensor fusion
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to constructing a three-dimensional point cloud map of a power transmission corridor based on a multi-sensor fusion SLAM algorithm.
Background
Power transmission corridors erected in the field are prone to safety hazards caused by tree barriers, foreign objects hanging on lines, ice coating and the like, affecting normal production and the safety of people's lives and property. Using an unmanned aerial vehicle carrying multiple sensors to reconstruct a three-dimensional map of a power transmission corridor in real time intuitively describes information such as line sag and tree barriers in the scene and helps power grid maintainers accurately assess transmission lines and towers; it is therefore an important research direction in the field of power inspection. At present, three-dimensional maps are mainly constructed in two ways. The first reconstructs a three-dimensional point cloud or mesh offline by finding relations between image sets based on multi-view stereo geometry: a large amount of image data is captured in the three-dimensional space manually or by machine, and three-dimensional reconstruction is then completed through depth estimation, point cloud processing, mesh reconstruction and optimization, texture mapping and other steps. The second inputs environment information in real time and, by means of a SLAM algorithm, performs pose estimation and reconstructs a three-dimensional point cloud of the scene. The first method is strongly limited by system computing power; its computational complexity requires hardware acceleration and cannot meet real-time requirements. The second method demands little computing power, but the reconstructed point cloud map is sparse and its level of detail needs improvement; moreover, because the system pose is an estimated value, errors accumulate, and phenomena such as point cloud ghosting easily occur in the large scenes of power transmission corridors, greatly degrading the scene reconstruction effect.
Disclosure of Invention
The invention aims to provide a three-dimensional point cloud map reconstruction algorithm for power transmission corridors based on a multi-sensor fusion SLAM algorithm. By fusing multi-sensor data input, the method guarantees the real-time performance of the mapping process while constructing a dense three-dimensional point cloud map with good scene restoration; finally, a globally consistent three-dimensional point cloud map of the power transmission corridor is obtained through loop detection based on a bag-of-words model.
To achieve the above task and remedy the defects of single-sensor SLAM technology, the invention provides a method for realizing SLAM point cloud mapping of a power transmission corridor based on multi-sensor fusion, comprising the following steps:
step 1: building a data acquisition platform, wherein the data acquisition platform comprises: the ROS robot operating system, an unmanned aerial vehicle, a solid-state laser radar, a monocular industrial camera, an IMU inertial navigation unit and an edge computer;
the ROS is a robot operating system running under Ubuntu; data collected in this environment carry uniform ROS time stamps, and soft synchronization of the data can be realized by matching the time stamps of different sensors;
the unmanned aerial vehicle carries the data acquisition equipment, including the edge computer, the solid-state laser radar, the monocular industrial camera and the IMU inertial navigation unit;
the solid-state laser radar collects depth point cloud data of the power transmission corridor scene;
the monocular industrial camera collects visual image data of the power transmission corridor;
the IMU inertial navigation unit collects real-time motion data of the unmanned aerial vehicle during flight, including linear acceleration and angular acceleration, helping the SLAM system initialize its motion state faster and estimate the pose more accurately;
the ROS system runs on the edge computer to provide drivers and data integration for the sensors;
Step 2: sensor calibration and information synchronization;
taking the monocular industrial camera as the center, joint calibration with the solid-state laser radar and the IMU inertial navigation unit is completed, determining the intrinsic parameters of the monocular industrial camera and its extrinsic parameters with respect to the solid-state laser radar and the IMU, and converting the information of the three sensors into a unified camera coordinate system; multi-sensor data with uniform ROS time stamps are collected through the topic mechanism of the ROS operating system (a soft-synchronization sketch is given below);
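As an illustration of the soft synchronization above, the following minimal sketch matches camera, laser radar and IMU messages by their ROS time stamps using the message_filters ApproximateTimeSynchronizer; the topic names and the 50 ms slop are assumptions for the example, not values fixed by the invention.

```python
# Hedged sketch: soft synchronization of camera, lidar and IMU data by
# matching ROS time stamps (steps 1 and 2). Topic names are assumed.
import rospy
import message_filters
from sensor_msgs.msg import Image, Imu, PointCloud2

def synced_callback(image_msg, cloud_msg, imu_msg):
    # The three messages arrive with header stamps within the allowed slop,
    # so they can be fused as one multi-sensor measurement.
    rospy.loginfo("synced stamps: %.3f %.3f %.3f",
                  image_msg.header.stamp.to_sec(),
                  cloud_msg.header.stamp.to_sec(),
                  imu_msg.header.stamp.to_sec())

rospy.init_node("multisensor_sync")
image_sub = message_filters.Subscriber("/camera/image_raw", Image)
cloud_sub = message_filters.Subscriber("/livox/lidar", PointCloud2)
imu_sub = message_filters.Subscriber("/imu/data", Imu)

# ApproximateTimeSynchronizer fires the callback for message triples whose
# stamps differ by less than `slop` seconds: the soft synchronization above.
sync = message_filters.ApproximateTimeSynchronizer(
    [image_sub, cloud_sub, imu_sub], queue_size=10, slop=0.05)
sync.registerCallback(synced_callback)
rospy.spin()
```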
step 3: estimating the front end pose;
step 3.1: in the feature extraction stage, preprocessing the original image acquired by the sensor and applying an ORB feature extraction and matching algorithm based on CLAHE optimization;
step 3.2: extracting FAST corners of the image, describing them with BRIEF descriptors, matching features between different frames by the Hamming distance, and eliminating mismatched point pairs with RANSAC; inter-frame motion estimation is then performed with the EPnP algorithm on the ORB feature point matching results of the two image frames (a sketch is given after step 3.3.2);
step 3.3: selecting key frames;
step 3.3.1: deciding whether the current image frame becomes a key frame according to the instantaneous angular acceleration and linear acceleration data of the system acquired by the IMU inertial navigation unit;
step 3.3.2: constructing concentric cones for each three-dimensional map point and counting the change of the matching feature points of the previous key frame and the current frame relative to the cone region of that map point; from this the concept of a feature center of gravity is proposed, and from that concept an adaptive threshold for key-frame selection is derived, while the uniform distribution of the feature points is guaranteed;
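The following minimal sketch illustrates steps 3.1 and 3.2 with OpenCV: CLAHE preprocessing, ORB extraction, Hamming matching with cross-check, and EPnP inside a RANSAC loop. The intrinsic matrix K, the array pts3d_prev of 3D points indexed by the previous frame's keypoints, and all parameter values are illustrative assumptions rather than values from this specification.

```python
# Hedged sketch of steps 3.1-3.2 (CLAHE + ORB + RANSAC + EPnP).
import cv2
import numpy as np

def estimate_interframe_pose(gray_prev, gray_curr, pts3d_prev, K):
    # CLAHE preprocessing improves ORB repeatability on low-contrast images.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    g1, g2 = clahe.apply(gray_prev), clahe.apply(gray_curr)

    orb = cv2.ORB_create(nfeatures=2000)  # FAST corners + BRIEF descriptors
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)

    # Hamming-distance matching; cross-check keeps only mutual best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # 3D points observed in the previous frame vs. their 2D positions now
    # (pts3d_prev[i] is the assumed map point of the i-th previous keypoint).
    obj = np.float32([pts3d_prev[m.queryIdx] for m in matches])
    img = np.float32([kp2[m.trainIdx].pt for m in matches])

    # EPnP inside RANSAC rejects the remaining mismatched pairs (step 3.2).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, flags=cv2.SOLVEPNP_EPNP,
        iterationsCount=100, reprojectionError=3.0)
    return ok, rvec, tvec, inliers
```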
step 4: back-end map construction;
step 4.1: calculating the motion state of the system using the IMU pre-integration principle, and compensating the solid-state laser radar point cloud distortion caused by the system's own motion;
step 4.2: converting the solid-state laser radar point cloud data into the camera coordinate system through the calibrated transformation matrix, assigning depth information to the corresponding pixels;
step 4.3: to remedy the relative sparsity of solid-state laser radar point cloud data, moderately expanding the point cloud depth through image dilation based on a diamond kernel;
step 4.4: back-projecting the color information of the image pixels fused with the processed solid-state laser radar depth point cloud into a unified world coordinate system at the frame pose estimated by the system front end;
step 4.5: splicing the point clouds corresponding to multiple image frames to obtain the point cloud in the world coordinate system (a back-projection and splicing sketch is given below);
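A minimal sketch of the back-projection and splicing of steps 4.4 and 4.5 follows, assuming a per-frame depth image already fused with the RGB image, the intrinsic matrix K and a 4 × 4 front-end pose T_world_cam; the names are illustrative only.

```python
# Hedged sketch of steps 4.4-4.5: back-project depth pixels with their RGB
# color to the world frame at each frame's estimated pose, then concatenate.
import numpy as np

def backproject_frame(depth, rgb, K, T_world_cam):
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                        # only pixels that received depth
    z = depth[valid]
    x = (us[valid] - K[0, 2]) * z / K[0, 0]  # inverse pinhole model
    y = (vs[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # Nx4 homogeneous
    pts_world = (T_world_cam @ pts_cam.T).T[:, :3]
    return pts_world, rgb[valid]             # per-point color from the image

def splice(frames, K):
    # Step 4.5: accumulate per-frame clouds into one world-frame point cloud.
    clouds = [backproject_frame(d, c, K, T) for d, c, T in frames]
    pts = np.concatenate([p for p, _ in clouds])
    cols = np.concatenate([c for _, c in clouds])
    return pts, cols
```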
step 5: back-end nonlinear optimization: the algorithm adopts loop detection based on a bag-of-words model; when the robot detects that it is passing through a previously visited region, the global pose is corrected through visual constraints between the current frame and historical frames, reducing the accumulated error; the three-dimensional point cloud map is updated in real time as the global pose is corrected, so that a globally consistent three-dimensional point cloud map of the power transmission corridor is constructed.
Further, the inter-frame motion estimation method in step 3.2 is as follows:
first, matching point pairs are randomly selected using the EPnP algorithm and the inter-frame transformation matrix is iteratively optimized by minimizing the reprojection error; inter-frame motion is then tracked based on a constant-velocity motion model;
the construction method of the concentric cones in step 3.3.2 is as follows:
with each three-dimensional map point as the center and the line connecting the map point and the key frame as the axis, concentric cones are constructed with opening angles of 10, 20, 30 and 40 degrees respectively; the value of the variable threshold is expressed as:
Th = (1 + γ - β - α) × Th_i
where Th_i is the initial value set for the adaptive threshold; in its expression, M_c is the number of matching point pairs between the current frame and the nearest key frame, M_s is the number of matching point pairs between the similar frame and the nearest key frame, A_c denotes the number of all feature points of the current frame, A_s is the number of feature points of the similar frame, and N_c is the number of feature points whose region changed between the similar frame and the nearest key frame; γ represents the change ratio of the matching point pairs of the current frame and the reference frame, and the parameters β and α are standards set to lower the adaptive threshold and guarantee the robustness of the system in actual experiments.
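The sketch below illustrates the concentric-cone test and the variable threshold. Because the expressions for Th_i, γ, β and α are not reproduced in the text, they are passed in as precomputed values; only the cone construction and the formula Th = (1 + γ - β - α) × Th_i are taken from the description.

```python
# Hedged sketch of step 3.3.2: concentric cones around the map-point-to-
# key-frame axis and the adaptive key-frame threshold.
import numpy as np

CONE_ANGLES_DEG = (10.0, 20.0, 30.0, 40.0)  # opening angles from the text

def cone_band(map_pt, keyframe_center, cam_center):
    """Index (0-3) of the concentric cone containing the current view ray,
    measured from the map-point-to-key-frame axis; None outside 40 degrees."""
    axis = keyframe_center - map_pt
    ray = cam_center - map_pt
    c = axis @ ray / (np.linalg.norm(axis) * np.linalg.norm(ray))
    ang = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    for i, a in enumerate(CONE_ANGLES_DEG):
        if ang <= a:
            return i
    return None

def adaptive_threshold(th_init, gamma, beta, alpha):
    # Th = (1 + gamma - beta - alpha) * Th_i, the variable threshold above.
    return (1.0 + gamma - beta - alpha) * th_init

def is_new_keyframe(changed_matches, th_init, gamma, beta, alpha):
    # Insert a key frame once the cone-region changes exceed the threshold.
    return changed_matches > adaptive_threshold(th_init, gamma, beta, alpha)
```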
Further, in one implementation, before the motion state of the system is calculated in step 4.1, the following processing is first performed:
because the IMU inertial navigation unit and the solid-state laser radar differ in acquisition frequency and acquisition time, the IMU data are processed by linear interpolation in order to obtain the motion information at the time of the laser radar point cloud data; at a time t_{1+ηs} between times t_1 and t_{1+s}, the position p_{1+ηs} is expressed as:
p_{1+ηs} = (1 - η) p_1 + η p_{1+s}
where p_1 is the position at time t_1 and p_{1+s} is the position at time t_{1+s};
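A minimal sketch of this interpolation follows; the sample times and positions are illustrative, and rotation interpolation (for example slerp), which the text does not detail, is omitted.

```python
# Hedged sketch: linear interpolation of IMU positions to a lidar timestamp.
import numpy as np

def interpolate_position(t, t1, t1s, p1, p1s):
    """p_{1+eta*s} = (1 - eta) * p_1 + eta * p_{1+s}, with eta in [0, 1]."""
    eta = float(np.clip((t - t1) / (t1s - t1), 0.0, 1.0))
    return (1.0 - eta) * np.asarray(p1) + eta * np.asarray(p1s)

# Usage: platform position when a lidar point stamped at 0.105 s was taken,
# given IMU samples at 0.100 s and 0.110 s (illustrative values only).
p = interpolate_position(0.105, 0.100, 0.110,
                         [0.0, 0.0, 0.0], [0.1, 0.0, 0.02])
```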
the dilation method in step 4.3 is as follows:
image A is dilated with B, where B is a structuring element (convolution template) in the shape of a diamond kernel of size 5 × 5;
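The sketch below constructs the 5 × 5 diamond structuring element explicitly (OpenCV has no built-in diamond shape) and applies it to the projected sparse depth image; using dilation to spread valid depths to neighboring pixels is an assumption about how step 4.3 would be realized.

```python
# Hedged sketch of step 4.3: diamond-kernel dilation of the sparse depth map.
import cv2
import numpy as np

def diamond_kernel(size=5):
    """Diamond (Manhattan-distance ball) structuring element, here 5x5."""
    r = size // 2
    ys, xs = np.ogrid[-r:r + 1, -r:r + 1]
    return (np.abs(xs) + np.abs(ys) <= r).astype(np.uint8)

def dilate_depth(depth_img):
    # Dilation spreads each valid depth pixel over its diamond neighborhood,
    # filling zero (depthless) pixels and densifying the projected depth.
    return cv2.dilate(depth_img, diamond_kernel(5))
```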
in step 4.4, the coordinate system conversion method is as follows: the conversion between the visual and laser point cloud coordinates is
P_camera = P_lidar × T_lidar_to_camera
where the point set of the point cloud is P_lidar = {p_1, p_2, ..., p_n} and T_lidar_to_camera is the transformation matrix from the laser radar to the camera coordinate system obtained by calibration; the point cloud in the camera coordinate system is then converted into planar pixel coordinates according to the camera intrinsic matrix K, completing the depth assignment of the RGB image:
P_camera-2D = P_camera × K.
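The following sketch applies the two formulas above, treating points as row vectors to mirror the right-multiplication convention P_camera = P_lidar × T; the function and variable names are illustrative assumptions.

```python
# Hedged sketch of steps 4.2/4.4: lidar-to-camera transform and pinhole
# projection to assign per-pixel depth to the RGB image.
import numpy as np

def assign_depth(pts_lidar, T_lidar_to_camera, K, img_shape):
    n = pts_lidar.shape[0]
    pts_h = np.hstack([pts_lidar, np.ones((n, 1))])  # Nx4 homogeneous rows
    # Row-vector convention mirroring P_camera = P_lidar x T above; for a
    # conventional column-vector 4x4 extrinsic, pass its transpose instead.
    pts_cam = (pts_h @ T_lidar_to_camera)[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]             # keep points in front

    uvw = pts_cam @ K.T                              # pinhole projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    h, w = img_shape
    depth = np.zeros((h, w), dtype=np.float32)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[ok], u[ok]] = pts_cam[ok, 2]             # depth = camera-frame z
    return depth
```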
the invention realizes a real-time three-dimensional point cloud map construction algorithm under a power transmission corridor scene in a multi-sensor fusion SLAM mode, and has the advantages that compared with the prior art, the invention has the following advantages: 1) The invention is an algorithm based on SLAM idea, can realize real-time dense mapping effect with low operand, greatly reduce the demand of three-dimensional reconstruction on calculation force, and the constructed point cloud map can better represent scene information of a power transmission corridor, such as a power transmission tower, a power transmission line, surrounding tree barriers and the like; 2) The invention fuses the information of three sensors of the vision, the solid-state laser radar and the IMU inertial navigation unit, so that the vision image contains hundred-meter-level scene accurate depth data, and the positioning is assisted to be completed under the condition that the image information is not abundant and the vision odometer fails through the IMU.
Drawings
Fig. 1 is a flowchart of the whole power transmission corridor point cloud map reconstruction algorithm of the present invention;
FIG. 2 is a schematic diagram of a working flow of a front-end odometer of the power transmission corridor point cloud map reconstruction algorithm of the present invention;
fig. 3 is a schematic diagram of the observation relationship between map points and frames in the power transmission corridor point cloud map reconstruction algorithm of the present invention;
fig. 4 is a flowchart of the back-end map construction of the power transmission corridor point cloud map reconstruction algorithm of the present invention;
fig. 5 is a data acquisition diagram of the power transmission corridor point cloud map reconstruction algorithm;
fig. 6 shows details of the corner tower and the double-split power transmission conductor in the point cloud models reconstructed by the three algorithms.
Description of the embodiments
Step 1: building the data acquisition platform: an ROS robot operating system is configured on an edge computer running the Ubuntu operating system; the solid-state laser radar device is connected through the LAN port, and the monocular industrial camera and the IMU inertial navigation unit are connected through USB ports; the whole set of equipment is then mounted on the unmanned aerial vehicle with fixed relative positions, and the relevant drivers are configured to start all sensors for data acquisition.
Step 1-1: install the Ubuntu 18 operating system on the edge computer and configure the ROS robot operating system; configure the solid-state laser radar and monocular industrial camera drivers.
Step 1-2: fix the solid-state laser radar and the camera in parallel on the carrying unmanned aerial vehicle, ensuring that they share a common view of the scene; place the IMU inertial navigation unit parallel to the camera, ensuring that the relative change of their coordinate systems is small; connect the computer to the sensors and verify that they operate normally.
Step 2: sensor calibration and information synchronization: taking the monocular industrial camera sensor as the center, complete joint calibration with the solid-state laser radar and the IMU inertial navigation unit, determine the camera intrinsic parameters and the extrinsic parameters with respect to the solid-state laser radar and the IMU, and convert the information of the three sensors into a unified camera coordinate system; collect multi-sensor data with uniform ROS time stamps through the topic mechanism of the ROS operating system.
Step 2-1: calibrate the camera intrinsic parameters with the kalibr package; calibrate the IMU intrinsics, namely its noise density and random walk noise, with imu_utils; and calibrate the extrinsic parameters between the camera and the IMU.
Step 2-2: perform automatic vision-laser joint calibration with the Livox_camera_calib algorithm to obtain the transformation matrix.
Step 2-3: record rosbag files of the ROS topics to acquire scene information.
Step 3: front end pose estimation:
1) In the feature extraction stage, preprocess the image with the CLAHE-optimized ORB feature extraction and matching algorithm;
2) Perform inter-frame motion estimation with the EPnP algorithm according to the ORB feature point matching results of the two image frames;
3) In the key frame selection stage, on one hand decide whether the current image frame becomes a key frame from the instantaneous angular acceleration and linear acceleration data acquired by the IMU inertial navigation unit; on the other hand, construct concentric cones for each three-dimensional map point, count the change of the matching feature points of the previous key frame and the current frame relative to the cone region of that map point, propose from this the concept of a feature center of gravity, derive an adaptive threshold from that concept, and at the same time guarantee the uniform distribution of the feature points.
Step 4: back-end map construction:
1) Calculate the motion state of the system using the IMU pre-integration principle and compensate the solid-state laser radar point cloud distortion caused by the system's own motion;
2) Convert the solid-state laser radar point cloud data into the camera coordinate system through the calibrated transformation matrix to assign depth information to the corresponding pixels;
3) To remedy the relative sparsity of the solid-state laser radar point cloud data, moderately expand the point cloud depth through image dilation based on a diamond kernel;
4) Back-project the color information of the image pixels fused with the processed solid-state laser radar depth point cloud into a unified world coordinate system at the frame pose estimated by the system front end;
5) Splice the point clouds corresponding to multiple image frames to obtain the point cloud in the world coordinate system.
Step 5: back-end nonlinear optimization: the algorithm adopts loop detection based on the bag-of-words model; when the robot detects that it is passing through a previously visited region, the global pose is corrected through visual constraints between the current frame and historical frames, reducing the accumulated error. The three-dimensional point cloud map is updated in real time as the global pose is corrected, so that a globally consistent three-dimensional point cloud map of the power transmission corridor is constructed.
Step 5-1: loop detection: compute the BoW similarity score of the key frame by loading an open-source pre-trained ORB bag-of-words file, and check the consistency of the co-visibility relationship (a scoring sketch is given after step 5-2).
Step 5-2: after loop detection, the algorithm performs global pose adjustment through the Ceres nonlinear optimization library; after adjustment, the fused point clouds contained in each frame are re-projected into the world coordinate system to construct a globally consistent point cloud map.
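The sketch below illustrates the BoW similarity scoring of step 5-1. A real system would load an open-source pre-trained ORB vocabulary (for example DBoW2, which the text does not name); here the vocabulary is reduced to a plain array of binary centroids with nearest-centroid quantization, an illustrative simplification only.

```python
# Hedged sketch of step 5-1: TF-IDF bag-of-words scoring between key frames.
import numpy as np

def hamming(dA, dB):
    """Pairwise bit-level Hamming distances between binary descriptor sets."""
    x = dA[:, None, :] ^ dB[None, :, :]           # uint8 XOR, N x M x 32
    return np.unpackbits(x, axis=2).sum(axis=2)   # differing bits per pair

def bow_vector(descriptors, vocabulary, idf):
    """Quantize ORB descriptors to visual words and build a TF-IDF vector."""
    words = hamming(descriptors, vocabulary).argmin(axis=1)
    tf = np.bincount(words, minlength=len(vocabulary)).astype(float)
    tf /= max(tf.sum(), 1.0)
    v = tf * idf
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def bow_similarity(v1, v2):
    # Cosine similarity of normalized BoW vectors; a high score flags a
    # loop-closure candidate, which is then checked for co-visibility.
    return float(v1 @ v2)
```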
Real power transmission corridor scene data were acquired with the experimental platform described above, and the algorithm was compared with the previously published Livox-Mapping and the FAST-LIO2 of the University of Hong Kong. Fig. 6 shows the details of the angle steel tower and the double-split power transmission conductors in the point cloud models reconstructed by the three algorithms. Livox-Mapping suffered a tracking failure during point cloud reconstruction: when the unmanned aerial vehicle passed between two power transmission towers, the algorithm wrongly concluded that the vehicle had not changed position, so the towers were ghosted, that is, the same tower was projected into different regions of space. The point cloud model reconstructed by FAST-LIO2 is comparatively consistent, but the outline of the angle steel tower is blurred and the double-split conductor shows large-area data loss, so broken strands and sag of the conductor in the real scene cannot be shown. The proposed algorithm estimates the motion and pose of the system well while traversing the power transmission corridor. In terms of point cloud density, the point cloud it reconstructs is denser and each point is given the corresponding scene color, performing well in scene consistency and completeness.

Claims (3)

1. A method for realizing SLAM point cloud mapping of a power transmission corridor based on multi-sensor fusion comprises the following steps:
step 1: building a data acquisition platform, wherein the data acquisition platform comprises: the ROS robot operating system, an unmanned aerial vehicle, a solid-state laser radar, a monocular industrial camera, an IMU inertial navigation unit and an edge computer;
the ROS is a robot operating system running under Ubuntu; data collected in this environment carry uniform ROS time stamps, and soft synchronization of the data can be realized by matching the time stamps of different sensors;
the unmanned aerial vehicle carries the data acquisition equipment, including the edge computer, the solid-state laser radar, the monocular industrial camera and the IMU inertial navigation unit;
the solid-state laser radar collects depth point cloud data of the power transmission corridor scene;
the monocular industrial camera collects visual image data of the power transmission corridor;
the IMU inertial navigation unit collects real-time motion data of the unmanned aerial vehicle during flight, including linear acceleration and angular acceleration, helping the SLAM system initialize its motion state faster and estimate the pose more accurately;
the ROS system runs on the edge computer to provide drivers and data integration for the sensors;
Step 2: sensor calibration and information synchronization;
taking the monocular industrial camera as the center, joint calibration with the solid-state laser radar and the IMU inertial navigation unit is completed, determining the intrinsic parameters of the monocular industrial camera and its extrinsic parameters with respect to the solid-state laser radar and the IMU, and converting the information of the three sensors into a unified camera coordinate system; multi-sensor data with uniform ROS time stamps are collected through the topic mechanism of the ROS operating system;
step 3: estimating the front end pose;
step 3.1: in the feature extraction stage, preprocessing the original image acquired by the sensor and applying an ORB feature extraction and matching algorithm based on CLAHE optimization;
step 3.2: extracting FAST corners of the image, describing them with BRIEF descriptors, matching features between different frames by the Hamming distance, and eliminating mismatched point pairs with RANSAC; inter-frame motion estimation is then performed with the EPnP algorithm on the ORB feature point matching results of the two image frames;
step 3.3: selecting key frames;
step 3.3.1: deciding whether the current image frame becomes a key frame according to the instantaneous angular acceleration and linear acceleration data of the system acquired by the IMU inertial navigation unit;
step 3.3.2: constructing concentric cones for each three-dimensional map point and counting the change of the matching feature points of the previous key frame and the current frame relative to the cone region of that map point; from this the concept of a feature center of gravity is proposed, and from that concept an adaptive threshold for key-frame selection is derived, while the uniform distribution of the feature points is guaranteed;
step 4: back-end map construction;
step 4.1: calculating the motion state of the system using the IMU pre-integration principle, and compensating the solid-state laser radar point cloud distortion caused by the system's own motion;
step 4.2: converting the solid-state laser radar point cloud data into the camera coordinate system through the calibrated transformation matrix, assigning depth information to the corresponding pixels;
step 4.3: to remedy the relative sparsity of solid-state laser radar point cloud data, moderately expanding the point cloud depth through image dilation based on a diamond kernel;
step 4.4: back-projecting the color information of the image pixels fused with the processed solid-state laser radar depth point cloud into a unified world coordinate system at the frame pose estimated by the system front end;
step 4.5: splicing the point clouds corresponding to multiple image frames to obtain the point cloud in the world coordinate system;
step 5: back-end nonlinear optimization: the algorithm adopts loop detection based on a bag-of-words model; when the robot detects that it is passing through a previously visited region, the global pose is corrected through visual constraints between the current frame and historical frames, reducing the accumulated error; the three-dimensional point cloud map is updated in real time as the global pose is corrected, so that a globally consistent three-dimensional point cloud map of the power transmission corridor is constructed.
2. The method for realizing SLAM point cloud mapping of a power transmission corridor based on multi-sensor fusion according to claim 1, wherein the inter-frame motion estimation method in step 3.2 is as follows:
first, matching point pairs are randomly selected using the EPnP algorithm and the inter-frame transformation matrix is iteratively optimized by minimizing the reprojection error; inter-frame motion is then tracked based on a constant-velocity motion model;
the construction method of the concentric cones in step 3.3.2 is as follows:
with each three-dimensional map point as the center and the line connecting the map point and the key frame as the axis, concentric cones are constructed with opening angles of 10, 20, 30 and 40 degrees respectively; the value of the variable threshold is expressed as:
Th = (1 + γ - β - α) × Th_i
where Th_i is the initial value set for the adaptive threshold; in its expression, M_c is the number of matching point pairs between the current frame and the nearest key frame, M_s is the number of matching point pairs between the similar frame and the nearest key frame, A_c denotes the number of all feature points of the current frame, A_s is the number of feature points of the similar frame, and N_c is the number of feature points whose region changed between the similar frame and the nearest key frame; γ represents the change ratio of the matching point pairs of the current frame and the reference frame, and the parameters β and α are standards set to lower the adaptive threshold and guarantee the robustness of the system in actual experiments.
3. The method for realizing SLAM point cloud mapping of a power transmission corridor based on multi-sensor fusion according to claim 1, wherein in one implementation, before the motion state of the system is calculated in step 4.1, the following processing is first performed:
because the IMU inertial navigation unit and the solid-state laser radar differ in acquisition frequency and acquisition time, the IMU data are processed by linear interpolation in order to obtain the motion information at the time of the laser radar point cloud data; at a time t_{1+ηs} between times t_1 and t_{1+s}, the position p_{1+ηs} is expressed as:
p_{1+ηs} = (1 - η) p_1 + η p_{1+s}
where p_1 is the position at time t_1 and p_{1+s} is the position at time t_{1+s};
the dilation method in step 4.3 is as follows:
image A is dilated with B, where B is a structuring element (convolution template) in the shape of a diamond kernel of size 5 × 5;
in step 4.4, the coordinate system conversion method is as follows: the conversion between the visual and laser point cloud coordinates is
P_camera = P_lidar × T_lidar_to_camera
where the point set of the point cloud is P_lidar = {p_1, p_2, ..., p_n} and T_lidar_to_camera is the transformation matrix from the laser radar to the camera coordinate system obtained by calibration; the point cloud in the camera coordinate system is then converted into planar pixel coordinates according to the camera intrinsic matrix K, completing the depth assignment of the RGB image:
P_camera-2D = P_camera × K.
Application CN202310817914.6A (filed 2023-07-05, priority date 2023-07-05): Method for realizing SLAM point cloud mapping of power transmission corridor based on multi-sensor fusion. Status: Pending. Publication: CN116824080A (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310817914.6A CN116824080A (en) 2023-07-05 2023-07-05 Method for realizing SLAM point cloud mapping of power transmission corridor based on multi-sensor fusion


Publications (1)

Publication number: CN116824080A; publication date: 2023-09-29.

Family

ID=88142814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310817914.6A Pending CN116824080A (en) 2023-07-05 2023-07-05 Method for realizing SLAM point cloud mapping of power transmission corridor based on multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN116824080A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117968682A (en) * 2024-04-01 2024-05-03 山东大学 Dynamic point cloud removing method based on multi-line laser radar and inertial measurement unit



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination