CN118050008A - Robot navigation system and navigation method thereof - Google Patents

Robot navigation system and navigation method thereof

Info

Publication number
CN118050008A
CN118050008A (application CN202410454147.1A)
Authority
CN
China
Prior art keywords
obstacle
robot
point cloud
image data
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410454147.1A
Other languages
Chinese (zh)
Other versions
CN118050008B (en)
Inventor
刘培勋
马天娇
贾平
孙海江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN202410454147.1A priority Critical patent/CN118050008B/en
Priority claimed from CN202410454147.1A external-priority patent/CN118050008B/en
Publication of CN118050008A publication Critical patent/CN118050008A/en
Application granted granted Critical
Publication of CN118050008B publication Critical patent/CN118050008B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of navigation, and in particular to a robot navigation system and a navigation method thereof. The navigation method comprises the following steps. Preprocessing step S0: acquiring image data and point cloud data through a camera module and a laser radar module, respectively. S1: registering the image data and the point cloud data, so that the registered image data corresponds one-to-one with the point cloud data. S2: identifying the obstacle in the image data and the point cloud data respectively to obtain the linear distance between the obstacle center point and the robot. S3: when the linear distance is smaller than a preset distance, the robot stops; when the obstacle leaves, the robot resumes work. The invention combines a laser radar and a depth camera to identify the obstacle in front of the robot and avoids it according to the position of the obstacle.

Description

Robot navigation system and navigation method thereof
Technical Field
The invention relates to the technical field of navigation, in particular to a robot navigation system and a navigation method thereof.
Background
Most existing approaches use a vision camera alone (for example, Tesla's vision-based navigation scheme), a depth camera (astronomical tree technology), or a laser radar (as in the Baidu and Google autonomous vehicles) for robot navigation. A vision sensor alone perceives insufficient three-dimensional information about an obstacle, while a laser sensor alone cannot fully exploit the texture and feature information of the target, which makes obstacle classification difficult.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a robot navigation system and a navigation method thereof that recognize an obstacle in front of the robot by combining a laser radar and a depth camera and avoid the obstacle according to its position.
In order to achieve the above purpose, the present invention adopts the following specific technical scheme:
The invention provides a robot navigation method, which comprises the following steps:
S1: acquiring image data and point cloud data of an obstacle through a camera module and a laser radar module, respectively;
S2: registering the image data and the point cloud data, so that the registered image data corresponds one-to-one with the point cloud data;
S3: identifying the obstacle in the image data and the point cloud data respectively to obtain the linear distance between the obstacle center point and the robot;
S4: when the linear distance is smaller than a preset distance, the robot stops; when the obstacle leaves, the robot resumes work.
Preferably, step S2 comprises the following sub-steps:
S21: marking designated points on the cartons in the point cloud data using PolyWorks;
S22: finding the points corresponding to the designated points in the image data using Photoshop;
S23: obtaining the intrinsic parameters of the camera module by Zhang Zhengyou's camera calibration method, with the MATLAB calibration toolbox used to assist the calibration; and obtaining the extrinsic parameters of the camera module by the Gauss-Newton descent method;
S24: calculating a rotation matrix R and a translation matrix T from the intrinsic and extrinsic parameters, and converting the point cloud data into pixel coordinates in the camera image.
Preferably, the process of identifying the image data in step S3 comprises the following sub-steps:
S31: making training samples and delineating the obstacles in the image data;
S32: training an obstacle classifier with a deep-learning target recognition algorithm;
S33: identifying the obstacle in the image data with the obstacle classifier to obtain the position coordinates of the obstacle in the image data;
S34: calculating the distance O1 between the obstacle center point and the robot.
Preferably, the process of identifying the point cloud data in step S3 is:
processing the point cloud data with a fast and robust point cloud clustering method based on Depth_Clustering to obtain the position coordinates of the obstacle center, and calculating the distance O2 between the obstacle center point and the robot;
and fusing the distance O1 and the distance O2 with the Dempster-Shafer (D-S) evidence theory method to obtain a fused distance value, which is taken as the true distance between the obstacle center and the robot.
The invention also provides a robot navigation system implemented with the above robot navigation method, which comprises:
a radar module for scanning the environment around the robot and collecting point cloud data;
a camera module for collecting image data;
a data registration module for registering the image data and the point cloud data, so that the registered image data corresponds one-to-one with the point cloud data; and
a calculation control module for identifying the obstacle in the image data and the point cloud data respectively and calculating the linear distance between the obstacle and the robot, the calculation control module controlling the robot to stop moving when the linear distance is smaller than the preset distance.
Preferably, the system further comprises a positioning module, which comprises a high-precision satellite positioning unit and a positioning tag;
the high-precision satellite positioning unit is used to achieve centimeter-level positioning of the robot outdoors;
the positioning tag is used to achieve centimeter-level positioning of the robot indoors.
Preferably, the data registration module comprises:
a marking unit for marking the designated points on the cartons in the point cloud data using PolyWorks;
a searching unit for finding the points corresponding to the designated points in the image data using Photoshop;
a calibration unit for obtaining the intrinsic parameters of the camera module by Zhang Zhengyou's camera calibration method, with the MATLAB calibration toolbox used to assist the calibration, and for obtaining the extrinsic parameters of the camera module by the Gauss-Newton descent method; and
a conversion unit for calculating a rotation matrix R and a translation matrix T from the intrinsic and extrinsic parameters of the camera module and converting the point cloud data into pixel coordinates in the camera image.
Preferably, the calculation control module comprises:
a delineating unit for delineating the obstacles in the image data;
a training unit for training an obstacle classifier with a deep-learning target recognition algorithm;
an identification unit for identifying the obstacle in the image data with the obstacle classifier to obtain the position coordinates of the obstacle in the image data; and
a first calculation unit for calculating the distance O1 between the obstacle center point and the robot.
Preferably, the calculation control module further comprises:
a second calculation unit for processing the point cloud data with the fast and robust point cloud clustering method based on Depth_Clustering to obtain the position coordinates of the obstacle center and calculating the distance O2 between the obstacle center point and the robot; and
a third calculation unit for fusing the distance O1 and the distance O2 with the D-S evidence theory method to obtain a fused distance value, which is taken as the true distance between the obstacle center and the robot.
Compared with the prior art, the invention combines a depth camera with a 16-line laser radar to detect and avoid obstacles in open agricultural scenes, so that the texture and three-dimensional information of an obstacle can be fully exploited to monitor, identify and locate it. The positioning module comprises a high-precision BeiDou satellite positioning sub-module and Autolabor RTK970 positioning tags, which can be used according to the environment: under outdoor large-area operating conditions the high-precision satellite positioning sub-module achieves centimeter-level positioning of the robot, while in facility agriculture the positioning tags achieve centimeter-level positioning indoors. Compared with conventional GPS and RTK positioning schemes, the invention offers high positioning accuracy without the need to deploy RTK base stations, so the system improves positioning accuracy while reducing deployment cost.
Drawings
Fig. 1 is a flow chart of a navigation method according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, like modules are denoted by like reference numerals. In the case of the same reference numerals, their names and functions are also the same. Therefore, a detailed description thereof will not be repeated.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limiting the invention.
Fig. 1 shows a flowchart of a navigation method provided according to an embodiment of the present invention.
As shown in fig. 1, the navigation method provided by the embodiment of the invention includes the following steps:
S1: image data and point cloud data of the obstacle are respectively acquired through the camera module and the laser radar module.
A plurality of cartons with checkerboard patterns are selected as obstacles, with the checkerboard-bearing face of each carton facing the camera module and the laser radar module. The image data include a color image and a depth image. The color images and depth images from the camera module and the point cloud data from the laser radar module are acquired and stored.
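As an illustration only, the following Python sketch shows one way such a frame could be captured; it assumes the RealSense D435i is accessed through the pyrealsense2 SDK, and the `read_lidar_scan` callable standing in for the lidar driver is hypothetical.

```python
import numpy as np
import pyrealsense2 as rs

def capture_frame(read_lidar_scan):
    """Grab one color image, one depth image and one lidar sweep.

    `read_lidar_scan` is a hypothetical callable provided by the lidar
    driver; it is assumed to return an (N, 3) array of XYZ points in meters.
    """
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipeline.start(config)
    try:
        frames = pipeline.wait_for_frames()
        color = np.asanyarray(frames.get_color_frame().get_data())
        depth = np.asanyarray(frames.get_depth_frame().get_data())
    finally:
        pipeline.stop()
    points = read_lidar_scan()  # (N, 3) lidar point cloud
    return color, depth, points
```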
S2: registering the acquired image data and the point cloud data, wherein the registered image data corresponds to the point cloud data one by one, and the registered image data can correspond to the point cloud data after target information in the image data is detected.
The registration process of the image data and the point cloud data is as follows. In order to fully exploit the information from the different sensors, it is necessary to place the visual image and the laser point cloud in the same reference frame. It is therefore necessary to know the rigid spatial transformation between the lidar and camera coordinate systems, consisting of the relative rotation R and relative translation T, which together make up the six degrees of freedom of the extrinsic parameters. The process of solving this transformation is called joint lidar-camera extrinsic calibration and registration.
Step S2 comprises the following sub-steps:
The point cloud data are first registered manually with PolyWorks.
S21: marking designated points on the cartons using PolyWorks.
When the number of cartons is 4, four relatively distinct points are first marked on each carton in PolyWorks, giving 16 points in total.
S22: finding the points corresponding to the designated points in the image data using Photoshop.
S23: obtaining the camera intrinsic parameters by Zhang Zhengyou's camera calibration method, with the MATLAB calibration toolbox used to assist the calibration, and obtaining the camera extrinsic parameters by the Gauss-Newton descent method.
S24: obtaining the rotation matrix R and translation matrix T by calculation, and converting the point cloud data into pixel coordinates in the camera image.
The lidar 3D point cloud is mapped onto the camera's two-dimensional image according to the projection formula of the pinhole camera model:

x_i = f * X_v / Z_v + x_c,    y_i = f * Y_v / Z_v + y_c

wherein (x_i, y_i) are coordinates in the image coordinate system, (x_c, y_c) are the coordinates of the image center, f is the focal length, (X_v, Y_v, Z_v) are the coordinates in the lidar coordinate system corresponding to the pixel (x_i, y_i), and (X_cw, Y_cw, Z_cw) are the coordinates in the lidar coordinate system corresponding to the image center point (x_c, y_c).
Let X_B, Y_B and Z_B denote the unit vectors along the principal axes of coordinate system {B}. When coordinate system {A} is taken as the reference frame, they are written ^A X_B, ^A Y_B and ^A Z_B, i.e. the unit vectors of the principal axes of {B} described in coordinate system {A}, which are the projection components of the principal axes of {B} onto the three principal axes of {A}.
Arranging these three unit vectors (the rotation vectors) in order as the columns of a 3x3 matrix gives the rotation matrix

R = [ ^A X_B   ^A Y_B   ^A Z_B ]

The rotation vectors are mutually orthogonal. Substituting the rotation matrix R and translation matrix t into the pinhole model gives the complete projection from world coordinates to pixel coordinates:

z_c * [u, v, 1]^T = [[f, 0, u_0], [0, f, v_0], [0, 0, 1]] * [R | t] * [X, Y, Z, 1]^T

wherein u and v are the coordinates of any point in the image coordinate system, u_0 and v_0 are the coordinates of the image center, X, Y and Z are the coordinates of a three-dimensional point in the world coordinate system, z_c is the z-axis value in the camera coordinate system, i.e. the target-to-camera distance, and R and t are the 3x3 rotation matrix and the 3x1 translation matrix of the extrinsic matrix, respectively.
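A minimal NumPy sketch of this projection is given below; it assumes K is the 3x3 intrinsic matrix and R, T the extrinsics estimated above, and that points behind the camera are simply discarded.

```python
import numpy as np

def project_lidar_to_image(points, K, R, T):
    """Project (N, 3) lidar points into pixel coordinates using
    z_c * [u, v, 1]^T = K [R | T] [X, Y, Z, 1]^T."""
    cam = (R @ points.T + T).T        # points expressed in the camera frame
    in_front = cam[:, 2] > 0          # keep only points in front of the camera
    cam = cam[in_front]
    uvw = (K @ cam.T).T               # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]     # divide by z_c to get (u, v)
    return uv, in_front
```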
S3: and respectively identifying the obstacle in the image data and the point cloud data to obtain the linear distance between the center point of the obstacle and the robot.
The image data and point cloud data are acquired simultaneously by the camera module and the laser radar, respectively. The image data acquired by the camera undergo real-time two-dimensional target detection with the YOLOv4 algorithm, and the bounding boxes detected in the image are then transformed; the purpose of this transformation is to map them onto the lidar point cloud data for three-dimensional target detection. By checking whether the two-dimensional coordinates projected from the three-dimensional point cloud fall inside the object bounding box, high-speed three-dimensional object recognition is realized on the GPU. Finally, k-means clustering is applied to the point cloud to improve the accuracy and precision of the clustering.
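The following sketch illustrates this step under the same assumptions: `uv` comes from `project_lidar_to_image` above, `points` are the lidar points that survived the in-front filter (row-aligned with `uv`), the bounding box is whatever the 2D detector (e.g., YOLOv4) returns, and scikit-learn's k-means is used as a convenient stand-in for the refinement step.

```python
import numpy as np
from sklearn.cluster import KMeans

def points_in_box(points, uv, box):
    """Select lidar points whose image projections fall inside a 2D box.

    points : (N, 3) lidar points, row-aligned with uv.
    uv     : (N, 2) projected pixel coordinates.
    box    : (x_min, y_min, x_max, y_max) from the 2D detector.
    """
    x_min, y_min, x_max, y_max = box
    mask = ((uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) &
            (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max))
    return points[mask]

def refine_obstacle_center(obstacle_pts, n_clusters=2):
    """Refine the obstacle position with k-means and return the center of
    the largest cluster (the other cluster absorbs stray points)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(obstacle_pts)
    labels, counts = np.unique(km.labels_, return_counts=True)
    return km.cluster_centers_[labels[np.argmax(counts)]]
```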
The process of identifying image data in step S3 comprises the sub-steps of:
S31: training samples are made and obstacles are defined in the image data.
Image data is acquired by using REALSENSED435i depth cameras, and obstacles frequently appearing in fields such as people, vehicles, wood piles and the like in the images are defined.
S32: the obstacle classifier is trained by a deep learning-based object recognition algorithm.
The trained classifier is saved as an object.
S33: and identifying the obstacle in the image data by using a YOLOV-4-based object identification algorithm to obtain the position coordinates of the obstacle in the image data.
In the running process of the robot, the REALSENSED435i depth camera collects live real-time images, and the color images collected by the depth camera are input into a target recognition algorithm.
S34: the distance O 1 between the obstacle centre point and the robot is calculated.
The process of identifying the point cloud data in the step S3 includes:
The point cloud data acquired by the laser radar module are processed with the fast and robust point cloud clustering method based on Depth_Clustering to obtain the position coordinates of the obstacle center, and the distance O2 between the obstacle center point and the robot is calculated.
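The sketch below gives the flavor of this step only: DBSCAN from scikit-learn is used as a simple stand-in for the Depth_Clustering-based method named in the patent, and the robot is assumed to sit at the lidar origin, so O2 is the norm of the nearest cluster center.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def obstacle_distance_from_cloud(points, eps=0.3, min_samples=10):
    """Cluster the raw lidar cloud and return (center, O2) for the nearest
    cluster; DBSCAN is only an illustrative substitute for Depth_Clustering."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    best_center, best_dist = None, np.inf
    for lbl in set(labels):
        if lbl == -1:          # -1 marks noise points in DBSCAN
            continue
        center = points[labels == lbl].mean(axis=0)
        dist = np.linalg.norm(center)
        if dist < best_dist:
            best_center, best_dist = center, dist
    return best_center, best_dist
```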
The distance O1 obtained in step S34 and the distance O2 obtained above are then fused with the D-S evidence theory method to obtain a fused distance value, which is used as the true distance between the obstacle center and the robot.
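The patent does not spell out the mass functions used for the D-S fusion, so the sketch below is only one plausible construction: each sensor spreads its belief over discretized distance hypotheses with a Gaussian kernel (the bin width and spread are illustrative assumptions), and with singleton hypotheses Dempster's combination rule reduces to a normalized product.

```python
import numpy as np

def ds_fuse_distances(o1, o2, bins=None, sigma=0.2):
    """Fuse camera distance O1 and lidar distance O2 with a simple
    Dempster-Shafer scheme over discretized distance hypotheses."""
    if bins is None:
        bins = np.arange(0.0, 10.0, 0.05)      # distance hypotheses in meters
    def mass(measurement):
        m = np.exp(-0.5 * ((bins - measurement) / sigma) ** 2)
        return m / m.sum()                     # basic probability assignment
    m1, m2 = mass(o1), mass(o2)
    combined = m1 * m2                         # agreement on each singleton
    conflict = 1.0 - combined.sum()            # mass lost to conflicting pairs
    combined /= (1.0 - conflict)               # Dempster normalization
    return float(bins[np.argmax(combined)])    # fused distance estimate
```

For example, with o1 = 1.32 m from the camera branch and o2 = 1.28 m from the lidar branch, the fused estimate lands near 1.30 m.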
S4: when the linear distance is less than 0.5m, stopping the robot; when the obstacle leaves, the robot continues to work.
The invention also provides a robot navigation system implemented with the above robot navigation method, which specifically comprises a radar module, a camera module, a data registration module and a calculation control module. The radar module is used to scan the environment around the robot and collect point cloud data; the camera module is used to collect image data; the data registration module is used to register the image data and the point cloud data so that the registered image data corresponds one-to-one with the point cloud data; and the calculation control module is used to identify the obstacle in the image data and the point cloud data respectively and to calculate the linear distance between the obstacle and the robot, controlling the robot to stop moving when the linear distance is smaller than the preset distance.
The system further includes a positioning module, which comprises a high-precision BeiDou satellite positioning unit and Autolabor RTK positioning tags. The two can be used according to the environment: under outdoor large-area operating conditions the high-precision satellite positioning sub-module achieves centimeter-level positioning of the robot, while in facility agriculture the positioning tags achieve centimeter-level positioning indoors.
The data registration module comprises a marking unit, a searching unit, a calibration unit and a conversion unit. The marking unit is used to mark the designated points on the cartons in the point cloud data using PolyWorks; the searching unit is used to find the points corresponding to the designated points in the image data using Photoshop; the calibration unit is used to obtain the intrinsic parameters of the camera module by Zhang Zhengyou's camera calibration method, with the MATLAB calibration toolbox used to assist the calibration, and to obtain the extrinsic parameters of the camera module by the Gauss-Newton descent method; and the conversion unit is used to calculate the rotation matrix R and translation matrix T from the intrinsic and extrinsic parameters of the camera module and to convert the point cloud data into pixel coordinates in the camera image.
The calculation control module comprises a delineating unit, a training unit, an identification unit, a first calculation unit, a second calculation unit and a third calculation unit. The delineating unit is used to delineate the obstacles in the image data; the training unit is used to train the obstacle classifier with a deep-learning target recognition algorithm; the identification unit is used to identify the obstacle in the image data with the obstacle classifier to obtain the position coordinates of the obstacle in the image data; the first calculation unit is used to calculate the distance O1 between the obstacle center point and the robot; the second calculation unit processes the point cloud data with the fast and robust point cloud clustering method based on Depth_Clustering to obtain the position coordinates of the obstacle center and calculates the distance O2 between the obstacle center point and the robot; and the third calculation unit fuses the distance O1 and the distance O2 with the D-S evidence theory method to obtain a fused distance value, which is used as the true distance between the obstacle center and the robot.
The radar module is a 16-line laser radar (a Velodyne VLP-16 three-dimensional laser radar or an RS-LiDAR-16 three-dimensional multi-line laser radar). It scans the environment around the robot through 360 degrees and provides data support for obstacle identification.
The camera module comprises a depth camera, wherein the depth camera is a structured light camera and is used for collecting color images and depth images.
While embodiments of the present invention have been shown and described above, it will be understood that these embodiments are illustrative and are not to be construed as limiting the invention; those of ordinary skill in the art may make variations, modifications, substitutions and alterations to the above embodiments within the scope of the invention. The above embodiments do not limit the scope of the present invention, and any other corresponding changes and modifications made according to the technical idea of the present invention shall fall within the scope of the claims of the present invention.

Claims (9)

1. A robot navigation method, comprising the following steps:
S1: acquiring image data and point cloud data of an obstacle through a camera module and a laser radar module, respectively;
S2: registering the image data and the point cloud data, so that the registered image data corresponds one-to-one with the point cloud data;
S3: identifying the obstacle in the image data and the point cloud data respectively to obtain the linear distance between the obstacle center point and the robot;
S4: when the linear distance is smaller than a preset distance, the robot stops; when the obstacle leaves, the robot resumes work.
2. The robot navigation method according to claim 1, wherein step S2 comprises the following sub-steps:
S21: marking designated points on the cartons in the point cloud data using PolyWorks;
S22: finding the points corresponding to the designated points in the image data using Photoshop;
S23: obtaining the intrinsic parameters of the camera module by Zhang Zhengyou's camera calibration method, with the MATLAB calibration toolbox used to assist the calibration; and obtaining the extrinsic parameters of the camera module by the Gauss-Newton descent method;
S24: calculating a rotation matrix R and a translation matrix T from the intrinsic and extrinsic parameters, and converting the point cloud data into pixel coordinates in the camera image.
3. The robot navigation method according to claim 2, wherein the process of identifying the image data in step S3 comprises the following sub-steps:
S31: delineating the obstacle in the image data;
S32: training an obstacle classifier with a deep-learning target recognition algorithm;
S33: identifying the obstacle in the image data with the obstacle classifier to obtain the position coordinates of the obstacle in the image data;
S34: calculating the distance O1 between the obstacle center point and the robot.
4. The robot navigation method according to claim 3, wherein the process of identifying the point cloud data in step S3 is:
processing the point cloud data with a fast and robust point cloud clustering method based on Depth_Clustering to obtain the position coordinates of the obstacle center, and calculating the distance O2 between the obstacle center point and the robot;
and fusing the distance O1 and the distance O2 with the D-S evidence theory method to obtain a fused distance value, which is taken as the true distance between the obstacle center and the robot.
5. A robot navigation system implemented with the robot navigation method of any one of claims 1-4, comprising:
a radar module for scanning the environment around the robot and collecting point cloud data;
a camera module for collecting image data;
a data registration module for registering the image data and the point cloud data, so that the registered image data corresponds one-to-one with the point cloud data; and
a calculation control module for identifying the obstacle in the image data and the point cloud data respectively and calculating the linear distance between the obstacle and the robot, the calculation control module controlling the robot to stop moving when the linear distance is smaller than the preset distance.
6. The robot navigation system of claim 5, further comprising a positioning module, wherein the positioning module comprises a high-precision satellite positioning unit and a positioning tag;
the high-precision satellite positioning unit is used to achieve centimeter-level positioning of the robot outdoors;
the positioning tag is used to achieve centimeter-level positioning of the robot indoors.
7. The robot navigation system of claim 5, wherein the data registration module comprises:
a marking unit for marking the designated points on the cartons in the point cloud data using PolyWorks;
a searching unit for finding the points corresponding to the designated points in the image data using Photoshop;
a calibration unit for obtaining the intrinsic parameters of the camera module by Zhang Zhengyou's camera calibration method, with the MATLAB calibration toolbox used to assist the calibration, and for obtaining the extrinsic parameters of the camera module by the Gauss-Newton descent method; and
a conversion unit for calculating a rotation matrix R and a translation matrix T from the intrinsic and extrinsic parameters of the camera module and converting the point cloud data into pixel coordinates in the camera image.
8. The robot navigation system of claim 5, wherein the calculation control module comprises:
a delineating unit for delineating the obstacle in the image data;
a training unit for training an obstacle classifier with a deep-learning target recognition algorithm;
an identification unit for identifying the obstacle in the image data with the obstacle classifier to obtain the position coordinates of the obstacle in the image data; and
a first calculation unit for calculating the distance O1 between the obstacle center point and the robot.
9. The robot navigation system of claim 8, wherein the calculation control module further comprises:
a second calculation unit for processing the point cloud data with the fast and robust point cloud clustering method based on Depth_Clustering to obtain the position coordinates of the obstacle center and calculating the distance O2 between the obstacle center point and the robot; and
a third calculation unit for fusing the distance O1 and the distance O2 with the D-S evidence theory method to obtain a fused distance value, which is taken as the true distance between the obstacle center and the robot.
CN202410454147.1A 2024-04-16 Robot navigation system and navigation method thereof Active CN118050008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410454147.1A CN118050008B (en) 2024-04-16 Robot navigation system and navigation method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410454147.1A CN118050008B (en) 2024-04-16 Robot navigation system and navigation method thereof

Publications (2)

Publication Number Publication Date
CN118050008A true CN118050008A (en) 2024-05-17
CN118050008B CN118050008B (en) 2024-07-16


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110726990A (en) * 2019-09-23 2020-01-24 江苏大学 Multi-sensor fusion method based on DS-GNN algorithm
CN112101092A (en) * 2020-07-31 2020-12-18 北京智行者科技有限公司 Automatic driving environment sensing method and system
CN112307594A (en) * 2020-09-22 2021-02-02 中国汽车技术研究中心有限公司 Road data acquisition and simulation scene establishment integrated system and method
CN113743391A (en) * 2021-11-08 2021-12-03 江苏天策机器人科技有限公司 Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot
CN113936198A (en) * 2021-11-22 2022-01-14 桂林电子科技大学 Low-beam laser radar and camera fusion method, storage medium and device
CN114084129A (en) * 2021-10-13 2022-02-25 武汉光庭信息技术股份有限公司 Fusion-based vehicle automatic driving control method and system
CN114200430A (en) * 2021-12-10 2022-03-18 上海西井信息科技有限公司 Calibration method, system, equipment and storage medium for laser radar and camera
CN117111040A (en) * 2023-07-14 2023-11-24 天津职业技术师范大学(中国职业培训指导教师进修中心) LM algorithm-based three-dimensional laser radar and camera combined calibration external parameter optimization method
US20240013505A1 (en) * 2022-07-11 2024-01-11 Wuhan University Of Technology Method, system, medium, equipment and terminal for inland vessel identification and depth estimation for smart maritime


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Lin; Zhang Xu; Tu Dawei: "Joint calibration method for an integrated two-dimensional and three-dimensional visual sensing system", Chinese Journal of Scientific Instrument, No. 11, 15 November 2012 (2012-11-15) *
Huang Xing; Ying Qunwei: "Obstacle recognition using fused laser radar and camera information", Computer Measurement & Control, No. 01, 25 January 2020 (2020-01-25) *


Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant