CN118168545A - Positioning navigation system and method for weeding robot based on multi-source sensor fusion


Info

Publication number
CN118168545A
Authority
CN
China
Prior art keywords
data
robot
map
positioning
point
Prior art date
Legal status
Pending
Application number
CN202410411661.7A
Other languages
Chinese (zh)
Inventor
卢倩
葛飞繁
姚苏恒
马子源
金李烨
高子言
Current Assignee
Yancheng Institute of Technology
Original Assignee
Yancheng Institute of Technology
Priority date
Filing date
Publication date
Application filed by Yancheng Institute of Technology filed Critical Yancheng Institute of Technology
Priority to CN202410411661.7A
Publication of CN118168545A
Legal status: Pending


Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a weeding robot positioning and navigation system and method based on multi-source sensor fusion. The positioning and navigation system comprises an acquisition module, a data processing module, a map construction module and a path planning module. The positioning and navigation method comprises the following steps: the robot acquires a depth image and an RGB image of the grassland environment; the environment is sensed in real time through the laser and detector of a laser radar, the distance and contour of surrounding objects are acquired, and laser point cloud data are obtained by scanning; acceleration and angular velocity data are collected through an inertial sensor while the robot moves; a local map is created and optimized as the robot moves and collects new data; grassland distribution and shape information, the initial position and the target grassland position are acquired, and an optimal path from the starting point to the end point is calculated; the optimal path of the robot moving in the map is computed, and the planned path is adjusted through real-time sensing and judgment of the environment, so as to plan the weeding route.

Description

Positioning navigation system and method for weeding robot based on multi-source sensor fusion
Technical Field
The invention relates to a weeding robot, in particular to a weeding robot positioning navigation system and method based on multi-source sensor fusion.
Background
Traditional mowing generally requires manual operation, which is time-consuming, labor-intensive and inefficient, consumes a large amount of manpower, may pollute the environment, and cannot meet modern society's requirements for efficiency, environmental protection and intelligence. In addition, traditional weeding is easily affected by factors such as weather and terrain, which adds great difficulty to the work.
Therefore, there is a need to develop an automatic, intelligent and environmentally friendly weeding machine to replace traditional manual labor, thereby saving cost and improving efficiency. An intelligent automatic weeding robot can identify and handle different working environments through its own sensors and processor, and its automatic positioning and navigation system realizes accurate positioning and navigation. Such a positioning and navigation system not only improves the efficiency and precision of the weeding robot, but also reduces the use of human resources and realizes intelligent control.
Existing weeding robot positioning and navigation systems mainly rely on a Geographic Information System (GIS) for navigation. A GIS navigation system can help the robot acquire spatial information such as the shape, size, height and vegetation coverage of a lawn, and use this information for path planning and decision-making. However, conventional systems have significant limitations. First, they rely mainly on static GIS data and cannot sense environmental changes or the robot's own state in real time, which limits performance in dynamic or unpredictable environments. Second, owing to the limitations of a single sensor, conventional systems may make errors in precisely recognizing lawn boundaries and obstacle positions, leading to unsatisfactory weeding results.
The Global Positioning System (GPS) and the Inertial Navigation System (INS) are also commonly used for positioning and navigation of weeding robots. GPS can provide relatively high positioning accuracy by receiving satellite signals to determine the position of the mowing robot. However, signal shielding and multipath effects may reduce positioning accuracy in complex environments; accuracy is also affected by atmospheric conditions, terrain and other factors, and real-time performance is limited. The INS estimates position and orientation by measuring the acceleration and angular velocity of the mowing robot, and can provide instantaneous navigation information in real time, but as errors accumulate over time the deviation in position and orientation gradually grows. In addition, for a robot operating over long periods, the INS may be affected by factors such as the Earth's rotation, resulting in reduced positioning accuracy.
Disclosure of Invention
The invention aims to: in view of the defects of the prior art, provide a weeding robot positioning and navigation system and method based on multi-source sensor fusion.
The technical scheme is as follows: the weeding robot positioning navigation system based on multi-source sensor fusion comprises an acquisition module, a data processing module, a map construction module and a path planning module, wherein the acquisition module acquires grassland environment information, the data processing module preprocesses and analyzes acquired data, the map construction module constructs a map based on the acquired data, and the path planning module plans a moving path of a robot based on a global map.
The acquisition module comprises a depth camera, an inertial sensor IMU and a laser radar;
the depth camera shoots an environment image to obtain an RGB image and a depth image;
The inertial sensor IMU comprises an accelerometer and a gyroscope, and collects acceleration and angular velocity data of the robot;
The laser radar senses the environment in real time through its laser and detector, acquires the distance and contour information of surrounding objects, and scans to acquire laser point cloud data.
The data processing module performs preprocessing of noise filtering, deviation correction, data normalization and temperature compensation on inertial sensor IMU data.
The weeding robot positioning navigation method based on multi-source sensor fusion comprises the following steps:
(1) When the robot is positioned on a lawn to be worked, the robot photographs the lawn through a depth camera mounted on the machine body to obtain a lawn depth image and an RGB image; the environment is sensed in real time through the laser and detector of the laser radar, the distance and contour information of surrounding objects is acquired, and laser point cloud data are obtained by scanning; the acceleration and angular velocity data of the robot are collected through the inertial sensor IMU while the robot moves;
(2) Projecting the depth image into laser type data:
(3) Obtaining ORB features from the RGB image:
(4) Acquiring acceleration and angular velocity data of the robot by an inertial sensor IMU and constructing a motion model;
(5) Fusing the motion model of the inertial sensor IMU, the laser point cloud data and the laser type data through an extended Kalman filtering algorithm; establishing a local map through SLAM mapping based on these data as the robot moves and collects new data;
(6) Optimizing a global map: when the robot moves and collects new data, comparing and matching the new data with original map data, finding out the similarity between the new data and the old data, and optimizing a global map;
(7) After the optimized global map is obtained, grassland distribution and shape information, an initial position and a target grassland position are obtained from the optimized global map, and an optimal path from a starting point to an end point is calculated; according to the type of the map and the task requirement, calculating an optimal path of the robot moving in the map, and adjusting the planned path through real-time sensing and judging of the environment so as to plan the weeding route.
Step (2) comprises the following steps:
(2.1) obtaining a depth image through a depth camera, and obtaining a depth image coordinate point and depth information;
(2.2) according to the depth information and the camera intrinsic parameters, carrying out coordinate transformation through a pinhole camera model, and calculating world coordinates P(Xw, Yw, Zw);
(2.3) calculating a projection angle θ = arctan(Y/X) of the coordinate point;
(2.4) mapping the angle θ into the laser data slot to complete the data conversion.
In the step (2.4), converting the polar coordinate system of the laser radar data into a Cartesian coordinate system; the angle θ in the converted Cartesian coordinate system is mapped into the laser data slots.
Step (3) comprises the following steps:
(3.1) selecting a pixel point p, and denoting its brightness as Ip;
(3.2) setting a brightness threshold T for the pixel point;
(3.3) selecting a plurality of pixel points on a circle of a set radius centered on the pixel point p;
(3.4) if the brightness of N consecutive points on the selected circle is greater than Ip+T or less than Ip-T, the pixel p is considered a feature point;
(3.5) looping the above steps, performing the same operation for each pixel.
In the step (4), the motion model expression of the IMU is:
X = [WpI^T, WvI^T, qIW, ba^T, bg^T]^T
wherein X is the state vector, comprising estimates of pose, velocity and zero bias; WpI is the position of the IMU in the world frame, WvI is its linear velocity, qIW is the orientation (direction cosine) of the IMU frame relative to the world frame, ba is the zero bias of the accelerometer and bg is the zero bias of the gyroscope.
In step (6), the new data and the map data are subjected to feature point matching.
In the step (7), the starting point is taken as the center and the search expands outwards layer by layer to build an adjacency list; the point with the minimum dist is popped from the min-heap, and if this point is the end point, the algorithm ends; the edges starting from the popped point are traversed, dist is computed for the end point of each edge and pushed into the min-heap; these steps are repeated until the end point is popped from the min-heap.
The beneficial effects are that: compared with the prior art, the invention has the following advantages:
According to the invention, multi-source sensors are arranged to collect information on the robot and the grassland, and a global map is built by fusing these data, so that the weeding route is planned and automatic positioning, navigation and weeding operation are realized.
Drawings
FIG. 1 is a schematic structural diagram of a positioning navigation system of a weeding robot based on multi-source sensor fusion.
Detailed Description
As shown in fig. 1, the weeding robot positioning and navigation system based on multi-source sensor fusion comprises an acquisition module, a data processing module, a map construction module and a path planning module, wherein the acquisition module acquires grassland environment information, the data processing module preprocesses and analyzes acquired data, the map construction module constructs a map based on the acquired data, and the path planning module plans a moving path of a robot based on a global map.
The acquisition module comprises a depth camera, an inertial sensor IMU and a laser radar;
the depth camera shoots an environment image to obtain an RGB image and a depth image;
The inertial sensor comprises an accelerometer and a gyroscope, and the inertial sensor IMU acquires acceleration and angular velocity data of the robot;
The laser radar senses the environment in real time through its laser and detector, acquires the distance and contour information of surrounding objects, and scans to acquire laser point cloud data.
In this embodiment, the data processing module performs noise filtering, offset correction, data normalization, and temperature compensation preprocessing on the inertial sensor IMU data.
Noise filtering: moving average filtering, median filtering and Kalman filtering are adopted, so that noise is effectively removed, and the accuracy of data is improved.
Deviation correction: various deviations exist in the IMU sensor, such as zero bias and scale error; correcting these deviations in the data improves the accuracy of the measurement.
Data normalization: the data of the accelerometer and the gyroscope are converted into real physical units.
Temperature compensation: to compensate for the effects of temperature variations on IMU sensor performance.
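As an illustration of the preprocessing described above, the sketch below (an assumption of this description, not text from the patent) applies a moving-average filter, zero-bias correction and unit conversion to one accelerometer axis with NumPy; all function names and numeric values are purely illustrative.

```python
import numpy as np

def moving_average(signal, window=5):
    """Moving-average filter used to suppress high-frequency IMU noise."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def correct_bias(signal, stationary_segment):
    """Zero-bias correction: subtract the mean measured while the robot is stationary."""
    return signal - np.mean(stationary_segment)

def to_physical_units(raw, scale):
    """Normalization: convert raw sensor counts into physical units (e.g. m/s^2 or rad/s)."""
    return raw * scale

# Illustrative pipeline for one accelerometer axis (synthetic readings).
raw_acc_x = np.random.normal(0.1, 0.05, 200)        # hypothetical raw samples
filtered  = moving_average(raw_acc_x, window=5)
debiased  = correct_bias(filtered, filtered[:50])    # first 50 samples assumed stationary
acc_x     = to_physical_units(debiased, scale=9.81)  # assumed scale factor to m/s^2
```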
The weeding robot positioning navigation method based on multi-source sensor fusion comprises the following steps:
(1) When the robot is positioned on a grassland to be worked, the robot shoots the grassland through a depth camera assembled on the machine body to obtain a grassland depth image and an RGB image; sensing the environment in real time through laser of a laser radar and a detector, acquiring distance and contour information of surrounding objects, and scanning to acquire laser point cloud data; acquiring acceleration and angular velocity data of the robot through the IMU when the robot moves;
(2) Projecting the depth image into laser type data:
And (2.1) obtaining a depth image through a depth camera, and obtaining depth map coordinate points and depth information.
Here, the depth map coordinate points are the positions of each pixel in the image captured by the depth camera, represented either by depth values or by values normalized to between 0 and 255. The depth camera coordinate system is used to generate a three-dimensional point cloud, which is formed by converting pixel coordinates into physical coordinates in the depth camera frame through the intrinsic matrix and the depth values.
The depth information is the three-dimensional coordinate information of each point of the detected object, namely the distance data along the X, Y and Z axes. In computer vision, depth information is obtained by binocular or multi-view vision principles. The binocular vision principle uses two cameras observing the same scene from slightly different angles: the disparity of each pixel between the two images is calculated and, combined with parameters such as the focal length and baseline distance of the cameras, the distance from each pixel to the camera is computed, thereby acquiring the depth information of the object.
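The binocular relation described above reduces to Z = f·B/d (focal length f in pixels, baseline B, disparity d). A minimal sketch with purely illustrative values:

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Binocular stereo: depth Z = f * B / d, with disparity in pixels and baseline in metres."""
    if disparity_px <= 0:
        return float("inf")   # no valid correspondence
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: f = 600 px, B = 0.12 m, d = 24 px  ->  Z = 3.0 m
print(disparity_to_depth(24, 600, 0.12))
```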
(2.2) According to the depth information and the camera intrinsic parameters, coordinate transformation is carried out through a pinhole camera model, and the world coordinates P(Xw, Yw, Zw) are calculated. The origin of the camera coordinate system is defined at the optical center, with the X axis and Y axis parallel to the image coordinate system.
The Z axis is determined as the optical axis of the camera according to the right-hand rule; the position of the world coordinate system is defined, with the coordinate origin chosen freely; the position of the object to be measured in the world coordinate system is expressed as (Xw, Yw, Zw); points in the camera coordinate system are put into correspondence with points in the world coordinate system through the pinhole camera model; the position (Xc, Yc, Zc) of the object in the camera coordinate system is calculated according to the parameter relations of the pinhole camera model; and the calculated points in the camera coordinate system are converted into points (Xw, Yw, Zw) in the world coordinate system by a rigid-body transformation.
(2.3) Calculating the projection angle θ of the coordinate point:
The position of the point P in the world coordinate system is transformed into the camera coordinate system by the rotation matrix R and the translation vector T: [Xc, Yc, Zc] = [R, T][Xw, Yw, Zw]; points in the camera coordinate system are converted to points on the normalized plane through the camera intrinsic matrix K: [Xn, Yn] = K[Xc, Yc, Zc]; points on the normalized plane are converted into the image coordinate system by the projection: [xu, yu] = [Xn, Yn]/Zn; finally, according to the definition of polar coordinates, the projection angle of the coordinate point is calculated as θ = arctan(Y/X);
(2.4) Mapping the angle θ into a laser data slot to finish the data conversion:
The coordinate system of the lidar data, typically a polar coordinate system, is determined; the coordinate range of the laser data slots, usually expressed in a Cartesian coordinate system, is determined; the polar coordinates of the lidar data are converted into Cartesian coordinates; and the angle θ in the converted Cartesian coordinate system is mapped into the laser data slots.
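A minimal sketch of steps (2.1) to (2.4), assuming a depth image in metres and pinhole intrinsics fx, fy, cx, cy; the horizontal projection angle is computed here with atan2 over the camera-frame coordinates (one common reading of the arctan projection above), and ranges are binned into angular slots like a 2D laser scan. Names, parameters and the slot layout are assumptions, not the patent's implementation.

```python
import numpy as np

def depth_to_laser(depth, fx, fy, cx, cy, n_slots=360, max_range=10.0):
    """Project a depth image (metres) into laser-scan-type data:
    back-project each pixel with the pinhole model, compute its horizontal
    projection angle, and keep the nearest range per angular slot."""
    h, w = depth.shape
    us, _ = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                                   # forward distance from the camera
    x = (us - cx) * z / fx                      # lateral offset in the camera frame
    valid = (z > 0) & (z < max_range)

    angles = np.arctan2(x[valid], z[valid])     # horizontal projection angle
    ranges = np.hypot(x[valid], z[valid])

    scan = np.full(n_slots, np.inf)             # one range value per angular slot
    slots = ((angles + np.pi) / (2 * np.pi) * n_slots).astype(int) % n_slots
    for s, r in zip(slots, ranges):             # keep the closest return per slot
        if r < scan[s]:
            scan[s] = r
    return scan

# Usage with a synthetic depth image (3 m everywhere) and assumed intrinsics.
scan = depth_to_laser(np.full((480, 640), 3.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```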
(3) Obtaining ORB features from the RGB image:
(3.1) selecting a pixel point p, and denoting its brightness as Ip;
(3.2) setting a brightness threshold T for the pixel point;
(3.3) taking the pixel point p as the center, selecting 16 pixel points on a circle of radius 3;
(3.4) if the brightness of N consecutive points on the selected circle is greater than Ip+T or less than Ip-T, the pixel p is considered a feature point;
(3.5) looping the above steps, performing the same operation for each pixel;
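The brightness test in steps (3.1) to (3.5) is the FAST corner test that the ORB detector implements. Assuming OpenCV is available (a tooling assumption, not something the patent specifies), the extraction can be sketched as follows; the file name and parameter values are illustrative.

```python
import cv2

# Load an RGB frame captured by the depth camera (path is illustrative).
rgb = cv2.imread("lawn_frame.png")
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)

# ORB detects FAST corners (16-pixel circle of radius 3 with a brightness threshold)
# and computes binary descriptors used later for frame-to-map matching.
orb = cv2.ORB_create(nfeatures=1000, fastThreshold=20)
keypoints, descriptors = orb.detectAndCompute(gray, None)
print(f"{len(keypoints)} ORB features extracted")
```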
(4) The inertial sensor IMU collects acceleration and angular velocity data of the robot to construct a motion model.
Data analysis: and analyzing data acquired by the inertial sensor IMU to acquire triaxial acceleration and angular velocity data of each time point.
Motion state estimation: and calculating the linear velocity and the angular velocity of the object, namely the motion state of the object, through the acceleration and the angular velocity data measured by the IMU.
Pose estimation based on IMU data: the attitude angle of the robot is calculated using the acceleration and angular velocity data.
The attitude obtained by integrating the angular velocity is filtered in real time, improving the accuracy of the attitude data.
The attitude data are normalized to obtain the direction vector of the object in the world coordinate system, and the attitude data and the initial attitude are converted from the carrier (body) coordinate system into the world coordinate system.
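A small sketch of the attitude estimation just described: roll and pitch are taken from the gravity direction measured by the accelerometer and fused with the integrated gyroscope rate through a complementary filter. The filter coefficient and the sample readings are illustrative assumptions.

```python
import numpy as np

def attitude_from_accel(acc):
    """Roll and pitch (rad) estimated from the gravity direction seen by the accelerometer."""
    ax, ay, az = acc
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the integrated gyro angle (smooth but drifting) with the accelerometer angle (noisy but drift-free)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# One 10 ms update of the roll angle with illustrative readings.
roll_acc, _ = attitude_from_accel(np.array([0.0, 0.3, 9.8]))
roll = complementary_filter(angle_prev=0.0, gyro_rate=0.02, accel_angle=roll_acc, dt=0.01)
```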
In this embodiment, a motion model of the inertial sensor IMU is established based on the attitude estimation data, and the motion model expression of the inertial sensor IMU is:
X = [WpI^T, WvI^T, qIW, ba^T, bg^T]^T
where X is the state vector, comprising estimates of pose, velocity and zero bias; WpI is the position of the IMU in the world frame, WvI is its linear velocity, qIW is the orientation (direction cosine) of the IMU frame relative to the world frame, ba is the zero bias of the accelerometer and bg is the zero bias of the gyroscope.
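For illustration only, one simple way to propagate such a state between IMU samples is Euler integration of the bias-corrected measurements. The sketch below uses SciPy's Rotation for the attitude update; it is a simplified stand-in under these assumptions, not the patent's exact motion model.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, -9.81])

def propagate_imu(p, v, q, b_a, b_g, acc_meas, gyro_meas, dt):
    """One Euler step of an IMU motion model with state
    [position p, velocity v, orientation q (world<-body), accel bias b_a, gyro bias b_g]."""
    rot = R.from_quat(q)                              # world <- body rotation
    acc_world = rot.apply(acc_meas - b_a) + GRAVITY   # remove bias, rotate, add gravity
    omega = gyro_meas - b_g                           # bias-corrected body angular rate

    p_new = p + v * dt + 0.5 * acc_world * dt ** 2
    v_new = v + acc_world * dt
    q_new = (rot * R.from_rotvec(omega * dt)).as_quat()
    return p_new, v_new, q_new, b_a, b_g              # biases modelled as constant here

# Example: a 10 ms step with bias-free sensors reading only gravity (robot at rest).
p, v, q = np.zeros(3), np.zeros(3), np.array([0.0, 0.0, 0.0, 1.0])
state = propagate_imu(p, v, q, np.zeros(3), np.zeros(3),
                      np.array([0.0, 0.0, 9.81]), np.zeros(3), dt=0.01)
```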
Control signal generation: control signals are generated from the analyzed and processed IMU data to control the actions of the robot; these signals are used to adjust the motor speed and change the attitude of the robot.
(5) The motion model of the inertial sensor, the laser point cloud data and the laser type data are fused through an extended Kalman filtering algorithm, and a map is built by SLAM based on these data: the feature points of each frame are matched by VO (Visual Odometry) and placed into the map; the current frame is matched with points in the map, and the pose relationship between the current frame and the map is calculated; according to this pose relationship, the feature points in the current frame are inserted into the map and their position estimates are updated; and BA (bundle adjustment) optimization of the local map is performed, including triangulation of 3D points, fusion matching of the 3D points and key-frame insertion.
Establishing a local map: a local map is created by the robot moving and collecting new observation data.
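The fusion in step (5) can be pictured as a generic extended Kalman filter cycle: the IMU motion model drives the prediction, and a pose estimate derived from the laser or visual data acts as the measurement. The sketch below is a textbook EKF with toy matrices and noise values chosen purely for illustration, not the patent's exact formulation.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Prediction with the (nonlinear) motion model f and its Jacobian F."""
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H, R_meas):
    """Correction with a measurement z (e.g. a pose from lidar/visual matching)."""
    y = z - h(x_pred)                     # innovation
    S = H @ P_pred @ H.T + R_meas         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2D example: state [x, y], constant-position model, direct position measurement.
x, P = np.zeros(2), np.eye(2)
f, F, Q = (lambda s: s), np.eye(2), 0.01 * np.eye(2)
h, H, R_meas = (lambda s: s), np.eye(2), 0.1 * np.eye(2)
x, P = ekf_predict(x, P, f, F, Q)
x, P = ekf_update(x, P, np.array([0.2, -0.1]), h, H, R_meas)
```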
(6) Optimizing the global map: when the robot or device moves to a new location and collects new data, the new data are compared and matched with the original map data and fused into the map; the similarity between the new and old data is found, and this information is used to optimize the global map:
The new data and the original map data are preprocessed, for example denoised and standardized, to ensure consistency and accuracy of the data; common features, such as color, texture and shape, are extracted from the new data and the map data; the new data are matched with the map data by a matching algorithm; according to the actual situation, constraint conditions such as time constraints and spatial constraints are set to increase the accuracy of matching; the accuracy of the matching is evaluated by calculating the re-projection error between the new data and the map data; the matched points are fused with other points in the map and global or local BA optimization is performed to obtain a more accurate matching result; finally, the map data are updated according to the matching result.
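A sketch of the descriptor matching and re-projection-error check described above, assuming ORB descriptors and OpenCV's brute-force Hamming matcher; the idea that the map stores descriptors and 3D points, as well as the thresholds, pose and intrinsic matrix, are assumptions made for illustration.

```python
import cv2
import numpy as np

def match_to_map(new_desc, map_desc, max_hamming=50):
    """Match ORB descriptors of the new frame against the stored map descriptors."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(new_desc, map_desc)
    return [m for m in matches if m.distance < max_hamming]

def reprojection_errors(map_points_3d, observed_px, rvec, tvec, K):
    """Evaluate matching accuracy: pixel distance between the observed points and
    the map points re-projected with the current pose estimate (rvec, tvec)."""
    projected, _ = cv2.projectPoints(map_points_3d, rvec, tvec, K, None)
    return np.linalg.norm(projected.reshape(-1, 2) - observed_px, axis=1)

# Matches whose re-projection error stays below a threshold would then be fused
# into the map and refined by global or local BA optimization.
```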
(7) After the optimized global map is obtained, the grassland distribution and shape information, i.e. the collected grassland information, the initial position and the position of the target grassland, are obtained from the global map, and the optimal path from the starting point to the end point is calculated: taking the starting point as the center, the search expands outwards layer by layer and an adjacency list is built; the point with the minimum dist is popped from the min-heap, and if this point is the end point, the algorithm ends; the edges starting from the popped point are traversed using the adjacency list, dist is computed for the end point of each edge and pushed into the min-heap; these steps are repeated until the end point is popped from the min-heap.
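The search just described (expanding from the start, keeping dist values in a min-heap and stopping when the end point is popped) is Dijkstra's algorithm over an adjacency list. A compact sketch with a purely illustrative way-point graph:

```python
import heapq

def dijkstra(adjacency, start, goal):
    """Shortest path over an adjacency list {node: [(neighbour, cost), ...]}
    using a min-heap; stops as soon as the goal is popped from the heap."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)              # point with the minimum dist
        if node == goal:                           # end point popped: finished
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue                               # stale heap entry
        for neighbour, cost in adjacency.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(heap, (nd, neighbour))
    return float("inf"), []

# Illustrative grassland way-point graph.
graph = {"start": [("A", 2.0), ("B", 5.0)], "A": [("goal", 4.0)], "B": [("goal", 1.0)]}
print(dijkstra(graph, "start", "goal"))            # (6.0, ['start', 'A', 'goal'])
```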
According to the map type and the task requirements, the optimal path of the robot moving in the map is calculated. The map type covers not only different grassland forms, such as the types of weeds and their quantities, but also non-grassland environments such as pits and puddles; the grassland types and their distribution in the area are obtained by sensing, with the environment information collected through the acquisition module, and the planned path is adjusted accordingly, so that the weeding route is planned.

Claims (10)

1. A weeding robot positioning navigation system based on multisource sensor fusion is characterized in that: the system comprises an acquisition module, a data processing module, a map construction module and a path planning module; the acquisition module acquires grassland environment information, the data processing module preprocesses and analyzes the acquired data, the map construction module constructs a map based on the acquired data, and the path planning module plans the moving path of the robot based on the global map.
2. The multi-source sensor fusion-based weeding robot positioning and navigation system according to claim 1, wherein: the acquisition module comprises a depth camera, an inertial sensor and a laser radar;
The depth camera shoots an environment image to obtain an RGB image and a depth image;
The inertial sensor comprises an accelerometer and a gyroscope, and acquires acceleration and angular velocity data of the robot;
The laser radar senses the environment in real time through its laser and detector, acquires the distance and contour information of surrounding objects, and scans to acquire laser point cloud data.
3. The multi-source sensor fusion-based weeding robot positioning and navigation system according to claim 1, wherein: the data processing module performs preprocessing of noise filtering, deviation correction, data normalization and temperature compensation on the inertial sensor data.
4. A weeding robot positioning navigation method based on multisource sensor fusion is characterized in that: the method comprises the following steps:
(1) Shooting grasslands by the robot through a depth camera, and acquiring grassland depth images and RGB images; sensing the environment in real time through laser of a laser radar and a detector, acquiring distance and contour information of surrounding objects, and scanning to acquire laser point cloud data; the acceleration and angular velocity data of the robot are collected through the inertial sensor when the robot moves;
(2) Projecting the depth image into laser type data:
(3) Obtaining ORB features from the RGB image:
(4) Acquiring acceleration and angular velocity data of the robot by an inertial sensor and constructing a motion model;
(5) Fusing the motion model of the inertial sensor, the laser point cloud data and the laser type data through an extended Kalman filtering algorithm; establishing a local map through SLAM mapping based on these data as the robot moves and collects new data;
(6) Optimizing a global map: when the robot moves and collects new data, comparing and matching the new data with original map data, and optimizing a global map;
(7) Obtaining grassland distribution and shape information, the initial position and the target grassland position from the optimized global map, and calculating an optimal path from the starting point to the end point; according to the type of the map and the task requirements, calculating the optimal path of the robot moving in the map, and adjusting the planned path through real-time sensing and judgment of the environment.
5. The positioning and navigation method of the weeding robot based on multi-source sensor fusion according to claim 4, wherein the positioning and navigation method is characterized by comprising the following steps: step (2) comprises the following steps:
(2.1) obtaining a depth image through a depth camera, and obtaining a depth image coordinate point and depth information;
(2.2) according to the depth information and the camera intrinsic parameters, carrying out coordinate transformation through a pinhole camera model, and calculating world coordinates P(Xw, Yw, Zw);
(2.3) calculating a projection angle θ = arctan(Y/X) of the coordinate point;
(2.4) mapping the angle θ into the laser data slot to complete the data conversion.
6. The positioning and navigation method of the weeding robot based on multi-source sensor fusion according to claim 5, wherein the positioning and navigation method is characterized by comprising the following steps: in the step (2.4), converting the polar coordinate system of the laser radar data into a Cartesian coordinate system; the angle θ in the converted Cartesian coordinate system is mapped into the laser data slots.
7. The positioning and navigation method of the weeding robot based on multi-source sensor fusion according to claim 4, wherein the positioning and navigation method is characterized by comprising the following steps: step (3) comprises the following steps:
(3.1) selecting a pixel point p, and denoting its brightness as Ip;
(3.2) setting a brightness threshold T for the pixel point;
(3.3) selecting a plurality of pixel points on a circle of a set radius centered on the pixel point p;
(3.4) if the brightness of N consecutive points on the selected circle is greater than Ip+T or less than Ip-T, the pixel p is considered a feature point;
(3.5) looping the above steps, performing the same operation for each pixel.
8. The positioning and navigation method of the weeding robot based on multi-source sensor fusion according to claim 4, wherein the positioning and navigation method is characterized by comprising the following steps: in the step (4), the motion model expression of the IMU is:
X = [WpI^T, WvI^T, qIW, ba^T, bg^T]^T
wherein X is the state vector, comprising estimates of pose, velocity and zero bias; WpI is the position of the IMU in the world frame, WvI is its linear velocity, qIW is the orientation (direction cosine) of the IMU frame relative to the world frame, ba is the zero bias of the accelerometer and bg is the zero bias of the gyroscope.
9. The positioning and navigation method of the weeding robot based on multi-source sensor fusion according to claim 4, wherein the positioning and navigation method is characterized by comprising the following steps: in step (6), the new data and the map data are subjected to feature point matching.
10. The positioning and navigation method of the weeding robot based on multi-source sensor fusion according to claim 4, wherein the positioning and navigation method is characterized by comprising the following steps: in the step (7), the starting point is taken as the center and the search expands outwards layer by layer to build an adjacency list; the point with the minimum dist is popped from the min-heap, and if this point is the end point, the algorithm ends; the edges starting from the popped point are traversed, dist is computed for the end point of each edge and pushed into the min-heap; these steps are repeated until the end point is popped from the min-heap.
CN202410411661.7A, filed 2024-04-08 (priority date 2024-04-08): Positioning navigation system and method for weeding robot based on multi-source sensor fusion, CN118168545A, status Pending

Priority Applications (1)

Application Number: CN202410411661.7A | Priority Date: 2024-04-08 | Filing Date: 2024-04-08 | Title: Positioning navigation system and method for weeding robot based on multi-source sensor fusion


Publications (1)

Publication Number: CN118168545A | Publication Date: 2024-06-11

Family

ID=91348532

Family Applications (1)

Application Number: CN202410411661.7A (CN118168545A, en) | Priority Date: 2024-04-08 | Filing Date: 2024-04-08 | Status: Pending | Title: Positioning navigation system and method for weeding robot based on multi-source sensor fusion

Country Status (1)

CN: CN118168545A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination