CN112987763B - ROS-based intelligent trolley of autonomous navigation robot control system - Google Patents


Info

Publication number
CN112987763B
CN112987763B (application CN202110511154.7A)
Authority
CN
China
Prior art keywords
map
trolley
path
robot
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110511154.7A
Other languages
Chinese (zh)
Other versions
CN112987763A
Inventor
周月娥
傅家云
刘寅飞
梅佳锐
王玉珏
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202110511154.7A
Publication of CN112987763A
Application granted
Publication of CN112987763B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an intelligent trolley of an ROS-based autonomous navigation robot control system. The trolley carries a laser radar, a camera and a gravity acceleration sensor; the laser radar acquires the trolley's current position, distance information and environmental information. A grid map is created on the robot operating system, and a three-dimensional point cloud map is constructed by fusing a SLAM algorithm with depth-camera vision. The trolley uses the robot operating system to sense the environment through the distance information and the visual odometer of an RGB-D depth camera, generating the three-dimensional point cloud map. The robot operating system corrects the three-dimensional point cloud map when the route forms a loop according to GPS (global positioning system) data, and calculates the trolley's route so as to avoid obstacles. The invention greatly enhances the motion capability and robustness of the robot and achieves more accurate positioning from the odometer and encoder data.

Description

ROS-based intelligent trolley of autonomous navigation robot control system
Technical Field
The invention relates to the technical field of robot software development, in particular to an intelligent trolley of an autonomous navigation robot control system based on ROS.
Background
Although technology is developing rapidly, the navigation technology of intelligent vehicles is still imperfect, and the navigation process places strict requirements on the environment. An intelligent autonomous navigation system should therefore identify, position and explore maps well, supporting map-free navigation, navigation based on a known map, and navigation based on an incremental map.
The path planning algorithms of conventional navigation trolleys are comparatively backward: under even slightly harsh environmental conditions they cannot calculate the best path, and tasks such as navigation and positioning cannot be completed well.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
Therefore, the invention provides the intelligent trolley of the autonomous navigation robot control system based on the ROS, and the intelligent trolley can solve the problems that the navigation trolley cannot calculate the best path and cannot well complete tasks such as navigation and positioning under slightly severe environmental conditions.
In order to solve the technical problems, the invention provides the following technical scheme: an intelligent trolley of an autonomous navigation robot control system based on ROS comprises a laser radar, a camera and a gravity acceleration sensor, wherein the laser radar is carried on the trolley to acquire the current position, distance information and environmental information of the trolley; a grid map is created based on a robot operating system, and a three-dimensional point cloud map is constructed by utilizing SLAM algorithm and depth camera visual fusion; the trolley utilizes the robot operating system to sense the environment of the distance information and a visual odometer of an RGB-D depth camera, and generates the three-dimensional point cloud map; and the robot operating system corrects the three-dimensional point cloud map when a route forms a loop according to a GPS (global positioning system), and calculates the route of the trolley to avoid obstacles.
The invention relates to a preferable scheme of an intelligent vehicle of an ROS-based autonomous navigation robot control system, which comprises the following steps: the robot operating system comprises an upper computer, a lower computer and a map; the upper computer receives data sent by the lower computer, calculates pose matching and matches coordinates in the map; it judges whether a target point has been issued; if so, it calculates and plans the trolley's path, refreshes the local cost map and issues a chassis control instruction; if not, the process ends directly.
The invention relates to a preferable scheme of an intelligent vehicle of an ROS-based autonomous navigation robot control system, which comprises the following steps: the lower computer initializes data information and starts a timer interrupt, with the interrupt period set to 20 ms; it receives the speed and steering-engine angle data from the upper computer and acquires MPU6050 and encoder data; it then executes motion control and sends the speed and steering-engine angle data back to the upper computer.
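As an illustration, the 20 ms lower-computer cycle described above can be sketched as a pure-Python simulation. The command fields, the stubbed MPU6050/encoder reads, and the proportional control law are assumptions for demonstration only; the actual firmware runs on a single-chip microcontroller and is not given in the patent.

```python
from dataclasses import dataclass

TICK_MS = 20  # timer-interrupt period described in the text


@dataclass
class Command:          # data received from the upper computer
    speed: float        # commanded speed
    steer_angle: float  # commanded steering-engine angle (degrees)


@dataclass
class Feedback:         # data sent back to the upper computer
    speed: float
    steer_angle: float


def control_cycle(cmd, read_imu, read_encoder):
    """One 20 ms timer-interrupt cycle: receive command, sample sensors,
    run motion control, return feedback for the upper computer."""
    _gyro_z = read_imu()              # MPU6050 yaw rate (stubbed here)
    measured_speed = read_encoder()   # encoder speed (stubbed here)
    # Illustrative motion control: proportional step toward commanded speed.
    new_speed = measured_speed + 0.5 * (cmd.speed - measured_speed)
    return Feedback(speed=new_speed, steer_angle=cmd.steer_angle)


# Simulate ten ticks of the loop
speed = 0.0
cmd = Command(speed=1.0, steer_angle=5.0)
for _ in range(10):
    fb = control_cycle(cmd, read_imu=lambda: 0.0, read_encoder=lambda: speed)
    speed = fb.speed
```

With the illustrative gain of 0.5, the simulated speed converges toward the commanded 1.0 within a few ticks.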
The invention relates to a preferable scheme of an intelligent vehicle of an ROS-based autonomous navigation robot control system, which comprises the following steps: the map construction comprises synchronizing the acquired RGB-D image, radar data and odometer nodes to obtain sensor data; the sensor data is passed into a short-term memory module, and loop-closure and proximity detection are performed to complete graph optimization, yielding a global map graph; the global map graph is then divided to obtain, in turn, map data, a map image, TF coordinate transforms, an octree-based three-dimensional map, the three-dimensional point cloud map and the grid map.
The invention relates to a preferable scheme of an intelligent vehicle of an ROS-based autonomous navigation robot control system, which comprises the following steps: the robot operating system carries a global planner, a TEB local planner and a vehicle model control algorithm; the global planner includes the Dijkstra algorithm and the A* (A-star) algorithm.
The invention relates to a preferable scheme of an intelligent vehicle of an ROS-based autonomous navigation robot control system, which comprises the following steps: the vehicle model control algorithm comprises a steering engine control algorithm and a motor control algorithm.
The invention relates to a preferable scheme of an intelligent vehicle of an ROS-based autonomous navigation robot control system, which comprises the following steps: the Dijkstra algorithm expands outward layer by layer from the starting point until it reaches the end point; when the Dijkstra algorithm is used to compute the shortest path in the map, calculation starts from the starting vertex; a set S and a set U are introduced, where set S records the vertices whose shortest paths have already been solved together with the lengths of those shortest paths, and set U records the vertices whose shortest paths have not yet been obtained together with their current distances from the starting point; initially, only the starting point is in set S, all other vertices are in set U, and the path of a vertex in set U is the path from the starting point to that vertex; the vertex with the shortest path is found in set U and added to set S, and then the vertices in set U and their corresponding paths are updated; this step is repeated, iterating until all vertices have been traversed and the shortest path is found.
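The set-based procedure described above can be sketched directly in Python; the weighted example graph is illustrative.

```python
def dijkstra(graph, start):
    """Dijkstra's algorithm as described above: S holds vertices whose
    shortest path is settled, U holds the rest with their current best
    (tentative) distance from the start."""
    INF = float("inf")
    S = {start: 0}  # settled vertices -> shortest distance
    U = {v: graph[start].get(v, INF) for v in graph if v != start}
    while U:
        # find the vertex in U with the shortest tentative path, move it to S
        u = min(U, key=U.get)
        dist_u = U.pop(u)
        S[u] = dist_u
        # update the remaining vertices' paths via the newly settled vertex
        for v, w in graph[u].items():
            if v in U and dist_u + w < U[v]:
                U[v] = dist_u + w
    return S


# Illustrative adjacency-dict graph (edge weights)
g = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
dist = dijkstra(g, "A")
```

Here the shortest path from A to D goes A-B-C-D with total cost 4, shorter than the direct-looking A-B-D route of cost 6.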
The invention relates to a preferable scheme of an intelligent vehicle of an ROS-based autonomous navigation robot control system, which comprises the following steps: the A* (A-star) algorithm simplifies the search area into a grid shape, with the center of each square defined as a node; starting from the squares around the starting point A, adjacent squares are checked and the target is sought outward, selecting the shortest and most appropriate among the multiple possible paths; each node n is assigned a comprehensive priority, and when choosing the next node to traverse, the node with the highest comprehensive priority is selected from the priority queue; if the estimated cost from node n to the end point never exceeds the true cost from node n to the end point, the shortest path is guaranteed to be found.
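A minimal grid-based A* sketch of the procedure above, assuming 4-connected movement and the Manhattan distance as the heuristic (an estimate that never exceeds the true cost on such a grid, so the shortest path is found). The comprehensive priority is f(n) = g(n) + h(n).

```python
import heapq


def a_star(grid, start, goal):
    """A* on a grid: each cell center is a node; the node with the best
    comprehensive priority f(n) = g(n) + h(n) is expanded next."""
    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]   # (f, g, node) priority queue
    g_cost = {start: 0}
    came = {}
    while open_heap:
        _f, g, node = heapq.heappop(open_heap)
        if node == goal:                 # reconstruct the path backward
            path = [node]
            while node in came:
                node = came[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1               # uniform step cost between cells
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    came[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None


# 0 = free cell, 1 = obstacle: the wall forces a detour around the right side
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = a_star(grid, (0, 0), (2, 0))
```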
The invention has the beneficial effects that: the TEB algorithm is adopted, so that a better path scheme is obtained when path planning is calculated; speed feedforward plus PID control (KPID) is adopted for speed control, making it more responsive; steering-engine steering and differential steering are used together for steering control, greatly enhancing the motion capability and robustness of the robot; the odometer and encoder data are used to achieve more accurate positioning. For MAP building, the RTAB-MAP algorithm is adopted, realizing three-dimensional point cloud MAP construction with lower-cost equipment, at higher precision and lower cost than traditional mapping equipment. In terms of optimization, the product realizes both front-end and back-end optimization and is more complete; it adds the two functions of loop-closure detection and automatic correction, and has a certain capability of self-detection and motion recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
FIG. 1 is a schematic flow chart of a smart car of an ROS-based autonomous navigation robot control system according to an embodiment of the present invention;
fig. 2 is a schematic upper computer flow diagram of an intelligent vehicle of the ROS-based autonomous navigation robot control system according to an embodiment of the present invention;
fig. 3 is a flow chart of a lower computer of the intelligent vehicle of the ROS-based autonomous navigation robot control system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a flow chart of a smart car of the ROS-based autonomous navigation robot control system according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of a steering engine control of a smart cart of the ROS-based autonomous navigation robot control system according to one embodiment of the present invention;
FIG. 6 is a schematic diagram of the motor control of the smart cart of the ROS-based autonomous navigation robot control system according to one embodiment of the present invention;
FIG. 7 is a schematic diagram of a first corridor setup test of a smart cart of the ROS-based autonomous navigation robot control system according to one embodiment of the present invention;
FIG. 8 is a schematic diagram of a second corridor setup test of a smart cart of the ROS-based autonomous navigation robot control system in accordance with one embodiment of the present invention;
FIG. 9 is a schematic diagram of a third corridor setup test of a smart cart of the ROS-based autonomous navigation robot control system according to one embodiment of the present invention;
FIG. 10 is a schematic view of an outdoor actual scene of a smart car of the ROS-based autonomous navigation robot control system according to one embodiment of the present invention;
FIG. 11 is a schematic diagram of a first outdoor environment mapping test of a smart cart of the ROS-based autonomous navigation robot control system in accordance with one embodiment of the present invention;
FIG. 12 is a schematic diagram of a second outdoor environment mapping test of a smart cart of the ROS-based autonomous navigation robot control system in accordance with one embodiment of the present invention;
FIG. 13 is a schematic diagram of a local planner of a smart car of the ROS-based autonomous navigational robot control system in accordance with one embodiment of the present invention;
FIG. 14 is a schematic diagram of yet another local planner of a smart car of the ROS-based autonomous navigational robot control system in accordance with one embodiment of the present invention;
fig. 15 is a schematic view of a navigation and obstacle avoidance function test of an intelligent vehicle of the ROS-based autonomous navigation robot control system according to an embodiment of the present invention;
FIG. 16 is a schematic view of a navigation path planning for a smart cart of the ROS-based autonomous navigation robot control system in accordance with one embodiment of the present invention;
FIG. 17 is a block diagram of a hardware module framework of a smart cart of the ROS-based autonomous navigation robot control system in accordance with one embodiment of the present invention;
FIG. 18 is a schematic diagram of a power module framework of a smart cart of the ROS-based autonomous navigation robot control system in accordance with one embodiment of the present invention;
fig. 19 is a schematic diagram of a motor driving module framework of a smart car of the ROS-based autonomous navigation robot control system according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
Referring to fig. 1 to 16, a first embodiment of the present invention provides an intelligent vehicle of an autonomous navigation robot control system based on an ROS, including:
s1: a laser radar, a camera and a gravity acceleration sensor are carried on the trolley, and the current position, distance information and environmental information of the trolley are obtained by the laser radar.
S2: and (3) creating a grid map based on a robot operating system, and constructing a three-dimensional point cloud map by utilizing SLAM algorithm and depth camera visual fusion.
S3: the trolley utilizes the robot operating system to sense the environment of the distance information and the visual odometer of the RGB-D depth camera, and a three-dimensional point cloud map is generated.
S4: and the robot operating system corrects the three-dimensional point cloud map when a route forms a loop according to the GPS, and calculates the route of the trolley to avoid obstacles.
Referring to fig. 2, an upper computer in the robot operating system receives data sent by a lower computer, calculates pose matching and matches coordinates in a map; judging whether the target point is issued or not, if so, calculating a planned trolley path, refreshing a local cost map and issuing a chassis control instruction; if not, the process is ended directly.
Referring to fig. 3, the lower computer initializes data information and starts a timer interrupt with the period set to 20 ms; it receives the speed and steering-engine angle data from the upper computer and acquires MPU6050 and encoder data; it then executes motion control and sends the speed and steering-engine angle data back to the upper computer.
Referring to fig. 4, the map construction includes synchronizing the acquired RGB-D image, radar data and odometer nodes to obtain sensor data; the sensor data is passed into a short-term memory module, and loop-closure and proximity detection are performed to complete graph optimization, yielding a global map graph; the global map graph is then divided to obtain, in turn, map data, a map image, TF coordinate transforms, an octree-based three-dimensional map, the three-dimensional point cloud map and the grid map.
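In a ROS system, the synchronization step above is typically done by matching message timestamps across the RGB-D, laser-scan and odometry topics. The following is a minimal pure-Python sketch of approximate-time matching; the stream contents and the 20 ms tolerance are illustrative, not the actual configuration used by the patent's system.

```python
def synchronize(streams, slop=0.02):
    """Greedy approximate-time matching: for each message in the first
    stream, pick the nearest-in-time message from every other stream;
    keep the tuple only if all timestamps lie within `slop` seconds of
    each other. Each stream is a list of (timestamp, payload) pairs."""
    matched = []
    for t0, payload0 in streams[0]:
        group = [(t0, payload0)]
        for other in streams[1:]:
            t, p = min(other, key=lambda m: abs(m[0] - t0))
            group.append((t, p))
        timestamps = [t for t, _ in group]
        if max(timestamps) - min(timestamps) <= slop:
            matched.append(tuple(p for _, p in group))
    return matched


# Illustrative message streams: (timestamp in seconds, payload)
rgbd = [(0.00, "img0"), (0.10, "img1"), (0.20, "img2")]
scan = [(0.01, "scan0"), (0.11, "scan1"), (0.35, "scan2")]
odom = [(0.00, "odo0"), (0.10, "odo1"), (0.21, "odo2")]
pairs = synchronize([rgbd, scan, odom])
```

The third image finds no scan within tolerance, so only two synchronized sensor tuples are emitted.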
Referring to fig. 5 and 6, the robot operating system carries a global planner, a TEB local planner and a vehicle model control algorithm; the global planner includes the Dijkstra algorithm and the A* (A-star) algorithm; the vehicle model control algorithm comprises a steering-engine control algorithm and a motor control algorithm. The Dijkstra algorithm expands outward layer by layer from the starting point until it reaches the end point; when computing the shortest path in the map with the Dijkstra algorithm, calculation starts from the starting vertex; a set S and a set U are introduced, where set S records the vertices whose shortest paths have already been solved and the lengths of those paths, and set U records the vertices whose shortest paths have not yet been solved and their current distances from the starting point; initially, only the starting point is in set S, all other vertices are in set U, and the path of a vertex in set U is the path from the starting point to that vertex; the vertex with the shortest path is found in set U and added to set S, and then the vertices in set U and their corresponding paths are updated; this is repeated, iterating until all vertices have been traversed and the shortest path is found. The A* (A-star) algorithm simplifies the search area into a grid shape, with the center of each square defined as a node; starting from the squares around the starting point A, adjacent squares are checked and the target is sought outward, selecting the shortest and most appropriate among the multiple possible paths; each node n is assigned a comprehensive priority, and when choosing the next node to traverse, the node with the highest comprehensive priority is selected from the priority queue; if the estimated cost from node n to the end point never exceeds the true cost from node n to the end point, the shortest path is guaranteed to be found.
Table 1: the serial port data communication packet sent by the lower computer (the packet contents are given as an image in the original publication).
It is easy to understand that RTAB-Map is an open-source library for large-scale, long-term online laser and visual SLAM. RTAB-Map began as a feature-based loop-closure detection method with a memory-management function, and it has been applied to various robots and mobile platforms to achieve simultaneous localization and mapping (SLAM). RTAB-Map limits the size of the map so that loop detection is always processed within a fixed time limit, thereby satisfying the requirements of long-term, large-scale online mapping. RTAB-Map has become a cross-platform standalone C++ library and a ROS package, and is very convenient to use.
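The fixed-time loop-detection idea can be illustrated with a toy working-memory model: loop closures are only searched in a bounded working memory, so detection cost stays constant no matter how large the full map grows. The deque-based memory and integer location descriptors below are simplifications for demonstration, not RTAB-Map's actual data structures.

```python
from collections import deque


def loop_detect(history, current, wm_size=5):
    """Sketch of memory-managed loop detection: only the most recent
    wm_size locations stay in working memory (WM); locations pushed out
    conceptually move to long-term memory and are ignored here, keeping
    the search cost fixed. Returns (loop_found, new_WM_contents)."""
    wm = deque(history, maxlen=wm_size)  # bounded working memory
    loop = current in wm                 # fixed-cost loop search
    wm.append(current)                   # current location enters WM
    return loop, list(wm)


# Revisiting location 3 is detected while 3 is still in working memory
found, wm = loop_detect([1, 2, 3, 4, 5], 3, wm_size=5)
```

With a smaller working memory the same revisit can be missed, which is the trade-off RTAB-Map's memory management makes to keep detection time bounded.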
TEB is short for the Timed Elastic Band local planner; the method performs subsequent correction (modification) on the initial trajectory generated by the global path planner in order to optimize the robot's motion trajectory, and belongs to local path planning. A two-dimensional path is regarded as an elastic band (rubber band) with one end attached to the starting point and the other to the end point; like a rubber band, the path can deform elastically, and all constraint conditions are regarded as external forces acting on the band to deform the path. The starting point and end point are specified by the global planner; N control points that control the shape of the band are inserted along the path, and, in order to express the kinematic information of the trajectory, a motion time is defined between the points. The TEB algorithm can thus be understood as Time + Elastic Band = TEB.
In popular terms, the local trajectory generated by TEB consists of a series of discrete poses with time information, and the optimization targets of the g2o algorithm are these discrete poses, so that the trajectory finally composed of them achieves goals such as shortest time, shortest distance and keeping far from obstacles, while speed and acceleration are limited so that the trajectory satisfies the robot's kinematics. The local path planner must both track the global path and avoid obstacles; these two requirements are essentially the same problem: find the obstacle or the global path point, calculate the distance to it, and define a distance-based potential field in which the path-tracking force increases with distance while the obstacle-avoidance force decreases with distance.
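The distance-based potential field described above can be sketched as a simple cost function. The quadratic attraction term, the inverse repulsion term and the gains are illustrative choices, not the patent's actual formulation.

```python
def local_cost(dist_to_path, dist_to_obstacle, k_track=1.0, k_obs=1.0,
               eps=1e-6):
    """Distance-based potential field: the path-tracking term grows with
    distance from the global path (pulling the robot back toward it),
    while the obstacle term decays with distance from the obstacle
    (pushing the robot away when it gets close)."""
    attraction = k_track * dist_to_path ** 2      # large when far from path
    repulsion = k_obs / (dist_to_obstacle + eps)  # large when near obstacle
    return attraction + repulsion


# Moving toward the path lowers cost; moving toward an obstacle raises it
near_path = local_cost(0.1, 1.0)
far_path = local_cost(1.0, 1.0)
near_obs = local_cost(0.5, 0.05)
far_obs = local_cost(0.5, 1.0)
```

A local planner minimizing such a cost over the band's control points reproduces the stated behavior: tracking dominates far from the path, obstacle avoidance dominates near obstacles.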
In the trajectory optimization process, TEB has a variety of optimization objectives, including but not limited to: overall path length, trajectory run time, distance to obstacles, passage through intermediate path points, and compliance with the robot's dynamic, kinematic and geometric constraints. TEB explicitly considers the spatio-temporal dynamic constraints on the motion state, such as limits on the robot's speed and acceleration, and is expressed as a multi-objective optimization problem. Most of the objectives are local and relate to only a small subset of the parameters, since each depends on just a few consecutive robot states; this local structure produces a sparse system matrix, so the TEB problem can be solved with fast, efficient sparse optimization techniques, for example the open-source framework g2o.
It should be noted that the result of the g2o optimization does not necessarily satisfy the constraints; that is, they are in fact soft constraints. If the parameters are set unreasonably or the environment is too harsh, TEB may fail and plan a very strange trajectory. The TEB algorithm therefore includes a collision-detection step: after the trajectory is generated, it checks point by point whether any point on the trajectory collides with an obstacle, taking the robot's actual contour into account.
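The point-by-point collision check can be sketched as follows. Approximating the robot contour by a single circumscribed circle and the obstacle set by point obstacles is a simplifying assumption for illustration; the real planner supports richer footprint models.

```python
import math

def trajectory_collides(trajectory, obstacles, robot_radius):
    """Check each pose on a planned trajectory against every point obstacle.

    trajectory:   list of (x, y) poses produced by the local planner
    obstacles:    list of (x, y) obstacle points (e.g. from the costmap)
    robot_radius: radius of a circle approximating the robot contour
    Returns True as soon as any pose overlaps an obstacle.
    """
    for (px, py) in trajectory:
        for (ox, oy) in obstacles:
            if math.hypot(px - ox, py - oy) < robot_radius:
                return True  # this pose would collide
    return False
```

If the check fires, the planner must discard or re-optimize the trajectory rather than execute it.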
The steering engine is controlled by conventional PID control: the host computer sends the deviation angle between the robot's current heading and the route to the single-chip microcomputer, and the single-chip microcomputer controls the steering engine angle by directly outputting a duty cycle, so that the vehicle body continuously follows the computed path plan.
The PID controller is a linear controller which, from the given value r(t) and the actual output value c(t), forms the deviation:

e(t) = r(t) − c(t)

The controlled object is controlled by linearly combining the proportional (P), integral (I) and derivative (D) terms of the deviation to form the control quantity.

The control law is:

u(t) = Kp [ e(t) + (1/Ti) ∫ e(t) dt + Td · de(t)/dt ]

The transfer function is:

G(s) = U(s)/E(s) = Kp ( 1 + 1/(Ti·s) + Td·s )

wherein Kp is the proportional coefficient, Ki = Kp ÷ Ti is the integral coefficient, Kd = Kp × Td is the differential coefficient, Td is the differential time constant and Ti is the integral time constant; for the steering engine the integral coefficient is set to 0.
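A discrete-time sketch of this controller follows; the class name, sampling scheme and variable names are illustrative, not from the original.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*∫e dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        # e(t) = r(t) - c(t)
        error = setpoint - measured
        self.integral += error * self.dt                 # accumulate ∫e dt
        derivative = (error - self.prev_error) / self.dt  # approximate de/dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

With ki set to 0 this reduces to the PD control used for the steering engine, where the duty cycle is derived from the returned control quantity.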
The motor is controlled with conventional PID control plus a feedforward method, in order to make the robot's speed response as timely as possible. The integral term of conventional PID takes time to accumulate, so although the control accuracy is high, it cannot improve the robot's speed response time, and the convergence speed is not ideal. The physical meaning of the integral term is balancing the resistance; since the robot needs a fast response, and hence the system needs fast convergence, feedforward control is used instead of the integral term to balance the resistance. This requires building a model of the system, and the control accuracy then depends on the model accuracy; although the accuracy is certainly not as good as that of PID control, the convergence speed is very fast.
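One way to sketch this scheme: a feedforward term computed from an assumed resistance model replaces the integral term, leaving a PD feedback loop. The linear model u_ff = k_v·v_ref + k_static and all gain values are assumptions for illustration; the actual model would be identified on the real vehicle.

```python
def feedforward_pd(v_ref, v_meas, prev_error, dt,
                   kp=1.0, kd=0.05, k_v=0.8, k_static=0.1):
    """Speed controller: model-based feedforward + PD feedback.

    v_ref:  desired wheel speed, v_meas: encoder-measured speed.
    Returns (control output, current error); the caller keeps the
    error to pass back as prev_error on the next cycle.
    """
    # Feedforward: estimated effort to overcome resistance at v_ref,
    # replacing the slow-to-converge integral term
    u_ff = k_v * v_ref + k_static
    # PD feedback corrects the residual model error
    error = v_ref - v_meas
    u_fb = kp * error + kd * (error - prev_error) / dt
    return u_ff + u_fb, error
```

At steady state (measured speed equal to the reference) the output is the pure model term, so convergence does not wait on an integrator winding up.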
Preferably, in order to better verify and explain the technical effects of the method of the present invention, this embodiment carries out a comparative test between a conventional technical scheme and the method of the present invention, and compares the test results by scientific means to verify the actual effect of the method.
The conventional technical scheme: compared with the SLAM technique used here, the conventional scheme has certain deficiencies in multi-sensor fusion, optimized data association and loop-closure detection, integration with front-end heterogeneous processors, robustness, and relocalization accuracy, and suffers from low sensor accuracy, a large amount of computation, and a lack of generality across usage scenarios.
Because sensor types and mounting methods differ, SLAM implementations differ in approach and in mapping difficulty. At present laser SLAM is the more mature technique, with higher accuracy in the maps it builds (typical error around 3 cm), making it better suited to robot navigation; the advent of lidar (laser SLAM) has made measurement faster and more accurate and information acquisition richer.
It should also be explained for this embodiment that the main test contents include the mapping function and the navigation/obstacle-avoidance function. A track is built in the simulation platform for mapping, and cone barrels are added during navigation for the obstacle-avoidance test, observing whether the robot can avoid obstacles smoothly; the mapping, navigation and obstacle-avoidance functions are first verified through test simulation. Physical tests are then performed in an indoor closed narrow space, an indoor large scene and an outdoor environment, covering both mapping and navigation; the test results show that the mapping and navigation functions are well realized and robustness is good. In the tests, the curvature of the forward path is calculated and used as the robot's steering look-ahead: in the control algorithm, giving the steering engine a certain look-ahead lets it set its angle in real time and overcomes the slowness of its mechanical steering.
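The forward-path curvature used as steering look-ahead can be computed from three consecutive path points, for example with the Menger (circumradius) curvature formula; this particular formula is an assumption for illustration, since the original does not specify how the curvature is computed.

```python
import math

def menger_curvature(p1, p2, p3):
    """Curvature of the circle through three 2-D path points.

    Returns 0 for collinear points; larger values mean a sharper
    upcoming turn, which the controller can use as look-ahead.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed area of the triangle p1-p2-p3
    area2 = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    if a * b * c == 0:
        return 0.0  # degenerate: repeated points
    return 2.0 * abs(area2) / (a * b * c)
```

Evaluating this over the next few planned points gives the controller an early estimate of how sharply it will need to steer.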
The specific test process is as follows:
(1) indoor narrow closed environment map building function test
A rectangular closed space is built indoors using baffles, the trolley is placed inside, and the mapping program is run. The outline of the closed environment is drawn in the RVIZ display; the blue line is the robot's motion trajectory, and as the robot keeps moving, the surrounding map is continuously refined until a closed map is finally formed.
(2) Indoor large-scene environment test
Referring to fig. 7 to 9, the design must handle indoor closed narrow environments while also adapting to common scenes in a campus environment, such as corridors. The corridor environment is special: it is generally long, uniform in color and basically pure white, so its feature points are monotonous, consecutive frames differ little, and it stresses the algorithm considerably. The test is therefore carried out in the first-floor corridor of experimental building D. The left side shows the robot's first-person view and the right side the map displayed by RVIZ; the left image shows serious reflections and high ambient brightness. The map expands continuously as the robot moves until the known surrounding area is gradually refined and approaches the real environment; the map is complete, with no obvious distortion. The black point cloud at the bottom is the wall, generated from the lidar scan data and the depth-camera data, while the colored point cloud above the ground is data captured by the depth camera. The motion of the lidar and depth camera is compensated by the robot's own motion estimate, completing construction of the three-dimensional map of the indoor environment.
In conclusion, the robot completes map construction well in the indoor environment, and the map basically conforms to the actual environment;
(3) outdoor environment testing
Referring to fig. 10 to 12: running in a campus environment, the robot inevitably encounters outdoor conditions, so the adaptability and mapping capability of the robot outdoors are tested. The main outdoor difficulty lies in semi-dense media such as grass and bushes. Unlike ordinary walls and baffles, laser can penetrate such media during mapping, so the area may wrongly be treated as containing no objects and therefore passable. After fusing in the depth camera, however, the system adapts better to semi-dense media: during mapping the robot does not plan through the bushes but restores the actual situation, an advantage that an ordinary 2D radar does not have.
Mapping a large scene often requires multiple loop closures so as to continuously correct the map and achieve a faithful restoration. The robot maps the outdoor environment well, basically restoring the actual scene, and a map of such quality can be used for navigation.
(4) Navigation and obstacle avoidance function test
Referring to fig. 13 to 16, after the environment is mapped, path planning must be performed on the map and real-time obstacle avoidance realized, so that the local planner matches against the surrounding environment.
When an obstacle such as a cone barrel is placed in the environment, the local planner scans the surroundings in real time; the pink dots in the red circle are the obstacle scanned by the local planner, which modifies the inflation layer of the grid map according to the information scanned by the lidar.
Before navigation starts, a target point is calibrated on the map with the 2D Nav Goal tool built into rviz; in essence this publishes a goal topic for the navigation node to subscribe to. After the target point is calibrated, a planned path is generated in the map, and the robot then responds with controller speed and steering-engine output according to the planned path.
When an obstacle is placed in the environment, the local path planner plans a path that bypasses it; the planner avoids the obstacle's inflation layer and plans a clearly visible curved path.
From the tests of the robot in the indoor environment it can be concluded that the robot can plan paths and avoid obstacles indoors, and that the actual path follows the planned path to a certain extent.
(5) Test analysis
The virtual machine connects to the trolley host's IP address via ssh for remote control, and the robot's navigation node is run from the command line. Using rviz on the virtual machine, the built map can be called up remotely, and the robot's position, attitude and surrounding environment in the map can be monitored. By calibrating a target point in rviz, the point the robot needs to reach is marked in the map; once the target-point topic is published, the trolley's motion program is activated, and while driving, the trolley avoids obstacles according to the surrounding environment while following the route given by the local planner.
After target points are calibrated in the built map through rviz, the black path planned by the path-planning algorithm can be seen: the global planner roughly plans over the whole map, while the colored point cloud near the robot is continuously matched against the feature points of the surrounding scene and against the map, achieving real-time localization over a small range. The three circular points in front of the robot in the map are its steering look-ahead: in the control algorithm, giving the steering engine a certain look-ahead lets it set its angle in real time and overcomes the slowness of its mechanical steering.
Therefore, after the look-ahead distance is set reasonably, the localization and navigation results can be published as topics and displayed in rviz; on this basis, the localization and navigation effect of the system built in this embodiment is demonstrated.
Example 2
Referring to fig. 17 to 19, a second embodiment of the present invention differs from the first in that it provides the hardware module of the intelligent trolley of the ROS-based autonomous navigation robot control system, specifically comprising:
referring to fig. 17, STM32F103 is used as a main controller for outputting PWM for controlling a motor and the motor, and meanwhile, encoder and IMU (inertial measurement unit) data are read and sent to an upper computer Jeston nano through serial ports, Jeston nano performs main operations, a map is constructed by acquiring data of a laser radar and a depth camera in a mapping mode, the laser radar and the depth camera are used for matching feature points to estimate the self pose of the robot in a navigation mode, a motion path is calculated, and a control expectation is sent to a lower computer STM 32.
Referring to fig. 18, the 12 V power input is stepped down to 5 V by an LM2596S, and 5 V is stepped down to 3.3 V by an AMS1117. The LM2596S, used for 5 V regulation, is a buck switching regulator (a monolithic power-management integrated circuit) that can output 3 A of drive current with good linearity and load-regulation characteristics. The AMS1117, used for 3.3 V regulation, is a positive low-dropout regulator with an output voltage of 3.3 V, an operating temperature range of −40 °C to +125 °C, and an input voltage of 5 V.
Referring to fig. 19, the motor drive uses a TB6612FNG, a module that can drive two motors. STBY is connected to an IO port of the single-chip microcomputer: when set to 0 the motors stop working; when set to 1, motor forward/reverse rotation is controlled by AIN1, AIN2, BIN1 and BIN2. VM is connected to the 12 V supply, VCC to the 5 V supply, and GND to ground. The TB6612FNG can output a continuous drive current of up to 1 A per channel, with peaks of 2 A (continuous pulse) / 3 A (single pulse), and operates from −20 °C to 85 °C. PWMA and PWMB are connected to the PWM outputs of the single-chip microcomputer, and AO1, AO2, BO1 and BO2 are the module outputs. The relationship between the AIN inputs and motor forward/reverse rotation is shown in table 2.
Table 2: AIN and the motor are in positive and negative rotation relation.
Table 3: and (5) motor line sequence.
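The AIN-to-rotation mapping of table 2 (reproduced only as an image in the original) follows the standard TB6612FNG truth-table pattern; the exact high/low-to-direction assignment below reflects the chip's usual behavior and should be checked against the datasheet.

```python
def motor_state(in1, in2, stby=1):
    """Decode TB6612FNG control inputs for one motor channel.

    Standby overrides everything; otherwise IN1/IN2 select stop,
    forward, reverse or short brake.
    """
    if stby == 0:
        return "standby"  # STBY low: driver outputs disabled
    table = {
        (0, 0): "stop",
        (1, 0): "forward",  # assumed direction assignment
        (0, 1): "reverse",  # assumed direction assignment
        (1, 1): "brake",    # short brake
    }
    return table[(in1, in2)]
```

The single-chip microcomputer implements exactly this mapping by writing AIN1/AIN2 (or BIN1/BIN2) while PWMA/PWMB set the speed.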
The present embodiment uses a 20 kg·cm-torque steering engine, model XTARK XDS-S20A, with a controllable angle of 180°, driven by PWM (pulse-width modulation), and with dimensions of 40 × 20 × 37.2 mm.
Specifically, it should be further explained for this embodiment that the hardware module of the intelligent trolley of the ROS-based autonomous navigation robot control system provided by the present invention comprises a lidar, an RGB-D camera, an IMU, an encoder, an upper computer, a lower computer, a motor and a steering engine. The lidar and RGB-D camera are connected to the upper computer through USB interfaces, the IMU and encoder are connected to the lower computer via I2C and IO respectively, and the upper and lower computers are connected through a serial port. The motor and steering engine are connected to the lower computer via PWM, and commands and computed results are issued to them through the serial port to realize control of the trolley.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein. A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (3)

1. An intelligent trolley of an ROS-based autonomous navigation robot control system, characterized in that it comprises:
carrying a laser radar, a camera and a gravity acceleration sensor on a trolley, and acquiring the current position, distance information and environmental information of the trolley by using the laser radar;
a grid map is created based on a robot operating system, and a three-dimensional point cloud map is constructed by utilizing SLAM algorithm and depth camera visual fusion;
the trolley utilizes the robot operating system to sense the environment of the distance information and a visual odometer of an RGB-D depth camera, and generates the three-dimensional point cloud map;
the robot operating system corrects the three-dimensional point cloud map when a route forms a loop according to a GPS, and calculates the trolley route to avoid obstacles;
the construction of the map includes the steps of,
carrying out synchronization processing on the acquired RGB-D image, the radar data and the odometer node to obtain sensor data;
the sensor data is transmitted into a short-term memory module to form a loop and close-range detection to complete image optimization, and a global map group leader is obtained;
respectively dividing the global map group length to sequentially obtain map data, a map image, TF coordinate conversion, an octree-based three-dimensional map, the three-dimensional point cloud map and the grid map;
the robot operating system carries a global planner, a 2TEB local planner and a vehicle model control algorithm;
the global planner includes, dijkstra algorithm and a star algorithm;
the model control algorithm comprises a steering engine control algorithm and a motor control algorithm;
the dijkstra algorithm includes,
expanding towards the outer layer by taking the starting point as a center until the outer layer is expanded to the end point;
when the Dijkstra algorithm is used for calculating the shortest path in the map, the calculation is started from the top point;
introducing a set S and a set U, wherein the set S records the top point of the shortest path which is already solved and the length of the corresponding shortest path;
the set U records a vertex which does not obtain the shortest path and the distance from the vertex to the starting point;
initially, only a starting point exists in the set S, vertices except the starting point exist in the set U, and a path of the vertices in the set U is a path from the starting point to the vertices;
finding out the top point with the shortest path from the set U, and adding the top point into the set S;
updating the vertex in the set U and the path corresponding to the vertex;
finding out the top point with the shortest path from the set U and adding the top point into the set S;
updating the vertex in the set U and the path corresponding to the vertex;
repeating iteration until all the vertexes are traversed, and finding the shortest path;
the a star algorithm includes the steps of,
simplifying a search area, simplifying a map into a grid shape, and defining the center of one square as a node;
starting from the periphery of the starting point A, checking adjacent squares of the starting point A, finding targets from the periphery, and selecting the shortest and most appropriate path from a plurality of possible paths;
defining a node n as a comprehensive priority, and selecting a node with the highest comprehensive priority when selecting the next node to be traversed;
selecting the node with the highest priority from the priority queue as the next node to be traversed each time;
if the cost from the node n to the terminal is always less than or equal to the cost from the node n to the terminal, the shortest path can be found;
the steering engine control algorithm comprises a PID controller, wherein the PID controller is a linear controller, and forms deviation according to a given value r (t) and an actual output value c (t): e (t), c (t), r (t), and linearly combining the proportion (P), the integral (I), and the derivative (D) of the deviation to form a control amount, thereby controlling the controlled object;
the control law is as follows:
Figure FDA0003179109920000021
the transfer function is:
Figure FDA0003179109920000022
wherein, KpIs a proportionality coefficient, take Td=Ti,Ki=Kp÷TdIntegral coefficient, Kd=Kp×TdDifferential coefficient, TdDifferential time constant, TiIs the integration time constant.
2. The intelligent trolley of the ROS-based autonomous navigation robot control system of claim 1, characterized by further comprising:
an upper computer in the robot operating system receives data sent by a lower computer, calculates pose matching and matches coordinates in a map;
judging whether the vehicle issues a target point or not, if so, calculating and planning the path of the trolley, refreshing a local cost map and issuing a chassis control instruction;
if not, the process is ended directly.
3. The intelligent trolley of the ROS-based autonomous navigation robot control system of claim 2, characterized by further comprising:
the lower computer initializes data information and starts timing interruption, and the timing interruption is set to be 20 ms;
the upper computer receives the trolley speed and trolley steering engine angle data sent by the lower computer, and MPU6050 and encoder data are obtained;
motion control is performed.
CN202110511154.7A 2021-05-11 2021-05-11 ROS-based intelligent trolley of autonomous navigation robot control system Expired - Fee Related CN112987763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110511154.7A CN112987763B (en) 2021-05-11 2021-05-11 ROS-based intelligent trolley of autonomous navigation robot control system

Publications (2)

Publication Number Publication Date
CN112987763A CN112987763A (en) 2021-06-18
CN112987763B true CN112987763B (en) 2021-09-17

Family

ID=76337532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110511154.7A Expired - Fee Related CN112987763B (en) 2021-05-11 2021-05-11 ROS-based intelligent trolley of autonomous navigation robot control system

Country Status (1)

Country Link
CN (1) CN112987763B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113465728B (en) * 2021-06-25 2023-08-04 重庆工程职业技术学院 Terrain awareness method, system, storage medium and computer equipment
CN113917915A (en) * 2021-08-09 2022-01-11 天津理工大学 Route planning device and method based on ROS mobile robot
CN113778096B (en) * 2021-09-15 2022-11-08 杭州景吾智能科技有限公司 Positioning and model building method and system for indoor robot
CN114313882A (en) * 2022-01-11 2022-04-12 浙江柯工智能***有限公司 Automatic transportation system and method for chemical fiber production
CN114543814A (en) * 2022-02-24 2022-05-27 北京化工大学 Robot autonomous positioning and navigation method applied to three-dimensional environment
CN114617114A (en) * 2022-04-19 2022-06-14 浙江理工大学 Solar weeding robot and control method thereof
CN114700927A (en) * 2022-04-19 2022-07-05 浙江理工大学 Combined weeding robot with parallel mechanical arms and flexible mechanical arm and control method
CN114700969A (en) * 2022-04-19 2022-07-05 浙江理工大学 Weeding robot based on XY I-shaped sliding table parallel flexible manipulator and control method
CN114952839B (en) * 2022-05-27 2024-02-06 西南交通大学 Cloud edge cooperation-based two-stage robot motion decision system
CN115018876B (en) * 2022-06-08 2023-09-26 哈尔滨理工大学 ROS-based non-cooperative target grabbing control method
CN114879704B (en) * 2022-07-11 2022-11-25 山东大学 Robot obstacle-avoiding control method and system
CN115648221A (en) * 2022-11-22 2023-01-31 福州大学 Education robot based on ROS system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106052692A (en) * 2016-05-20 2016-10-26 中国地质大学(武汉) Shortest route planning and navigating method and system
CN107390703A (en) * 2017-09-12 2017-11-24 北京创享高科科技有限公司 A kind of intelligent blind-guidance robot and its blind-guiding method
CN107526360A (en) * 2017-09-26 2017-12-29 河南科技学院 The multistage independent navigation detection system of explosive-removal robot and method under a kind of circumstances not known
US10558224B1 (en) * 2017-08-10 2020-02-11 Zoox, Inc. Shared vehicle obstacle data
CN112154429A (en) * 2019-07-29 2020-12-29 深圳市大疆创新科技有限公司 High-precision map positioning method, system, platform and computer readable storage medium
CN112622932A (en) * 2020-12-23 2021-04-09 同济大学 Automatic driving track-changing planning algorithm based on heuristic search of potential energy field


Also Published As

Publication number Publication date
CN112987763A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112987763B (en) ROS-based intelligent trolley of autonomous navigation robot control system
CN108827306A (en) A kind of unmanned plane SLAM navigation methods and systems based on Multi-sensor Fusion
CN109358638B (en) Unmanned aerial vehicle visual obstacle avoidance method based on distributed map
CN109959377A (en) A kind of robot navigation's positioning system and method
WO2021022728A1 (en) Control system of land-air amphibious unmanned vehicle
WO2016197986A1 (en) High-precision autonomous obstacle-avoidance flight method for unmanned aerial vehicles
Murali et al. Perception-aware trajectory generation for aggressive quadrotor flight using differential flatness
CN109976164B (en) Energy optimization visual coverage trajectory planning method for multi-rotor unmanned aerial vehicle
CN102393744B (en) Navigation method for a driverless vehicle
CN112518739B (en) Intelligent autonomous navigation method for a tracked-chassis reconnaissance robot
CN111880573B (en) Four-rotor autonomous navigation method based on visual inertial navigation fusion
CN111308490B (en) Indoor positioning and navigation system for a self-balancing vehicle based on single-line LiDAR
CN108469823B (en) Homography-based mobile robot formation following method
CN111427370A (en) Sparse pose adjustment-based Gmapping mapping method for mobile robot
CN105955273A (en) Indoor robot navigation system and method
Li et al. Localization and navigation for indoor mobile robot based on ROS
CN111982114B (en) Rescue robot estimating three-dimensional pose using IMU data fusion
CN109213175A (en) Visual-servo trajectory tracking predictive control method for mobile robots based on a primal-dual neural network
CN113189613B (en) Robot positioning method based on particle filtering
CN114200926B (en) Local path planning method and system for unmanned vehicle
CN104298244A (en) Industrial robot three-dimensional real-time and high-precision positioning device and method
CN113119112A (en) Motion planning method and system suitable for vision measurement of six-degree-of-freedom robot
CN109900272B (en) Visual positioning and mapping method and device and electronic equipment
CN115599099A (en) ROS-based autonomous navigation robot
Zhou et al. SLAM algorithm and navigation for indoor mobile robot based on ROS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210917