CN114756032A - Multi-sensing information efficient fusion method for intelligent agent autonomous navigation - Google Patents
Multi-sensing information efficient fusion method for intelligent agent autonomous navigation
- Publication number
- CN114756032A (application number CN202210531745.5A)
- Authority
- CN
- China
- Prior art keywords
- intelligent agent
- navigation
- fusion
- agent
- sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0257—Control of position or course in two dimensions specially adapted to land vehicles using a radar
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Acoustics & Sound (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses an efficient multi-sensor information fusion method for autonomous navigation of an intelligent agent. The method involves building an experimental platform on which the agent performs multi-sensor fusion and constructing a map for autonomous navigation; dividing the agent's navigation area according to the importance of each navigation region, the parameter characteristics of the 2D lidar, the depth camera and the ultrasonic sensor, and the size of the dynamic window; and finally solving the agent's optimal motion trajectory at time t+1 and controlling the agent's motion, thereby realizing autonomous navigation. The invention provides an efficient multi-sensor fusion and flexible control method for agent autonomous navigation: the advantages of the fused sensors are brought out without a noticeable increase in the agent's system consumption. It can serve as a technical basis for autonomous navigation control of robots, automatic driving of unmanned vehicles, autonomous navigation control of aircraft, equipment intellectualization, and the like.
Description
Technical Field
The invention relates to the field of autonomous navigation of an intelligent agent, in particular to a multi-sensor information fusion method for autonomous navigation of the intelligent agent.
Background
With the rapid development of deep learning and other related intelligent technologies, key technologies for intelligent agents such as robots, including multi-sensor fusion, positioning and navigation, path planning, machine vision, intelligent control and intelligent interaction, have improved greatly, driving explosive growth of robots, unmanned vehicles and related industries in China and affecting more than one third of China's manufacturing enterprises. Bill Gates and others believe that robots, like the personal computer over the past 30 years, are likely to have a profound effect on the way we work, communicate, learn and entertain. However, the applications of intelligent agents represented by robots still face severe tests such as insufficient reliability for fault-free stable operation, a limited degree of intelligence, high production cost, and weak capability for autonomous learning.
Multi-sensor information fusion and its workflow were proposed by American scholars in the 1970s, and the technique is now one of the key technologies in intelligent-robot research. Since 2007 the Robot Operating System (ROS) has developed rapidly and has attracted wide attention from academia and industry at home and abroad. ROS uses a distributed system architecture with flexible sensor nodes, is highly adaptable, and is well suited to research on multi-sensor fusion; MIT Technology Review has stated that ROS is gradually becoming a benchmark in the field of robot research. Simultaneous Localization and Mapping (SLAM) was proposed at the IEEE Robotics and Automation conference in 1986 and first appeared in a paper in 1995, enabling robots to become autonomous agents in the true sense. For local path planning, the Dynamic Window Approach (DWA) was proposed by Fox D. in 1997. DWA is an autonomous obstacle-avoidance algorithm that searches directly for the optimal control command in the control-command space; its core idea is to build a velocity vector space composed of linear-velocity and rotational-velocity pairs (v, w), converting the path-planning problem into a constrained optimization over that velocity space, and it is widely applied in agent path planning.
Disclosure of Invention
The invention aims to provide a multi-sensing information efficient fusion method for autonomous navigation of an intelligent agent.
The technical scheme for realizing the purpose of the invention is as follows:
a multi-sensing information efficient fusion method for intelligent agent autonomous navigation comprises the following steps:
(1) building an experimental platform which can be used for carrying out multi-sensing fusion on an intelligent agent;
(2) constructing a laser SLAM two-dimensional grid map for autonomous navigation of the intelligent agent by utilizing a Gmapping algorithm;
(3) dividing the navigation area of the agent according to the importance of the agent's navigation regions, the parameter characteristics of the 2D lidar, the depth camera and the ultrasonic sensor, and the size of the dynamic window;
(4) solving the optimal motion trajectory of the agent at time t+1 to realize motion control of the agent.
In step (1), the experimental platform on which the agent can perform multi-sensor fusion is built with the Robot Operating System (ROS) framework on Ubuntu 18.04 and comprises the following steps (a minimal sensor-node sketch is given after the list):
1) creating nodes such as a laser radar, a depth camera and a camera, and completing communication and visual display of corresponding node data;
2) building the bottom-layer control system: writing an STM32-based low-level motion control program and reading encoder data and serial transmit/receive data, so that after the build the upper computer can send commands such as forward/backward motion and left/right rotation to the system via a remote-control handle or the serial port, and the corresponding functions are executed;
3) the development platform main control system and the bottom layer control system are jointly debugged, various motion instructions are issued to the bottom layer through the development platform main control system, and the bottom layer system control chassis completes corresponding functions;
4) building a ROS system based on Ubuntu 18.04 on the embedded platform, and deploying the sensor nodes verified on the development platform to the embedded platform;
5) the embedded main control system and the bottom layer control system are jointly debugged, various motion instructions are issued to the bottom layer through the embedded main control system, and the bottom layer system control chassis completes corresponding functions and performs communication and visual display of various sensor data;
6) reading the sensor data for obstacle detection, and loading and constructing the navigation map.
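As an illustration of step 1) above, a minimal sensor-node sketch is given below. It assumes a ROS 1 installation (e.g. Melodic on Ubuntu 18.04) with the rospy client library; the topic names /scan and /cmd_vel, the 0.5 m stopping distance and the stop-on-obstacle rule are illustrative assumptions and are not taken from the original disclosure.

```python
#!/usr/bin/env python
# Minimal lidar-to-velocity node: subscribe to /scan, publish /cmd_vel.
# Topic names and the 0.5 m stop rule are illustrative assumptions.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class ObstacleStopNode(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.scan_cb, queue_size=1)

    def scan_cb(self, scan):
        cmd = Twist()
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        # Creep forward unless any valid return is closer than 0.5 m.
        cmd.linear.x = 0.0 if (valid and min(valid) < 0.5) else 0.2
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('obstacle_stop_node')
    ObstacleStopNode()
    rospy.spin()
```

Such a node can be started with rosrun once its package is built, and its output inspected with rostopic echo /cmd_vel.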
The step (2) of constructing the laser SLAM two-dimensional grid map for the autonomous navigation of the intelligent agent by using the Gmapping algorithm comprises the following steps:
1) sampling: the particle set at the current time t is obtained by sampling from the particle set at the previous time according to the proposal distribution;
2) calculating the particle weights: each particle's importance weight is updated according to the likelihood of the current observation given the particle's pose;
3) resampling: resampling addresses the particle degeneracy phenomenon; when the number of effective particles falls below a set threshold, a resampling operation is performed, after which all particles are given the same weight;
4) updating the map: the map probability is computed from the agent's motion trajectory and the sensor data, and the map is updated.
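The sampling, weighting and resampling of steps 1) to 3) can be sketched as a generic particle-filter update, shown below. This is an outline under the stated steps, not the actual Gmapping implementation; the motion_model, likelihood and n_eff_threshold arguments are placeholders.

```python
import numpy as np

def particle_filter_step(particles, weights, control, observation,
                         motion_model, likelihood, n_eff_threshold):
    """One sample-weight-resample update of the particle set.
    motion_model(p, u) and likelihood(z, p) stand in for the proposal
    distribution and observation model used by the SLAM front end."""
    # 1) Sampling: propagate each particle with the proposal given the control input.
    particles = np.array([motion_model(p, control) for p in particles])
    # 2) Weighting: importance weights from the observation likelihood, then normalise.
    weights = np.asarray(weights) * np.array(
        [likelihood(observation, p) for p in particles])
    weights = weights / np.sum(weights)
    # 3) Resampling when the effective particle number drops below the threshold;
    #    afterwards every particle carries the same weight.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < n_eff_threshold:
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```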
The navigation area dividing method in step (3) comprises: using the horizontal field-of-view regions of the depth camera, the lidar and the ultrasonic sensor together with the size of the dynamic window, the environment around the agent is divided into 4 regions: a static adjustment region S_out, a dynamic adjustment region S_1, a sub-core region S_2 and a core region S_3. S_out is the region outside the dynamic window S_DWA; S_1 = S_L ∩ S_DWA - S_2 - S_3; S_2 = S_D ∩ S_DWA - S_3; S_3 = S_S ∩ S_{d/2}, where S_{d/2} is the sector of S_S whose radius corresponds to d/2.
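A minimal sketch of this four-region classification is given below. The field-of-view angles, the dynamic-window radius and the core radius are illustrative values only; the actual sensor parameters of the platform are not specified here, and the 2D lidar is assumed to cover a full 360 degrees.

```python
import math

def classify_point(x, y,
                   depth_fov=math.radians(87), sonar_fov=math.radians(60),
                   dwa_radius=2.0, core_radius=0.5):
    """Assign a point (agent frame, x forward, metres) to S_out, S1, S2 or S3.
    All angles and radii here are illustrative assumptions."""
    r = math.hypot(x, y)
    bearing = abs(math.atan2(y, x))
    if r > dwa_radius:
        return 'S_out'                   # outside the dynamic window S_DWA
    if bearing <= sonar_fov / 2 and r <= core_radius:
        return 'S3'                      # core: ultrasonic sector of radius d/2
    if bearing <= depth_fov / 2:
        return 'S2'                      # sub-core: depth-camera field inside the window
    return 'S1'                          # dynamic adjustment: remaining lidar field inside the window
```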
The optimal motion trajectory solving and motion control method in the step (4) comprises the following steps:
1) first, the optimal motion trajectory of the agent at time t+1 under the single-sensor scheme is solved. The velocity vector space V_r is formed by the intersection of the set of possible velocity vectors V_s, the admissible velocity vectors V_a and the dynamic velocity window V_d: V_r = V_s ∩ V_a ∩ V_d. An admissible velocity vector (v, w) satisfies the constraint
v ≤ √(2·dist(v, w)·a_v) and w ≤ √(2·dist(v, w)·a_w),
where dist(v, w), a_v and a_w are, respectively, the minimum safe distance of the agent from the obstacle and the translational and rotational accelerations of the agent. From V_r, the optimal motion trajectory of the agent at time t+1 under the single-sensor scheme is solved as
G(v, w) = α·heading(v, w) + β·dist(v, w) + γ·vel(v, w),
where heading(v, w), dist(v, w) and vel(v, w) are evaluation functions of the azimuth, the minimum safe distance and the current velocity during the agent's motion; heading(v, w) represents how well the agent's motion direction agrees with the target direction at the current pose; and α, β and γ are the weights of the evaluation functions, which can be adjusted for a specific use scene to obtain the agent's optimal motion trajectory;
2) on the basis of the optimal motion trail of the intelligent agent single sensing scheme, solving the optimal motion trail of the intelligent agent of the multi-sensing fusion scheme, and using different fusion strategies according to different navigation areas; wherein:
S_out: navigation data in this region are updated at a very low frequency; it has two main tasks: planning the global path at the initial stage of navigation, and re-planning the global path from the current coordinates as the starting point when the local path cannot reach the target point during navigation.
S_1: the agent's optimal motion trajectory at time t+1 is determined by the lidar sensor alone: G(v, w) = G_L(v, w);
S_2: the agent's optimal motion trajectory at time t+1 is obtained by feature-level fusion of the lidar and the depth camera: G(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w);
S_3: the agent's optimal motion trajectory at time t+1 is: G(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w) + (1 - λ1 - λ2)·G_{L&D&S}(v, w);
3) after the lidar, depth camera and ultrasonic sensor are fused, the agent's optimal motion trajectory at time t+1 is: G_S(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w) + (1 - λ1 - λ2)·G_{L&D&S}(v, w), where the subscript S of G_S(v, w) indicates the region range used by the trajectory function.
Further, to show the meaning behind the optimal motion trajectory more clearly and to make the function easier to use for controlling the agent, it is rewritten as G_S(v, w) = a1·G_L(v, w) + a2·G_{L&D}(v, w) + a3·G_{L&D&S}(v, w), where a1, a2 and a3 represent the weights of single sensing, feature-level fusion and decision-level fusion, respectively, in the navigation process. Furthermore, a program can set a1, a2 and a3 to zero or non-zero values to control the data frequency of the three sensing schemes. For example, assume the limit frame rates of the lidar, depth-camera and ultrasonic sensor data are 30 Hz, 20 Hz and 10 Hz, respectively, and write A = (a1, a2, a3). Multiplying A element-wise by (1, 1, 1), (1, 1, 0) and (1, 0, 0) in turn at equal time intervals, with the whole three-step cycle executed at 10 Hz, makes single sensing, feature-level fusion and decision-level fusion run at 30 Hz, 20 Hz and 10 Hz. Of course, any frame-rate combination can be set within the limit frame rates of the selected devices as required, so the agent can be controlled more efficiently and flexibly.
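To make the weight-mask scheduling described above concrete, a sketch is given below. The function names, the way G_L, G_{L&D} and G_{L&D&S} are passed in as callables, and the base cycle built from three mask steps are assumptions made for illustration; only the 30/20/10 Hz example rates and the masks (1,1,1), (1,1,0), (1,0,0) come from the text.

```python
import itertools
import time

def fused_score(v, w, g_l, g_ld, g_lds, a):
    """G_S(v, w) = a1*G_L + a2*G_{L&D} + a3*G_{L&D&S}, with a = (a1, a2, a3)."""
    return a[0] * g_l(v, w) + a[1] * g_ld(v, w) + a[2] * g_lds(v, w)

# Cycling the weight vector A through these masks, three steps per 10 Hz cycle,
# makes single sensing, feature-level fusion and decision-level fusion update
# at 30 Hz, 20 Hz and 10 Hz respectively (the example rates given in the text).
MASKS = [(1, 1, 1), (1, 1, 0), (1, 0, 0)]

def control_loop(base_weights, g_l, g_ld, g_lds, pick_best_velocity, send_cmd):
    for mask in itertools.cycle(MASKS):
        a = tuple(b * m for b, m in zip(base_weights, mask))
        v_cmd, w_cmd = pick_best_velocity(
            lambda v, w: fused_score(v, w, g_l, g_ld, g_lds, a))
        send_cmd(v_cmd, w_cmd)      # hand the chosen command to the chassis
        time.sleep(1.0 / 30.0)      # three mask steps per 0.1 s cycle
```

Replacing MASKS or the step period changes the effective rates, which corresponds to the remark above that any frame-rate combination within the device limits can be configured.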
The invention has the advantages that: the multi-sensor fusion method used by the invention fully considers the number of sensors on the agent, the amount of data processing, the inherent characteristics of the devices, the real-time performance of the system, the reliability of perception, and the universality of the fusion strategy and algorithm, and it flexibly controls the three parameters a1, a2 and a3 to adapt to different scene requirements. In addition, the advantages of the various sensors can be brought into full play while the real-time requirement of agent control is met; after fusion, the strengths of the sensors carried by the agent are highlighted without a noticeable increase in the agent's system consumption. The method therefore provides an efficient multi-sensor fusion approach for agent autonomous navigation and a technical basis for robot autonomous navigation control, automatic driving of unmanned vehicles, autonomous navigation control of aircraft, equipment intellectualization, and the like.
Drawings
Fig. 1 is a diagram of a framework of an intelligent system provided by an embodiment of the invention.
Fig. 2 is a schematic view of a navigation area partitioned according to importance of an agent navigation area, 2D lidar, a depth camera, ultrasonic sensor parameter characteristics, and a dynamic window size according to an embodiment of the present invention.
FIG. 3 is a diagram of a multi-sensing fusion type provided by an embodiment of the present invention, which is (a) feature-level fusion of a depth camera and a lidar, (b) decision-level fusion of a depth camera, a lidar and an ultrasonic sensor.
Fig. 4 is a schematic flowchart of a multi-sensing efficient fusion method for autonomous intelligent agent navigation according to an embodiment of the present invention.
Fig. 5 is a diagram of an autonomous navigation experimental arrangement and a navigation result information output result provided by the embodiment of the present invention.
Fig. 6 is a diagram of an output result of information of an autonomous navigation experiment result provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Firstly, an experimental platform on which the agent can perform multi-sensor fusion is built; its system framework is shown in Fig. 1. Secondly, a map for autonomous navigation is constructed with this platform. Then, according to the importance of the agent's navigation regions, the parameter characteristics of the 2D lidar, the depth camera and the ultrasonic sensor, and the size of the dynamic window, the agent's navigation area is divided, splitting the surrounding environment into 4 regions: a static adjustment region S_out, a dynamic adjustment region S_1, a sub-core region S_2 and a core region S_3; the result of the division is shown in Fig. 2. Finally, the agent's optimal motion trajectory at time t+1 is solved with different fusion strategies for different navigation regions, and the agent's motion is controlled, thereby realizing autonomous navigation. The experimental set-up is shown in Fig. 5 and the experimental results in Fig. 6.
As shown in fig. 4, the multi-sensor high-efficiency fusion method of the present invention sequentially comprises the following steps:
S101, an experimental platform on which the agent can perform multi-sensor fusion is built; a Robot Operating System (ROS) installation on Ubuntu 18.04 is taken as an example, although the method is not limited to this choice. The build comprises the following steps (a sketch of the serial link to the bottom-layer controller is given after the list):
s1011 creates nodes such as a laser radar, a depth camera and a camera, and completes communication and visual display of corresponding node data;
S1012, building the bottom-layer control system: writing an STM32-based low-level motion control program and reading encoder data and serial transmit/receive data, so that after the build the upper computer can send commands such as forward/backward motion and left/right rotation to the system via a remote-control handle or the serial port, and the corresponding functions are executed;
s1013, joint debugging of the development platform main control system and the bottom layer control system, wherein various motion instructions are issued to the bottom layer through the development platform main control system, and the bottom layer system controls the chassis to complete corresponding functions;
S1014, building a ROS system based on Ubuntu 18.04 on the embedded platform, and deploying the sensor nodes verified on the development platform to the embedded platform;
s1015, jointly debugging the embedded main control system and the bottom layer control system, issuing various motion instructions to the bottom layer through the embedded main control system, and controlling the chassis by the bottom layer system to complete corresponding functions and perform communication and visual display of various sensor data;
S1016, reading the sensor data to detect obstacles, and loading and constructing the navigation map.
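As an illustration of the upper-computer side of the serial link to the STM32 bottom-layer controller mentioned in S1012, a sketch is given below. The serial port, baud rate, frame layout and checksum are assumptions made for the example; the actual STM32 protocol of the platform is not disclosed in the text.

```python
import struct
import serial  # pyserial

def send_velocity(port, v, w):
    """Send one velocity frame: header byte, two little-endian floats, simple checksum."""
    payload = struct.pack('<ff', v, w)
    frame = b'\xAA' + payload + bytes([sum(payload) & 0xFF])
    port.write(frame)

def read_encoders(port):
    """Read one odometry frame: two little-endian int32 encoder counts, or None."""
    raw = port.read(8)
    return struct.unpack('<ii', raw) if len(raw) == 8 else None

if __name__ == '__main__':
    ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=0.1)  # assumed port and baud rate
    send_velocity(ser, 0.2, 0.0)   # slow forward motion
    print(read_encoders(ser))
```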
S102, constructing a laser SLAM two-dimensional grid map for intelligent autonomous navigation, and specifically comprising the following steps:
S1021, sampling: the particle set at the current time t is obtained by sampling from the particle set at the previous time according to the proposal distribution;
S1022, calculating the particle weights: each particle's importance weight is updated according to the likelihood of the current observation given the particle's pose;
S1023, resampling: resampling addresses the particle degeneracy phenomenon; when the number of effective particles falls below a set threshold, a resampling operation is performed, after which all particles are given the same weight.
S1024, updating the map: the map probability is computed from the agent's motion trajectory and the sensor data, and the map is updated.
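Step S1024 computes the map probability from the agent's trajectory and the sensor data. As one common way such an update can be carried out, a generic log-odds occupancy-grid update is sketched below; the increment values are illustrative assumptions and this is not Gmapping's internal map representation.

```python
import math

L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0   # illustrative log-odds increments

def update_cell(log_odds, hit):
    """Bayesian log-odds update of one grid cell: add the occupied or free increment."""
    return log_odds + (L_OCC if hit else L_FREE) - L_PRIOR

def cell_probability(log_odds):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: a cell observed occupied twice and free once.
l = 0.0
for hit in (True, True, False):
    l = update_cell(l, hit)
print(round(cell_probability(l), 3))
```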
S103, dividing the navigation area of the intelligent agent according to the importance of the navigation area of the intelligent agent, the 2D laser radar, the depth camera, the parameter characteristics of the ultrasonic sensor and the size of the dynamic window, and specifically comprising the following steps:
Specifically, using the horizontal field-of-view regions of the depth camera, the lidar and the ultrasonic sensor together with the size of the dynamic window, the environment around the agent is divided into 4 regions: a static adjustment region S_out, a dynamic adjustment region S_1, a sub-core region S_2 and a core region S_3. S_out is the region outside the dynamic window S_DWA; S_1 = S_L ∩ S_DWA - S_2 - S_3; S_2 = S_D ∩ S_DWA - S_3; S_3 = S_S ∩ S_{d/2}, where S_{d/2} is the sector of S_S whose radius corresponds to d/2.
S104, solving the optimal motion track of the intelligent agent at the t +1 moment by using different fusion strategies according to different navigation areas, controlling the intelligent agent to move, and further realizing autonomous navigation, wherein the method specifically comprises the following steps:
S1041, solving the agent's optimal motion trajectory at time t+1 under the single-sensor scheme (a sketch of this velocity-window search is given after these steps): G(v, w) = α·heading(v, w) + β·dist(v, w) + γ·vel(v, w), where heading(v, w), dist(v, w) and vel(v, w) are evaluation functions of the azimuth, the minimum safe distance and the current velocity during the agent's motion; heading(v, w) represents how well the agent's motion direction agrees with the target direction at the current pose; and α, β and γ are the weights of the evaluation functions, which can be adjusted for a specific use scene to obtain the agent's optimal motion trajectory;
S1042, on the basis of the optimal motion trajectory of the single-sensor scheme, solving the agent's optimal motion trajectory for the multi-sensor fusion scheme, using different fusion strategies for different navigation regions.
S1043, S_out: navigation data in this region are updated at a very low frequency; it has two main tasks: planning the global path at the initial stage of navigation, and re-planning the global path from the current coordinates as the starting point when the local path cannot reach the target point during navigation.
S1044, S_1: the agent's optimal motion trajectory at time t+1 is determined by the lidar sensor alone: G(v, w) = G_L(v, w);
S1045, S_2: the fusion strategy is shown in Fig. 3(a); the agent's optimal motion trajectory at time t+1 is G(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w);
S1046, S_3: the fusion strategy is shown in Fig. 3(b); the agent's optimal motion trajectory at time t+1 is G(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w) + (1 - λ1 - λ2)·G_{L&D&S}(v, w).
S1047, in summary, after the lidar, depth camera and ultrasonic sensor are fused, the agent's optimal motion trajectory at time t+1 can be written as G_S(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w) + (1 - λ1 - λ2)·G_{L&D&S}(v, w), where the subscript S of G_S(v, w) indicates the region range used by the trajectory function.
S1048, this is rewritten as G_S(v, w) = a1·G_L(v, w) + a2·G_{L&D}(v, w) + a3·G_{L&D&S}(v, w), where a1, a2 and a3 represent the weights of single sensing, feature-level fusion and decision-level fusion, respectively, in the navigation process.
S1049, a program controls whether a1, a2 and a3 are zero or non-zero, thereby controlling the data frequency of the three sensing schemes. For example, assume the limit frame rates of the lidar, depth-camera and ultrasonic sensor data are 30 Hz, 20 Hz and 10 Hz, respectively, and write A = (a1, a2, a3). Multiplying A element-wise by (1, 1, 1), (1, 1, 0) and (1, 0, 0) in turn at equal time intervals, with the whole three-step cycle executed at 10 Hz, makes single sensing, feature-level fusion and decision-level fusion run at 30 Hz, 20 Hz and 10 Hz. Of course, any frame-rate combination can be set within the limit frame rates of the selected devices as required, so the agent can be controlled more efficiently and flexibly.
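A sketch of the single-sensor velocity-window search of S1041 is given below, as referenced there. It restricts candidate velocities to a window around the current speed, discards velocities that violate the braking constraint, and scores the remainder with the weighted evaluation function; all numeric limits, the sampling resolution and the callable arguments dist_fn and heading_fn are assumptions.

```python
import numpy as np

def dwa_best_velocity(v0, w0, dist_fn, heading_fn,
                      v_max=0.5, w_max=1.5, a_v=0.5, a_w=2.0, dt=0.1,
                      alpha=0.8, beta=0.1, gamma=0.1):
    """Search the velocity window around (v0, w0) and return the best (v, w).
    dist_fn(v, w): clearance to the nearest obstacle along the simulated arc.
    heading_fn(v, w): alignment of the predicted heading with the goal direction."""
    best, best_score = (0.0, 0.0), -np.inf
    for v in np.linspace(max(0.0, v0 - a_v * dt), min(v_max, v0 + a_v * dt), 11):
        for w in np.linspace(max(-w_max, w0 - a_w * dt), min(w_max, w0 + a_w * dt), 21):
            d = dist_fn(v, w)
            # Admissible velocities: the agent must be able to stop within distance d.
            if v > np.sqrt(2.0 * d * a_v) or abs(w) > np.sqrt(2.0 * d * a_w):
                continue
            score = alpha * heading_fn(v, w) + beta * d + gamma * v
            if score > best_score:
                best, best_score = (v, w), score
    return best
```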
Claims (6)
1. A multi-sensing information efficient fusion method for intelligent agent autonomous navigation, characterized by comprising the following steps:
(1) constructing an experimental platform which can be used for an intelligent agent to perform multi-sensing information fusion;
(2) constructing a laser SLAM two-dimensional grid map for autonomous navigation of the intelligent agent by utilizing a Gmapping algorithm;
(3) dividing the navigation area of the intelligent agent according to the importance of the navigation area of the intelligent agent, the parameter characteristics of the 2D laser radar, the depth camera and the ultrasonic sensor and the size of the navigation dynamic window;
(4) solving the optimal motion trajectory of the agent at time t+1, thereby realizing motion control of the agent.
2. The multi-sensor high-efficiency fusion method according to claim 1, wherein: the method comprises the following steps that (1) an experimental platform which can be used for an intelligent agent to perform multi-sensing fusion is built by using a Robot Operating System (ROS) frame based on Ubuntu18.04, and comprises the following steps:
1) creating laser radar, a depth camera and a camera node, and finishing communication and visual display of corresponding node data;
2) building the bottom-layer control system: writing an STM32-based low-level motion control program and reading encoder data and serial transmit/receive data, so that after the build the upper computer can send commands such as forward/backward motion and left/right rotation to the system via a remote-control handle or the serial port, and the corresponding functions are executed;
3) the development platform main control system and the bottom layer control system are jointly debugged, various motion instructions are issued to the bottom layer through the development platform main control system, and the bottom layer system control chassis completes corresponding functions;
4) building a ROS system based on Ubuntu 18.04 on the embedded platform, and deploying the sensor nodes verified on the development platform to the embedded platform;
5) the embedded main control system and the bottom layer control system are jointly debugged, various motion instructions are issued to the bottom layer through the embedded main control system, and the bottom layer system control chassis completes corresponding functions and performs communication and visual display of various sensor data;
6) reading the sensor data for obstacle detection, and loading and constructing the navigation map.
3. The multi-sensor high-efficiency fusion method according to claim 1, wherein: the step (2) of constructing the laser SLAM two-dimensional grid map for the autonomous navigation of the intelligent agent by using the Gmapping algorithm comprises the following steps:
1) sampling: the particle set at the current time t is obtained by sampling from the particle set at the previous time according to the proposal distribution;
2) calculating the particle weights: each particle's importance weight is updated according to the likelihood of the current observation given the particle's pose;
3) resampling: resampling addresses the particle degeneracy phenomenon; when the number of effective particles falls below a set threshold, a resampling operation is performed, after which all particles are given the same weight;
4) updating the map: the map probability is computed from the agent's motion trajectory and the sensor data, and the map is updated.
4. The method for efficiently fusing multi-sensor information according to claim 1, wherein: the navigation area dividing method in step (3) comprises: using the horizontal field-of-view regions of the depth camera, the lidar and the ultrasonic sensor together with the size of the dynamic window, the environment around the agent is divided into 4 regions: a static adjustment region S_out, a dynamic adjustment region S_1, a sub-core region S_2 and a core region S_3; S_out is the region outside the dynamic window S_DWA; S_1 = S_L ∩ S_DWA - S_2 - S_3; S_2 = S_D ∩ S_DWA - S_3; S_3 = S_S ∩ S_{d/2}, where S_{d/2} is the sector of S_S whose radius corresponds to d/2.
5. The multi-sensor high-efficiency fusion method according to claim 1, wherein: the optimal motion trajectory solving and motion control method in the step (4) comprises the following steps:
1) first, the optimal motion trajectory of the agent at time t+1 under the single-sensor scheme is solved: the velocity vector space V_r is formed by the intersection of the set of possible velocity vectors V_s, the admissible velocity vectors V_a and the dynamic velocity window V_d, V_r = V_s ∩ V_a ∩ V_d; an admissible velocity vector (v, w) satisfies the constraint
v ≤ √(2·dist(v, w)·a_v) and w ≤ √(2·dist(v, w)·a_w),
where dist(v, w), a_v and a_w are, respectively, the minimum safe distance of the agent from the obstacle and the translational and rotational accelerations of the agent; from V_r, the optimal motion trajectory of the agent at time t+1 under the single-sensor scheme is solved as
G(v, w) = α·heading(v, w) + β·dist(v, w) + γ·vel(v, w),
where heading(v, w), dist(v, w) and vel(v, w) are evaluation functions of the azimuth, the minimum safe distance and the current velocity during the agent's motion; heading(v, w) represents how well the agent's motion direction agrees with the target direction at the current pose; and α, β and γ are the weights of the evaluation functions, which can be adjusted for a specific use scene to obtain the agent's optimal motion trajectory;
2) on the basis of the optimal motion trail of the intelligent agent single sensing scheme, solving the optimal motion trail of the intelligent agent of the multi-sensing fusion scheme, and using different fusion strategies according to different navigation areas; wherein:
S_out: navigation data in this region are updated at a very low frequency; it has two main tasks: planning the global path at the initial stage of navigation, and re-planning the global path from the current coordinates as the starting point when the local path cannot reach the target point during navigation;
S_1: the agent's optimal motion trajectory at time t+1 is determined by the lidar sensor alone: G(v, w) = G_L(v, w);
S_2: the agent's optimal motion trajectory at time t+1 is: G(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w);
S_3: the agent's optimal motion trajectory at time t+1 is: G(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w) + (1 - λ1 - λ2)·G_{L&D&S}(v, w).
6. The multi-sensor high-efficiency fusion method according to claim 5, wherein: after the lidar, the depth camera and the ultrasonic sensor are fused, the agent's optimal motion trajectory at time t+1 is: G_S(v, w) = λ1·G_L(v, w) + λ2·G_{L&D}(v, w) + (1 - λ1 - λ2)·G_{L&D&S}(v, w), where the subscript S of G_S(v, w) indicates the region range used by the trajectory function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210531745.5A CN114756032A (en) | 2022-05-17 | 2022-05-17 | Multi-sensing information efficient fusion method for intelligent agent autonomous navigation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210531745.5A CN114756032A (en) | 2022-05-17 | 2022-05-17 | Multi-sensing information efficient fusion method for intelligent agent autonomous navigation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114756032A true CN114756032A (en) | 2022-07-15 |
Family
ID=82334496
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210531745.5A Pending CN114756032A (en) | 2022-05-17 | 2022-05-17 | Multi-sensing information efficient fusion method for intelligent agent autonomous navigation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114756032A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115909728A (en) * | 2022-11-02 | 2023-04-04 | 智道网联科技(北京)有限公司 | Road side sensing method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Minguez et al. | Nearness diagram (ND) navigation: collision avoidance in troublesome scenarios | |
Hou et al. | Can a simple control scheme work for a formation control of multiple autonomous underwater vehicles? | |
González-Banos et al. | Motion planning: Recent developments | |
Wong et al. | Adaptive and intelligent navigation of autonomous planetary rovers—A survey | |
An et al. | Task planning and collaboration of jellyfish-inspired multiple spherical underwater robots | |
Sun et al. | A novel fuzzy control algorithm for three-dimensional AUV path planning based on sonar model | |
Li et al. | Characteristic evaluation via multi-sensor information fusion strategy for spherical underwater robots | |
CN114756032A (en) | Multi-sensing information efficient fusion method for intelligent agent autonomous navigation | |
Chen et al. | Study on coordinated control and hardware system of a mobile manipulator | |
Yi et al. | Intelligent robot obstacle avoidance system based on fuzzy control | |
Clark et al. | Archaeology via underwater robots: Mapping and localization within maltese cistern systems | |
CN113671960A (en) | Autonomous navigation and control method of magnetic micro-nano robot | |
Pang et al. | Multi-AUV formation reconfiguration obstacle avoidance algorithm based on affine transformation and improved artificial potential field under ocean currents disturbance | |
Igor et al. | Hybrid control approach to multi-AUV system in a surveillance mission | |
Ridao et al. | O2CA2, a new object oriented control architecture for autonomy: the reactive layer | |
Vasseur et al. | Navigation of car-like mobile robots in obstructed environments using convex polygonal cells | |
Chiu et al. | Fuzzy obstacle avoidance control of a two-wheeled mobile robot | |
Parasuraman | Sensor fusion for mobile robot navigation: Fuzzy Associative Memory | |
Ridao et al. | O2CA2: A new hybrid control architecture for a low cost AUV | |
Zhou et al. | Visual servo control of underwater vehicles based on image moments | |
Heshmati-Alamdari | Cooperative and Interaction Control for Underwater Robotic Vehicles | |
Hanz et al. | An abstraction layer for controlling heterogeneous mobile cyber-physical systems | |
Hou et al. | PD control scheme for formation control of multiple autonomous underwater vehicles | |
Chu | Development of hybrid control architecture for a small autonomous underwater vehicle | |
Matveev et al. | Reactive navigation of nonholonomic mobile robots in dynamic uncertain environments with moving and deforming obstacles |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||