CN114756032A - Multi-sensing information efficient fusion method for intelligent agent autonomous navigation - Google Patents

Multi-sensing information efficient fusion method for intelligent agent autonomous navigation

Info

Publication number
CN114756032A
CN114756032A (application CN202210531745.5A)
Authority
CN
China
Prior art keywords
intelligent agent
navigation
fusion
agent
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210531745.5A
Other languages
Chinese (zh)
Inventor
丁数学
曹新乾
谭本英
刘晴
王宁宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202210531745.5A
Publication of CN114756032A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Acoustics & Sound (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an efficient multi-sensor information fusion method for autonomous navigation of an intelligent agent. The method involves building an experimental platform on which the intelligent agent performs multi-sensor fusion and constructing a map for autonomous navigation; the navigation area of the intelligent agent is divided according to the importance of each part of the navigation area, the parameter characteristics of the 2D lidar, the depth camera and the ultrasonic sensor, and the size of the dynamic window; finally, the optimal motion trajectory of the intelligent agent at time t+1 is solved and used to control the agent's motion, thereby realizing autonomous navigation. The invention provides an efficient multi-sensor fusion and flexible control method for autonomous navigation of intelligent agents: the advantages of the fused sensors are brought out without noticeably increasing the agent's system load. It can provide a technical basis for autonomous robot navigation control, autonomous driving of unmanned vehicles, autonomous navigation control of aircraft, equipment intelligence and the like.

Description

Multi-sensing information efficient fusion method for intelligent agent autonomous navigation
Technical Field
The invention relates to the field of autonomous navigation of an intelligent agent, in particular to a multi-sensor information fusion method for autonomous navigation of the intelligent agent.
Background
With the rapid development of deep learning and related intelligent technologies, key technologies for robots such as multi-sensor fusion, localization and navigation, path planning, machine vision, intelligent control and intelligent interaction have improved greatly, driving explosive growth of robots, unmanned vehicles and related industries in China and affecting more than one third of Chinese manufacturing enterprises. Bill Gates and others believe that robots, like personal computers over the past 30 years, are likely to have a profound effect on the way we work, communicate, learn and entertain ourselves. However, applications of intelligent agents represented by robots still face major challenges: insufficient reliability for fault-free stable operation, a limited degree of intelligence, high production cost, and weak capability for autonomous learning behavior.
The multi-sensor information fusion technique and its working process were proposed by American scholars in the 1970s, and multi-sensor information fusion is now one of the key technologies in intelligent robotics research. Since 2007, the Robot Operating System (ROS) has developed rapidly and attracted wide attention from academia and industry at home and abroad. ROS uses a distributed system architecture, its sensor nodes are flexible to use, and it offers good adaptability, making it well suited to research on multi-sensor fusion and related technologies. The MIT Technology Review has stated that "ROS is gradually becoming a benchmark in the field of robot research". Simultaneous Localization and Mapping (SLAM) was proposed at the 1986 IEEE Robotics and Automation Conference and first appeared formally in a paper in 1995; it is what allows a robot to be an autonomous intelligent agent in the true sense. For local path planning, the Dynamic Window Approach (DWA) was proposed by Fox D. in 1997. DWA is an autonomous obstacle-avoidance algorithm that searches directly for the optimal control command in the control command space; its core idea is to construct a velocity vector space of linear-velocity and angular-velocity pairs (v, w), converting the path-planning problem into a constrained optimization over this velocity vector space, and it is widely used in agent path planning.
Disclosure of Invention
The invention aims to provide a multi-sensing information efficient fusion method for autonomous navigation of an intelligent agent.
The technical scheme for realizing the purpose of the invention is as follows:
a multi-sensing information efficient fusion method for intelligent agent autonomous navigation comprises the following steps:
(1) building an experimental platform on which an intelligent agent can perform multi-sensor fusion;
(2) constructing a laser-SLAM two-dimensional grid map for autonomous navigation of the intelligent agent using the Gmapping algorithm;
(3) dividing the navigation area of the intelligent agent according to the importance of each part of the navigation area, the parameter characteristics of the 2D lidar, the depth camera and the ultrasonic sensor, and the size of the dynamic window;
(4) solving the optimal motion trajectory of the intelligent agent at time t+1 to realize motion control of the intelligent agent.
The experimental platform in step (1), on which the intelligent agent performs multi-sensor fusion, is built with a Robot Operating System (ROS) framework based on Ubuntu 18.04 and comprises the following steps:
1) creating the lidar, depth camera and camera nodes, and completing the communication and visual display of the corresponding node data;
2) building the low-level control system: an STM32-based low-level motion control program is written, and the encoder data and serial-port transmit/receive data are read; once built, the host computer can send commands such as forward/backward motion and left/right rotation to the system through a remote-control handle or the serial port, and the corresponding functions are carried out;
3) joint debugging of the development-platform master control system and the low-level control system: various motion commands are issued to the low level through the development-platform master control system, and the low-level system controls the chassis to complete the corresponding functions;
4) building a Ubuntu 18.04-based ROS system on the embedded platform, and deploying the sensor nodes tested on the development platform to the embedded platform;
5) joint debugging of the embedded master control system and the low-level control system: various motion commands are issued to the low level through the embedded master control system, the low-level system controls the chassis to complete the corresponding functions, and the communication and visual display of the various sensor data are performed;
6) reading the sensor data for obstacle detection, and loading and building navigation maps.
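As an illustration of what such a sensor-bridge node might look like, the following Python sketch subscribes to lidar and ultrasonic topics and publishes velocity commands through ROS; the node name, topic names and message choices are assumptions made for this example and are not specified by the patent.

#!/usr/bin/env python
# Illustrative ROS sensor bridge: topic names and message types are assumptions.
import rospy
from sensor_msgs.msg import LaserScan, Range
from geometry_msgs.msg import Twist

class SensorBridge(object):
    def __init__(self):
        self.latest_scan = None
        self.latest_range = None
        rospy.Subscriber("/scan", LaserScan, self.on_scan)      # 2D lidar node
        rospy.Subscriber("/ultrasonic", Range, self.on_range)   # ultrasonic node
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

    def on_scan(self, msg):
        self.latest_scan = msg

    def on_range(self, msg):
        self.latest_range = msg

    def send_velocity(self, v, w):
        # Forward/backward motion and left/right rotation commands for the chassis.
        cmd = Twist()
        cmd.linear.x = v
        cmd.angular.z = w
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("multi_sensor_bridge")
    bridge = SensorBridge()
    rospy.spin()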
The construction in step (2) of a laser-SLAM two-dimensional grid map for autonomous navigation of the intelligent agent with the Gmapping algorithm comprises the following steps:
1) sampling: the particle set {x_t^(i)} at the current time t is sampled from the particle set {x_{t-1}^(i)} of the previous time step through the proposal distribution;
2) calculating the particle weights: each particle's importance weight is computed from the observation likelihood given the particle's pose and the map;
3) resampling: resampling is the standard remedy for particle degeneracy; a resampling operation is carried out when the number of effective particles falls below a set threshold, after which all particles are given the same weight;
4) updating the map: the map probability is computed from the motion trajectory of the intelligent agent and the sensor data, and the map is updated.
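As a minimal sketch of the resampling step, assuming the usual effective-particle-number criterion and systematic resampling, the following Python fragment is illustrative only and is not the patent's own Gmapping implementation:

import numpy as np

def effective_particle_count(weights):
    # N_eff = 1 / sum(w_i^2) for normalized weights; a low value signals degeneracy.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def resample_if_needed(particles, weights, threshold):
    # Systematic resampling, triggered only when N_eff drops below the threshold;
    # afterwards every particle carries the same weight.
    n = len(particles)
    if effective_particle_count(weights) >= threshold:
        return particles, weights
    w = np.asarray(weights, dtype=float)
    cumulative = np.cumsum(w / w.sum())
    positions = (np.arange(n) + np.random.uniform()) / n
    indices = np.searchsorted(cumulative, positions)
    new_particles = [particles[i] for i in indices]
    new_weights = np.full(n, 1.0 / n)
    return new_particles, new_weights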
The navigation-area division method of step (3) is as follows: using the horizontal field-of-view areas of the depth camera, the lidar and the ultrasonic sensor together with the dynamic-window size, the environment around the intelligent agent is divided into 4 regions: the static adjustment region S_out, the dynamic adjustment region S_1, the secondary core region S_2 and the core region S_3. S_out is the region outside the dynamic window S_DWA; S_1 = S_L ∩ S_DWA − S_2 − S_3; S_2 = S_D ∩ S_DWA − S_3; S_3 = S_S ∩ S_{d/2}, where S_{d/2} is the sector of S_S whose angular radius is d/2.
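One way to picture the division is the following Python sketch, which assigns a point around the agent (given in polar form) to one of the four regions from the sensors' horizontal fields of view and the dynamic-window radius; the signature, thresholds and example sensor values are simplifying assumptions, not the patent's exact set definitions.

import math

def classify_region(r, theta, window_radius, lidar_fov, depth_fov, ultra_fov, ultra_range):
    # r: range to the point, theta: bearing relative to the agent's heading (radians).
    if r > window_radius:
        return "S_out"                                  # static adjustment region
    if abs(theta) <= ultra_fov / 2.0 and r <= ultra_range:
        return "S3"                                     # core region
    if abs(theta) <= depth_fov / 2.0:
        return "S2"                                     # secondary core region
    if abs(theta) <= lidar_fov / 2.0:
        return "S1"                                     # dynamic adjustment region
    return "S_out"

# Example: a point 0.3 m ahead within the ultrasonic cone falls in the core region S3.
region = classify_region(0.3, 0.1, window_radius=2.0,
                         lidar_fov=2 * math.pi, depth_fov=math.radians(87),
                         ultra_fov=math.radians(60), ultra_range=0.5)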
The optimal-motion-trajectory solving and motion control method of step (4) is as follows:
1) First, the optimal motion trajectory of the intelligent agent at time t+1 under the single-sensing scheme is solved. The velocity vector space V_r is formed by the intersection of the set of possible velocity vectors V_s, the admissible velocity vectors V_a and the dynamic velocity window V_d: V_r = V_s ∩ V_a ∩ V_d. The admissible velocity vectors V_a satisfy the constraint that the agent can stop before the nearest obstacle, i.e. v ≤ sqrt(2·dist(v, w)·a_v) and w ≤ sqrt(2·dist(v, w)·a_w), where dist(v, w), a_v and a_w are, respectively, the minimum safe distance between the agent and the obstacle, and the horizontal and rotational accelerations of the agent. From V_r the optimal motion trajectory of the intelligent agent at time t+1 under the single-sensing scheme is solved as G(v, w) = α·heading(v, w) + β·dist(v, w) + γ·velocity(v, w), where heading(v, w), dist(v, w) and velocity(v, w) are, respectively, the evaluation functions of the azimuth angle, the minimum safe distance and the current velocity during the agent's motion; heading(v, w) expresses how well the agent's motion direction agrees with the target direction at the current pose; α, β and γ are the weights of the evaluation functions and can be adjusted for the specific usage scenario to obtain the optimal motion trajectory of the intelligent agent. An illustrative code sketch of this single-sensing evaluation, together with the region-dependent fusion of step 2), is given after step 3) below.
2) On the basis of the optimal motion trajectory of the single-sensing scheme, the optimal motion trajectory of the intelligent agent under the multi-sensor fusion scheme is solved, using a different fusion strategy in each navigation region; wherein:
S_out: the update frequency of the navigation data in this region is extremely low. It has two main tasks: planning the global path in the initial navigation stage, and re-planning the global path from the current coordinates as the starting point whenever the local path cannot reach the target point during navigation.
S_1: the optimal motion trajectory of the agent at time t+1 is determined by the lidar sensor alone: G(v, w) = G_L(v, w);
S_2: the optimal motion trajectory of the agent at time t+1 is: G(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w);
S_3: the optimal motion trajectory of the agent at time t+1 is: G(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w) + (1 − λ_1 − λ_2)·G_{L&D&S}(v, w).
3) In summary, the optimal motion trajectory at time t+1 of the intelligent agent after fusing the lidar, the depth camera and the ultrasonic sensor is: G_S(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w) + (1 − λ_1 − λ_2)·G_{L&D&S}(v, w), where the subscript S of G_S(v, w) indicates the region range used by the trajectory function.
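A compact Python sketch of the single-sensing evaluation of step 1) and the region-dependent fusion of steps 2) and 3) follows; the candidate sampling, callables and function names are assumptions built on the classical DWA objective rather than the patent's exact implementation.

import numpy as np

def dwa_single_sensor(candidates, heading, dist, velocity, alpha, beta, gamma):
    # Search the sampled velocity window for the (v, w) pair maximizing
    # G(v, w) = alpha*heading + beta*dist + gamma*velocity (single-sensing scheme).
    best, best_score = None, -np.inf
    for v, w in candidates:
        score = alpha * heading(v, w) + beta * dist(v, w) + gamma * velocity(v, w)
        if score > best_score:
            best, best_score = (v, w), score
    return best

def fused_objective(region, G_L, G_LD, G_LDS, lam1, lam2):
    # Region-dependent fused objective: S1 uses the lidar objective alone,
    # S2 adds feature-level fusion with the depth camera, S3 adds
    # decision-level fusion with the ultrasonic sensor.
    def G(v, w):
        if region == "S1":
            return G_L(v, w)
        if region == "S2":
            return lam1 * G_L(v, w) + lam2 * G_LD(v, w)
        if region == "S3":
            return (lam1 * G_L(v, w) + lam2 * G_LD(v, w)
                    + (1.0 - lam1 - lam2) * G_LDS(v, w))
        return None  # S_out: trigger global re-planning instead of local control
    return G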
Further, to make the meaning behind the optimal motion trajectory clearer and to make the function easier to use for controlling the agent, it is rewritten as: G_S(v, w) = μ_1·G_L(v, w) + μ_2·G_{L&D}(v, w) + μ_3·G_{L&D&S}(v, w), where μ_1, μ_2 and μ_3 are, respectively, the weights of single sensing, feature-level fusion and decision-level fusion during navigation. Furthermore, the program can set each of μ_1, μ_2 and μ_3 to zero or non-zero values and thereby control the data frequency of the three sensing schemes. For example, assume the limit frame rates of the lidar, depth-camera and ultrasonic sensor data are 30 Hz, 20 Hz and 10 Hz respectively, and write A = (μ_1, μ_2, μ_3). Multiplying A element-wise by (1,1,1), (1,1,0) and (1,0,0) in turn at equal time intervals, with the whole cycle executed at 10 Hz, lets single sensing, feature-level fusion and decision-level fusion run at 30 Hz, 20 Hz and 10 Hz respectively. Of course, any frame-rate combination within the limit frame rates of the chosen devices can be set as required, so the intelligent agent can be controlled more efficiently and flexibly.
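A possible scheduling of this masking scheme is sketched below in Python: the three masks are applied on consecutive ticks so that the full mask cycle repeats at 10 Hz; the loop structure and names are assumptions for illustration only.

import itertools
import time

# Masks applied to A = (mu1, mu2, mu3) on consecutive ticks. With the whole
# three-mask cycle repeating at 10 Hz (one tick every 1/30 s), the single-sensing
# term is evaluated at 30 Hz, feature-level fusion at 20 Hz and decision-level
# fusion at 10 Hz.
MASKS = [(1, 1, 1), (1, 1, 0), (1, 0, 0)]

def run_fusion_scheduler(step, mu, tick=1.0 / 30.0):
    # Call step(weights) on every tick, cycling through the masks.
    # `step` is a placeholder for evaluating G_S(v, w) with the masked weights.
    for mask in itertools.cycle(MASKS):
        weights = tuple(m * w for m, w in zip(mask, mu))
        step(weights)
        time.sleep(tick)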
The advantages of the invention are as follows. The multi-sensor fusion method used by the invention takes full account of the number of sensors on the agent, the amount of data processing, the inherent characteristics of the devices, the real-time performance of the system, the reliability of perception, and the generality of the fused strategy and algorithm, and the three parameters μ_1, μ_2 and μ_3 can be controlled flexibly to meet the requirements of different scenarios. In addition, the advantages of the various sensors are brought into full play while the real-time requirement of controlling the agent is respected: after fusion, the strengths of the sensors carried by the agent are highlighted without noticeably increasing the agent's system load. The method therefore offers an efficient multi-sensor fusion approach for agent autonomous navigation and can provide a technical basis for autonomous robot navigation control, autonomous driving of unmanned vehicles, autonomous navigation control of aircraft, equipment intelligence and the like.
Drawings
Fig. 1 is a framework diagram of the intelligent agent system provided by an embodiment of the present invention.
Fig. 2 is a schematic view of a navigation area partitioned according to importance of an agent navigation area, 2D lidar, a depth camera, ultrasonic sensor parameter characteristics, and a dynamic window size according to an embodiment of the present invention.
FIG. 3 is a diagram of a multi-sensing fusion type provided by an embodiment of the present invention, which is (a) feature-level fusion of a depth camera and a lidar, (b) decision-level fusion of a depth camera, a lidar and an ultrasonic sensor.
Fig. 4 is a schematic flowchart of a multi-sensing efficient fusion method for autonomous intelligent agent navigation according to an embodiment of the present invention.
Fig. 5 is a diagram of the autonomous navigation experimental arrangement provided by an embodiment of the present invention.
Fig. 6 is a diagram of the information output of the autonomous navigation experimental results provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, an experimental platform on which the intelligent agent can perform multi-sensor fusion is built; its system framework is shown in Fig. 1. Second, a map for autonomous navigation is constructed with this platform. Then, according to the importance of each part of the agent's navigation area, the parameter characteristics of the 2D lidar, the depth camera and the ultrasonic sensor, and the size of the dynamic window, the navigation area is divided and the environment around the agent is partitioned into 4 regions: the static adjustment region S_out, the dynamic adjustment region S_1, the secondary core region S_2 and the core region S_3; the result of the division is shown in Fig. 2. Finally, the optimal motion trajectory of the agent at time t+1 is solved with a different fusion strategy in each navigation region and used to control the agent's motion, thereby achieving autonomous navigation. The experimental set-up is shown in Fig. 5 and the experimental results in Fig. 6.
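Assuming helper functions along the lines of the sketches given in the disclosure above, one control cycle could be tied together as in the following illustrative Python fragment; all names are placeholders, not the patent's API.

def navigation_cycle(pose, goal, sensors, classify_region, fused_objective,
                     dwa_search, replan_global_path):
    # One illustrative control cycle: decide which navigation region applies,
    # build that region's fused objective, and either re-plan the global path
    # (S_out) or search the dynamic window for the best (v, w) command.
    region = classify_region(sensors)      # one of "S_out", "S1", "S2", "S3"
    if region == "S_out":
        return replan_global_path(pose, goal)
    G = fused_objective(region)            # region-dependent objective G_S(v, w)
    return dwa_search(G, pose, goal)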
As shown in fig. 4, the multi-sensor high-efficiency fusion method of the present invention sequentially comprises the following steps:
S101: build an experimental platform on which the intelligent agent can perform multi-sensor fusion; a Ubuntu 18.04-based Robot Operating System (ROS) is taken as an example, but the method is not limited to it. Specifically:
S1011: create the lidar, depth camera and camera nodes, and complete the communication and visual display of the corresponding node data;
S1012: build the low-level control system, write an STM32-based low-level motion control program, and read the encoder data and serial-port transmit/receive data; once built, the host computer can send commands such as forward/backward motion and left/right rotation to the system through a remote-control handle or the serial port, and the corresponding functions are carried out;
S1013: jointly debug the development-platform master control system and the low-level control system: various motion commands are issued to the low level through the development-platform master control system, and the low-level system controls the chassis to complete the corresponding functions;
S1014: build a Ubuntu 18.04-based ROS system on the embedded platform, and deploy the sensor nodes tested on the development platform to the embedded platform;
S1015: jointly debug the embedded master control system and the low-level control system: various motion commands are issued to the low level through the embedded master control system, the low-level system controls the chassis to complete the corresponding functions, and the communication and visual display of the various sensor data are performed;
S1016: read the sensor data to detect obstacles, and load and construct a navigation map.
S102: construct the laser-SLAM two-dimensional grid map for autonomous navigation of the intelligent agent, specifically:
S1021: sampling: the particle set {x_t^(i)} at the current time t is sampled from the particle set {x_{t-1}^(i)} of the previous time step through the proposal distribution;
S1022: calculating the particle weights: each particle's importance weight is computed from the observation likelihood given the particle's pose and the map;
S1023: resampling: resampling is the standard remedy for particle degeneracy; a resampling operation is carried out when the number of effective particles falls below a set threshold, after which all particles are given the same weight.
S1024: updating the map: the map probability is computed from the motion trajectory of the intelligent agent and the sensor data, and the map is updated.
S103: divide the navigation area of the intelligent agent according to the importance of each part of the navigation area, the parameter characteristics of the 2D lidar, the depth camera and the ultrasonic sensor, and the size of the dynamic window. Specifically, using the horizontal field-of-view areas of the depth camera, the lidar and the ultrasonic sensor together with the dynamic-window size, the environment around the intelligent agent is divided into 4 regions: the static adjustment region S_out, the dynamic adjustment region S_1, the secondary core region S_2 and the core region S_3. S_out is the region outside the dynamic window S_DWA; S_1 = S_L ∩ S_DWA − S_2 − S_3; S_2 = S_D ∩ S_DWA − S_3; S_3 = S_S ∩ S_{d/2}, where S_{d/2} is the sector of S_S whose angular radius is d/2.
S104: solve the optimal motion trajectory of the intelligent agent at time t+1 with a different fusion strategy in each navigation region, and control the agent's motion to achieve autonomous navigation. Specifically:
S1041: solve the optimal motion trajectory of the intelligent agent at time t+1 under the single-sensing scheme: G(v, w) = α·heading(v, w) + β·dist(v, w) + γ·velocity(v, w), where heading(v, w), dist(v, w) and velocity(v, w) are, respectively, the evaluation functions of the azimuth angle, the minimum safe distance and the current velocity during the agent's motion; heading(v, w) expresses how well the agent's motion direction agrees with the target direction at the current pose; α, β and γ are the weights of the evaluation functions and can be adjusted for the specific usage scenario to obtain the optimal motion trajectory of the intelligent agent.
S1042: use a different fusion strategy in each navigation region, and then, on the basis of the optimal motion trajectory of the single-sensing scheme, solve the optimal motion trajectory of the intelligent agent under the multi-sensor fusion scheme.
S1043: S_out: the update frequency of the navigation data in this region is extremely low. It has two main tasks: planning the global path in the initial navigation stage, and re-planning the global path from the current coordinates as the starting point whenever the local path cannot reach the target point during navigation.
S1044: S_1: the optimal motion trajectory of the agent at time t+1 is determined by the lidar sensor alone: G(v, w) = G_L(v, w);
S1045: S_2: the fusion strategy is shown in Fig. 3(a); the optimal motion trajectory of the agent at time t+1 is G(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w);
S1046: S_3: the fusion strategy is shown in Fig. 3(b); the optimal motion trajectory of the agent at time t+1 is G(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w) + (1 − λ_1 − λ_2)·G_{L&D&S}(v, w).
S1047: In summary, the optimal motion trajectory at time t+1 of the intelligent agent after fusing the lidar, the depth camera and the ultrasonic sensor can be written as G_S(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w) + (1 − λ_1 − λ_2)·G_{L&D&S}(v, w), where the subscript S of G_S(v, w) indicates the region range used by the trajectory function.
S1048: rewrite it as G_S(v, w) = μ_1·G_L(v, w) + μ_2·G_{L&D}(v, w) + μ_3·G_{L&D&S}(v, w), where μ_1, μ_2 and μ_3 are, respectively, the weights of single sensing, feature-level fusion and decision-level fusion during navigation.
S1049: through the program, set each of μ_1, μ_2 and μ_3 to zero or non-zero values and thereby control the data frequency of the three sensing schemes. For example, assume the limit frame rates of the lidar, depth-camera and ultrasonic sensor data are 30 Hz, 20 Hz and 10 Hz respectively, and write A = (μ_1, μ_2, μ_3). Multiplying A element-wise by (1,1,1), (1,1,0) and (1,0,0) in turn at equal time intervals, with the whole cycle executed at 10 Hz, lets single sensing, feature-level fusion and decision-level fusion run at 30 Hz, 20 Hz and 10 Hz respectively. Of course, any frame-rate combination within the limit frame rates of the chosen devices can be set as required, so the intelligent agent can be controlled more efficiently and flexibly.

Claims (6)

1. A multi-sensing information efficient fusion method for intelligent agent autonomous navigation, characterized in that it comprises the following steps:
(1) building an experimental platform on which an intelligent agent can perform multi-sensor information fusion;
(2) constructing a laser-SLAM two-dimensional grid map for autonomous navigation of the intelligent agent using the Gmapping algorithm;
(3) dividing the navigation area of the intelligent agent according to the importance of each part of the navigation area, the parameter characteristics of the 2D lidar, the depth camera and the ultrasonic sensor, and the size of the navigation dynamic window;
(4) solving the optimal motion trajectory of the intelligent agent at time t+1, thereby realizing motion control of the intelligent agent.
2. The efficient multi-sensor fusion method according to claim 1, characterized in that: the experimental platform in step (1), on which the intelligent agent performs multi-sensor fusion, is built with a Robot Operating System (ROS) framework based on Ubuntu 18.04 and comprises the following steps:
1) creating the lidar, depth camera and camera nodes, and completing the communication and visual display of the corresponding node data;
2) building the low-level control system: an STM32-based low-level motion control program is written, and the encoder data and serial-port transmit/receive data are read; once built, the host computer can send commands such as forward/backward motion and left/right rotation to the system through a remote-control handle or the serial port, and the corresponding functions are carried out;
3) joint debugging of the development-platform master control system and the low-level control system: various motion commands are issued to the low level through the development-platform master control system, and the low-level system controls the chassis to complete the corresponding functions;
4) building a Ubuntu 18.04-based ROS system on the embedded platform, and deploying the sensor nodes tested on the development platform to the embedded platform;
5) joint debugging of the embedded master control system and the low-level control system: various motion commands are issued to the low level through the embedded master control system, the low-level system controls the chassis to complete the corresponding functions, and the communication and visual display of the various sensor data are performed;
6) reading the sensor data for obstacle detection, and loading and building navigation maps.
3. The efficient multi-sensor fusion method according to claim 1, characterized in that: the construction in step (2) of a laser-SLAM two-dimensional grid map for autonomous navigation of the intelligent agent with the Gmapping algorithm comprises the following steps:
1) sampling: the particle set {x_t^(i)} at the current time t is sampled from the particle set {x_{t-1}^(i)} of the previous time step through the proposal distribution;
2) calculating the particle weights: each particle's importance weight is computed from the observation likelihood given the particle's pose and the map;
3) resampling: resampling is the standard remedy for particle degeneracy; a resampling operation is carried out when the number of effective particles falls below a set threshold, after which all particles are given the same weight;
4) updating the map: the map probability is computed from the motion trajectory of the intelligent agent and the sensor data, and the map is updated.
4. The efficient multi-sensor information fusion method according to claim 1, characterized in that: the navigation-area division method of step (3) is as follows: using the horizontal field-of-view areas of the depth camera, the lidar and the ultrasonic sensor together with the dynamic-window size, the environment around the intelligent agent is divided into 4 regions: the static adjustment region S_out, the dynamic adjustment region S_1, the secondary core region S_2 and the core region S_3; S_out is the region outside the dynamic window S_DWA; S_1 = S_L ∩ S_DWA − S_2 − S_3; S_2 = S_D ∩ S_DWA − S_3; S_3 = S_S ∩ S_{d/2}, where S_{d/2} is the sector of S_S whose angular radius is d/2.
5. The efficient multi-sensor fusion method according to claim 1, characterized in that: the optimal-motion-trajectory solving and motion control method of step (4) is as follows:
1) first, the optimal motion trajectory of the intelligent agent at time t+1 under the single-sensing scheme is solved; the velocity vector space V_r is formed by the intersection of the set of possible velocity vectors V_s, the admissible velocity vectors V_a and the dynamic velocity window V_d: V_r = V_s ∩ V_a ∩ V_d; the admissible velocity vectors V_a satisfy the constraint that the agent can stop before the nearest obstacle, i.e. v ≤ sqrt(2·dist(v, w)·a_v) and w ≤ sqrt(2·dist(v, w)·a_w), where dist(v, w), a_v and a_w are, respectively, the minimum safe distance between the agent and the obstacle, and the horizontal and rotational accelerations of the agent; from V_r the optimal motion trajectory of the intelligent agent at time t+1 under the single-sensing scheme is solved as G(v, w) = α·heading(v, w) + β·dist(v, w) + γ·velocity(v, w), where heading(v, w), dist(v, w) and velocity(v, w) are, respectively, the evaluation functions of the azimuth angle, the minimum safe distance and the current velocity during the agent's motion; heading(v, w) expresses how well the agent's motion direction agrees with the target direction at the current pose; α, β and γ are the weights of the evaluation functions and can be adjusted for the specific usage scenario to obtain the optimal motion trajectory of the intelligent agent;
2) on the basis of the optimal motion trajectory of the single-sensing scheme, the optimal motion trajectory of the intelligent agent under the multi-sensor fusion scheme is solved, using a different fusion strategy in each navigation region; wherein:
S_out: the update frequency of the navigation data in this region is extremely low; it has two main tasks: planning the global path in the initial navigation stage, and re-planning the global path from the current coordinates as the starting point whenever the local path cannot reach the target point during navigation;
S_1: the optimal motion trajectory of the agent at time t+1 is determined by the lidar sensor alone: G(v, w) = G_L(v, w);
S_2: the optimal motion trajectory of the agent at time t+1 is: G(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w);
S_3: the optimal motion trajectory of the agent at time t+1 is: G(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w) + (1 − λ_1 − λ_2)·G_{L&D&S}(v, w);
3) the optimal motion trajectory at time t+1 of the intelligent agent after fusing the lidar, the depth camera and the ultrasonic sensor is: G_S(v, w) = λ_1·G_L(v, w) + λ_2·G_{L&D}(v, w) + (1 − λ_1 − λ_2)·G_{L&D&S}(v, w), where the subscript S of G_S(v, w) indicates the region range used by the trajectory function.
6. The efficient multi-sensor fusion method according to claim 5, characterized in that: the optimal motion trajectory at time t+1 of the intelligent agent after fusing the lidar, the depth camera and the ultrasonic sensor in step 3) is rewritten as G_S(v, w) = μ_1·G_L(v, w) + μ_2·G_{L&D}(v, w) + μ_3·G_{L&D&S}(v, w), where μ_1, μ_2 and μ_3 are, respectively, the weights of single sensing, feature-level fusion and decision-level fusion during navigation.
CN202210531745.5A 2022-05-17 2022-05-17 Multi-sensing information efficient fusion method for intelligent agent autonomous navigation Pending CN114756032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210531745.5A CN114756032A (en) 2022-05-17 2022-05-17 Multi-sensing information efficient fusion method for intelligent agent autonomous navigation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210531745.5A CN114756032A (en) 2022-05-17 2022-05-17 Multi-sensing information efficient fusion method for intelligent agent autonomous navigation

Publications (1)

Publication Number Publication Date
CN114756032A true CN114756032A (en) 2022-07-15

Family

ID=82334496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210531745.5A Pending CN114756032A (en) 2022-05-17 2022-05-17 Multi-sensing information efficient fusion method for intelligent agent autonomous navigation

Country Status (1)

Country Link
CN (1) CN114756032A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115909728A (en) * 2022-11-02 2023-04-04 智道网联科技(北京)有限公司 Road side sensing method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
Minguez et al. Nearness diagram (ND) navigation: collision avoidance in troublesome scenarios
Hou et al. Can a simple control scheme work for a formation control of multiple autonomous underwater vehicles?
González-Banos et al. Motion planning: Recent developments
Wong et al. Adaptive and intelligent navigation of autonomous planetary rovers—A survey
An et al. Task planning and collaboration of jellyfish-inspired multiple spherical underwater robots
Sun et al. A novel fuzzy control algorithm for three-dimensional AUV path planning based on sonar model
Li et al. Characteristic evaluation via multi-sensor information fusion strategy for spherical underwater robots
CN114756032A (en) Multi-sensing information efficient fusion method for intelligent agent autonomous navigation
Chen et al. Study on coordinated control and hardware system of a mobile manipulator
Yi et al. Intelligent robot obstacle avoidance system based on fuzzy control
Clark et al. Archaeology via underwater robots: Mapping and localization within maltese cistern systems
CN113671960A (en) Autonomous navigation and control method of magnetic micro-nano robot
Pang et al. Multi-AUV formation reconfiguration obstacle avoidance algorithm based on affine transformation and improved artificial potential field under ocean currents disturbance
Igor et al. Hybrid control approach to multi-AUV system in a surveillance mission
Ridao et al. O2CA2, a new object oriented control architecture for autonomy: the reactive layer
Vasseur et al. Navigation of car-like mobile robots in obstructed environments using convex polygonal cells
Chiu et al. Fuzzy obstacle avoidance control of a two-wheeled mobile robot
Parasuraman Sensor fusion for mobile robot navigation: Fuzzy Associative Memory
Ridao et al. O2CA2: A new hybrid control architecture for a low cost AUV
Zhou et al. Visual servo control of underwater vehicles based on image moments
Heshmati-Alamdari Cooperative and Interaction Control for Underwater Robotic Vehicles
Hanz et al. An abstraction layer for controlling heterogeneous mobile cyber-physical systems
Hou et al. PD control scheme for formation control of multiple autonomous underwater vehicles
Chu Development of hybrid control architecture for a small autonomous underwater vehicle
Matveev et al. Reactive navigation of nonholonomic mobile robots in dynamic uncertain environments with moving and deforming obstacles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination