AU2021266203B2 - Semantic laser-based multilevel obstacle avoidance system and method for mobile robot
- Publication number: AU2021266203B2
- Authority: AU (Australia)
- Legal status: Active
Classifications
- G05D1/0238, G05D1/024 — control of position or course in two dimensions for land vehicles using optical obstacle or wall sensors, in combination with a laser
- G05D1/0214 — trajectory definition in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0221 — trajectory definition involving a learning process
- G05D1/0242 — optical position detection using non-visible light signals, e.g. IR or UV signals
- G05D1/0246 — optical position detection using a video camera in combination with image processing means
- G05D1/0255 — using acoustic signals, e.g. ultrasonic signals
- G05D1/0276 — using signals provided by a source external to the vehicle
Abstract
The present invention relates to a semantic laser-based multilevel obstacle avoidance system and method for a mobile robot. Data from a laser radar, an industrial camera and an ultrasonic sensor are tightly coupled to obtain semantic laser, so that the laser point cloud carries attitude information, obstacle type, motion range and other information. The scanning range of the laser radar is divided into three layers, whether the robot body is in a mapping state is judged, and corresponding obstacle avoidance actions in the mapping state and the navigation state of the robot body are generated according to the feature information of the obstacle. This overcomes the single-modality nature of traditional obstacle avoidance information, gives the mobile robot the ability to recognize obstacle features in an unknown dynamic environment, and improves the flexibility of the entire obstacle avoidance system.
Drawings
[Drawing sheet 1/7: the drawing images did not survive text extraction. Fig. 1 is a block diagram of the obstacle avoidance system, Fig. 2 shows the sensor installation layout, and Fig. 3 shows the multi-sensor obstacle avoidance range.]
Description
Field of the Invention
The present invention relates to the field of intelligent obstacle avoidance of mobile
robots, in particular to a semantic laser-based multilevel obstacle avoidance system
and method for a mobile robot.
Background of the Invention
A mobile robot is a highly integrated piece of equipment that combines a mechanical base, a driving system, a control system, a sensor detection system and an operation execution system. Driven by the rapid development of sensor technology, it is gradually becoming intelligent and mature, and it has been widely used in industry, logistics, service, medical treatment and other fields. In different working scenarios, the mobile robot needs to perform real-time obstacle avoidance to ensure both environmental safety and its own safety. Obstacles are generally divided into static obstacles and dynamic obstacles: static obstacles include shelves, walls, tables and chairs in the scenario, while dynamic obstacles include people, equipment for large-scale space operations, elevators, and the like. After completing the mapping and positioning work, the mobile robot needs to identify the types and characteristics of the obstacles and determine their position information. On the basis of the obtained map information, the mobile robot then realizes real-time obstacle avoidance and autonomous navigation using global and local path planning algorithms.
The traditional obstacle avoidance methods of mobile robots are mostly based on ultrasonic, visual, laser radar and infrared sensors. The ultrasonic and laser radar methods calculate the distance to an obstacle from the round-trip time of an acoustic wave or a laser pulse between the emitter and the measured target, while the visual and infrared sensor methods mostly calculate the distance using the principle of triangulation.
Using the above traditional obstacle avoidance methods, whether with a single sensor or multi-sensor fusion, the mobile robot shows excellent autonomous navigation performance in static scenarios. However, when the working scenario is highly complex, the traditional methods cannot identify the types and characteristics of obstacles. When the sampling frequency of the sensor is relatively low, the estimated pose of an obstacle deviates considerably from its actual pose, which is likely to create a safety problem. Moreover, when the mobile robot is in the navigation state, it cannot adjust its obstacle avoidance behavior according to the characteristic information of the obstacle, which reduces both working efficiency and safety.
Summary of the Invention
To solve at least one technical problem in the above background art, the present invention provides a semantic laser-based multilevel obstacle avoidance system and method for a mobile robot, which tightly couples several different types of sensors, overcomes the single-modality nature of traditional obstacle avoidance information, enables the mobile robot to recognize the features of obstacles in an unknown dynamic environment, and improves the flexibility of the entire obstacle avoidance system.
The first aspect of the present invention provides a semantic laser-based multilevel
obstacle avoidance method for a mobile robot, including the following steps:
according to the information obtained by a laser radar and an industrial camera on a
robot body, obtaining feature information of an obstacle through deep learning and
coordinate conversion;
according to the distance between the obstacle and the robot body, dividing the
scanning range of the laser radar into three layers, wherein the layer closest to the
robot body is a dangerous range, the farthest layer is a safe range, and the remaining
part is a deceleration range; and
judging whether the robot body is in a mapping state, and generating corresponding
obstacle avoidance actions in the mapping state and a navigation state of the robot
body according to the feature information of the obstacle.
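The three-layer division described above can be sketched as a simple classification function. The radius thresholds below are illustrative assumptions for a 5 m scanner, not values specified in the patent.

```python
# Illustrative three-layer division of the laser scanning range.
# The radii are assumed example values, not taken from the patent.
DANGEROUS_RADIUS_M = 0.5     # innermost layer: dangerous range
DECELERATION_RADIUS_M = 2.0  # middle layer: deceleration range
SAFE_RADIUS_M = 5.0          # outermost layer: safe range (max scan distance)

def classify_layer(distance_m):
    """Map an obstacle's distance from the robot body to a scan layer."""
    if distance_m <= DANGEROUS_RADIUS_M:
        return "dangerous"
    if distance_m <= DECELERATION_RADIUS_M:
        return "deceleration"
    if distance_m <= SAFE_RADIUS_M:
        return "safe"
    return "out_of_range"
```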
When the robot body is in the mapping state, if the obstacle exists in the outermost
two layers of scanning range, the robot body judges the type of the obstacle via
semantic laser, or otherwise the robot body enters the navigation state; and if the
obstacle exists in the innermost layer of scanning range or triggers an ultrasonic
sensor with a fixed threshold, the robot body stops acting and issues an alarm.
When the robot body is in the navigation state and judges via semantic laser that the obstacle is static: when the obstacle exists in the outermost layer of the scanning range, the robot body moves normally and issues an alarm; when the obstacle exists in the middle layer, the robot body decelerates according to the pose of the obstacle and issues an alarm; and when the obstacle exists in the innermost layer or triggers the ultrasonic sensor with the fixed threshold, the robot body stops acting and re-plans the path through the DWA algorithm.
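The static obstacle rules above amount to a small decision table, sketched below. The layer names and return labels are illustrative; the DWA re-planning step is represented only by a flag, not implemented.

```python
def static_obstacle_action(layer, ultrasonic_triggered):
    """Navigation-state response to a static obstacle, per the rules above.

    layer: "safe", "deceleration" or "dangerous" (scan layer containing
    the obstacle); ultrasonic_triggered: the fixed-threshold ultrasonic
    sensor fired. Return values are illustrative labels only.
    """
    if layer == "dangerous" or ultrasonic_triggered:
        return ("stop", "replan_with_DWA")   # stop acting, re-plan the path
    if layer == "deceleration":
        return ("decelerate", "alarm")       # slow down per obstacle pose
    return ("move_normally", "alarm")        # outermost layer: keep moving
```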
When the robot body is in the navigation state and judges that there is a dynamic obstacle within the semantic laser range: when the obstacle is in the two outermost layers of the scanning range and the robot body judges the dynamic characteristic of the obstacle to be a fixed-range action, it treats all the positions within that action range as static obstacles and re-enters the static obstacle motion planning process to complete the judgment; if the robot body judges the dynamic characteristic of the obstacle to be a random-range action, it judges the motion characteristic of the obstacle by capturing multiple frames of semantic laser; and
if the obstacle is moving away from the robot body, the robot body moves normally; if the obstacle is in a static state, the robot body returns to the static obstacle motion planning for judgment; and when the obstacle exists in the innermost layer or triggers the ultrasonic sensor with the fixed threshold, the robot body stops acting and re-plans the path through the DWA algorithm.
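The judgment from multiple frames of semantic laser can be sketched as a trend test on the obstacle's distance over consecutive frames. The tolerance value is an assumed noise margin, not a parameter from the patent.

```python
def motion_trend(frame_distances, tol_m=0.05):
    """Judge an obstacle's motion characteristic from its distance to the
    robot body over consecutive semantic-laser frames.

    tol_m is an assumed noise tolerance in metres (illustrative value).
    """
    delta = frame_distances[-1] - frame_distances[0]
    if delta > tol_m:
        return "moving_away"   # robot body may move normally
    if delta < -tol_m:
        return "approaching"   # keep re-evaluating cautiously
    return "static"            # return to static obstacle motion planning
```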
The second aspect of the present invention provides a semantic laser-based multilevel
obstacle avoidance system for a mobile robot, including a multi-sensor fusion feature
information extraction module, an obstacle type recognition module, a coupling
information processing module and a mobile robot motion planning module, which
are installed on a robot body;
the multi-sensor fusion feature information extraction module extracts radar point
cloud information and image information by using a laser radar and an industrial
camera, the obstacle type recognition module obtains feature information of an
obstacle through deep learning, and the coupling information processing module
obtains semantic laser through coordinate conversion, so that the laser radar
recognizes the feature information of the obstacle; and
the mobile robot motion planning module divides the scanning range of the laser radar
into three layers according to the distance between the obstacle and the robot body,
judges whether the robot body is in a mapping state, and generates corresponding
obstacle avoidance actions in the mapping state and a navigation state of the robot
body according to the feature information of the obstacle.
The multi-sensor fusion feature information extraction module includes a laser radar,
an industrial camera and an ultrasonic sensor, the laser radar is installed in the same
direction as the monocular industrial camera, and the ultrasonic sensor is installed
within the scanning range of the laser radar at a certain distance from the laser radar.
The laser radar scans the information of the obstacle within a fixed angle range of its
installation position, and returns obstacle angle information and distance point cloud
coordinate information based on its own coordinate system, the industrial camera
returns a feature image in the visual field, and the ultrasonic sensor determines dead
zone safety information of the laser radar and the industrial camera according to a
distance value of the obstacle that is returned in real time on the basis of the TOF
principle.
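The TOF principle mentioned above converts the measured round-trip time of the ultrasonic pulse into a one-way distance. A minimal sketch, assuming sound speed in air at about 20 °C:

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at ~20 °C

def tof_distance_m(round_trip_time_s):
    """One-way obstacle distance from an ultrasonic round-trip time,
    per the TOF principle: the pulse travels to the target and back,
    so the one-way distance is half the total path."""
    return SPEED_OF_SOUND_M_S * round_trip_time_s / 2.0
```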
The obstacle type recognition module sorts the visual information of feature objects in
the scenario where the robot body is located into a data set in the scenario, performs
training processing on the data set by using the convolutional neural network YOLO
V5 algorithm in deep learning to obtain an algorithm weight, and creates a feature
semantic data structure according to different obstacles recognized, wherein the data
structure contains the types, dynamic characteristics and possible action ranges of the
obstacles.
The coupling information processing module completes a clustering operation of the
point cloud information returned by the laser radar according to the DBSCAN
algorithm, converts an image coordinate system of the industrial camera into a
scanning coordinate system of the laser radar, completes the position matching and
tight coupling of the laser point cloud and the image information, and fuses the laser
point cloud in a target detection frame of the industrial camera for obstacle
recognition with the feature semantic data structure in the obstacle type recognition
module, so that the laser point cloud contains image semantic information and
obstacle pose information, and then the semantic laser with feature information is
obtained.
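The coordinate conversion step can be illustrated by projecting a planar laser point into the camera image; a point whose pixel lands inside a target detection frame then inherits that frame's semantic label, which is the tight-coupling idea described above. The extrinsics (R, t) and intrinsics K below are illustrative assumptions, not calibration values from the patent.

```python
import numpy as np

def laser_point_to_pixel(point_xy, R, t, K):
    """Project a 2D laser point (laser frame, planar scan at z = 0)
    into pixel coordinates.

    R (3x3) and t (3,) are assumed laser-to-camera extrinsics;
    K (3x3) is the camera intrinsic matrix.
    """
    p_cam = R @ np.array([point_xy[0], point_xy[1], 0.0]) + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]  # perspective division -> (u, v) in pixels
```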
The mobile robot motion planning module divides the fan-shaped scanning range of
the laser radar into three layers according to the distance between the obstacle and the
robot body, the layer closest to the robot body is a dangerous range, the farthest layer
is a safe range, and the remaining part is a deceleration range.
The mobile robot motion planning module issues different obstacle avoidance action
instructions according to the mapping state and the navigation state of the robot body
and the obstacle information.
The above one or more technical solutions have the following beneficial effects:
1. The data of the laser radar, the industrial camera and the ultrasonic sensor are tightly coupled, and the image feature information is assigned to the radar point cloud via the deep learning algorithm to form semantic laser, so that the laser point cloud carries not only attitude information but also obstacle type, action range and other information. This overcomes the single-modality nature of traditional obstacle avoidance information, enables the mobile robot to recognize obstacle features in an unknown dynamic environment, and greatly improves the flexibility of the entire obstacle avoidance system.
2. The scanning range of the laser point cloud is divided into three obstacle avoidance levels. Different types of obstacles and different obstacle avoidance levels trigger different obstacle avoidance actions in the mapping state and the navigation state of the mobile robot, so that the obstacle avoidance actions are safer and more reliable, the process is smoother, and the inertia problem of the robot when performing operations is solved.
3. The mobile robot has the ability to distinguish static from dynamic objects and performs different obstacle avoidance actions according to their motion characteristics, thereby improving its working efficiency while ensuring safety, and giving it the ability to adapt to different working scenarios.
Brief Description of the Drawings
The drawings constituting a part of the present invention are used for providing a
further understanding of the present invention. The exemplary embodiments of the
present invention and the descriptions thereof are used for explaining the present
invention, but do not constitute improper limitations of the present invention.
Fig. 1 is a constitutional diagram of a semantic laser-based multilevel obstacle
avoidance system for a mobile robot provided by one or more embodiments of the
present invention;
Fig. 2 is a schematic diagram of an AGV multi-sensor installation layout provided by
one or more embodiments of the present invention;
Fig. 3 is a schematic diagram of an AGV multi-sensor fusion obstacle avoidance
range provided by one or more embodiments of the present invention;
Fig. 4 is a schematic diagram of semantic laser in a hall scenario provided by one or
more embodiments of the present invention;
Fig. 5 is a schematic diagram of multilevel obstacle avoidance range division of a
mobile robot provided by one or more embodiments of the present invention;
Fig. 6 is an overall flow diagram of a semantic laser-based multilevel obstacle
avoidance method for a mobile robot provided by one or more embodiments of the
present invention;
Fig. 7 is a sub-flow diagram of a semantic laser-based multilevel obstacle avoidance
method for a mobile robot in a mapping state provided by one or more embodiments
of the present invention;
Fig. 8 is a sub-flow diagram of a semantic laser-based multilevel obstacle avoidance
method for a mobile robot in a navigation state provided by one or more embodiments
of the present invention;
Fig. 9 is a sub-flow diagram of a semantic laser-based multilevel obstacle avoidance
method of a static obstacle for a mobile robot in a navigation state provided by one or
more embodiments of the present invention; and
Fig. 10 is a sub-flow diagram of a semantic laser-based multilevel obstacle avoidance
method of a dynamic obstacle for a mobile robot in a navigation state provided by one
or more embodiments of the present invention.
In the figures: 1. 2D laser radar; 2. industrial camera; 3. ultrasonic sensor.
Detailed Description of the Embodiments
The following detailed descriptions are all exemplary and are intended to provide
further descriptions of the present invention. Unless otherwise indicated, all technical
and scientific terms used herein have the same meaning as commonly understood by
those of ordinary skill in the technical field to which the present invention belongs.
As described in the background art, in traditional robot obstacle avoidance methods,
the ultrasonic and laser radar obstacle avoidance methods are to calculate the distance
information of an obstacle according to a round-trip time between an acoustic wave
and a laser pulse from a generator to a measured target, and the visual and infrared
sensor obstacle avoidance methods are to mostly calculate the distance information of
the obstacle by using the principle of triangular ranging. The above traditional
obstacle avoidance methods of single sensor or multi-sensor fusion has excellent
autonomous navigation performance in static scenarios, however, when the
complexity of the working scenario is relatively high, the traditional method cannot
identify the types and characteristics of the obstacles, when the sampling frequency of
the sensor is relatively low, the mobile robot generates a relatively large deviation
with the actual pose of the obstacle when estimating the pose of the obstacle, which is
likely to generate a safety problem, and at the same time, when the mobile robot is in
Description
a navigation state, it is unable to make corresponding obstacle avoidance adjustment
according to the characteristic information of the obstacle, resulting in reduced
working efficiency and safety.
Embodiment 1:
As shown in Fig. 1, a semantic laser-based multilevel obstacle avoidance system for a
mobile robot includes a multi-sensor fusion feature information extraction module, an
obstacle type recognition module, a coupling information processing module, and a
mobile robot motion planning module.
As shown in Figs. 2-3, this embodiment is described in combination with an automated guided vehicle (AGV). In the AGV, the multi-sensor fusion feature information extraction module of this embodiment includes two 2D laser radars, four monocular industrial cameras and four ultrasonic sensors, and Fig. 2 shows the installation positions of the main components in the entire module.
The 2D laser radar scans obstacle information at a high frequency within a 270° range whose circle center is its installation position, and returns obstacle angles and distance point cloud coordinates based on its own coordinate system; the maximum detection distance of an obstacle is 5 m.
The monocular industrial camera returns feature images existing in the visual field
within the range of 90° where its installation position is taken as the circle center. In
order to fully obtain semantic laser information, the two monocular industrial cameras
are installed with the bottoms aligned at 60°. In order to simplify the information
processing of the subsequent modules, the 2D laser radar is installed in the same
direction as the monocular industrial camera.
The ultrasonic sensor is installed at a position 20 cm away from the scanning plane of the laser radar. Its dead zone safety information is determined from the obstacle distance value returned in real time on the basis of the TOF principle, and the ultrasonic sensor serves as a soft mechanical safety protection device.
Fig. 3 is a schematic diagram of the entire AGV multi-sensor fusion obstacle
avoidance range, wherein the thin dotted line represents the ranging range of the 2D
laser radar 1, the double thin dotted line represents the ranging range of the industrial
camera 2, and the thin solid line represents the ranging range of the ultrasonic sensor
3. The AGV obstacle type recognition module first collects pictures of shelves, equipment and workers in an industrial scenario, then completes the labeling of a data set using LabelImg to obtain a data set for the AGV working scenario. It trains on this data set in a PyTorch environment on a workstation using the convolutional neural network YOLO v5 algorithm in deep learning to obtain the weights of the forward propagation convolutional layers, transplants the forward propagation function with these weights onto the AGV industrial personal computer, and meanwhile creates a feature semantic data structure for the different obstacles in the scenario; as shown in Table 1, the structure contains the types, dynamic characteristics and possible action ranges of the obstacles.
Table 1: Feature semantic data structure in a specified working scenario

  Obstacle type    | Identification mark bit   | Dynamic characteristic   | Possible action range
  Static obstacle  | Type classification value | -                        | -
  Dynamic obstacle | Type classification value | Motion possibility value | Inherent and random action ranges

In Table 1:
(1) The type classification value is a specific serial number according to the type of
the obstacle in the training data set in the scenario, and each serial number value
represents a type of obstacle and is not repeated.
(2) The motion possibility value is a probability value assigned to a dynamic obstacle in a motion state according to cognitive understanding; the larger the value, the greater the possibility that the obstacle is moving. Constraints need to be formulated in advance according to the different working scenarios of the AGV, and the value is non-zero.
(3) The inherent action range refers to the size of the working range of an elevator, rotating equipment or the like in a fixed area, while the random action range refers to motion ranges that are highly dynamic and cannot be specified, such as those of people.
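The Table 1 structure can be sketched as a small record type. The field names and example values below are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureSemantic:
    """Sketch of the Table 1 feature semantic data structure.
    Field names are illustrative, not taken from the patent."""
    type_id: int                        # type classification value, unique per class
    is_dynamic: bool                    # identification mark bit: static vs dynamic
    motion_possibility: float = 0.0     # non-zero only for dynamic obstacles
    action_range: Optional[str] = None  # "inherent" (fixed area) or "random"

# Example: a pedestrian is dynamic with a random action range.
person = FeatureSemantic(type_id=3, is_dynamic=True,
                         motion_possibility=0.9, action_range="random")
```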
The coupling information processing module of the AGV is a method that runs on an industrial personal computer. It first performs joint calibration on the 2D laser radar and the two monocular industrial cameras to determine the internal parameters of the sensors, then completes a clustering operation on the point cloud information returned by the 2D laser radar using the DBSCAN algorithm to improve the readability and correctness of the obstacle information. It next converts the image coordinate system of the monocular industrial camera into the scanning coordinate system of the 2D laser radar to complete the position matching and tight coupling of the laser point cloud and the image information, and fuses the laser point cloud inside the target detection frame of the industrial camera with the feature semantic data structure in the obstacle type recognition module, so that the laser point cloud contains image semantic information and obstacle pose information; the semantic laser with richer feature information is thus obtained.
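The clustering step can be illustrated with a minimal pure-Python DBSCAN over 2D laser points. The eps and min_pts parameters are assumed example values, not ones from the patent.

```python
import math

def dbscan_2d(points, eps=0.3, min_pts=3):
    """Minimal DBSCAN over 2D points; returns one label per point,
    with -1 marking noise. eps (metres) and min_pts are illustrative."""
    labels = [None] * len(points)  # None = unvisited

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1           # provisionally noise
            continue
        cluster += 1                 # i is a core point: start a cluster
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:      # border point previously marked noise
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:   # j is also a core point: expand further
                seeds.extend(jn)
    return labels
```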
Fig. 4 shows a scan image of the AGV in a hall environment, where the environment contains the data information of pedestrians, flower beds and walls trained in the data set; for this frame of laser point cloud in the figure, round frames represent pedestrians and boxes represent flower beds.
The motion planning method of the AGV motion planning module divides the fan-shaped scanning range of the laser radar into three layers according to the distance from the AGV. As shown in Fig. 5, the scanning range is divided into a safe range, a deceleration range and a dangerous range, with corresponding data message values of 0, 1 and 2; that is, when the lower computer reads a value of 0 from the CAN communication, there is an obstacle in the safe range at that moment, and the other values are handled in the same way.
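The range-to-message mapping described above can be sketched as a small lookup; the dictionary names are illustrative, and no actual CAN framing is implemented.

```python
# Data message values exchanged with the lower computer over CAN,
# per the mapping described above: safe=0, deceleration=1, dangerous=2.
ZONE_CODE = {"safe": 0, "deceleration": 1, "dangerous": 2}
CODE_ZONE = {v: k for k, v in ZONE_CODE.items()}

def decode_zone(msg_value):
    """Translate a received message value back to its range name."""
    return CODE_ZONE.get(msg_value, "unknown")
```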
Embodiment 2:
This embodiment of a semantic laser-based multilevel obstacle avoidance method for a mobile robot is also described in combination with an automated guided vehicle (AGV). Fig. 6 is the overall flow diagram of multilevel obstacle avoidance across the four modules of the AGV, and Figs. 7, 8, 9 and 10 are its sub-flows. The overall flow includes the following steps:
according to the information obtained by a laser radar and an industrial camera on a
robot body, causing the robot body to recognize feature information of an obstacle
after coupling;
according to the distance between the obstacle and the robot body, dividing the
fan-shaped scanning range of the laser radar into three layers, wherein the layer
closest to the robot body is a dangerous range, the farthest layer is a safe range, and
the remaining part is a deceleration range; and
judging whether the robot body is in a mapping state, and generating corresponding
obstacle avoidance actions in the mapping state and a navigation state of the robot
body according to the feature information of the obstacle.
As shown in Fig. 7, when the AGV is in the mapping state: if an obstacle exists in
the outermost two layers of the scanning range and the AGV judges via semantic
laser that it is a static obstacle such as a wall, a flower bed, a shelf or an
upright post, the AGV moves normally; otherwise the AGV enters the navigation
state. If an obstacle exists in the innermost layer of the scanning range or
triggers the ultrasonic sensor with a fixed threshold, the AGV stops acting and
issues an alarm to remind the control personnel to adjust the pose of the AGV so
that the mapping work can continue.
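The mapping-state branch can be sketched as a small decision function. The zone encoding (0 safe, 1 deceleration, 2 dangerous) follows the CAN message values described earlier; the set of static obstacle types and the action names are illustrative assumptions based on the examples in the text.

```python
# Assumed static obstacle classes, taken from the examples named in the text.
STATIC_TYPES = {"wall", "flower bed", "shelf", "upright post"}

def mapping_state_action(zone, obstacle_type, ultrasonic_triggered):
    """Sketch of the Fig. 7 logic: zone 0/1/2 = safe/deceleration/dangerous."""
    if zone == 2 or ultrasonic_triggered:
        return "stop_and_alarm"          # operator must adjust the AGV pose
    if zone in (0, 1):
        if obstacle_type in STATIC_TYPES:
            return "move_normally"       # semantic laser confirms a static obstacle
        return "enter_navigation_state"  # non-static obstacle: leave mapping
    return "move_normally"               # nothing in the scanning range
```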
As shown in Figs. 8 and 9, when the AGV is in the navigation state and judges via
semantic laser that the obstacle is static, for example walls and upright posts in
a factory environment: when the obstacle exists in the outermost layer of the
scanning range, the AGV moves normally and issues an alarm; when the obstacle
exists in the middle layer of the scanning range, the AGV decelerates
according to the pose of the obstacle and issues an alarm; and when the obstacle exists
in the innermost layer or triggers the ultrasonic sensor with the fixed threshold, the
AGV stops acting and re-plans the path through the DWA algorithm.
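The static obstacle handling in the navigation state can likewise be written as one decision function; the zone encoding follows the earlier CAN message convention, and the action names are assumptions for illustration.

```python
def nav_static_action(zone, ultrasonic_triggered):
    """Sketch of the Figs. 8/9 logic for a static obstacle in navigation state."""
    if zone == 2 or ultrasonic_triggered:
        return "stop_and_replan_dwa"     # innermost layer: re-plan via DWA
    if zone == 1:
        return "decelerate_and_alarm"    # middle layer: slow down per obstacle pose
    if zone == 0:
        return "move_normally_and_alarm" # outermost layer: proceed, warn
    return "move_normally"               # no obstacle in the scanning range
```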
As shown in Fig. 10, when the AGV is in the navigation state and judges that there
is a dynamic obstacle within the semantic laser range: when the obstacle is in the
two outermost layers of the scanning range and the AGV judges the dynamic
characteristic of the obstacle to be a fixed-range action, for example an elevator
or a door, it treats all actions within the action range as static obstacles and
re-enters the static obstacle motion planning process to complete the judgment;
if the AGV judges the dynamic characteristic of the obstacle to be a random-range
action, for example a person, the AGV judges the motion characteristic of the
obstacle by capturing multiple frames of semantic laser. If the obstacle is moving
away from the AGV, the AGV does not decelerate; if the obstacle is in a static
state, the AGV returns to the static obstacle motion planning for judgment; and
when the obstacle exists in the innermost layer or triggers the ultrasonic sensor
with the fixed threshold, the AGV stops acting and re-plans the path through the
DWA algorithm.
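The multi-frame judgment for random-range dynamic obstacles can be sketched as comparing the obstacle distance across consecutive semantic laser frames to classify it as receding, approaching, or static. The 0.05 m noise threshold is an assumed value, not a figure from the patent.

```python
def motion_characteristic(distances, eps=0.05):
    """Classify a tracked obstacle from its range in consecutive frames.

    distances: obstacle range (m) in consecutive semantic laser frames,
    oldest first. eps is an assumed noise tolerance.
    """
    delta = distances[-1] - distances[0]
    if delta > eps:
        return "receding"      # moving away: the AGV does not decelerate
    if delta < -eps:
        return "approaching"   # moving closer: handled with the zone logic
    return "static"            # hand back to static obstacle motion planning
```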
The data of the laser radar, the industrial camera and the ultrasonic sensor are
tightly coupled, and the semantic laser is formed by assigning the image feature
information to the radar point cloud via the deep learning algorithm, so that the
laser point cloud carries not only attitude information but also obstacle type,
action range and other information. This changes the single-source nature of
traditional obstacle avoidance information, enables the mobile robot to recognize
the features of obstacles in an unknown dynamic environment, and greatly improves
the flexibility of the entire obstacle avoidance system.
The scanning range of the laser point cloud is divided into three levels of obstacle
avoidance. Different types of obstacles and different levels of obstacle avoidance
ranges have different obstacle avoidance actions in the mapping state and the
navigation state of the mobile robot, such that the obstacle avoidance actions of the
mobile robot are safer and more reliable, the process is smoother, and the inertia
problem of the robot when performing operations is solved.
The mobile robot has the ability to distinguish static and dynamic objects, and
performs different obstacle avoidance actions according to their motion
characteristics, thereby improving the working efficiency of the mobile robot
while ensuring safety, and giving the mobile robot the ability to adapt to
different working scenarios.
Although the specific embodiments of the present invention are described above in
combination with the drawings, the protection scope of the present invention is not
limited thereto. Those skilled in the art to which the present invention belongs should
understand that, on the basis of the technical solutions of the present invention,
various modifications or deformations that can be made by those skilled in the art
without any creative effort still fall within the protection scope of the present
invention.
Claims (10)
1. A semantic laser-based multilevel obstacle avoidance method for a mobile robot,
comprising the following steps:
according to information obtained by a laser radar and/or an industrial camera on a
robot body, obtaining feature information of an obstacle through deep learning and
coordinate conversion;
according to the distance between the obstacle and the robot body, dividing a scanning
range of the laser radar into three layers, wherein an innermost layer closest to the
robot body is a dangerous range, an outermost layer furthermost from the robot body
is a safe range, and the remaining part is a middle layer forming a deceleration range,
wherein the robot body decelerates in response to an obstacle being detected in the
deceleration range; and
judging whether the robot body is in a mapping state, and generating corresponding
obstacle avoidance actions in the mapping state and a navigation state of the robot
body according to the feature information of the obstacle.
2. The semantic laser-based multilevel obstacle avoidance method for the mobile
robot according to claim 1, wherein when the robot body is in the mapping state, the
operation of generating the corresponding obstacle avoidance action according to the
feature information of the obstacle comprises:
when the robot body is in the mapping state, if the obstacle exists in the outermost
layer or the middle layer of the scanning range, the robot body judges the type of the
obstacle via semantic laser, or otherwise the robot body enters the navigation state;
and
if the obstacle exists in the innermost layer of the scanning range or triggers an
ultrasonic sensor with a fixed threshold, the robot body stops acting and issues an
alarm.
3. The semantic laser-based multilevel obstacle avoidance method for the mobile
robot according to claim 1, wherein when the robot body is in the navigation state, the
operation of generating the corresponding obstacle avoidance action according to the
feature information of the obstacle comprises:
when the robot body is in the navigation state and judges that the obstacle is static via
semantic laser, and when the obstacle exists in the outermost layer of the scanning
range, the robot body moves normally and issues an alarm;
when the obstacle exists in the middle layer of the scanning range, the robot body
decelerates according to the pose of the obstacle and issues an alarm; and
when the obstacle exists in the innermost layer or triggers an ultrasonic sensor with a
fixed threshold, the robot body stops acting and re-plans a path through a dynamic
window approach (DWA) algorithm.
4. The semantic laser-based multilevel obstacle avoidance method for the mobile
robot according to claim 1, wherein when the robot body is in the navigation state, the
operation of generating the corresponding obstacle avoidance action according to the
feature information of the obstacle further comprises:
when the robot body is in the navigation state and judges that there is a dynamic
obstacle within the semantic laser range, and when the obstacle is in the outermost
layer or the middle layer of scanning range, the robot body judges the dynamic
characteristic of the obstacle as a fixed range action, simulates all the actions within
the action range as static obstacles, and enters a static obstacle motion planning
process again to complete the judgment; and if the robot body judges the dynamic
characteristic of the obstacle as a random range action, the robot body judges the
motion characteristic of the obstacle by capturing multiple frames of semantic laser;
and
if the obstacle is in the outermost layer, the robot body moves normally, and if the
obstacle is in a static state, the robot body returns to the static obstacle motion
planning for judgment; and when the obstacle exists in the innermost layer or triggers
an ultrasonic sensor with a fixed threshold, the robot body stops acting and re-plans a
path through a dynamic window approach (DWA) algorithm.
5. A system based on the method according to claim 1, comprising a multi-sensor
fusion feature information extraction module, an obstacle type recognition module, a
coupling information processing module and a mobile robot motion planning module,
which are installed on a robot body;
the multi-sensor fusion feature information extraction module extracts radar point
cloud information and image information by using a laser radar and an industrial
camera,
the obstacle type recognition module obtains feature information of an obstacle
through deep learning, and
the coupling information processing module obtains semantic laser through
coordinate conversion, so that the laser radar recognizes the feature information of the
obstacle; and
the mobile robot motion planning module divides the scanning range of the laser radar
into three layers according to the distance between the obstacle and the robot body,
judges whether the robot body is in a mapping state, and generates corresponding
obstacle avoidance actions in the mapping state and a navigation state of the robot
body according to the feature information of the obstacle.
6. The semantic laser-based multilevel obstacle avoidance system for the mobile robot
according to claim 5, wherein the multi-sensor fusion feature information extraction
module comprises a laser radar, an industrial camera and an ultrasonic sensor, the
laser radar is installed in the same direction as the monocular industrial camera, and
the ultrasonic sensor is installed within the scanning range of the laser radar at a
certain distance from the laser radar.
7. The semantic laser-based multilevel obstacle avoidance system for the mobile robot
according to claim 5, wherein the laser radar scans the information of the obstacle
within a fixed angle range of its installation position, and returns obstacle angle
information and distance point cloud coordinate information based on its own
coordinate system, the industrial camera returns a feature image in the visual field,
and the ultrasonic sensor returns a distance value in real time, so as to determine the
dead zone safety information of the laser radar and the industrial camera.
8. The semantic laser-based multilevel obstacle avoidance system for the mobile robot
according to claim 5, wherein the obstacle type recognition module sorts the visual
information of feature objects in the scenario where the robot body is located into a
data set in the scenario, performs training processing on the data set by using deep
learning to obtain an algorithm weight, and creates a feature semantic data structure
according to different obstacles recognized, wherein the data structure contains the
types, dynamic characteristics and possible action ranges of the obstacles.
9. The semantic laser-based multilevel obstacle avoidance system for the mobile robot
according to claim 5, wherein the coupling information processing module clusters
point cloud information returned by the laser radar, converts an image coordinate
system of the industrial camera into a scanning coordinate system of the laser radar,
completes the position matching and tight coupling of the laser point cloud and the
image information, and fuses the laser point cloud in a target detection frame of the
industrial camera for obstacle recognition with the feature semantic data structure in
the obstacle type recognition module, so that the laser point cloud contains image
semantic information and obstacle pose information, and then the semantic laser with
feature information is obtained.
10. The semantic laser-based multilevel obstacle avoidance system for the mobile
robot according to claim 5, wherein the mobile robot motion planning module divides
the fan-shaped scanning range of the laser radar into three layers according to the
distance between the obstacle and the robot body, the innermost layer closest to the
robot body is a dangerous range, the outermost layer is a safe range, and the middle
layer is a deceleration range.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2021100984763 | 2021-01-25 | ||
CN202110098476.3A CN112859873B (en) | 2021-01-25 | 2021-01-25 | Semantic laser-based mobile robot multi-stage obstacle avoidance system and method |
Publications (3)
Publication Number | Publication Date |
---|---|
AU2021266203A1 AU2021266203A1 (en) | 2022-08-11 |
AU2021266203A9 AU2021266203A9 (en) | 2022-10-27 |
AU2021266203B2 true AU2021266203B2 (en) | 2023-01-19 |
Family
ID=76008770
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2021266203A Active AU2021266203B2 (en) | 2021-01-25 | 2021-11-09 | Semantic laser-based multilevel obstacle avoidance system and method for mobile robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112859873B (en) |
AU (1) | AU2021266203B2 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113589829A (en) * | 2021-09-29 | 2021-11-02 | 江苏天策机器人科技有限公司 | Multi-sensor area obstacle avoidance method for mobile robot |
CN114397638A (en) * | 2022-01-22 | 2022-04-26 | 深圳市神州云海智能科技有限公司 | Method and system for filtering dynamic data in laser radar data |
CN114571450A (en) * | 2022-02-23 | 2022-06-03 | 达闼机器人股份有限公司 | Robot control method, device and storage medium |
CN114815821B (en) * | 2022-04-19 | 2022-12-09 | 山东亚历山大智能科技有限公司 | Indoor self-adaptive panoramic obstacle avoidance method and system based on multi-line laser radar |
CN114994634B (en) * | 2022-05-18 | 2024-05-28 | 盐城中科高通量计算研究院有限公司 | Patrol car laser radar probe algorithm |
CN115185285B (en) * | 2022-09-06 | 2022-12-27 | 深圳市信诚创新技术有限公司 | Automatic obstacle avoidance method, device and equipment for dust collection robot and storage medium |
CN116466723A (en) * | 2023-04-26 | 2023-07-21 | 曲阜师范大学 | Obstacle avoidance method, system and equipment for killing robot |
CN117697760B (en) * | 2024-01-03 | 2024-05-28 | 佛山科学技术学院 | Robot safety motion control method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108663681A (en) * | 2018-05-16 | 2018-10-16 | 华南理工大学 | Mobile Robotics Navigation method based on binocular camera Yu two-dimensional laser radar |
CN108710376A (en) * | 2018-06-15 | 2018-10-26 | 哈尔滨工业大学 | The mobile chassis of SLAM and avoidance based on Multi-sensor Fusion |
CN110874102A (en) * | 2020-01-16 | 2020-03-10 | 天津联汇智造科技有限公司 | Virtual safety protection area protection system and method for mobile robot |
CN111105495A (en) * | 2019-11-26 | 2020-05-05 | 四川阿泰因机器人智能装备有限公司 | Laser radar mapping method and system fusing visual semantic information |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8355818B2 (en) * | 2009-09-03 | 2013-01-15 | Battelle Energy Alliance, Llc | Robots, systems, and methods for hazard evaluation and visualization |
CN106774334A (en) * | 2016-12-30 | 2017-05-31 | 云南昆船智能装备有限公司 | The las er-guidance AGV navigation locating methods and device of a kind of many laser scanners |
CN108803588A (en) * | 2017-04-28 | 2018-11-13 | 深圳乐动机器人有限公司 | The control system of robot |
CN107966989A (en) * | 2017-12-25 | 2018-04-27 | 北京工业大学 | A kind of robot autonomous navigation system |
CN110833357A (en) * | 2018-08-15 | 2020-02-25 | 格力电器(武汉)有限公司 | Obstacle identification method and device |
CN110147106A (en) * | 2019-05-29 | 2019-08-20 | 福建(泉州)哈工大工程技术研究院 | Has the intelligent Mobile Service robot of laser and vision fusion obstacle avoidance system |
CN110673614A (en) * | 2019-10-25 | 2020-01-10 | 湖南工程学院 | Mapping system and mapping method of small robot group based on cloud server |
CN111461245B (en) * | 2020-04-09 | 2022-11-04 | 武汉大学 | Wheeled robot semantic mapping method and system fusing point cloud and image |
CN111880525A (en) * | 2020-06-15 | 2020-11-03 | 北京旷视机器人技术有限公司 | Robot obstacle avoidance method and device, electronic equipment and readable storage medium |
2021
- 2021-01-25: CN application CN202110098476.3A, patent CN112859873B, active
- 2021-11-09: AU application AU2021266203A, patent AU2021266203B2, active
Non-Patent Citations (3)
Title |
---|
GAO, M. et al., ‘An Obstacle Detection and Avoidance System for Mobile Robot with a Laser Radar’, 2019 IEEE 16th International Conference on Networking, Sensing and Control (ICNSC), 09-11 May 2019, Banff, AB, Canada. * |
LI, Y. et al., 'Vision-based Obstacle Avoidance Algorithm for Mobile Robot', 2020 Chinese Automation Congress (CAC), 06-08 November 2020, Shanghai, China. * |
WEI, P. et al., ‘LiDAR and Camera Detection Fusion in a Real-Time Industrial Multi-Sensor Collision Avoidance System’, Electronics 2018, Vol. 7, No. 84, published on 30 May 2018. * |
Also Published As
Publication number | Publication date |
---|---|
AU2021266203A9 (en) | 2022-10-27 |
AU2021266203A1 (en) | 2022-08-11 |
CN112859873B (en) | 2022-11-25 |
CN112859873A (en) | 2021-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2021266203B2 (en) | Semantic laser-based multilevel obstacle avoidance system and method for mobile robot | |
CN111693050B (en) | Indoor medium and large robot navigation method based on building information model | |
Sato et al. | Multilayer lidar-based pedestrian tracking in urban environments | |
CN201918032U (en) | Low-altitude flying anti-collision device of aircraft | |
CN105892489A (en) | Multi-sensor fusion-based autonomous obstacle avoidance unmanned aerial vehicle system and control method | |
CN110737271B (en) | Autonomous cruising system and method for water surface robot | |
CN105629970A (en) | Robot positioning obstacle-avoiding method based on supersonic wave | |
CN114474061A (en) | Robot multi-sensor fusion positioning navigation system and method based on cloud service | |
CN111949032A (en) | 3D obstacle avoidance navigation system and method based on reinforcement learning | |
An et al. | Development of mobile robot SLAM based on ROS | |
CN104133482A (en) | Unmanned-plane fuzzy-control flight method | |
CN110844402A (en) | Garbage bin system is summoned to intelligence | |
Kenk et al. | Human-aware Robot Navigation in Logistics Warehouses. | |
Zeng et al. | Mobile robot exploration based on rapidly-exploring random trees and dynamic window approach | |
CN111026121A (en) | Multi-level three-dimensional obstacle avoidance control method and device for intelligent sweeper | |
CN113467483B (en) | Local path planning method and device based on space-time grid map in dynamic environment | |
Kannan et al. | Autonomous drone delivery to your door and yard | |
Wang et al. | Research on autonomous planning method based on improved quantum Particle Swarm Optimization for Autonomous Underwater Vehicle | |
Kondaxakis et al. | Robot–robot gesturing for anchoring representations | |
Gu et al. | Range sensor overview and blind-zone reduction of autonomous vehicle shuttles | |
Butt et al. | A review of perception sensors, techniques, and hardware architectures for autonomous low-altitude UAVs in non-cooperative local obstacle avoidance | |
Yu et al. | Indoor Localization Based on Fusion of AprilTag and Adaptive Monte Carlo | |
US11762390B1 (en) | Autonomous machine safety management in a dynamic environment | |
CN111352128B (en) | Multi-sensor fusion sensing method and system based on fusion point cloud | |
Yee et al. | Autonomous mobile robot navigation using 2D LiDAR and inclined laser rangefinder to avoid a lower object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
SREP | Specification republished | ||
FGA | Letters patent sealed or granted (standard patent) |