CN112327329B - Obstacle avoidance method, target device, and storage medium - Google Patents

Obstacle avoidance method, target device, and storage medium

Info

Publication number
CN112327329B
CN112327329B CN202011337318.0A CN202011337318A
Authority
CN
China
Prior art keywords
point cloud
cloud data
coordinate system
visual point
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011337318.0A
Other languages
Chinese (zh)
Other versions
CN112327329A (en)
Inventor
邵唱
许义恒
赵家阳
张弥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Sineva Intelligent Technology Co ltd
Original Assignee
Zhejiang Sineva Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Sineva Intelligent Technology Co ltd filed Critical Zhejiang Sineva Intelligent Technology Co ltd
Priority to CN202011337318.0A priority Critical patent/CN112327329B/en
Publication of CN112327329A publication Critical patent/CN112327329A/en
Application granted granted Critical
Publication of CN112327329B publication Critical patent/CN112327329B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/933Lidar systems specially adapted for specific applications for anti-collision purposes of aircraft or spacecraft
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses an obstacle avoidance method, a target device, and a storage medium, wherein the method is applied to the target device and comprises the following steps: fusing laser point cloud data and visual point cloud data of each obstacle to obtain point cloud data; determining the position of each obstacle according to the point cloud data; and carrying out an obstacle avoidance operation based on the positions of the obstacles and the preset track of the target device. The accuracy of the point cloud data is improved, the determined positions of the obstacles are more accurate, and obstacle avoidance is more efficient and more reasonable.

Description

Obstacle avoidance method, target device, and storage medium
Technical Field
The present invention relates to the field of automatic navigation technologies, and in particular, to an obstacle avoidance method, a target device, and a storage medium.
Background
Obstacle avoidance refers to the process in which a moving object performs various avoidance actions when objects in the environment obstruct its direction of motion. Obstacle avoidance can be applied to robots, wheeled vehicles, quadrotors, and other devices.
Obstacle detection and real-time obstacle avoidance are among the core and difficult problems in the navigation of devices such as robots. In the related art, common obstacle avoidance sensors include laser radar, vision cameras, ultrasonic sensors, and the like, and a local path planning algorithm such as a vector field histogram or a bug algorithm is then applied to carry out path planning.
However, the obstacle avoidance algorithm in the related art has poor accuracy in determining the position of the obstacle, so that the obstacle avoidance efficiency and the path planning efficiency are low, and the obstacle avoidance rationality is low.
Disclosure of Invention
The invention provides an obstacle avoidance method, target equipment and a storage medium, and aims to solve the problems that in the related art, equipment such as a robot and the like has low obstacle avoidance efficiency and low path planning efficiency in an obstacle avoidance process, and the obstacle avoidance reasonability is low.
In a first aspect, an embodiment of the present application provides an obstacle avoidance method, which is applied to a target device, and the method includes:
fusing laser point cloud data and visual point cloud data of each obstacle to obtain point cloud data;
determining the position of each obstacle according to the point cloud data;
and carrying out obstacle avoidance operation based on the positions of the obstacles and the preset track of the target equipment.
According to the embodiment, the laser point cloud data and the visual point cloud data of the obstacle are fused by utilizing the accuracy of the laser point cloud data and the consistency of the visual point cloud data, so that the accuracy of the fused point cloud is improved, and the fused point cloud data covers more accurate obstacle information. Therefore, the position of the obstacle determined by the fused point cloud data is more accurate, and the rationality and the efficiency of the obstacle avoidance operation are improved.
In some exemplary embodiments, the fusing the laser point cloud data and the visual point cloud data of each obstacle includes:
acquiring a plurality of groups of laser point cloud data and a plurality of groups of visual point cloud data in a target area in the process of moving the preset track;
each time a group of laser point cloud data and a group of visual point cloud data are acquired, fusing the acquired group of laser point cloud data and the acquired group of visual point cloud data;
wherein a time difference between a first time of acquiring the set of laser point cloud data and a second time of acquiring the set of visual point cloud data is less than a set time difference threshold.
In the embodiment, the acquisition time difference of the group of laser point cloud data and the group of visual point cloud data which participate in the fusion is smaller than the set time difference threshold, so that the information overlapping degree of the obstacles covered by the two groups of data is high, and the actual obstacle avoidance requirements are better met; therefore, a plurality of groups of fused point cloud data are obtained, and the information of the obstacle is more comprehensive.
In some exemplary embodiments, a set of laser point cloud data within a target region is acquired by:
acquiring initial laser point cloud data under a laser radar coordinate system through a laser radar;
and converting the initial laser point cloud data into laser point cloud data under a world coordinate system based on the initial motion state data of the target equipment.
In the embodiment, the directly acquired initial laser point cloud data under the laser radar coordinate system is converted into the laser point cloud data under the world coordinate system, so that the time and space errors of the laser point cloud and the visual point cloud are preliminarily reduced; the normal operation in the data fusion process is ensured.
In some exemplary embodiments, the converting the initial laser point cloud data into laser point cloud data in a world coordinate system based on the initial motion state data of the target device includes:
calculating the pose of the target equipment at the first moment according to the initial motion state data;
determining a first transformation matrix from the equipment center coordinate system to a world coordinate system at the first moment according to the pose of the target equipment at the first moment;
and converting the laser point cloud data into laser point cloud data under a world coordinate system based on the first transformation matrix and a second transformation matrix, wherein the second transformation matrix is a transformation matrix from a laser radar coordinate system to an equipment center coordinate system.
In the above embodiment, the initial laser point cloud data is converted into the laser point cloud data in the world coordinate system by using the motion state data, so that the spatial error of the laser point cloud data and the visual point cloud data is further reduced.
In some exemplary embodiments, a set of visual point cloud data within a target region is obtained by:
acquiring initial visual point cloud data under a visual sensor coordinate system through a visual camera;
and converting the initial visual point cloud data into visual point cloud data under a world coordinate system based on the initial motion state data of the target equipment.
In the embodiment, the directly acquired initial visual point cloud data under the visual camera coordinate system is converted into the visual point cloud data under the world coordinate system, so that the time and space errors of the laser point cloud and the visual point cloud are preliminarily reduced; the normal operation in the data fusion process is ensured.
In some exemplary embodiments, the converting the initial visual point cloud data into visual point cloud data in a world coordinate system based on the initial motion state data of the target device includes:
calculating the pose of the target equipment at the second moment according to the initial motion state data;
determining a third transformation matrix from the equipment center coordinate system to a world coordinate system at the second moment according to the pose of the target equipment at the second moment;
and converting the initial visual point cloud data into visual point cloud data under a world coordinate system by applying the third transformation matrix and a fourth transformation matrix, wherein the fourth transformation matrix is a transformation matrix from a visual sensor coordinate system to an equipment center coordinate system.
In the above embodiment, the initial visual point cloud data is converted into the visual point cloud data in the world coordinate system by using the motion state data, so that the spatial error of the laser point cloud data and the visual point cloud data is further reduced.
In some exemplary embodiments, the fusing the acquired set of laser point cloud data and the set of visual point cloud data includes:
screening the visual point cloud data by applying the laser point cloud data;
performing point cloud registration on the screened visual point cloud data by using the laser point cloud data to obtain a correction matrix;
correcting the visual point cloud data by applying the correction matrix to obtain corrected visual point cloud data;
and fusing the corrected visual point cloud data and the laser point cloud data.
According to the embodiment, the accuracy of the laser point cloud data is utilized to synchronously register the visual point cloud data in time and space, so that the error between the laser point cloud data and the visual point cloud data is further reduced, and the accuracy of the fused point cloud is improved.
In some exemplary embodiments, the avoiding an obstacle based on the position of each obstacle and the preset track of the target device includes:
aiming at any obstacle, judging whether the obstacle collides with the preset track or not according to the position of the obstacle, the current pose and the motion direction of the target equipment;
if so, generating an obstacle avoidance route according to the position of the obstacle and the preset track, and determining the offset distance between the obstacle avoidance route and the preset track;
if the offset distance is smaller than or equal to a set distance threshold, moving according to the obstacle avoidance route;
if the offset distance is larger than the set distance threshold, replanning the preset track according to the set planning requirement, and moving according to the replanned track.
According to the embodiment, different obstacle avoidance strategies are adopted according to different influence conditions of the obstacle on the preset track, and the rationality and the obstacle avoidance efficiency of obstacle avoidance are improved.
In a second aspect, an embodiment of the present application provides a target device, where the target device includes:
the data fusion module is used for fusing the laser point cloud data and the visual point cloud data of each obstacle to obtain point cloud data;
the obstacle position determining module is used for determining the position of each obstacle according to the point cloud data;
and the obstacle avoidance operation module is used for carrying out obstacle avoidance operation based on the positions of the obstacles and the preset track of the target equipment.
In a third aspect, an embodiment of the present application provides a target device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the methods when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, implement the steps of any of the above-described methods.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a schematic diagram of a static map provided by an embodiment of the present application;
FIG. 2 is a flow chart of converting initial laser point cloud data into laser point cloud data in a world coordinate system, which is suitable for use in the embodiment of the present application;
FIG. 3 is a flowchart illustrating an example of converting initial visual point cloud data into visual point cloud data in a world coordinate system according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a data fusion applicable to an embodiment of the present application;
FIG. 5 is a schematic diagram of a point cloud data synchronization and fusion applicable to the embodiment of the present application;
FIG. 6 is a schematic illustration of collision prediction and local planning as applied in an embodiment of the present application;
FIG. 7 is a schematic diagram of collision prediction and global planning as applied in an embodiment of the present application;
fig. 8 is a flowchart of an obstacle avoidance method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a target device provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of another target device provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, terms or nouns related to the application of the embodiments of the present application will be described.
(1) Point cloud: in reverse engineering, the set of point data on the product's outer surface obtained by a measuring instrument is called a point cloud. The number of points obtained with a three-dimensional coordinate measuring machine is small and the spacing between the points is large, so such a point cloud is called a sparse point cloud; the point clouds obtained with a three-dimensional laser scanner or a photographic scanner have larger and denser point quantities, and are called dense point clouds.
(2) Laser point cloud data: the point cloud data obtained by scanning the laser radar is called laser point cloud data.
(3) Visual point cloud data: the point cloud data acquired by shooting and scanning by using the camera is called visual point cloud data.
(4) Transformation matrix: a matrix expressing the transformation between different coordinate systems. Specifically:
Since the target device is moving, the transformation matrix from the device center coordinate system to the world coordinate system changes over time. Illustratively, B is an abbreviation of Body and denotes the device center coordinate system; W is an abbreviation of World and denotes the world coordinate system. Therefore, ${}^{W}T_{B1}$ denotes the transformation matrix from the device center coordinate system to the world coordinate system at the first moment (the first transformation matrix). Similarly, the transformation matrix from the device center coordinate system to the world coordinate system at the second moment is the third transformation matrix, denoted ${}^{W}T_{B2}$.
Specifically, ${}^{W}T_{B1}$ is calculated as follows:
Suppose the pose of the device center at the first moment is $({}^{W}x_{B1},\,{}^{W}y_{B1},\,{}^{W}z_{B1},\,{}^{W}\alpha_{B1},\,{}^{W}\beta_{B1},\,{}^{W}\theta_{B1})$; the rotation matrix from the device center coordinate system to the world coordinate system is then ${}^{W}R_{B1}$:
$${}^{W}R_{B1} = {}^{W}R_{zB1}\,{}^{W}R_{yB1}\,{}^{W}R_{xB1}$$
wherein:
$${}^{W}R_{zB1} = \begin{bmatrix} \cos{}^{W}\theta_{B1} & -\sin{}^{W}\theta_{B1} & 0 \\ \sin{}^{W}\theta_{B1} & \cos{}^{W}\theta_{B1} & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad
{}^{W}R_{yB1} = \begin{bmatrix} \cos{}^{W}\beta_{B1} & 0 & \sin{}^{W}\beta_{B1} \\ 0 & 1 & 0 \\ -\sin{}^{W}\beta_{B1} & 0 & \cos{}^{W}\beta_{B1} \end{bmatrix},\quad
{}^{W}R_{xB1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos{}^{W}\alpha_{B1} & -\sin{}^{W}\alpha_{B1} \\ 0 & \sin{}^{W}\alpha_{B1} & \cos{}^{W}\alpha_{B1} \end{bmatrix}$$
The translation matrix from the device center coordinate system to the world coordinate system is ${}^{W}M_{B1}$:
$${}^{W}M_{B1} = \begin{bmatrix} {}^{W}x_{B1} \\ {}^{W}y_{B1} \\ {}^{W}z_{B1} \end{bmatrix}$$
Combining the rotation matrix ${}^{W}R_{B1}$ and the translation matrix ${}^{W}M_{B1}$ finally gives the transformation matrix from the device center coordinate system to the world coordinate system at the first moment:
$${}^{W}T_{B1} = \begin{bmatrix} {}^{W}R_{B1} & {}^{W}M_{B1} \\ 0 & 1 \end{bmatrix}$$
Specifically, ${}^{W}T_{B2}$ is calculated as follows:
Suppose the pose of the device center at the second moment is $({}^{W}x_{B2},\,{}^{W}y_{B2},\,{}^{W}z_{B2},\,{}^{W}\alpha_{B2},\,{}^{W}\beta_{B2},\,{}^{W}\theta_{B2})$; the rotation matrix from the device center coordinate system to the world coordinate system is then ${}^{W}R_{B2}$:
$${}^{W}R_{B2} = {}^{W}R_{zB2}\,{}^{W}R_{yB2}\,{}^{W}R_{xB2}$$
wherein:
$${}^{W}R_{zB2} = \begin{bmatrix} \cos{}^{W}\theta_{B2} & -\sin{}^{W}\theta_{B2} & 0 \\ \sin{}^{W}\theta_{B2} & \cos{}^{W}\theta_{B2} & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad
{}^{W}R_{yB2} = \begin{bmatrix} \cos{}^{W}\beta_{B2} & 0 & \sin{}^{W}\beta_{B2} \\ 0 & 1 & 0 \\ -\sin{}^{W}\beta_{B2} & 0 & \cos{}^{W}\beta_{B2} \end{bmatrix},\quad
{}^{W}R_{xB2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos{}^{W}\alpha_{B2} & -\sin{}^{W}\alpha_{B2} \\ 0 & \sin{}^{W}\alpha_{B2} & \cos{}^{W}\alpha_{B2} \end{bmatrix}$$
The translation matrix from the device center coordinate system to the world coordinate system is ${}^{W}M_{B2}$:
$${}^{W}M_{B2} = \begin{bmatrix} {}^{W}x_{B2} \\ {}^{W}y_{B2} \\ {}^{W}z_{B2} \end{bmatrix}$$
Combining the rotation matrix ${}^{W}R_{B2}$ and the translation matrix ${}^{W}M_{B2}$ finally gives the transformation matrix from the device center coordinate system to the world coordinate system at the second moment:
$${}^{W}T_{B2} = \begin{bmatrix} {}^{W}R_{B2} & {}^{W}M_{B2} \\ 0 & 1 \end{bmatrix}$$
in addition, L is the abbreviation of Laser, and the transformation matrix from the Laser radar coordinate system to the equipment center coordinate system is called a second transformation matrix and is recorded as the second transformation matrix B T L
In particular, the method comprises the following steps of, B T L the calculation method of (c) is as follows:
the pose of the laser radar in the central coordinate system of the equipment is assumed to be ( B x LB y LB z LB α LB β LB θ L ) When the rotation matrix from the laser radar coordinate system to the equipment center coordinate system is B R L
B R LB R zL B R yL B R xL
Wherein:
Figure BDA0002797553260000083
Figure BDA0002797553260000084
Figure BDA0002797553260000085
the translation matrix from the laser radar coordinate system to the equipment center coordinate system is B M L
Figure BDA0002797553260000091
Comprehensive rotation matrix B R L And translation matrix D M L Finally, obtaining a transformation matrix from the laser radar coordinate system to the equipment center coordinate system:
Figure BDA0002797553260000092
In addition, C is an abbreviation of Camera; the transformation matrix from the vision sensor coordinate system to the equipment center coordinate system is called the fourth transformation matrix and is denoted ${}^{B}T_{C}$.
Specifically, ${}^{B}T_{C}$ is calculated as follows:
Suppose the pose of the vision sensor in the equipment center coordinate system is $({}^{B}x_{C},\,{}^{B}y_{C},\,{}^{B}z_{C},\,{}^{B}\alpha_{C},\,{}^{B}\beta_{C},\,{}^{B}\theta_{C})$; the rotation matrix from the vision sensor coordinate system to the equipment center coordinate system is then ${}^{B}R_{C}$:
$${}^{B}R_{C} = {}^{B}R_{zC}\,{}^{B}R_{yC}\,{}^{B}R_{xC}$$
wherein:
$${}^{B}R_{zC} = \begin{bmatrix} \cos{}^{B}\theta_{C} & -\sin{}^{B}\theta_{C} & 0 \\ \sin{}^{B}\theta_{C} & \cos{}^{B}\theta_{C} & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad
{}^{B}R_{yC} = \begin{bmatrix} \cos{}^{B}\beta_{C} & 0 & \sin{}^{B}\beta_{C} \\ 0 & 1 & 0 \\ -\sin{}^{B}\beta_{C} & 0 & \cos{}^{B}\beta_{C} \end{bmatrix},\quad
{}^{B}R_{xC} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos{}^{B}\alpha_{C} & -\sin{}^{B}\alpha_{C} \\ 0 & \sin{}^{B}\alpha_{C} & \cos{}^{B}\alpha_{C} \end{bmatrix}$$
The translation matrix from the vision sensor coordinate system to the equipment center coordinate system is ${}^{B}M_{C}$:
$${}^{B}M_{C} = \begin{bmatrix} {}^{B}x_{C} \\ {}^{B}y_{C} \\ {}^{B}z_{C} \end{bmatrix}$$
Combining the rotation matrix ${}^{B}R_{C}$ and the translation matrix ${}^{B}M_{C}$ finally gives the transformation matrix from the vision sensor coordinate system to the equipment center coordinate system:
$${}^{B}T_{C} = \begin{bmatrix} {}^{B}R_{C} & {}^{B}M_{C} \\ 0 & 1 \end{bmatrix}$$
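To make the construction of these transformation matrices concrete, the following is a minimal sketch in Python with NumPy of assembling a 6-DoF pose into a 4×4 homogeneous transformation matrix following the Rz·Ry·Rx convention described above. The function name and the example poses are illustrative assumptions, not part of the patent.

```python
import numpy as np

def pose_to_transform(x, y, z, alpha, beta, theta):
    """Build a 4x4 homogeneous transform from a 6-DoF pose, with R = Rz(theta) @ Ry(beta) @ Rx(alpha)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    ct, st = np.cos(theta), np.sin(theta)

    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx

    T = np.eye(4)
    T[:3, :3] = R          # rotation part
    T[:3, 3] = [x, y, z]   # translation part
    return T

# Example (illustrative values): first transformation matrix W_T_B1 from the device pose at t1,
# and second transformation matrix B_T_L from the (fixed) lidar mounting pose.
W_T_B1 = pose_to_transform(1.0, 2.0, 0.0, 0.0, 0.0, np.pi / 6)
B_T_L = pose_to_transform(0.2, 0.0, 0.5, 0.0, 0.0, 0.0)
```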
Next, taking a robot as an example of the target device, the flow of the entire obstacle avoidance method is described.
The laser radar and the vision camera are usually mounted on the robot with fixed relative positions. The initial laser point cloud data acquired by the laser radar are data in the laser radar coordinate system, and the initial visual point cloud data acquired by the vision camera are data in the vision sensor coordinate system. In order to achieve accurate fusion, the internal and external parameters of sensors such as the laser radar and the vision camera are calibrated first, which improves the accuracy of the acquired laser point cloud data and visual point cloud data. In addition, considering that the laser point cloud data and the visual point cloud data are acquired in different coordinate systems, converting them into the world coordinate system requires the transformation matrix of each sensor coordinate system relative to the equipment center coordinate system, namely the second transformation matrix from the laser radar coordinate system to the equipment center coordinate system and the fourth transformation matrix from the vision sensor coordinate system to the equipment center coordinate system.
A target area is selected, and the obstacle avoidance process within this area is studied. The target area contains inherent obstacles such as supermarkets, bus stations and the like, and the areas where these fixed obstacles are located are impassable areas. A map of the target area that contains the inherent obstacles is called a static map; the static map is a two-dimensional occupancy probability grid, and by performing binarization processing on it, passable areas and impassable areas can be distinguished, as shown in fig. 1, where 0 indicates passable and 1 indicates impassable. After the robot starts to travel, some objects that are not inherent obstacles may become obstacles of the robot, or other moving objects may become obstacles of the robot at any time. Therefore, while the robot travels along the preset track, the encountered obstacles change at any time; for example, a position that was originally 0 in the static map may become 1, indicating that an obstacle has appeared, and the static map may be updated accordingly.
In order to ensure the accuracy of data fusion, the time difference between the first time of acquiring a group of laser point cloud data and the second time of acquiring a group of visual point cloud data is less than a set time difference threshold value. The process of acquiring a set of laser point cloud data and visual point cloud data is explained, and the process of acquiring each set of laser point cloud data and visual point cloud data is the same.
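As an illustration of this time-difference constraint, the sketch below pairs each laser scan with the visual frame whose acquisition time is closest and keeps the pair only if the time difference is below the set threshold. The helper name and the threshold value are assumptions for illustration, not the patent's implementation.

```python
def pair_scans_by_time(laser_stamps, visual_stamps, max_dt=0.05):
    """Pair laser and visual acquisitions whose timestamps differ by less than max_dt (seconds)."""
    pairs = []
    for i, t1 in enumerate(laser_stamps):
        # index of the visual frame closest in time to this laser scan
        j = min(range(len(visual_stamps)), key=lambda k: abs(visual_stamps[k] - t1))
        if abs(visual_stamps[j] - t1) < max_dt:
            pairs.append((i, j))
    return pairs
```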
In one embodiment, a set of laser point cloud data within the target area is acquired by: acquiring initial laser point cloud data ${}^{L}p$ in the laser radar coordinate system through the laser radar; and converting the initial laser point cloud data into laser point cloud data in the world coordinate system based on the initial motion state data of the target equipment.
Referring to fig. 2, a specific conversion process of the initial laser point cloud data into the laser point cloud data in the world coordinate system includes:
s201, calculating the pose of the target equipment at the first moment according to the initial motion state data.
The laser point cloud data is converted by introducing robot motion state data in consideration of the asynchronous time of the laser radar and the vision camera.
The starting moment of the robot is taken as the initial moment $t_0$; its initial velocity in the world coordinate system is denoted ${}^{W}v_{B0}$, and its initial pose is denoted ${}^{W}P_{B0}$.
The laser point cloud data ${}^{L}p$ acquired at the first moment are expressed in the laser radar coordinate system, and the sampling of the laser point cloud data and of the robot motion state is asynchronous. Therefore, motion calibration is mainly performed on the laser point cloud obtained at the first moment $t_1$, and the pose of the robot at the moment $t_1$ is predicted in the following manner (a constant-velocity prediction over the interval from $t_0$ to $t_1$):
$${}^{W}P_{B1} = {}^{W}P_{B0} + {}^{W}v_{B0}\,(t_1 - t_0)$$
where ${}^{W}P_{B1}$ is the pose of the robot at the moment $t_1$.
S202, determining a first transformation matrix from the equipment center coordinate system to the world coordinate system at the first moment according to the pose of the target equipment at the first moment.
Since the robot is mobile, the transformation matrix from the device center coordinate system to the world coordinate system changes and is related to the pose of the robot at the current moment. Correspondingly, the first transformation matrix from the equipment center coordinate system to the world coordinate system at the first moment, ${}^{W}T_{B1}$, can be determined according to the pose of the robot at the first moment.
S203, converting the initial laser point cloud data into laser point cloud data under a world coordinate system based on the first transformation matrix and the second transformation matrix, wherein the second transformation matrix is a transformation matrix from a laser radar coordinate system to an equipment center coordinate system.
Specifically, the initial laser point cloud data acquired at the moment $t_1$ are converted into the world coordinate system in the following manner:
$${}^{W}p_{l} = {}^{W}T_{B1} \times {}^{B}T_{L} \times {}^{L}p_{i}$$
where ${}^{L}p_{i}$ is the $i$-th point in ${}^{L}p$, and the converted laser point cloud data in the corresponding world coordinate system are denoted ${}^{W}p_{l}$; ${}^{L}p$ denotes the laser point cloud data collectively, and every laser point is converted in this manner.
In the embodiment, the initial laser point cloud data in the laser radar coordinate system is converted into the laser point cloud data in the world coordinate system, so that the time and space errors of the laser point cloud and the visual point cloud are reduced preliminarily.
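A minimal sketch of this conversion, assuming the points are given as an N×3 array in the laser radar coordinate system and that W_T_B1 and B_T_L are 4×4 homogeneous matrices built as shown earlier (the names are illustrative):

```python
import numpy as np

def lidar_points_to_world(points_L, W_T_B1, B_T_L):
    """Convert lidar points (N x 3, lidar frame) into the world frame: W_p = W_T_B1 @ B_T_L @ L_p."""
    pts_h = np.hstack([points_L, np.ones((points_L.shape[0], 1))])  # homogeneous coordinates
    pts_w = (W_T_B1 @ B_T_L @ pts_h.T).T
    return pts_w[:, :3]
```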
In one embodiment, a set of visual point cloud data within a target region is obtained by: acquiring initial visual point cloud data under a visual sensor coordinate system through a visual camera; and converting the initial visual point cloud data into visual point cloud data under a world coordinate system based on the initial motion state data of the target equipment.
Referring to fig. 3, a specific conversion process of converting the initial visual point cloud data into the visual point cloud data under the world coordinate system includes:
and S301, calculating the pose of the target equipment at the second moment according to the initial motion state data.
The laser radar and the vision camera are not synchronized in time, so the vision point cloud data is converted by introducing the robot motion state data.
Since the robot is moving at all times starting from its initial position, the actual pose of the robot changes from $t_0$ to $t_2$, although the changed pose may not have been updated. In addition, in order to improve the pose prediction accuracy, the pose and velocity at $t_0$ are applied to predict the pose at the moment $t_2$ in the following manner:
$${}^{W}P_{B2} = {}^{W}P_{B0} + {}^{W}v_{B0}\,(t_2 - t_0)$$
where ${}^{W}P_{B2}$ is the pose of the robot at the moment $t_2$.
S302, determining a third transformation matrix from the equipment center coordinate system to the world coordinate system at the second moment according to the pose of the target equipment at the second moment.
The third transformation matrix is calculated in the same manner as the first transformation matrix, refer to S202.
And S303, converting the initial visual point cloud data into visual point cloud data under a world coordinate system by applying a third transformation matrix and a fourth transformation matrix, wherein the fourth transformation matrix is a transformation matrix from a visual sensor coordinate system to an equipment center coordinate system.
Illustratively, the initial visual point cloud data in the vision sensor coordinate system are obtained at the moment $t_2$. In order to improve the calculation efficiency, the initial visual point cloud data may be screened instead of transforming all of them. The screening rule may be: according to the measurement accuracy of the depth values of the vision sensor and the required measurement accuracy, only the depth point cloud or dense point cloud ${}^{C}p$ within the accuracy range that meets the use requirement is intercepted and transformed.
Specifically, the initial visual point cloud data are converted into visual point cloud data in the world coordinate system as follows:
$${}^{W}p_{c} = {}^{W}T_{B2} \times {}^{B}T_{C} \times {}^{C}p_{j}$$
where ${}^{C}p_{j}$ is the $j$-th point in ${}^{C}p$, and the converted visual point cloud data in the world coordinate system are denoted ${}^{W}p_{c}$; ${}^{C}p$ denotes the visual point cloud data collectively, and every visual point is converted in this manner.
In this embodiment, the initial visual point cloud data in the visual camera coordinate system is converted into the visual point cloud data in the world coordinate system, so as to preliminarily reduce the errors of the laser point cloud and the visual point cloud in time and space.
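A sketch of the screening and conversion described above, assuming the camera points are an N×3 array whose third column is depth along the optical axis; the depth limits are illustrative assumptions, to be replaced by the accuracy range of the actual vision sensor:

```python
import numpy as np

def visual_points_to_world(points_C, W_T_B2, B_T_C, depth_min=0.3, depth_max=4.0):
    """Screen camera-frame points by depth accuracy range, then convert them to the world frame."""
    depth = points_C[:, 2]                                   # depth along the optical axis (assumption)
    kept = points_C[(depth > depth_min) & (depth < depth_max)]
    pts_h = np.hstack([kept, np.ones((kept.shape[0], 1))])   # homogeneous coordinates
    pts_w = (W_T_B2 @ B_T_C @ pts_h.T).T
    return pts_w[:, :3]
```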
In one embodiment, each set of laser point cloud data and visual point cloud data in the world coordinate system is obtained and then fused, and the detailed process of the fusion refers to fig. 4.
S401, screening the visual point cloud data by using the laser point cloud data.
Specifically, the visual point cloud data in the world coordinate system at the moment $t_2$ are screened. The screening rule can be plane interception, for example, an intercepted plane that is coplanar with the laser radar; the intercepted visual point cloud data are denoted ${}^{W}p_{c\_h}$. Performing point cloud registration with the screened visual point cloud data makes the registration process more accurate.
S402, carrying out point cloud registration on the screened visual point cloud data by using the laser point cloud data to obtain a correction matrix.
Point cloud data can capture the accurate topological structure and geometric structure of an object at low storage cost. In the actual acquisition process, however, complete geometric information of the object cannot be obtained in a single scan because of factors such as the excessive size of the measured object, occlusion of the object surface, or the scanning angle of the three-dimensional scanning device. Therefore, in order to obtain complete geometric information of the measured object, two or more groups of point clouds at different viewing angles, i.e. with different reference coordinates, need to be unified into the same coordinate system through point cloud registration.
The basic principle of point cloud registration is to find the nearest neighbor point in two groups of point cloud data to be registered according to a certain constraint condition, and then calculate the optimal matching parameter to minimize the error function.
In the embodiment of the present application, ${}^{W}p_{l}$ and ${}^{W}p_{c\_h}$ are both two-dimensional plane point clouds in the world coordinate system. Because the accuracy of the laser point cloud is higher than that of the visual point cloud, the laser point cloud is taken as the reference, point cloud registration is performed on the visual point cloud ${}^{W}p_{c\_h}$, and the relative pose relation $T_{C\text{-}L}$ between them is obtained, satisfying the following relationship:
$${}^{W}p_{l} = T_{C\text{-}L} \times {}^{W}p_{c\_h}$$
The matrix $T_{C\text{-}L}$ expressing the relative pose relation is the correction matrix.
And S403, correcting the visual point cloud data by using the correction matrix to obtain corrected visual point cloud data.
All visual point cloud data are corrected by $T_{C\text{-}L}$ to obtain the corrected visual point cloud data ${}^{W}p_{c}'$; the correction is as follows:
$${}^{W}p_{c}' = T_{C\text{-}L} \times {}^{W}p_{c}$$
s404, fusing the corrected visual point cloud data and the laser point cloud data.
Specifically, in order to improve the point cloud obstacle avoidance efficiency, the corrected visual point cloud data and the laser point cloud data are fused, for example, by taking the union of the two groups of data, so that the obtained obstacle information is more accurate and comprehensive. Illustratively, ${}^{W}p$ is the fused point cloud data, and the fusion is performed as follows:
$${}^{W}p = {}^{W}p_{l} \cup {}^{W}p_{c}'$$
in the implementation, the accuracy of the laser point cloud data is utilized to perform synchronous registration on the time and the space of the visual point cloud data, so that the error of the laser point cloud data and the visual point cloud data in the space is further reduced, and the accuracy of the fused point cloud is improved.
In a specific example, fig. 5 shows a schematic diagram of the synchronization and fusion of point cloud data. Referring to fig. 5, the moment corresponding to the robot state is $t_0$, the acquisition moment of the laser point cloud data is $t_1$, and the acquisition moment of the visual point cloud data is $t_2$; each acquired set of laser point cloud data and visual point cloud data is fused.
In one embodiment, after the fused point cloud data are determined, whether each obstacle interferes with the preset track is judged according to the fused point cloud data, and if interference occurs, it is determined how to avoid the obstacle. When judging whether an obstacle interferes with the preset track, a detection range can be determined in advance and the judgment carried out within this detection range, which improves the real-time performance of the judgment.
The detection range is represented by R, which may be specifically adjusted according to the current speed and the current acceleration of the robot:
$$R = v\,\delta t + \frac{v^{2}}{2a} + r$$
wherein v represents the current linear velocity of the robot, a represents the acceleration of the robot, δ t represents the delay time of the point cloud data, and r represents the reserved safety distance.
Correspondingly, within the detection range R, the pose and the motion direction of the robot are first not considered, and the points on the preset track that may be affected are screened; the judgment can be made from the relation between the distance from the obstacle to the preset trajectory line and the maximum size of the robot. When the distance from the obstacle to the trajectory line is larger than the maximum size of the robot, no collision will occur and the robot can continue to travel; when the distance from the obstacle to the preset trajectory line is less than or equal to the maximum size of the robot, a collision may occur, and the corresponding points are therefore screened out as points that may be affected.
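A sketch of this screening step, assuming the preset track is given as a polyline of 2-D points, the obstacle position as a 2-D point, and the robot's maximum size treated as a radius (the names and the per-point distance simplification are assumptions for illustration):

```python
import numpy as np

def points_possibly_affected(track_xy, obstacle_xy, robot_max_size):
    """Return indices of track points whose distance to the obstacle is <= the robot's maximum size."""
    dist = np.linalg.norm(track_xy - obstacle_xy, axis=1)
    return np.where(dist <= robot_max_size)[0]
```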
Then, taking the pose and the motion direction of the robot into account, for any obstacle it is judged, according to the position of the obstacle and the current pose and motion direction of the robot, whether the obstacle will collide with the preset track; if so, it is determined that the obstacle collides with the preset track.
If the obstacle collides with the preset track, an obstacle avoidance route is generated according to the affected track points and the position of the obstacle; the obstacle avoidance route can be the route obtained after re-planning the affected track points. The offset distance between the obstacle avoidance route and the preset track is then calculated; the offset distance characterizes whether the obstacle avoidance route is feasible. Specifically, if the offset distance is smaller than or equal to the set distance threshold, the local trajectory re-planning does not affect the actual effect of the preset track: the robot moves according to the obstacle avoidance route, returns to the original preset track after the movement is completed, continues to move, and again judges whether a collision will occur. If the offset distance is greater than the set distance threshold, the obstacle avoidance route can ensure that the robot and the obstacle do not collide, but it deviates too much from the original preset track, which does not accord with the purpose of the robot trajectory planning or does not satisfy the actual requirements of the robot motion; in this case, the obstacle avoidance route is very likely not the optimal route, so the preset track needs to be re-planned globally according to the set planning requirements with the current robot pose as the starting point, and the robot moves according to the re-planned track.
In the embodiment, different planning strategies are adopted according to the influence degree of the obstacles on the route, so that the obstacle avoidance efficiency and the obstacle avoidance rationality are improved, and the route planning rationality is also improved.
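The decision between local and global planning described above can be sketched as follows; replan_global is a hypothetical caller-supplied stand-in for the global planner mentioned in the text, and the offset computation is a simplified per-point distance, both assumptions for illustration:

```python
import numpy as np

def max_offset_from_track(route_xy, track_xy):
    """Largest distance from any point of the obstacle avoidance route to its nearest preset-track point."""
    d = np.linalg.norm(route_xy[:, None, :] - track_xy[None, :, :], axis=2)
    return d.min(axis=1).max()

def choose_avoidance_plan(preset_track, detour_route, robot_pose, offset_threshold, replan_global):
    """Follow the local detour if its offset is small enough; otherwise re-plan globally from the current pose."""
    offset = max_offset_from_track(detour_route, preset_track)
    if offset <= offset_threshold:
        return detour_route            # local planning: follow the detour, then rejoin the preset track
    return replan_global(robot_pose)   # global planning: re-plan the whole track from the current pose
```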
In a specific example, fig. 6 shows a schematic diagram of collision prediction and local planning. Referring to fig. 6, for example, the affected points may be offset along an offset direction, and the offset distance is calculated as in the above embodiment; if the offset distance is smaller than or equal to the set distance threshold, the situation corresponds to local planning. In this specific example, the robot moves according to the preset track 601 in the initial stage; when an obstacle is encountered, the re-planned obstacle avoidance route 602 through the affected points is used to avoid the obstacle; after the obstacle avoidance is finished, the robot still moves according to the original preset track 603.
In another specific example, fig. 7 shows a schematic diagram of collision prediction and global planning, referring to fig. 7, for example, after an affected point is shifted according to a shift direction, a calculated shift distance is greater than a set distance threshold, in this case, an obstacle avoidance route does not meet the purpose of robot trajectory planning, at this time, the current pose of the robot is taken as a starting point, global planning is performed according to a set planning requirement, a trajectory after re-planning is obtained, and the robot moves according to the trajectory after re-planning. Wherein 701 represents the original preset track, and 702 represents the re-planned track.
Fig. 8 is a flowchart of an obstacle avoidance method provided in an embodiment of the present application. Referring to fig. 8, the method specifically includes the following steps:
s801, fusing laser point cloud data and visual point cloud data of each obstacle to obtain point cloud data.
And S802, determining the position of each obstacle according to the point cloud data.
And S803, carrying out obstacle avoidance operation based on the positions of the obstacles and the preset track of the target equipment.
According to the embodiment, the laser point cloud data and the visual point cloud data of the barrier are fused by utilizing the accuracy of the laser point cloud data and the consistency of the visual point cloud data, so that the accuracy of the fused point cloud is improved, and the fused point cloud data covers more accurate barrier information. Therefore, the position of the obstacle determined by the fused point cloud data is more accurate, and the reasonability and the efficiency of obstacle avoidance operation are improved.
Referring to fig. 9, based on the same inventive concept as the above-mentioned obstacle avoidance method, an embodiment of the present application further provides a target device, where the target device mainly includes a data fusion module 901, an obstacle position determination module 902, and an obstacle avoidance operation module 903.
The data fusion module 901 is configured to fuse laser point cloud data and visual point cloud data of each obstacle to obtain point cloud data; an obstacle position determining module 902, configured to determine the position of each obstacle according to the point cloud data; and an obstacle avoidance operation module 903, configured to perform obstacle avoidance operations based on the positions of the obstacles and the preset trajectory of the target device.
According to the embodiment, the laser point cloud data and the visual point cloud data of the barrier are fused by utilizing the accuracy of the laser point cloud data and the consistency of the visual point cloud data, so that the accuracy of the fused point cloud is improved, and the fused point cloud data covers more accurate barrier information. Therefore, the position of the obstacle determined by the fused point cloud data is more accurate, and the reasonability and the efficiency of obstacle avoidance operation are improved.
Optionally, the data fusion module 901 is specifically configured to:
acquiring a plurality of groups of laser point cloud data and a plurality of groups of visual point cloud data in a target area in the process of moving a preset track;
each time a group of laser point cloud data and a group of visual point cloud data are acquired, fusing the acquired group of laser point cloud data and the acquired group of visual point cloud data;
and the time difference between the first moment of acquiring the group of laser point cloud data and the second moment of acquiring the group of visual point cloud data is less than a set time difference threshold value.
Optionally, the data fusion module 901 is specifically configured to obtain a set of laser point cloud data in the target area in the following manner:
acquiring initial laser point cloud data under a laser radar coordinate system through a laser radar;
and converting the initial laser point cloud data into laser point cloud data under a world coordinate system based on the initial motion state data of the target equipment.
Optionally, the data fusion module 901 is specifically configured to:
calculating the pose of the target equipment at the first moment according to the initial motion state data;
determining a first transformation matrix from an equipment center coordinate system to a world coordinate system at a first moment according to the pose of the target equipment at the first moment;
and converting the initial laser point cloud data into laser point cloud data under a world coordinate system based on a first transformation matrix and a second transformation matrix, wherein the second transformation matrix is a transformation matrix from a laser radar coordinate system to an equipment center coordinate system.
Optionally, the data fusion module 901 is specifically configured to obtain a set of visual point cloud data in the target area in the following manner:
acquiring initial visual point cloud data under a visual sensor coordinate system through a visual camera;
and converting the initial visual point cloud data into visual point cloud data under a world coordinate system based on the initial motion state data of the target equipment.
Optionally, the data fusion module 901 is specifically configured to:
calculating the pose of the target equipment at the second moment according to the initial motion state data;
determining a third transformation matrix from the equipment center coordinate system to the world coordinate system at the second moment according to the pose of the target equipment at the second moment;
and converting the initial visual point cloud data into visual point cloud data under a world coordinate system by applying a third transformation matrix and a fourth transformation matrix, wherein the fourth transformation matrix is a transformation matrix from a visual sensor coordinate system to an equipment center coordinate system.
Optionally, the data fusion module 901 is specifically configured to:
screening the visual point cloud data by using the laser point cloud data;
performing point cloud registration on the screened visual point cloud data by using the laser point cloud data to obtain a correction matrix;
correcting the visual point cloud data by using the correction matrix to obtain corrected visual point cloud data;
and fusing the corrected visual point cloud data and the laser point cloud data.
Optionally, the obstacle avoidance module 903 is specifically configured to perform obstacle avoidance based on the position of each obstacle and a preset track of the target device, and includes:
aiming at any one obstacle, judging whether the obstacle collides with a preset track or not according to the position of the obstacle, the current pose and the motion direction of the target equipment;
if so, generating an obstacle avoidance route according to the position of the obstacle and the preset track, and determining the offset distance between the obstacle avoidance route and the preset track;
if the offset distance is smaller than or equal to the set distance threshold, moving according to the obstacle avoidance route;
and if the offset distance is greater than the set distance threshold, replanning the preset track according to the set planning requirement, and moving according to the replanned track.
The obstacle avoidance device and the obstacle avoidance method provided by the embodiment of the application adopt the same inventive concept and the same implementation means, can obtain the same beneficial effects, and are not repeated herein.
Based on the same inventive concept as the obstacle avoidance method, the embodiment of the present application further provides a target device, which may be specifically a desktop computer, a portable computer, a smart phone, a tablet computer, a Personal Digital Assistant (PDA), a server, and the like; in the embodiment of the present application, the target device may be a robot. As shown in fig. 10, the target device 100 may include a processor 1001, a memory 1002, a lidar 1003, and a vision camera 1004. The laser radar 1003 acquires initial laser point cloud data and then sends the initial laser point cloud data to the processor 1001, the visual camera 1004 acquires initial visual point cloud data and then sends the initial visual point cloud data to the processor 1001, and the processor 1001 processes and then fuses the received initial laser point cloud data and the initial visual point cloud data.
The Processor 1001 may be a general-purpose Processor, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 1002, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The Memory may include at least one type of storage medium, and may include, for example, a flash Memory, a hard disk, a multimedia card, a card-type Memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read Only Memory (PROM), a Read Only Memory (ROM), a charged Erasable Programmable Read Only Memory (EEPROM), a magnetic Memory, a magnetic disk, an optical disk, and so on. The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 1002 in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function for storing program instructions and/or data.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; the computer storage media may be any available media or data storage device that can be accessed by a computer, including but not limited to: a mobile storage device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)) and various media that can store program codes.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program codes include a removable Memory device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)).
The above embodiments are only used to describe the technical solutions of the present application in detail, but the above embodiments are only used to help understanding the method of the embodiments of the present application, and should not be construed as limiting the embodiments of the present application. Modifications and substitutions that may be readily apparent to those skilled in the art are intended to be included within the scope of the embodiments of the present application.

Claims (10)

1. An obstacle avoidance method, applied to a target device, the method comprising:
when a set of laser point cloud data and a set of visual point cloud data are acquired, fusing the acquired set of laser point cloud data and the acquired set of visual point cloud data, wherein the set of laser point cloud data and the set of visual point cloud data are respectively one of a plurality of sets of laser point cloud data and one of a plurality of sets of visual point cloud data acquired in a target area by the target device while moving along a preset track;
determining the position of each obstacle according to the fused point cloud data;
performing an obstacle avoidance operation based on the position of each obstacle and the preset track of the target device;
wherein the fusing of the acquired set of laser point cloud data and the acquired set of visual point cloud data comprises:
screening the visual point cloud data using the laser point cloud data; performing point cloud registration on the screened visual point cloud data using the laser point cloud data to obtain a correction matrix; correcting the visual point cloud data using the correction matrix to obtain corrected visual point cloud data; and fusing the corrected visual point cloud data and the laser point cloud data.
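As an illustrative, non-limiting sketch of the fusion recited in claim 1 (screening, registration to obtain a correction matrix, correction, and fusion), the following Python fragment assumes numpy, scipy, and Open3D; the function name fuse_point_clouds and the distance thresholds are hypothetical choices, not part of the claimed method:

    import numpy as np
    import open3d as o3d
    from scipy.spatial import cKDTree

    def fuse_point_clouds(laser_pts, visual_pts, screen_dist=0.5, icp_dist=0.2):
        # laser_pts, visual_pts: (N, 3) arrays already expressed in the same (world) frame.
        # Step 1: screen the visual point cloud with the laser point cloud, keeping only
        # visual points that lie within screen_dist of some laser point.
        tree = cKDTree(laser_pts)
        dist, _ = tree.query(visual_pts)
        screened = visual_pts[dist < screen_dist]

        # Step 2: register the screened visual cloud to the laser cloud (point-to-point ICP)
        # to obtain a 4x4 correction matrix.
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(screened))
        tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(laser_pts))
        reg = o3d.pipelines.registration.registration_icp(
            src, tgt, icp_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        correction = reg.transformation

        # Step 3: correct the full visual point cloud with the correction matrix.
        homogeneous = np.hstack([visual_pts, np.ones((len(visual_pts), 1))])
        corrected = (correction @ homogeneous.T).T[:, :3]

        # Step 4: fuse the corrected visual points with the laser points.
        return np.vstack([laser_pts, corrected])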
2. The method of claim 1, wherein a time difference between a first moment at which the set of laser point cloud data is acquired and a second moment at which the set of visual point cloud data is acquired is less than a set time difference threshold.
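A small, hypothetical illustration of the pairing condition in claim 2; the names pair_by_timestamp and max_dt are assumptions, not part of the claim:

    def pair_by_timestamp(laser_frames, visual_frames, max_dt=0.05):
        # laser_frames, visual_frames: lists of (timestamp_seconds, points) tuples.
        # A laser set is fused only with the visual set closest to it in time, and only
        # if the time difference is below the set time-difference threshold max_dt.
        pairs = []
        for t_laser, laser in laser_frames:
            t_visual, visual = min(visual_frames, key=lambda f: abs(f[0] - t_laser))
            if abs(t_visual - t_laser) < max_dt:
                pairs.append((laser, visual))
        return pairs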
3. The method of claim 2, wherein the set of laser point cloud data within the target area is obtained by:
acquiring, via a lidar, initial laser point cloud data in a lidar coordinate system;
and converting the initial laser point cloud data into laser point cloud data in a world coordinate system based on initial motion state data of the target device.
4. The method of claim 3, wherein the converting of the initial laser point cloud data into laser point cloud data in the world coordinate system based on the initial motion state data of the target device comprises:
calculating the pose of the target device at the first moment according to the initial motion state data;
determining a first transformation matrix from a device center coordinate system to the world coordinate system at the first moment according to the pose of the target device at the first moment;
and converting the initial laser point cloud data into laser point cloud data in the world coordinate system based on the first transformation matrix and a second transformation matrix, wherein the second transformation matrix is a transformation matrix from the lidar coordinate system to the device center coordinate system.
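One possible reading of the transformation chain in claims 3 and 4, as a hedged sketch: world-frame points are obtained by chaining the device-to-world pose with the lidar-to-device extrinsic. The planar-pose assumption and the names pose_to_matrix and lidar_to_world are illustrative only:

    import numpy as np

    def pose_to_matrix(x, y, yaw):
        # Pose of the device at the first moment (planar case) -> 4x4 device-to-world matrix,
        # i.e. a stand-in for the first transformation matrix of claim 4.
        c, s = np.cos(yaw), np.sin(yaw)
        T = np.eye(4)
        T[:2, :2] = [[c, -s], [s, c]]
        T[0, 3], T[1, 3] = x, y
        return T

    def lidar_to_world(points_lidar, T_world_device, T_device_lidar):
        # points_lidar: (N, 3) initial laser point cloud in the lidar coordinate system.
        # T_device_lidar plays the role of the second transformation matrix
        # (lidar frame -> device center frame).
        pts = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        return (T_world_device @ T_device_lidar @ pts.T).T[:, :3]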
5. The method of claim 2, wherein the set of visual point cloud data within the target area is obtained by:
acquiring, via a vision camera, initial visual point cloud data in a vision sensor coordinate system;
and converting the initial visual point cloud data into visual point cloud data in a world coordinate system based on the initial motion state data of the target device.
6. The method of claim 5, wherein the converting of the initial visual point cloud data into visual point cloud data in the world coordinate system based on the initial motion state data of the target device comprises:
calculating the pose of the target device at the second moment according to the initial motion state data;
determining a third transformation matrix from a device center coordinate system to the world coordinate system at the second moment according to the pose of the target device at the second moment;
and converting the initial visual point cloud data into visual point cloud data in the world coordinate system by applying the third transformation matrix and a fourth transformation matrix, wherein the fourth transformation matrix is a transformation matrix from the vision sensor coordinate system to the device center coordinate system.
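Under the same illustrative assumptions as the sketch after claim 4, the visual conversion of claims 5 and 6 reuses the identical chain, with a hypothetical camera-to-device extrinsic T_device_camera standing in for the fourth transformation matrix:

    # Hypothetical values, for illustration only.
    points_camera = np.random.rand(100, 3)   # initial visual point cloud (camera frame)
    T_device_camera = np.eye(4)              # stand-in for the fourth transformation matrix
    # Pose at the second moment -> third transformation matrix; then chain as before.
    visual_world = lidar_to_world(points_camera, pose_to_matrix(1.0, 2.0, 0.3), T_device_camera)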
7. The method according to any one of claims 1 to 6, wherein the performing of the obstacle avoidance operation based on the position of each obstacle and the preset track of the target device comprises:
for any obstacle, determining whether the obstacle collides with the preset track according to the position of the obstacle and the current pose and motion direction of the target device;
if so, generating an obstacle avoidance route according to the position of the obstacle and the preset track, and determining an offset distance between the obstacle avoidance route and the preset track;
if the offset distance is less than or equal to a set distance threshold, moving along the obstacle avoidance route;
and if the offset distance is greater than the set distance threshold, replanning the preset track according to a set planning requirement, and moving along the replanned track.
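A minimal, self-contained sketch of the decision logic in claim 7, under simplifying assumptions: the track is a 2-D polyline, the collision test uses only a clearance radius (ignoring the device pose and motion direction), and all names, the detour heuristic, and the thresholds are hypothetical rather than the claimed procedure:

    import numpy as np

    def point_to_polyline_distance(p, poly):
        # Minimum distance from point p (shape (2,)) to polyline poly (shape (N, 2)).
        best = np.inf
        for a, b in zip(poly[:-1], poly[1:]):
            ab, ap = b - a, p - a
            t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
            best = min(best, np.linalg.norm(p - (a + t * ab)))
        return best

    def obstacle_avoidance_decision(obstacle, track, robot_radius, offset_threshold):
        # obstacle: (2,) obstacle position; track: (N, 2) preset track in the world frame.
        # 1. Collision check against the preset track.
        if point_to_polyline_distance(obstacle, track) > robot_radius:
            return "follow_preset_track", track
        # 2. Generate a naive obstacle avoidance route by pushing nearby track points
        #    radially away from the obstacle.
        detour = track.copy()
        for i, p in enumerate(detour):
            gap = p - obstacle
            d = np.linalg.norm(gap)
            if d < robot_radius:
                detour[i] = obstacle + gap / (d + 1e-12) * robot_radius
        # 3. Offset distance between the avoidance route and the preset track.
        offset = float(np.max(np.linalg.norm(detour - track, axis=1)))
        if offset <= offset_threshold:
            return "follow_avoidance_route", detour
        # 4. Offset too large: the preset track should be replanned instead.
        return "replan_track", None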
8. A target device, comprising:
a data fusion module, configured to fuse a set of laser point cloud data and a set of visual point cloud data when the set of laser point cloud data and the set of visual point cloud data are acquired, wherein the set of laser point cloud data and the set of visual point cloud data are respectively one of a plurality of sets of laser point cloud data and one of a plurality of sets of visual point cloud data acquired in a target area by the target device while moving along a preset track;
an obstacle position determining module, configured to determine the position of each obstacle according to the fused point cloud data; and
an obstacle avoidance operation module, configured to perform an obstacle avoidance operation based on the position of each obstacle and the preset track of the target device;
wherein the data fusion module is further configured to:
screen the visual point cloud data using the laser point cloud data; perform point cloud registration on the screened visual point cloud data using the laser point cloud data to obtain a correction matrix; correct the visual point cloud data using the correction matrix to obtain corrected visual point cloud data; and fuse the corrected visual point cloud data and the laser point cloud data.
9. A target device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the steps of the method of any one of claims 1 to 7.
CN202011337318.0A 2020-11-25 2020-11-25 Obstacle avoidance method, target device, and storage medium Active CN112327329B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011337318.0A CN112327329B (en) 2020-11-25 2020-11-25 Obstacle avoidance method, target device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011337318.0A CN112327329B (en) 2020-11-25 2020-11-25 Obstacle avoidance method, target device, and storage medium

Publications (2)

Publication Number Publication Date
CN112327329A CN112327329A (en) 2021-02-05
CN112327329B CN112327329B (en) 2023-04-18

Family

ID=74307937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011337318.0A Active CN112327329B (en) 2020-11-25 2020-11-25 Obstacle avoidance method, target device, and storage medium

Country Status (1)

Country Link
CN (1) CN112327329B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113031008A (en) * 2021-03-12 2021-06-25 中国电子科技集团公司第二十研究所 Foresight prediction alarm method capable of actively detecting sea surface threat
CN114137905B (en) * 2021-11-18 2023-10-03 合肥欣奕华智能机器股份有限公司 Error compensation method, device and storage medium
CN113901970B (en) * 2021-12-08 2022-05-24 深圳市速腾聚创科技有限公司 Obstacle detection method and apparatus, medium, and electronic device
CN114998864A (en) * 2022-05-27 2022-09-02 重庆长安汽车股份有限公司 Obstacle detection method, device, equipment and storage medium
CN115525049A (en) * 2022-09-06 2022-12-27 劢微机器人科技(深圳)有限公司 Vehicle obstacle avoidance method, device, equipment and storage medium
CN117218152A (en) * 2023-08-16 2023-12-12 南方科技大学 Method and device for predicting motion of artificial limb and terminal equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480638B (en) * 2017-08-16 2020-06-30 北京京东尚科信息技术有限公司 Vehicle obstacle avoidance method, controller, device and vehicle
CN109283538B (en) * 2018-07-13 2023-06-13 上海大学 Marine target size detection method based on vision and laser sensor data fusion
CN109859154A (en) * 2019-01-31 2019-06-07 深兰科技(上海)有限公司 A kind of data fusion method, device, equipment and medium
CN111624622B (en) * 2020-04-24 2023-05-02 库卡机器人(广东)有限公司 Obstacle detection method and device
CN111724598B (en) * 2020-06-29 2022-04-05 北京百度网讯科技有限公司 Method, device, equipment and storage medium for automatically driving and planning path
CN111619560B (en) * 2020-07-29 2020-11-03 北京三快在线科技有限公司 Vehicle control method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Peng Y et al. "The obstacle detection and obstacle avoidance algorithm based on 2-D lidar." IEEE International Conference on Information and Automation, 2015, pp. 1648-1653. *
Sun Xiangjun; Liu Zhigang. Dynamic object detection and tracking based on sparse lidar data in a mine environment. Industrial Control Computer, 2020, (07), pp. 94-96. *

Also Published As

Publication number Publication date
CN112327329A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
CN112327329B (en) Obstacle avoidance method, target device, and storage medium
CN110645974B (en) Mobile robot indoor map construction method fusing multiple sensors
CN108709562B (en) Method for constructing rolling grid map of mobile robot
US9563808B2 (en) Target grouping techniques for object fusion
US20220371602A1 (en) Vehicle positioning method, apparatus, and controller, intelligent vehicle, and system
CN113376650B (en) Mobile robot positioning method and device, electronic equipment and storage medium
US11789141B2 (en) Omnidirectional sensor fusion system and method and vehicle including the same
WO2019062649A1 (en) Adaptive region division method and system
CN111066064A (en) Grid occupancy mapping using error range distribution
CN113432533B (en) Robot positioning method and device, robot and storage medium
CN112362065B (en) Obstacle detouring track planning method and device, storage medium, control unit and equipment
CN114236564B (en) Method for positioning robot in dynamic environment, robot, device and storage medium
WO2022141116A1 (en) Three-dimensional point cloud segmentation method and apparatus, and movable platform
JP2017526083A (en) Positioning and mapping apparatus and method
WO2022099620A1 (en) Three-dimensional point cloud segmentation method and apparatus, and mobile platform
CN111257826B (en) Multi-source heterogeneous sensor composite tracking method
CN117765508A (en) Method, device and equipment for detecting non-running area of vehicle
Lundquist et al. Estimation of the free space in front of a moving vehicle
Ramesh et al. Landmark-based RADAR SLAM for autonomous driving
CN113203424B (en) Multi-sensor data fusion method and device and related equipment
CN112506203B (en) Robot motion dynamic feedback method and system
Richter et al. Advanced occupancy grid techniques for lidar based object detection and tracking
CN117677862A (en) Pseudo image point identification method, terminal equipment and computer readable storage medium
Yang et al. Two-stage multi-sensor fusion positioning system with seamless switching for cooperative mobile robot and manipulator system
Junjing et al. Occupancy grid mapping based on DSmT for dynamic environment perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant