CN113345009A - Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer - Google Patents

Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer

Info

Publication number
CN113345009A
CN113345009A CN202110604058.7A CN202110604058A CN113345009A CN 113345009 A CN113345009 A CN 113345009A CN 202110604058 A CN202110604058 A CN 202110604058A CN 113345009 A CN113345009 A CN 113345009A
Authority
CN
China
Prior art keywords
point
point cloud
characteristic
laser
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110604058.7A
Other languages
Chinese (zh)
Other versions
CN113345009B (en)
Inventor
秦晓辉
芦涛
谢国涛
秦兆博
胡满江
王晓伟
边有钢
徐彪
秦洪懋
丁荣军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202110604058.7A priority Critical patent/CN113345009B/en
Publication of CN113345009A publication Critical patent/CN113345009A/en
Application granted granted Critical
Publication of CN113345009B publication Critical patent/CN113345009B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/10 Simultaneous control of position or course in three dimensions
    • G05D 1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D 1/106 Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/64 Analysis of geometric attributes of convexity or concavity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an unmanned aerial vehicle dynamic obstacle detection method based on a laser odometer, which comprises the following steps: step 1, extracting point clouds that conform to preset curvature characteristics as feature points from the geometric characteristics of the laser point cloud data, establishing the matching relation of the same feature point in two frames of laser point cloud data at adjacent moments, constructing a reprojection cost function, and iterating the pose in the three-dimensional solution space with a nonlinear optimization technique until the pose converges, so as to obtain the pose information of the unmanned aerial vehicle; step 2, which specifically comprises: step 21, according to the pose information of the unmanned aerial vehicle output in step 1, clustering the whole point cloud into a plurality of point cloud clusters by obstacle using a region growing method, so as to distinguish the point cloud clusters as independent obstacle targets; step 22, calculating a difference degree function from the relative motion between the two frames calculated by the laser odometer and from the centroid and normal differences of the obstacles in the multi-frame laser point cloud data, establishing an association matrix, and associating the obstacles across the multi-frame laser point cloud data; and step 23, according to the association result of the dynamic obstacles and combined with the inter-frame relative motion calculated by the laser odometer, judging the motion state of each obstacle, thereby obtaining the motion direction and speed of the dynamic obstacles, namely state estimation.

Description

Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer
Technical Field
The invention relates to a dynamic obstacle detection method for the field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle dynamic obstacle detection method based on a laser odometer.
Background
In the field of unmanned aerial vehicles, the requirement for flight safety has made autonomous obstacle avoidance one of the research hotspots. When an unmanned aerial vehicle executes a task, the complex flight environment contains not only terrain and static obstacles but also dynamic obstacles. If a dynamic obstacle is treated as a static obstacle, an incorrect time to collision is derived, which is unacceptable for the autonomous obstacle avoidance function. Autonomous obstacle avoidance usually aims to modify the flight path flexibly by predicting the motion state of dynamic obstacles, so dynamic obstacle detection is an indispensable part of the autonomous obstacle avoidance technology.
The conventional way to detect dynamic obstacles is obstacle state estimation: after the obstacles observed in multiple frames of data are associated, the motion state of each obstacle is judged to distinguish dynamic obstacles from static ones. Without relying on other sensors, the motion state of the unmanned aerial vehicle itself cannot be obtained, and a motion state estimated only from multi-frame obstacle observations is relative to the motion state of the unmanned aerial vehicle. For example, when several unmanned aerial vehicles fly in formation at the same velocity, their relative velocity is zero and they would be judged to be static obstacles. If additional sensors such as an inertial navigation unit are relied upon, extra calibration work and hardware topology complexity are introduced. In practical unmanned aerial vehicle applications, the hardware topology is generally expected to be as simple as possible, so that the system is simpler and more reliable. At present, unmanned aerial vehicle dynamic obstacle detection methods based on a single sensor are still immature, and the related theory needs further improvement.
Disclosure of Invention
It is an object of the present invention to provide a laser odometer based method of dynamic obstacle detection for unmanned aerial vehicles that overcomes or at least mitigates at least one of the above-mentioned disadvantages of the prior art.
In order to achieve the purpose, the invention provides an unmanned aerial vehicle dynamic obstacle detection method based on a laser odometer, which comprises the following steps:
step 1, extracting point clouds that conform to preset curvature characteristics as feature points from the geometric characteristics of the laser point cloud data, establishing the matching relation of the same feature point in two frames of laser point cloud data at adjacent moments, constructing a reprojection cost function, and iterating the pose in the three-dimensional solution space with a nonlinear optimization technique until the pose converges, so as to obtain the pose information of the unmanned aerial vehicle;
step 2, which specifically comprises:
step 21, according to the pose information of the unmanned aerial vehicle output in step 1, clustering the whole point cloud into a plurality of point cloud clusters by obstacle using a region growing method, so as to distinguish the point cloud clusters as independent obstacle targets;
step 22, calculating a difference degree function from the relative motion between the two frames calculated by the laser odometer and from the centroid and normal differences of the obstacles in the multi-frame laser point cloud data, establishing an association matrix, and associating the obstacles across the multi-frame laser point cloud data;
and step 23, according to the association result of the dynamic obstacles and combined with the inter-frame relative motion calculated by the laser odometer, judging the motion state of each obstacle, thereby obtaining the motion direction and motion speed of the dynamic obstacles.
Further, in step 1, the method for extracting the point cloud conforming to the preset curvature feature as the feature point specifically includes:
step 11, carrying out voxel filtering on point cloud data;
step 12, calculating the curvature value r of each point in the point cloud data using the following formula (1) and sorting the points by curvature value r, wherein the points with the largest curvature values are defined as characteristic corner points and the points with the smallest curvature values are defined as characteristic plane points:

$$r=\frac{1}{|S|\cdot\lVert X_{(k,i)}\rVert}\Bigl\lVert\sum_{j\in S,\;j\neq i}\bigl(X_{(k,i)}-X_{(k,j)}\bigr)\Bigr\rVert \tag{1}$$

where $X_{(k,i)}$ denotes the coordinates of the i-th point of the k-th frame of point cloud data $P_k$ in the radar coordinate system, $X_{(k,j)}$ denotes the coordinates of the j-th point of $P_k$ in the radar coordinate system, and $S$ denotes the neighborhood of $X_{(k,i)}$.
Further, in step 1, the method for establishing the matching relation of the same feature point in two frames of laser point cloud data at adjacent moments uses the point-to-line distance $d_{\varepsilon}$ between a characteristic corner point and its feature line and/or the point-to-plane distance $d_{H}$ between a characteristic plane point and its feature plane.

When the feature point is a characteristic corner point, in the (k-1)-th frame of point cloud data $P_{k-1}$, the feature line segment corresponding to the a-th characteristic corner point $X_{(k,a)}$ of $P_k$ is found. The line segment is defined by point cloud b and point cloud c, where point cloud b is the point of $P_{k-1}$ closest to the characteristic corner point $X_{(k,a)}$, and point cloud c is the point closest to $X_{(k,a)}$ on an adjacent scan line. The point-to-line distance $d_{\varepsilon}$ is represented by the following formula (2):

$$d_{\varepsilon}=\frac{\bigl\lvert\bigl(X_{(k,a)}-X_{(k-1,b)}\bigr)\times\bigl(X_{(k,a)}-X_{(k-1,c)}\bigr)\bigr\rvert}{\bigl\lvert X_{(k-1,b)}-X_{(k-1,c)}\bigr\rvert} \tag{2}$$

where $X_{(k-1,b)}$ denotes the coordinates of the b-th characteristic corner point of $P_{k-1}$ in the radar coordinate system and $X_{(k-1,c)}$ denotes the coordinates of the c-th characteristic corner point of $P_{k-1}$ in the radar coordinate system.

When the feature point is a characteristic plane point, the feature plane corresponding to the d-th characteristic plane point $X_{(k,d)}$ is found in $P_{k-1}$. The feature plane is defined by point cloud e, point cloud f and point cloud g, where point cloud e is the point of $P_{k-1}$ closest to the characteristic plane point $X_{(k,d)}$, point cloud f is the point of $P_{k-1}$ closest to $X_{(k,d)}$ on the same scan line as point cloud e, and point cloud g is the point closest to $X_{(k,d)}$ on an adjacent scan line. The point-to-plane distance $d_{H}$ is represented by the following formula (3):

$$d_{H}=\frac{\bigl\lvert\bigl(X_{(k,d)}-X_{(k-1,e)}\bigr)\cdot\bigl(\bigl(X_{(k-1,e)}-X_{(k-1,f)}\bigr)\times\bigl(X_{(k-1,e)}-X_{(k-1,g)}\bigr)\bigr)\bigr\rvert}{\bigl\lvert\bigl(X_{(k-1,e)}-X_{(k-1,f)}\bigr)\times\bigl(X_{(k-1,e)}-X_{(k-1,g)}\bigr)\bigr\rvert} \tag{3}$$

where $X_{(k-1,e)}$, $X_{(k-1,f)}$ and $X_{(k-1,g)}$ denote the coordinates of the e-th, f-th and g-th characteristic plane points of $P_{k-1}$ in the radar coordinate system.
Further, the specific method in step 1 of "constructing a reprojection cost function and iterating it in the three-dimensional solution space with a nonlinear optimization technique until the pose converges, so as to obtain the pose information of the unmanned aerial vehicle" comprises:

establishing a reprojection cost function from $d_{\varepsilon}$ and $d_{H}$, converting the pose solving problem into the nonlinear least squares problem represented by formula (4), and solving for the optimal transformation matrix estimate $\hat T_{k-1,k}$:

$$\hat T_{k-1,k}=\arg\min_{T_{k-1,k}}\frac{1}{2}\sum_{i=1}^{n}\bigl\lVert\tilde X_{(k-1,i)}-T_{k-1,k}\,\tilde X_{(k,i)}\bigr\rVert^{2} \tag{4}$$

where $X_{(k,i)}$ denotes the coordinates of the i-th feature point of $P_k$ in the radar coordinate system, n denotes the number of feature points, $\tilde X_{(k,i)}$ denotes the homogeneous coordinates of $X_{(k,i)}$ in $P_k$, and $\tilde X_{(k-1,i)}$ denotes the homogeneous coordinates of the point matched to $X_{(k,i)}$ in $P_{k-1}$.

The steps of establishing the matching relation of the same feature point in the two frames of laser point cloud data at adjacent moments and solving for $\hat T_{k-1,k}$ from the reprojection cost function in step 1 are repeated until the pose converges.
Further, step 21 specifically includes:
step 211, using the point with the minimum curvature value obtained in step 12 as the starting seed point $p_0$;
step 212, for each seed point $p_i$, searching all of its neighboring points and executing the following steps:
step 2121, calculating the normal angle difference between each neighboring point and the current seed point $p_i$; if the difference is smaller than the set angle threshold, proceeding to step 2122;
step 2122, calculating the curvature of the neighboring point; if the curvature value is smaller than a set curvature threshold, adding the point to the current seed point set;
returning to step 211 until no neighboring point of the current seed point set satisfies the growth criterion; the seed points are then deleted from the originally collected point cloud set, and the seed point set is saved as an obstacle point cloud cluster.
Further, the "data association" in step 22 specifically includes:

step 221, the relative motion between the two frames calculated by the laser odometer is $\hat T_{k-1,k}$, and the difference degree function $D_{u,v}$ is calculated from the centroid and normal differences of the obstacles in the multi-frame laser point cloud data using formula (7):

$$D_{u,v}=\alpha\,\bigl\lVert \tilde c_{(k,v)}-\hat T_{k-1,k}\,\tilde c_{(k-1,u)}\bigr\rVert+\beta\,\angle\bigl(n_{(k,v)},\,\hat R_{k-1,k}\,n_{(k-1,u)}\bigr) \tag{7}$$

where $\hat R_{k-1,k}\in SO(3)$ is the rotation matrix corresponding to $\hat T_{k-1,k}$, $\tilde c_{(k-1,u)}$ is the homogeneous coordinate representation of the centroid position of the u-th obstacle of the (k-1)-th frame in three-dimensional space, $n_{(k-1,u)}$ and $n_{(k,v)}$ are the corresponding obstacle normals, and $\alpha$ and $\beta$ are the centroid distance difference weight and the normal angle difference weight, respectively;

step 222, $P_{k-1}$ contains m detected obstacles in total, recorded as $O_{(k-1,u)}$ with $u=1,2,\ldots,m$, and $P_k$ contains n detected obstacles in total, recorded as $O_{(k,v)}$ with $v=1,2,\ldots,n$; the association matrix $D_k$ is represented by the following formula (8):

$$D_k=\begin{bmatrix}D_{1,1}&D_{2,1}&\cdots&D_{m,1}\\D_{1,2}&D_{2,2}&\cdots&D_{m,2}\\\vdots&\vdots&\ddots&\vdots\\D_{1,n}&D_{2,n}&\cdots&D_{m,n}\end{bmatrix} \tag{8}$$

where $D_{u,v}$ denotes the difference degree calculated by formula (7);

step 223, according to the association matrix (8) established in step 222, associating the dynamic obstacles of the current frame $P_k$ and the previous frame $P_{k-1}$; repeating the above steps until the obstacles corresponding to every row of the association matrix of the current frame $P_k$ have been associated; if the subscript u of the $P_{k-1}$ element corresponding to the minimum difference degree satisfies u > v, an obstacle has been lost; if it satisfies u < v, a new dynamic obstacle has appeared.
Further, step 23 specifically includes:

according to the dynamic obstacles associated in step 21, and combined with the relative motion $\hat T_{k-1,k}$ of the two frames of point cloud data calculated in step 1, judging the motion state of the obstacle and calculating the movement distance of the obstacle with formula (9):

$$\Delta d=\bigl\lVert \tilde t_{k}-\hat T_{k-1,k}\,\tilde t_{k-1}\bigr\rVert \tag{9}$$

according to the frame rate f of the laser radar sensor, the speed of the dynamic obstacle is $V=\Delta d\times f$ and the direction is $v=t_k-t_{k-1}$, where $t_{k-1}$ and $t_k$ are the coordinates of the associated obstacle in $P_{k-1}$ and $P_k$, $\tilde t_{k}$ denotes the homogeneous coordinates of $t_k$, and $\tilde t_{k-1}$ denotes the homogeneous coordinates of $t_{k-1}$.
Due to the adoption of the technical scheme, the invention has the following advantages:
1. Dynamic obstacle detection is carried out with a single sensor, without combining multiple sensors, so the system is more efficient.
2. The pose estimation of the laser odometer is accurate over short periods, does not suffer from drift, and requires no additional calibration work. Using the laser odometer instead of inertial navigation therefore makes the system simpler and more stable, whereas inertial navigation drifts and requires extrinsic calibration between the inertial navigation unit and the laser radar, which complicates the system and adds uncertainty.
3. In the flight environment the ground may itself be an obstacle, so ground segmentation is not needed. Compared with general obstacle detection algorithms that cluster after ground segmentation, segmenting and clustering the point cloud with a normal-based region growing method reduces the complexity of the segmentation and clustering algorithm and is better suited to the flight scenario of the unmanned aerial vehicle. In addition, the region growing method reuses the curvature information already calculated in the laser odometer stage, avoiding redundant calculation; meanwhile, a normal similarity judgment is added in the data association stage, making full use of the obstacle information generated in the segmentation and clustering stage.
4. Because obstacle tracking is carried out on the basis of the estimated ego pose, the method achieves a better tracking effect than tracking algorithms that do not consider the pose of the unmanned aerial vehicle itself.
Drawings
Fig. 1 is a schematic diagram of an algorithm architecture for detecting a dynamic obstacle of an unmanned aerial vehicle according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
The invention performs registration using the spatial feature point information of multi-frame point cloud data and solves the pose of the unmanned aerial vehicle in real time, thereby realizing the function of a laser odometer; by combining the ego-motion state obtained from the laser odometer with obstacle detection and tracking, it achieves dynamic obstacle detection and comprehensive autonomous obstacle avoidance.
As shown in fig. 1, the unmanned aerial vehicle dynamic obstacle detection system based on a laser odometer provided by the embodiment of the present invention comprises a laser odometer module and a dynamic obstacle candidate detection module. The laser odometer module establishes the matching relation of the same feature point in two adjacent frames of laser point cloud data through step 1 below, so as to calculate the pose information of the unmanned aerial vehicle. The dynamic obstacle candidate detection module detects all obstacles in the laser point cloud data through step 2 below, judges the motion state of the dynamic obstacles, and calculates their motion speed. Considering the real-time requirement of the algorithm, the algorithm framework is divided into two threads: the laser odometer module and the dynamic obstacle candidate detection module each occupy one thread and compute in parallel, their information is fused in the state estimation stage, and finally all obstacles and their motion information are output. The implementation of each module is described below in the form of steps.
Step 1, extracting point clouds conforming to preset curvature characteristics as characteristic points through the geometrical characteristics of laser point cloud data, establishing a matching relation of the same characteristic point in two frames of laser point cloud data at adjacent moments, establishing a reprojection cost function, and enabling the pose to iterate in a three-dimensional solution space by using a nonlinear optimization technology until the pose is converged to obtain the pose information of the unmanned aerial vehicle.
In one embodiment, the method for extracting a point cloud conforming to a preset curvature feature as a feature point in step 1 specifically includes:
and step 11, performing voxel filtering on the laser point cloud data to greatly reduce the data volume of the point cloud on the premise of ensuring that the shape characteristics are not changed, and providing favorable conditions for reducing the algorithm complexity.
Step 12, calculating the curvature value of each point in the laser point cloud data, for example using the following formula (1), and sorting the points by curvature, where the points with the largest curvature values are defined as characteristic corner points and the points with the smallest curvature values are defined as characteristic plane points. The characteristic corner points and characteristic plane points are selected as feature points in preparation for the subsequent point cloud registration.

$$r=\frac{1}{|S|\cdot\lVert X_{(k,i)}\rVert}\Bigl\lVert\sum_{j\in S,\;j\neq i}\bigl(X_{(k,i)}-X_{(k,j)}\bigr)\Bigr\rVert \tag{1}$$

where $X_{(k,i)}$ denotes the coordinates of the i-th point of the k-th frame of laser point cloud data $P_k$ in the radar coordinate system, written as $X_{(k,i)}=[x\ y\ z]^T\in P_k$; $X_{(k,j)}$ denotes the coordinates of the j-th point of $P_k$ in the radar coordinate system; and $S$ denotes the neighborhood of $X_{(k,i)}$, which can be obtained quickly by existing methods such as a KD-tree.
Further, in step 1, the feature points are extracted by region so that they are uniformly distributed; that is, the point cloud data is divided into different regions and at most a threshold number of feature points is extracted in each region. In addition, when selecting feature points, points satisfying any of the following three conditions should not be selected as feature points: 1) points whose local plane is approximately parallel to the laser beam, since such points have low reliability, where the local plane can be understood as the plane formed by the surrounding points (within a certain neighborhood) together with the point itself; 2) points within a certain (empirically set) radius of an existing feature point; 3) points located on the boundary of an occluded region.
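As an illustration of steps 11 and 12, the following Python sketch computes the curvature of formula (1) for one region of a filtered point cloud and selects corner and plane candidates; the neighborhood size, the per-region feature counts and the use of scipy's KD-tree are assumptions of this sketch, not specifics of the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_features(points, k_neighbors=10, n_corners=2, n_planes=4):
    """Per formula (1): curvature from the sum of offsets to the neighborhood points,
    then the highest-curvature points become corner features and the lowest-curvature
    points become plane features.  The counts and neighborhood size are illustrative."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k_neighbors + 1)       # neighborhood S (includes the point itself)
    curvature = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbrs = nbrs[nbrs != i]                           # exclude the point itself
        diff_sum = np.sum(points[i] - points[nbrs], axis=0)
        curvature[i] = np.linalg.norm(diff_sum) / (len(nbrs) * np.linalg.norm(points[i]) + 1e-9)
    order = np.argsort(curvature)
    plane_idx = order[:n_planes]                         # smallest curvature -> plane points
    corner_idx = order[-n_corners:]                      # largest curvature -> corner points
    return corner_idx, plane_idx, curvature
```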
The establishment in step 1 of the matching relation of the same feature point in two frames of laser point cloud data at adjacent moments specifically uses the point-to-line distance $d_{\varepsilon}$ between a characteristic corner point and its corresponding feature line, or the point-to-plane distance $d_{H}$ between a characteristic plane point and its corresponding feature plane. Through $d_{\varepsilon}$ and $d_{H}$, the matching relation of the same feature point in two adjacent frames of laser point cloud data is obtained, so that the relative motion of the feature points can be calculated.

Let the current (k-th) frame of laser point cloud data be denoted $P_k$ and the previous ((k-1)-th) frame of laser point cloud data be denoted $P_{k-1}$. The two types of feature points are processed as follows.

Taking the a-th characteristic corner point $X_{(k,a)}$ of $P_k$ as an example, the method in step 1 for establishing the matching relation of the same feature point in two adjacent frames of laser point cloud data is first explained for characteristic corner points.
In $P_{k-1}$, the feature line segment corresponding to the characteristic corner point $X_{(k,a)}$ is found. The line segment can be represented by two points (b, c), where point cloud b is the point of $P_{k-1}$ closest to the characteristic corner point $X_{(k,a)}$, and point cloud c is the point closest to $X_{(k,a)}$ on an adjacent scan line; here the adjacent scan line is the radar scan line radially closest to the scan line of point cloud b. The point-to-line distance $d_{\varepsilon}$ is represented by the following formula (2):

$$d_{\varepsilon}=\frac{\bigl\lvert\bigl(X_{(k,a)}-X_{(k-1,b)}\bigr)\times\bigl(X_{(k,a)}-X_{(k-1,c)}\bigr)\bigr\rvert}{\bigl\lvert X_{(k-1,b)}-X_{(k-1,c)}\bigr\rvert} \tag{2}$$

where $X_{(k,a)}$ denotes the coordinates of the a-th characteristic corner point of $P_k$ in the radar coordinate system, and $X_{(k-1,b)}$ and $X_{(k-1,c)}$ denote the coordinates of the b-th and c-th characteristic corner points of $P_{k-1}$ in the radar coordinate system.
Taking the d-th characteristic plane point $X_{(k,d)}$ of $P_k$ as an example, the method in step 1 for establishing the matching relation of the same feature point in two adjacent frames of laser point cloud data is next explained for characteristic plane points.
In $P_{k-1}$, the feature plane corresponding to this characteristic plane point is found. The feature plane can be represented by three points (e, f, g), where point cloud e is the point of $P_{k-1}$ closest to the characteristic plane point $X_{(k,d)}$, point cloud f is the point of $P_{k-1}$ closest to $X_{(k,d)}$ on the same scan line as point cloud e, and point cloud g is the point closest to $X_{(k,d)}$ on an adjacent scan line. The point-to-plane distance $d_{H}$ is represented by the following formula (3):

$$d_{H}=\frac{\bigl\lvert\bigl(X_{(k,d)}-X_{(k-1,e)}\bigr)\cdot\bigl(\bigl(X_{(k-1,e)}-X_{(k-1,f)}\bigr)\times\bigl(X_{(k-1,e)}-X_{(k-1,g)}\bigr)\bigr)\bigr\rvert}{\bigl\lvert\bigl(X_{(k-1,e)}-X_{(k-1,f)}\bigr)\times\bigl(X_{(k-1,e)}-X_{(k-1,g)}\bigr)\bigr\rvert} \tag{3}$$

where $X_{(k-1,e)}$, $X_{(k-1,f)}$ and $X_{(k-1,g)}$ denote the coordinates of the e-th, f-th and g-th characteristic plane points of $P_{k-1}$ in the radar coordinate system.
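The point-to-line distance of formula (2) and the point-to-plane distance of formula (3) follow directly from the cross and dot products; the sketch below is a straightforward numpy transcription, assuming the matched points b, c (respectively e, f, g) have already been found by nearest-neighbour search.

```python
import numpy as np

def point_to_line_distance(x_a, x_b, x_c):
    """d_eps of formula (2): distance from corner point x_a to the line through x_b and x_c."""
    cross = np.cross(x_a - x_b, x_a - x_c)
    return np.linalg.norm(cross) / np.linalg.norm(x_b - x_c)

def point_to_plane_distance(x_d, x_e, x_f, x_g):
    """d_H of formula (3): distance from plane point x_d to the plane through x_e, x_f, x_g."""
    normal = np.cross(x_e - x_f, x_e - x_g)
    return abs(np.dot(x_d - x_e, normal)) / np.linalg.norm(normal)
```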
The specific method in step 1 for constructing the reprojection cost function and iterating it in the three-dimensional solution space with a nonlinear optimization technique until the pose converges, so as to obtain the pose information of the unmanned aerial vehicle, comprises the following steps:

A reprojection cost function is established from $d_{\varepsilon}$ and $d_{H}$, the pose solving problem is converted into the nonlinear least squares problem represented by formula (4), and the optimal transformation matrix estimate $\hat T_{k-1,k}$ is solved for:

$$\hat T_{k-1,k}=\arg\min_{T_{k-1,k}}\frac{1}{2}\sum_{i=1}^{n}\bigl\lVert\tilde X_{(k-1,i)}-T_{k-1,k}\,\tilde X_{(k,i)}\bigr\rVert^{2} \tag{4}$$

where $X_{(k,i)}$ denotes the coordinates of the i-th feature point of $P_k$ in the radar coordinate system, n denotes the number of feature points, $\tilde X_{(k,i)}$ denotes the homogeneous coordinates of $X_{(k,i)}$ in $P_k$, and $\tilde X_{(k-1,i)}$ denotes the homogeneous coordinates of the point matched to $X_{(k,i)}$ in $P_{k-1}$.

The steps of establishing the matching relation of the same feature point in the two frames of laser point cloud data at adjacent moments and solving for $\hat T_{k-1,k}$ from the reprojection cost function in step 1 are repeated until the pose converges.
In one embodiment, the problem can be solved by the Levenberg-Marquardt method, which iterates using gradient information until the pose converges, specifically as follows:

First, the Jacobian matrix $J$ is calculated by the following formula (5):

$$J=\frac{\partial f}{\partial T_{k-1,k}} \tag{5}$$

where $f$ is the reprojection cost function represented by formula (4).

Then, the pose is updated according to the following formula (6) and the iteration continues until the reprojection cost function converges:

$$T_{k-1,k}\leftarrow T_{k-1,k}-\bigl(J^{T}J+\lambda\,\operatorname{diag}(J^{T}J)\bigr)^{-1}J^{T}f \tag{6}$$

where $\lambda$ is the Levenberg-Marquardt iteration parameter: the smaller $\lambda$ is, the closer the update is to the Gauss-Newton method, and the larger $\lambda$ is, the closer it is to gradient descent.
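For reference, a minimal sketch of the nonlinear least-squares pose solve is given below; it assumes point-to-point residuals between matched feature points and uses scipy's built-in Levenberg-Marquardt solver in place of the explicit update of formulas (5) and (6), so it illustrates the idea rather than the exact implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose(src, dst, x0=np.zeros(6)):
    """Estimate the transform (rotation vector + translation, 6 parameters) mapping the
    feature points of P_k (src, Nx3) onto their matches in P_{k-1} (dst, Nx3), in the
    spirit of formula (4).  scipy's 'lm' method plays the role of the update (5)-(6)."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        return ((src @ R.T + t) - dst).ravel()           # stacked reprojection residuals
    sol = least_squares(residuals, x0, method="lm")
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```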
Step 2 specifically comprises the following steps:

Step 21, according to the pose information of the unmanned aerial vehicle output in step 1, clustering the whole point cloud into a plurality of point cloud clusters by obstacle using a region growing method, so as to distinguish the point cloud clusters as independent obstacle targets.

Step 22, calculating a difference degree function from the relative motion between the two frames calculated by the laser odometer and from the centroid and normal differences of the obstacles in the multi-frame laser point cloud data, establishing an association matrix, and associating the same obstacles across the multi-frame laser point cloud data.

Step 23, according to the association result of the dynamic obstacles and combined with the inter-frame relative motion $\hat T_{k-1,k}$ calculated by the laser odometer, judging the motion state of each obstacle, so as to obtain the motion direction and speed of the dynamic obstacles, namely state estimation.
In an embodiment, since the ground may itself be an obstacle in the flight environment of the unmanned aerial vehicle, the conventional ground segmentation operation is not required. Considering the large normal angle changes between different obstacles, a curvature-based region growing method is selected to segment and cluster the laser point cloud data in the flight environment of the unmanned aerial vehicle. Step 21 specifically comprises:

Step 211, selecting seed points: the point with the minimum curvature value obtained in step 12 is taken as the starting seed point $p_0$. This is because the point of minimum curvature lies in a flat region, and starting growth from the flattest region reduces the total number of segments.

Step 212, establishing the growth criterion: for each seed point $p_i$, all of its neighboring points are searched and the following steps are executed:

Step 2121, calculating the normal angle difference between each neighboring point and the current seed point $p_i$; if the difference is smaller than the set angle threshold, the process proceeds to step 2122. Since the normal direction of a typical plane changes gradually, a threshold of 20° may be used; that is, a change exceeding 20° is considered a large normal variation.

Step 2122, calculating the curvature of the neighboring point; if the curvature value is smaller than a set curvature threshold (for example, 1.0), the point is added to the current seed point set.

The process returns to step 211 until no neighboring point of the current seed point set satisfies the growth criterion. The seed points $p_i$ are then deleted from the originally acquired point cloud set, and the seed point set is saved as an obstacle point cloud cluster.
Step 213, setting termination conditions: repeating the steps 211 to 212 until all the point clouds in the point cloud set are processed.
Of course, besides the region growing method, other existing methods may be used to perform the segmentation operation.
The "data association" in step 22 specifically includes:

Step 221, the relative motion between the two frames calculated by the laser odometer is $\hat T_{k-1,k}$, and the difference degree function $D_{u,v}$ is calculated from the centroid and normal differences of the obstacles in the multi-frame laser point cloud data using formula (7):

$$D_{u,v}=\alpha\,\bigl\lVert \tilde c_{(k,v)}-\hat T_{k-1,k}\,\tilde c_{(k-1,u)}\bigr\rVert+\beta\,\angle\bigl(n_{(k,v)},\,\hat R_{k-1,k}\,n_{(k-1,u)}\bigr) \tag{7}$$

where $\hat R_{k-1,k}\in SO(3)$ is the rotation matrix corresponding to $\hat T_{k-1,k}$, $\tilde c_{(k-1,u)}$ is the homogeneous coordinate representation of the centroid position of the u-th obstacle of the (k-1)-th frame in three-dimensional space (and $\tilde c_{(k,v)}$ that of the v-th obstacle of the k-th frame), $n_{(k-1,u)}$ and $n_{(k,v)}$ are the corresponding obstacle normals, and $\alpha$ and $\beta$ are the centroid distance difference weight and the normal angle difference weight, respectively. Since the two terms of formula (7) have different dimensions, $\alpha$ and $\beta$ must be normalized by the mean value of their respective terms: for example, if the approximate mean value of the first term is 2 m, then $\alpha$ = centroid distance importance / 2, and $\beta$ is obtained in the same way, with centroid distance importance + normal angle importance = 1.

Step 222, establishing the association matrix: $P_{k-1}$ contains m detected obstacles in total, recorded as $O_{(k-1,u)}$ with $u=1,2,\ldots,m$; $P_k$ contains n detected obstacles in total, recorded as $O_{(k,v)}$ with $v=1,2,\ldots,n$. The association matrix $D_k$ is represented by the following formula (8):

$$D_k=\begin{bmatrix}D_{1,1}&D_{2,1}&\cdots&D_{m,1}\\D_{1,2}&D_{2,2}&\cdots&D_{m,2}\\\vdots&\vdots&\ddots&\vdots\\D_{1,n}&D_{2,n}&\cdots&D_{m,n}\end{bmatrix} \tag{8}$$
where $D_{u,v}$ denotes the difference degree calculated by formula (7). The second subscript is the same within each row, meaning that each row corresponds to one obstacle of the current frame: the subscripts of the first row are all 1, representing the first obstacle; the subscripts of the second row are all 2, representing the second obstacle; and so on, with the subscripts of the v-th row all equal to v, representing the v-th obstacle.

Step 223, according to the association matrix (8) established in step 222, the dynamic obstacles of the current frame $P_k$ and the previous frame $P_{k-1}$ are associated. Specifically, when an obstacle is to be associated, the obstacle of the current frame $P_k$ corresponding to the selected row of the association matrix is taken as the selected obstacle; the difference degrees between the elements of that row and all obstacles of the previous frame $P_{k-1}$ are then compared, the obstacle of $P_{k-1}$ corresponding to the element with the smallest difference degree is taken as the associated obstacle of the selected obstacle, and the selected obstacle and the associated obstacle are judged to be the same obstacle.

The above steps are repeated until the obstacles corresponding to every row of the association matrix of the current frame $P_k$ have been associated.

Since the above operations are carried out on the potential dynamic point cloud, an obstacle that is successfully associated is determined to be a dynamic obstacle.

Further, if the subscript u of the $P_{k-1}$ element corresponding to the minimum difference degree satisfies u > v, an obstacle has been lost; if it satisfies u < v, a new dynamic obstacle has appeared.

For example: $P_k$ contains 10 obstacles, the previous frame $P_{k-1}$ also contains 10 obstacles, and these 10 obstacles correspond one-to-one in the two frames of point cloud data, so the association matrix is a 10×10 matrix; the obstacle of the current frame $P_k$ corresponding to the 10 elements of the first row is taken as the selected obstacle. Step 223 then computes, with formula (7), the difference degree between each of the 10 elements of the first row and the obstacles of $P_{k-1}$; the obstacle of $P_{k-1}$ corresponding to the smallest computed difference degree is taken as the associated obstacle of the selected obstacle, and the two are considered the same obstacle.
Step 23 specifically includes:

According to the dynamic obstacles associated in step 21, and combined with the relative motion $\hat T_{k-1,k}$ of the two frames of point cloud data calculated in step 1, the motion state of each obstacle is judged, so as to obtain the motion direction and speed of the dynamic obstacles.

For example, step 23 may be achieved by the following steps, or by other methods known in the art:

If an obstacle is associated in two consecutive frames of point cloud data, its coordinates in $P_{k-1}$ are $t_{k-1}$, its coordinates in $P_k$ are $t_k$, and the relative motion of the two frames of point cloud data calculated in step 1 is $\hat T_{k-1,k}$. The movement distance of the obstacle is then calculated with formula (9):

$$\Delta d=\bigl\lVert \tilde t_{k}-\hat T_{k-1,k}\,\tilde t_{k-1}\bigr\rVert \tag{9}$$

According to the frame rate f of the laser radar sensor, the speed of the dynamic obstacle is $V=\Delta d\times f$ and its direction is $v=t_k-t_{k-1}$, where $\tilde t_{k}$ denotes the homogeneous coordinates of $t_k$ and $\tilde t_{k-1}$ denotes the homogeneous coordinates of $t_{k-1}$.
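A short sketch of this state estimation step, assuming the obstacle centroids and the 4×4 transform $\hat T_{k-1,k}$ are available as numpy arrays:

```python
import numpy as np

def obstacle_motion(T, t_prev, t_cur, frame_rate):
    """Step 23 / formula (9): move the previous centroid by the ego-motion T, take the
    residual displacement as the obstacle's own movement distance, and scale by the
    lidar frame rate.  The direction is taken as the centroid difference t_k - t_{k-1},
    as in the text above."""
    R, t = T[:3, :3], T[:3, 3]
    delta_d = np.linalg.norm(t_cur - (R @ t_prev + t))   # formula (9), ego-motion compensated
    speed = delta_d * frame_rate                         # V = Δd × f
    direction = t_cur - t_prev                           # motion direction of the obstacle
    return speed, direction
```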
The dynamic obstacle information in the point cloud data can be obtained through the steps.
If static obstacle information is to be obtained, a grid map can be established: the point cloud of the corresponding range is stored in the grid map according to the height of the unmanned aerial vehicle, and an occupied grid cell indicates that the unmanned aerial vehicle cannot pass through it, thereby achieving efficient and safe obstacle avoidance in a dynamic environment.
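For the static-obstacle grid map mentioned above, a minimal sketch might look as follows; the grid resolution, grid size and altitude-band selection are assumptions of the sketch.

```python
import numpy as np

def build_occupancy_grid(points, origin, resolution=0.5, shape=(200, 200)):
    """Project the points in the altitude band of interest onto a 2-D grid and mark
    occupied cells; an occupied cell means the UAV cannot pass through it."""
    grid = np.zeros(shape, dtype=bool)
    cells = np.floor((points[:, :2] - origin) / resolution).astype(int)
    valid = (cells[:, 0] >= 0) & (cells[:, 0] < shape[0]) & \
            (cells[:, 1] >= 0) & (cells[:, 1] < shape[1])
    grid[cells[valid, 0], cells[valid, 1]] = True
    return grid
```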
The invention achieves the purposes of detecting dynamic obstacles and improving the system safety by an obstacle detection and tracking technology and combining the relative motion of two frames of point cloud data.
Finally, it should be pointed out that: the above examples are only for illustrating the technical solutions of the present invention, and are not limited thereto. Those of ordinary skill in the art will understand that: modifications can be made to the technical solutions described in the foregoing embodiments, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (7)

1. A method for detecting dynamic obstacles of an unmanned aerial vehicle based on a laser odometer is characterized by comprising the following steps:
step 1, extracting point clouds that conform to preset curvature characteristics as feature points from the geometric characteristics of the laser point cloud data, establishing the matching relation of the same feature point in two frames of laser point cloud data at adjacent moments, constructing a reprojection cost function, and iterating the pose in the three-dimensional solution space with a nonlinear optimization technique until the pose converges, so as to obtain the pose information of the unmanned aerial vehicle;
step 2, which specifically comprises:
step 21, according to the pose information of the unmanned aerial vehicle output in step 1, clustering the whole point cloud into a plurality of point cloud clusters by obstacle using a region growing method, so as to distinguish the point cloud clusters as independent obstacle targets;
step 22, calculating a difference degree function from the relative motion between the two frames calculated by the laser odometer and from the centroid and normal differences of the obstacles in the multi-frame laser point cloud data, establishing an association matrix, and associating the obstacles across the multi-frame laser point cloud data;
and step 23, according to the association result of the dynamic obstacles and combined with the inter-frame relative motion calculated by the laser odometer, judging the motion state of each obstacle, thereby obtaining the motion direction and motion speed of the dynamic obstacles.
2. The method for detecting the dynamic obstacle of the laser radar based on the unmanned aerial vehicle pose estimation as claimed in claim 1, wherein in step 1, the method for extracting the point cloud conforming to the preset curvature feature as the feature point specifically comprises:
step 11, carrying out voxel filtering on the point cloud data;
step 12, calculating the curvature value r of each point in the point cloud data using the following formula (1) and sorting the points by curvature value r, wherein the points with the largest curvature values are defined as characteristic corner points and the points with the smallest curvature values are defined as characteristic plane points:

$$r=\frac{1}{|S|\cdot\lVert X_{(k,i)}\rVert}\Bigl\lVert\sum_{j\in S,\;j\neq i}\bigl(X_{(k,i)}-X_{(k,j)}\bigr)\Bigr\rVert \tag{1}$$

where $X_{(k,i)}$ denotes the coordinates of the i-th point of the k-th frame of point cloud data $P_k$ in the radar coordinate system, $X_{(k,j)}$ denotes the coordinates of the j-th point of $P_k$ in the radar coordinate system, and $S$ denotes the neighborhood of $X_{(k,i)}$.
3. The method for detecting dynamic obstacles of the laser radar based on unmanned aerial vehicle pose estimation as claimed in claim 2, wherein in step 1, the method for establishing the matching relation of the same feature point in two adjacent frames of point cloud data comprises the point-to-line distance $d_{\varepsilon}$ between a characteristic corner point and its feature line and/or the point-to-plane distance $d_{H}$ between a characteristic plane point and its feature plane;

when the feature point is a characteristic corner point, in the (k-1)-th frame of point cloud data $P_{k-1}$, the feature line segment corresponding to the a-th characteristic corner point $X_{(k,a)}$ of $P_k$ is found; the line segment is defined by point cloud b and point cloud c, where point cloud b is the point of $P_{k-1}$ closest to the characteristic corner point $X_{(k,a)}$, and point cloud c is the point closest to $X_{(k,a)}$ on an adjacent scan line; the point-to-line distance $d_{\varepsilon}$ is represented by the following formula (2):

$$d_{\varepsilon}=\frac{\bigl\lvert\bigl(X_{(k,a)}-X_{(k-1,b)}\bigr)\times\bigl(X_{(k,a)}-X_{(k-1,c)}\bigr)\bigr\rvert}{\bigl\lvert X_{(k-1,b)}-X_{(k-1,c)}\bigr\rvert} \tag{2}$$

where $X_{(k-1,b)}$ denotes the coordinates of the b-th characteristic corner point of $P_{k-1}$ in the radar coordinate system and $X_{(k-1,c)}$ denotes the coordinates of the c-th characteristic corner point of $P_{k-1}$ in the radar coordinate system;

when the feature point is a characteristic plane point, the feature plane corresponding to the d-th characteristic plane point $X_{(k,d)}$ is found in $P_{k-1}$; the feature plane is defined by point cloud e, point cloud f and point cloud g, where point cloud e is the point of $P_{k-1}$ closest to the characteristic plane point $X_{(k,d)}$, point cloud f is the point of $P_{k-1}$ closest to $X_{(k,d)}$ on the same scan line as point cloud e, and point cloud g is the point closest to $X_{(k,d)}$ on an adjacent scan line; the point-to-plane distance $d_{H}$ is represented by the following formula (3):

$$d_{H}=\frac{\bigl\lvert\bigl(X_{(k,d)}-X_{(k-1,e)}\bigr)\cdot\bigl(\bigl(X_{(k-1,e)}-X_{(k-1,f)}\bigr)\times\bigl(X_{(k-1,e)}-X_{(k-1,g)}\bigr)\bigr)\bigr\rvert}{\bigl\lvert\bigl(X_{(k-1,e)}-X_{(k-1,f)}\bigr)\times\bigl(X_{(k-1,e)}-X_{(k-1,g)}\bigr)\bigr\rvert} \tag{3}$$

where $X_{(k-1,e)}$, $X_{(k-1,f)}$ and $X_{(k-1,g)}$ denote the coordinates of the e-th, f-th and g-th characteristic plane points of $P_{k-1}$ in the radar coordinate system.
4. The lidar dynamic obstacle detection method taking into account unmanned aerial vehicle pose estimation as claimed in any one of claims 1 to 3, wherein the method of "constructing a reprojection cost function, iterating the reprojection cost function in a three-dimensional solution space by using a nonlinear optimization technique until the pose converges and obtaining the unmanned aerial vehicle pose information" in step 1 specifically comprises:

establishing a reprojection cost function from $d_{\varepsilon}$ and $d_{H}$, converting the pose solving problem into the nonlinear least squares problem represented by formula (4), and solving for the optimal transformation matrix estimate $\hat T_{k-1,k}$:

$$\hat T_{k-1,k}=\arg\min_{T_{k-1,k}}\frac{1}{2}\sum_{i=1}^{n}\bigl\lVert\tilde X_{(k-1,i)}-T_{k-1,k}\,\tilde X_{(k,i)}\bigr\rVert^{2} \tag{4}$$

where $X_{(k,i)}$ denotes the coordinates of the i-th feature point of $P_k$ in the radar coordinate system, n denotes the number of feature points, $\tilde X_{(k,i)}$ denotes the homogeneous coordinates of $X_{(k,i)}$ in $P_k$, and $\tilde X_{(k-1,i)}$ denotes the homogeneous coordinates of the point matched to $X_{(k,i)}$ in $P_{k-1}$;

the steps of establishing the matching relation of the same feature point in the two frames of laser point cloud data at adjacent moments and solving for $\hat T_{k-1,k}$ from the reprojection cost function in step 1 are repeated until the pose converges.
5. The lidar dynamic obstacle detection method taking into account unmanned aerial vehicle pose estimation according to claim 4, wherein step 21 specifically comprises:
step 211, using the point with the minimum curvature value obtained in step 12 as the starting seed point $p_0$;
step 212, for each seed point $p_i$, searching all of its neighboring points and executing the following steps:
step 2121, calculating the normal angle difference between each neighboring point and the current seed point $p_i$; if the difference is smaller than the set angle threshold, proceeding to step 2122;
step 2122, calculating the curvature of the neighboring point; if the curvature value is smaller than a set curvature threshold, adding the point to the current seed point set;
returning to step 211 until no neighboring point of the current seed point set satisfies the growth criterion; the seed points are then deleted from the originally collected point cloud set, and the seed point set is saved as an obstacle point cloud cluster.
6. The lidar dynamic obstacle detection method taking into account pose estimation of an unmanned aerial vehicle of claim 5, wherein the "data association" in step 22 specifically comprises:

step 221, the relative motion between the two frames calculated by the laser odometer is $\hat T_{k-1,k}$, and the difference degree function $D_{u,v}$ is calculated from the centroid and normal differences of the obstacles in the multi-frame laser point cloud data using formula (7):

$$D_{u,v}=\alpha\,\bigl\lVert \tilde c_{(k,v)}-\hat T_{k-1,k}\,\tilde c_{(k-1,u)}\bigr\rVert+\beta\,\angle\bigl(n_{(k,v)},\,\hat R_{k-1,k}\,n_{(k-1,u)}\bigr) \tag{7}$$

where $\hat R_{k-1,k}\in SO(3)$ is the rotation matrix corresponding to $\hat T_{k-1,k}$, $\tilde c_{(k-1,u)}$ is the homogeneous coordinate representation of the centroid position of the u-th obstacle of the (k-1)-th frame in three-dimensional space, $n_{(k-1,u)}$ and $n_{(k,v)}$ are the corresponding obstacle normals, and $\alpha$ and $\beta$ are the centroid distance difference weight and the normal angle difference weight, respectively;

step 222, $P_{k-1}$ contains m detected obstacles in total, recorded as $O_{(k-1,u)}$ with $u=1,2,\ldots,m$, and $P_k$ contains n detected obstacles in total, recorded as $O_{(k,v)}$ with $v=1,2,\ldots,n$; the association matrix $D_k$ is represented by the following formula (8):

$$D_k=\begin{bmatrix}D_{1,1}&D_{2,1}&\cdots&D_{m,1}\\D_{1,2}&D_{2,2}&\cdots&D_{m,2}\\\vdots&\vdots&\ddots&\vdots\\D_{1,n}&D_{2,n}&\cdots&D_{m,n}\end{bmatrix} \tag{8}$$

where $D_{u,v}$ denotes the difference degree calculated by formula (7);

step 223, according to the association matrix (8) established in step 222, associating the dynamic obstacles of the current frame $P_k$ and the previous frame $P_{k-1}$; repeating the above steps until the obstacles corresponding to every row of the association matrix of the current frame $P_k$ have been associated; if the subscript u of the $P_{k-1}$ element corresponding to the minimum difference degree satisfies u > v, an obstacle has been lost; if it satisfies u < v, a new dynamic obstacle has appeared.
7. The lidar dynamic obstacle detection method taking into account pose estimation of an unmanned aerial vehicle of claim 5, wherein step 23 specifically comprises:

according to the dynamic obstacles associated in step 21, and combined with the relative motion $\hat T_{k-1,k}$ of the two frames of point cloud data calculated in step 1, judging the motion state of the obstacle and calculating the movement distance of the obstacle with formula (9):

$$\Delta d=\bigl\lVert \tilde t_{k}-\hat T_{k-1,k}\,\tilde t_{k-1}\bigr\rVert \tag{9}$$

according to the frame rate f of the laser radar sensor, the speed of the dynamic obstacle is $V=\Delta d\times f$ and the direction is $v=t_k-t_{k-1}$, where $t_{k-1}$ and $t_k$ are the coordinates of the associated obstacle in $P_{k-1}$ and $P_k$, $\tilde t_{k}$ denotes the homogeneous coordinates of $t_k$, and $\tilde t_{k-1}$ denotes the homogeneous coordinates of $t_{k-1}$.
CN202110604058.7A 2021-05-31 2021-05-31 Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer Active CN113345009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110604058.7A CN113345009B (en) 2021-05-31 2021-05-31 Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110604058.7A CN113345009B (en) 2021-05-31 2021-05-31 Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer

Publications (2)

Publication Number Publication Date
CN113345009A true CN113345009A (en) 2021-09-03
CN113345009B CN113345009B (en) 2022-06-14

Family

ID=77473418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110604058.7A Active CN113345009B (en) 2021-05-31 2021-05-31 Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer

Country Status (1)

Country Link
CN (1) CN113345009B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114442054A (en) * 2021-12-22 2022-05-06 上海宾通智能科技有限公司 Sensor and chassis combined calibration system and method for mobile robot
CN114812503A (en) * 2022-04-14 2022-07-29 湖北省水利水电规划勘测设计院 Cliff point cloud extraction method based on airborne laser scanning
CN115273068A (en) * 2022-08-02 2022-11-01 湖南大学无锡智能控制研究院 Laser point cloud dynamic obstacle removing method and device and electronic equipment
CN116878471A (en) * 2023-09-06 2023-10-13 湖南湘船重工有限公司 Unmanned survey and drawing system on water
CN117148837A (en) * 2023-08-31 2023-12-01 上海木蚁机器人科技有限公司 Dynamic obstacle determination method, device, equipment and medium
CN117291973A (en) * 2023-11-24 2023-12-26 城市之光(深圳)无人驾驶有限公司 Quick robust initial positioning method based on laser point cloud

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3062066A1 (en) * 2015-02-26 2016-08-31 Hexagon Technology Center GmbH Determination of object data by template-based UAV control
CN108594851A (en) * 2015-10-22 2018-09-28 飞智控(天津)科技有限公司 A kind of autonomous obstacle detection system of unmanned plane based on binocular vision, method and unmanned plane
CN110045364A (en) * 2019-03-01 2019-07-23 上海大学 Dynamic target tracking and static object detection system and method based on the identification of gradual fault image feature
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3062066A1 (en) * 2015-02-26 2016-08-31 Hexagon Technology Center GmbH Determination of object data by template-based UAV control
CN108594851A (en) * 2015-10-22 2018-09-28 飞智控(天津)科技有限公司 A kind of autonomous obstacle detection system of unmanned plane based on binocular vision, method and unmanned plane
CN110045364A (en) * 2019-03-01 2019-07-23 上海大学 Dynamic target tracking and static object detection system and method based on the identification of gradual fault image feature
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOLONG CHEN et al.: "Non-Signalized Intersection Network Management With Connected and Automated Vehicles", 《IEEE ACCESS》 *
宿勇 (SU Yong): "基于曲率连续曲线的无人机路径规划方法" (UAV path planning method based on curvature-continuous curves), 《舰船电子工程》 (Ship Electronic Engineering) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114442054A (en) * 2021-12-22 2022-05-06 上海宾通智能科技有限公司 Sensor and chassis combined calibration system and method for mobile robot
CN114812503A (en) * 2022-04-14 2022-07-29 湖北省水利水电规划勘测设计院 Cliff point cloud extraction method based on airborne laser scanning
CN114812503B (en) * 2022-04-14 2024-05-28 湖北省水利水电规划勘测设计院 Cliff point cloud extraction method based on airborne laser scanning
CN115273068A (en) * 2022-08-02 2022-11-01 湖南大学无锡智能控制研究院 Laser point cloud dynamic obstacle removing method and device and electronic equipment
CN115273068B (en) * 2022-08-02 2023-05-12 湖南大学无锡智能控制研究院 Laser point cloud dynamic obstacle removing method and device and electronic equipment
CN117148837A (en) * 2023-08-31 2023-12-01 上海木蚁机器人科技有限公司 Dynamic obstacle determination method, device, equipment and medium
CN116878471A (en) * 2023-09-06 2023-10-13 湖南湘船重工有限公司 Unmanned survey and drawing system on water
CN116878471B (en) * 2023-09-06 2023-11-10 湖南湘船重工有限公司 Unmanned survey and drawing system on water
CN117291973A (en) * 2023-11-24 2023-12-26 城市之光(深圳)无人驾驶有限公司 Quick robust initial positioning method based on laser point cloud
CN117291973B (en) * 2023-11-24 2024-02-13 城市之光(深圳)无人驾驶有限公司 Quick robust initial positioning method based on laser point cloud

Also Published As

Publication number Publication date
CN113345009B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN113345009B (en) Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer
CN110781827B (en) Road edge detection system and method based on laser radar and fan-shaped space division
CN113345008B (en) Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
CN111260683A (en) Target detection and tracking method and device for three-dimensional point cloud data
Cole et al. Using laser range data for 3D SLAM in outdoor environments
CN113506318B (en) Three-dimensional target perception method under vehicle-mounted edge scene
WO2022188663A1 (en) Target detection method and apparatus
CN112634451A (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN109255302A (en) Object recognition methods and terminal, mobile device control method and terminal
CN108986148B (en) Method for realizing multi-intelligent-trolley collaborative search, identification and tracking of specific target group
CN111340875A (en) Space moving target detection method based on three-dimensional laser radar
Huh et al. Vision-based sense-and-avoid framework for unmanned aerial vehicles
Werber et al. Interesting areas in radar gridmaps for vehicle self-localization
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
CN111582156A (en) Oblique photography-based tall and big vegetation extraction method for urban three-dimensional model
CN110531340B (en) Identification processing method of laser radar point cloud data based on deep learning
CN112509014A (en) Robust interpolation light stream computing method matched with pyramid shielding detection block
CN116048120B (en) Autonomous navigation system and method for small four-rotor unmanned aerial vehicle in unknown dynamic environment
CN116299525A (en) Dynamic environment RGB-D vision SLAM method based on point cloud region correlation
CN111239761B (en) Method for indoor real-time establishment of two-dimensional map
Wang et al. A new grid map construction method for autonomous vehicles
CN113554705A (en) Robust positioning method for laser radar in changing scene
CN112731335A (en) Multi-unmanned aerial vehicle cooperative positioning method based on whole-region laser scanning
CN113192133A (en) Monocular instant positioning and dense semantic map construction method based on semantic plane

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant