CN113341963B - Laser radar-based navigation method and system for automatically returning robot to base station


Info

Publication number
CN113341963B
Authority
CN
China
Prior art keywords
point cloud
point
base station
points
model
Prior art date
Legal status
Active
Application number
CN202110598548.0A
Other languages
Chinese (zh)
Other versions
CN113341963A (en)
Inventor
孙丛林
张世范
马思宇
Current Assignee
Shenzhen Weirui Jingke Electronic Co., Ltd.
Original Assignee
Shenzhen Weirui Jingke Electronic Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Weirui Jingke Electronic Co., Ltd.
Priority to CN202110598548.0A
Publication of CN113341963A
Application granted
Publication of CN113341963B


Classifications

    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory
    • G05D1/0223 - Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means
    • G05D1/0246 - Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means
    • G05D1/0253 - Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0257 - Control of position or course in two dimensions specially adapted to land vehicles, using a radar
    • G05D1/0276 - Control of position or course in two dimensions specially adapted to land vehicles, using signals provided by a source external to the vehicle
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to a laser radar-based navigation method and system for automatically returning a robot to a base station. The navigation method comprises the following steps: collecting multiple frames of first point cloud data for each orientation of the base station; for each orientation, establishing a corresponding base station point cloud model from the multiple frames of first point cloud data of that orientation; determining a corresponding model initial entry point from each base station point cloud model; determining an initial position for the robot to travel to according to pre-stored starting point data; scanning surrounding second point cloud data at the initial position to obtain a scene point cloud; obtaining an initial entry point from the scene point cloud, the base station point cloud models and the corresponding model initial entry points, which widens the detection range of the robot when returning to the base station; scanning an intensity point cloud of the base station at the initial entry point; obtaining a final entry point from the intensity point cloud; and determining the navigation route by which the robot returns to the base station from the initial position, the initial entry point and the final entry point, which improves the accuracy with which the robot identifies and aligns with the base station.

Description

Laser radar-based navigation method and system for automatically returning robot to base station
Technical Field
The invention relates to the field of robot control, and in particular to a laser radar-based navigation method and system for automatically returning a robot to a base station.
Background
Existing robots are generally equipped with sensors that provide positioning, mapping and navigation functions. When a vision sensor on the robot is used to identify a target base station, only rough position information of the base station can be obtained, and it is difficult to guarantee that the robot aligns with the target base station quickly and accurately within an effective time. At present, most robots return to the base station by infrared means: an infrared emitter and an infrared receiver are mounted on the robot, and the relative position of the base station and the robot is detected by emitting and receiving infrared light. However, when the robot is far from the base station, the infrared light can be blocked by obstacles and fail to be received, so the robot takes a long time to find the base station and the identification precision is low; the detection range when returning to the base station is narrow, and good hardware support is required, which increases cost. Laser radar and ultrasonic radar schemes also exist, but for more complex robots the base station design must consider not only the return to the base station but also other components attached to the base station, such as a water tank or a storage box, so the identification is complex and its accuracy and effect are poor.
In view of the above problems, a new navigation method for automatically returning to the base station is needed to improve the accuracy with which the robot returns to the base station.
Disclosure of Invention
The invention aims to provide a laser radar-based navigation method and system for automatically returning a robot to a base station, which can improve the accuracy with which the robot automatically identifies and aligns with the base station.
In order to achieve the above object, the present invention provides the following solutions:
A laser radar-based navigation method for automatically returning a robot to a base station, comprising the following steps:
collecting multiple frames of first point cloud data for each orientation of the base station, on the central axis of the base station;
for each orientation, establishing a corresponding base station point cloud model from the multiple frames of first point cloud data of that orientation;
determining a corresponding model initial entry point from the base station point cloud model;
determining an initial position for the robot to travel to according to pre-stored starting point data;
scanning surrounding second point cloud data at the initial position to obtain a scene point cloud;
obtaining an initial entry point for the robot to continue travelling from the scene point cloud, the base station point cloud models and the corresponding model initial entry points;
scanning third point cloud data of the base station at the initial entry point to obtain an intensity point cloud;
obtaining a final entry point from the intensity point cloud;
and determining the navigation route by which the robot returns to the base station from the initial position, the initial entry point and the final entry point.
Optionally, the first point cloud data includes distance information, angle information and the sequence number of each point;
for each orientation, establishing a corresponding base station point cloud model from the multiple frames of first point cloud data of that orientation specifically includes:
for each frame of first point cloud data, determining the position information of the corresponding conversion points in a Cartesian coordinate system from the distance information, angle information and sequence number of each point;
judging, according to the sequence numbers of the conversion points of that frame, whether the point cloud in each sequence number range contains the base station point cloud, and discarding the point cloud of a sequence number range in the first point cloud data if it does not contain the base station point cloud;
and obtaining the corresponding base station point cloud model from the processed multiple frames of first point cloud data.
Optionally, determining a corresponding model initial entry point from the base station point cloud model specifically includes:
selecting two groups of points with an axisymmetric relation in the base station point cloud model to form two groups of symmetric points;
fitting a straight line Y to the two groups of symmetric points;
and calculating the corresponding model initial entry point from the two groups of symmetric points and the straight line Y.
Optionally, fitting a straight line Y to the two groups of symmetric points specifically includes:
calculating the mean point of each of the two groups of symmetric points to obtain point E and point F;
and obtaining the straight line Y from point E and point F.
Optionally, obtaining an initial entry point for the robot to continue travelling from the scene point cloud, each base station point cloud model and the corresponding model initial entry points specifically includes:
calculating the normal of each point in the scene point cloud and in each base station point cloud model;
downsampling the scene point cloud and each base station point cloud model respectively to obtain scene point cloud key points and base station point cloud model key points;
calculating the descriptors of the scene point cloud from the scene point cloud key points and the normals of the points in the scene point cloud;
for each base station point cloud model, calculating the descriptors of that model from its key points and the normals of the points in the model;
matching the descriptors of the scene point cloud with the descriptors of each base station point cloud model to obtain a matched scene point cloud;
and obtaining the initial entry point in the scene point cloud from the matched scene point cloud and the model initial entry point corresponding to the base station point cloud model.
Optionally, obtaining the initial entry point in the scene point cloud from the matched scene point cloud and the corresponding model initial entry point specifically includes:
clustering the points in the matched scene point cloud to obtain the base station point cloud in the matched scene point cloud and a rotation-translation matrix;
matching the base station point cloud in the matched scene point cloud to the base station point cloud model according to the rotation-translation matrix;
and obtaining the initial entry point in the scene point cloud from the matched scene point cloud and the corresponding model initial entry point.
Optionally, the intensity point cloud includes distance information, angle information and intensity information;
obtaining a final entry point from the intensity point cloud specifically includes:
partitioning the intensity point cloud into a number of point cloud blocks according to the angle and distance information of each point;
for each point cloud block, calculating the intensity value of each point in the block from the corresponding intensity information, and obtaining the mean intensity of the points in the block;
comparing the intensity value of each point in the block with the mean intensity of the block;
screening out several segments of consecutive point cloud points whose intensity exceeds the mean;
calculating the mean points of the segments of consecutive point cloud points and fitting a straight line L;
and calculating the final entry point from the segments of consecutive point cloud points and the straight line L.
Optionally, obtaining a final entry point from the intensity point cloud further includes:
performing impurity point elimination on the segments of consecutive point cloud points.
In order to achieve the above purpose, the present invention also provides the following solutions:
A laser radar-based navigation system for automatically returning a robot to a base station, the laser radar-based navigation system comprising:
a model point cloud acquisition unit for collecting multiple frames of first point cloud data for each orientation on the central axis of the base station;
a model building unit, connected to the model point cloud acquisition unit, for establishing, for each orientation, a corresponding base station point cloud model from the multiple frames of first point cloud data of that orientation;
a model initial entry point calculation unit, connected to the model building unit, for determining a corresponding model initial entry point from the base station point cloud model;
a scene point cloud acquisition unit for determining an initial position for the robot to travel to according to pre-stored starting point data, and scanning surrounding second point cloud data at the initial position to obtain a scene point cloud;
an initial entry point determination unit, connected to the model building unit, the model initial entry point calculation unit and the scene point cloud acquisition unit respectively, for obtaining an initial entry point for the robot to continue travelling from the scene point cloud, each base station point cloud model and the corresponding model initial entry points;
an intensity point cloud acquisition unit, connected to the initial entry point determination unit, for scanning third point cloud data of the base station at the initial entry point to obtain an intensity point cloud;
a final entry point determination unit, connected to the intensity point cloud acquisition unit, for obtaining a final entry point from the intensity point cloud;
and a navigation route determination unit, connected to the scene point cloud acquisition unit, the initial entry point determination unit and the final entry point determination unit respectively, for determining the navigation route by which the robot returns to the base station from the initial position, the initial entry point and the final entry point.
Optionally, the first point cloud data includes distance information, angle information and the sequence number of each point;
the model building unit includes:
a conversion module, connected to the model point cloud acquisition unit, for determining, for each frame of first point cloud data, the position information of the corresponding conversion points in a Cartesian coordinate system from the distance information, angle information and sequence number of each point in that frame;
an extraction module, connected to the conversion module, for judging, according to the sequence numbers of the conversion points of that frame, whether the point cloud in each sequence number range contains the base station point cloud, and discarding the point cloud of a sequence number range if it does not contain the base station point cloud;
and a base station point cloud model building module, connected to the extraction module, for obtaining the corresponding base station point cloud model from the processed multiple frames of first point cloud data.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the base station is identified through point clouds, and a point cloud model is established for each orientation of the base station. When the robot travels to the initial position given by the pre-stored starting point data, it scans the surrounding point cloud data and matches them against the point cloud models of each orientation of the base station to obtain the initial entry point for continuing to travel, which gives the robot a wider detection range when returning to the base station. The robot then moves to a position directly in front of the base station, scans the surrounding point cloud data again at the initial entry point, and identifies the intensity of the base station, so that the navigation route by which the robot returns to the base station is more accurate and the robot can align with and return to the base station precisely, further improving the accuracy with which the robot identifies and aligns with the base station.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
FIG. 1 is a flow chart of a method for navigating a robot to return to a base station automatically based on a laser radar of the present invention;
FIG. 2 is a schematic diagram of a base station point cloud scanned by a lidar;
FIG. 3 is a flow chart of the base station point cloud model building method;
FIG. 4 is a flow chart for determining an initial entry point of a model;
FIG. 5 is a schematic illustration of determining an initial entry point from the base station point cloud;
FIG. 6 is an overall flowchart for determining an initial entry point for a robot to continue traveling;
FIG. 7 is a flow chart of determining initial entry points in a scene point cloud based on matching scene point clouds and model initial entry points;
FIG. 8 is a flow chart for calculating a final entry point;
FIG. 9 is a schematic diagram of determining a final entry point from an intensity point cloud;
fig. 10 is a schematic block diagram of a laser radar-based navigation system for automatically returning a robot to a base station according to the present invention.
Symbol description:
the system comprises a model point cloud acquisition unit-1, a model construction unit-2, a conversion module-21, an extraction module-22, a base station point cloud model construction module-23, a model initial entry point calculation unit-3, a scene point cloud acquisition unit-4, an initial entry point determination unit-5, an intensity point cloud acquisition unit-6, a final entry point determination unit-7 and a navigation route determination unit-8.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The invention aims to provide a laser radar-based navigation method and system for automatically returning a robot to a base station. The base station is identified through point clouds, and a point cloud model is established for each orientation of the base station. When the robot travels to the initial position given by the pre-stored starting point data, it scans the surrounding point cloud data and matches them against the point cloud models of each orientation of the base station to obtain the initial entry point for continuing to travel, which gives the robot a wider detection range when returning to the base station. The robot then moves to a position close to the front of the base station, scans the surrounding point cloud data again at the initial entry point, and identifies the intensity of the base station, so that the navigation route when the robot returns to the base station is more accurate and the robot can align with and return to the base station precisely, further improving the accuracy with which the robot identifies and aligns with the base station.
In order that the above objects, features and advantages of the present invention become more readily apparent, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the laser radar-based navigation method for automatically returning a robot to a base station of the invention comprises the following steps:
S1: collecting multiple frames of first point cloud data for each orientation of the base station, on the central axis of the base station. Specifically, the first point cloud data includes distance information, angle information and the sequence number of each point.
Specifically, fig. 2 is a schematic diagram of the base station point cloud scanned by a laser radar. A laser radar of the same type as the robot's is placed on the central axis of the base station, about 1 meter away from it, and roughly ten frames of point cloud data scanned by the laser radar are obtained. A laser radar determines the distance, azimuth and speed of a target by transmitting a probe signal towards it and measuring the arrival time of the reflected signal, the direction of the beam, the frequency variation and other parameters. The point cloud is the set of target point positions determined from the range, direction and other information measured by the laser radar.
In this embodiment, when collecting the multiple frames of first point cloud data for each orientation of the base station, a laser radar may be placed in each direction on the central axis of the base station, so that the first point cloud data of each orientation is collected by laser radars in different directions. Alternatively, a single laser radar may be placed on the central axis facing the base station, and during collection the base station is rotated by different angles so that it no longer faces the laser radar squarely; in this way the point cloud data of each orientation of the base station is collected with one fixed laser radar.
S2: for each orientation, establishing the corresponding base station point cloud model from the multiple frames of first point cloud data of that orientation.
Specifically, as shown in fig. 3, S2 specifically includes:
S21: for each frame of first point cloud data, determining the position information of the corresponding conversion points in a Cartesian coordinate system from the distance information, angle information and sequence number of each point; the position information of each conversion point comprises the coordinates and the sequence number of that point.
Specifically, the position of the corresponding conversion point in the Cartesian coordinate system is determined according to the following formulas:
x = range_i * cos(base_angle + i * skip_angle)
y = range_i * sin(base_angle + i * skip_angle), for 0 ≤ i < Max_i;
where (x, y) are the coordinates of the conversion point in the Cartesian coordinate system, range_i is the distance from the i-th point to the origin, base_angle is the starting angle of the scan, skip_angle is the angular increment between consecutive points, Max_i is the number of point cloud points, and i is the sequence number of the point.
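As an illustrative sketch of this conversion (Python with numpy; the function and variable names are ours, not the patent's, and a 2D single-line scan is assumed):

```python
import numpy as np

def polar_to_cartesian(ranges, base_angle, skip_angle):
    """Convert one frame of lidar ranges to Cartesian conversion points.

    ranges     -- distances range_i, indexed by point sequence number i
    base_angle -- starting angle of the scan, in radians
    skip_angle -- angular increment between consecutive points, in radians
    Returns an (N, 2) array of (x, y); the row index is the point's
    sequence number, so the sequence information is preserved.
    """
    i = np.arange(len(ranges))                 # 0 <= i < Max_i
    angles = base_angle + i * skip_angle
    return np.column_stack((ranges * np.cos(angles),
                            ranges * np.sin(angles)))
```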
S22: judging, according to the sequence numbers of the conversion points of that frame, whether the point cloud in each sequence number range contains the base station point cloud, and discarding the point cloud of a sequence number range in the first point cloud data if it does not contain the base station point cloud.
Optionally, in this embodiment, a bisection method is used to intercept the sequence number range of the conversion points of the frame of first point cloud data. Because the point cloud scanned by the laser radar contains the point cloud of the base station among many other points, for convenience the sequence number range of the scanned points is halved repeatedly; each time a sequence number range is obtained it is visualised to check whether the base station point cloud is still inside it, so that the base station point cloud can eventually be cut out on its own.
Specifically, after each sequence number range is obtained, the point cloud points in that range are enlarged and compared against the base station point cloud image; if none of the points in the range belong to the base station point cloud, the point cloud data of that range is discarded.
S23: obtaining the corresponding base station point cloud model from the processed multiple frames of first point cloud data.
Preferably, each frame of first point cloud data yields a corresponding base station point cloud, and the point clouds cut out of the different frames are screened down and merged to 2-3 point clouds. Because the point cloud scanned by the laser radar can contain errors, a model cut from a single scan may be inaccurate; more scans are therefore taken and only 2-3 normal point clouds are finally kept, which improves the accuracy of the base station point cloud model.
S3: determining the corresponding model initial entry point from the base station point cloud model.
Specifically, as shown in fig. 4, S3 specifically includes:
S31: selecting two groups of points with an axisymmetric relation in the base station point cloud model to form two groups of symmetric points.
S32: fitting a straight line Y to the two groups of symmetric points, which specifically includes: calculating the mean point of each of the two groups of symmetric points to obtain point E and point F.
The straight line Y is then obtained from point E and point F.
Further, as shown in fig. 5, point C in the figure is the position of the laser radar when the base station point cloud was scanned, and the points to the right of C are the point cloud of the base station. The points of region A and the points of region B, which have an axisymmetric relation, are selected; the mean point of each part is computed to give points E and F; the straight line Y is fitted through E and F; and the straight line X is obtained as the perpendicular to Y through the midpoint of EF.
S33: calculating the corresponding model initial entry point from the two groups of symmetric points and the straight line Y.
Optionally, a point D about 50-60 cm in front of the base station is calculated from the straight line X and the two groups of symmetric points. The distance from this point to the outside of the robot is greater than the sum of the base station slope length and the robot radius, so that at this point the robot is not yet so close to the base station that it would collide with it.
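A minimal sketch of S31-S33 under these assumptions (the two symmetric groups are taken as given, and the 0.55 m standoff stands in for the 50-60 cm figure above):

```python
import numpy as np

def model_entry_point(group_a, group_b, standoff=0.55):
    """Compute the model initial entry point D from two axisymmetric
    point groups of the base station point cloud model (2D sketch).

    group_a, group_b -- (N, 2) arrays of symmetric points (regions A, B)
    standoff         -- assumed distance of D in front of the station (m)
    """
    e = group_a.mean(axis=0)                    # mean point E
    f = group_b.mean(axis=0)                    # mean point F
    y_dir = (f - e) / np.linalg.norm(f - e)     # direction of line Y
    mid = (e + f) / 2.0                         # midpoint of EF
    x_dir = np.array([-y_dir[1], y_dir[0]])     # line X: perpendicular to Y
    if np.dot(x_dir, -mid) < 0:                 # face the lidar origin C
        x_dir = -x_dir
    return mid + standoff * x_dir               # entry point D
```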
S4: determining the initial position for the robot to travel to according to the pre-stored starting point data.
S5: scanning the surrounding second point cloud data at the initial position to obtain the scene point cloud.
S6: obtaining the initial entry point for the robot to continue travelling from the scene point cloud, the base station point cloud models and the corresponding model initial entry points.
Specifically, as shown in fig. 6, S6 specifically includes:
S61: calculating the normal of each point in the scene point cloud and in each base station point cloud model. Specifically, when calculating the normal of a point, its k nearest neighbours (k is set to about 10) are treated as a local surface, and the normal of that surface is taken as the normal of the point.
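A sketch of this normal estimation (PCA over the k nearest neighbours; the direction of the smallest covariance eigenvalue is taken as the normal of the local surface):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Estimate a normal per point from its k nearest neighbours (k ~ 10,
    as in the description); each neighbourhood is treated as a local
    surface whose normal is assigned to the point."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points, dtype=float)
    for n, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)           # covariance of the patch
        eigval, eigvec = np.linalg.eigh(cov)    # ascending eigenvalues
        normals[n] = eigvec[:, 0]               # smallest-eigenvalue axis
    return normals
```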
S62: downsampling the scene point cloud and each base station point cloud model respectively to obtain the scene point cloud key points and base station point cloud model key points. In this embodiment, the points within a radius of about 10 to 40 cm around a given point in the scene point cloud and in each base station point cloud model are replaced by their centroid. This reduces the number of points that must be processed and thus the complexity.
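A sketch of this centroid downsampling (a greedy variant; the 0.2 m radius is one value inside the 10-40 cm interval above):

```python
import numpy as np
from scipy.spatial import cKDTree

def downsample_keypoints(points, radius=0.2):
    """Replace all not-yet-consumed points within `radius` of a seed point
    by their centroid; the centroids become the key points."""
    tree = cKDTree(points)
    unused = np.ones(len(points), dtype=bool)
    keypoints = []
    for n in range(len(points)):
        if not unused[n]:
            continue
        neigh = [j for j in tree.query_ball_point(points[n], r=radius)
                 if unused[j]]
        unused[neigh] = False
        keypoints.append(points[neigh].mean(axis=0))
    return np.array(keypoints)
```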
S63: calculating the descriptors of the scene point cloud from the scene point cloud key points and the normals of the points in the scene point cloud.
S64: for each base station point cloud model, calculating the descriptors of that model from its key points and the normals of the points in the model.
Specifically, a descriptor is a feature describing a point. For example, the normals calculated in S61 are features: taking the normal of a point as reference, the cosines of the angles between it and the normals of the neighbouring points within a certain radius are computed, and statistics of this information give a descriptor with enough features to characterise the point. The larger the description radius, the more points participate in the calculation and the greater the complexity of computing the descriptor of a point; the smaller the radius, the less distinctive the description. The description radius is therefore controlled, usually at about 10 to 20 cm, and its value needs to be tuned by testing different base stations with different radars.
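The patent does not name a specific descriptor, so the following sketch only illustrates the idea with a simple histogram of normal-angle cosines inside the description radius:

```python
import numpy as np
from scipy.spatial import cKDTree

def compute_descriptors(keypoints, points, normals, radius=0.15, bins=8):
    """For each key point, histogram the cosines of the angles between its
    reference normal and the normals of all cloud points within `radius`
    (0.15 m sits inside the 10-20 cm interval above)."""
    tree = cKDTree(points)
    nearest = tree.query(keypoints, k=1)[1]     # closest cloud point index
    descriptors = np.zeros((len(keypoints), bins))
    for n in range(len(keypoints)):
        ref = normals[nearest[n]]               # reference normal
        neigh = tree.query_ball_point(keypoints[n], r=radius)
        cos = normals[neigh] @ ref              # cos of each normal angle
        hist, _ = np.histogram(np.clip(cos, -1.0, 1.0),
                               bins=bins, range=(-1.0, 1.0))
        descriptors[n] = hist / max(hist.sum(), 1)   # normalised histogram
    return descriptors
```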
S65: matching the descriptors of the scene point cloud with the descriptors of each base station point cloud model to obtain the matched scene point cloud.
Specifically, after the descriptors of the scene point cloud and of each base station point cloud model are calculated, the descriptors of the two point clouds need to be matched, which specifically includes: generating a kd-tree (a searchable tree structure) of the descriptors of each base station point cloud model; querying the kd-tree with the descriptor of each point in the scene point cloud and calculating its distance to the descriptors of the base station point cloud model. If the distance is smaller than a distance threshold (usually 0.25), the scene descriptor and the model descriptor match, i.e. the two points are similar enough: they form a successfully matched descriptor pair and represent points of the same object at approximately the same position.
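A sketch of the matching step (scipy's cKDTree over the model descriptors; 0.25 is the threshold quoted above):

```python
from scipy.spatial import cKDTree

def match_descriptors(scene_desc, model_desc, threshold=0.25):
    """Return (scene index, model index) pairs whose descriptor distance
    is below the threshold, i.e. successfully matched descriptor pairs."""
    tree = cKDTree(model_desc)                  # kd-tree of model descriptors
    dist, idx = tree.query(scene_desc, k=1)     # nearest model descriptor
    return [(s, int(m)) for s, (m, d) in enumerate(zip(idx, dist))
            if d < threshold]
```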
S66: obtaining the initial entry point in the scene point cloud from the matched scene point cloud and the model initial entry point corresponding to the base station point cloud model.
Further, as shown in fig. 7, S66 specifically includes:
S661: clustering the points in the matched scene point cloud to obtain the base station point cloud in the matched scene point cloud and a rotation-translation matrix.
Specifically, the point set corresponding to the successfully matched descriptors is taken as the shape of the base station point cloud; in other words, the base station point cloud in the scene point cloud has been found, and as a whole it is similar to the base station model point cloud. Comparing the base station positions in the base station point cloud model and in the scene point cloud yields the rotation-translation matrix.
S662: matching the base station point cloud in the matched scene point cloud to the base station point cloud model according to the rotation-translation matrix.
S663: obtaining the initial entry point in the scene point cloud from the matched scene point cloud and the corresponding model initial entry point.
Specifically, the rotation-translation matrix rotates and translates the base station model point cloud onto the position of the base station in the scene point cloud; likewise, the initial entry point calculated from the base station model point cloud is carried by the rotation-translation matrix to its position in the scene.
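A sketch of this last step (R and t are assumed to come out of the clustering; the same transform that carries the model cloud onto the scene carries the model entry point D):

```python
import numpy as np

def project_entry_point(model_entry_d, rotation, translation):
    """Map the model initial entry point D into the scene point cloud
    using the rotation-translation matrix from S661."""
    return rotation @ model_entry_d + translation
```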
In practice, the second point cloud data collected by the robot through the laser radar contains the point cloud of the base station together with the point clouds of other objects. By matching the scene point cloud against the base station point cloud models, the base station point cloud in the actual scene is found and the actual initial entry point position is calculated.
S7: scanning the third point cloud data of the base station at the initial entry point to obtain the intensity point cloud.
S8: obtaining the final entry point from the intensity point cloud. The intensity point cloud comprises the distance information, angle information, intensity information and sequence number of each point.
Specifically, as shown in fig. 8, S8 specifically includes:
S81: partitioning the intensity point cloud into a number of point cloud blocks according to the angle and distance information of each point. In this embodiment, the blocking procedure is: if the distance between two points at consecutive angles in the intensity point cloud is smaller than a fixed value (about 5 cm), the two points belong to the same block; otherwise they belong to different blocks. Applying the same operation to all points in angle order screens the point cloud into several blocks, each holding part of the points.
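A sketch of this blocking rule (points assumed sorted by angle; 0.05 m is the fixed value quoted above):

```python
import numpy as np

def split_into_blocks(points, gap=0.05):
    """Group angle-ordered intensity points into blocks: consecutive points
    closer than `gap` share a block, a larger jump opens a new block.
    Returns a list of blocks, each a list of point indices."""
    if len(points) == 0:
        return []
    blocks, current = [], [0]
    for n in range(1, len(points)):
        if np.linalg.norm(points[n] - points[n - 1]) < gap:
            current.append(n)
        else:
            blocks.append(current)
            current = [n]
    blocks.append(current)
    return blocks
```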
S82: for each point cloud block, calculating the intensity value of each point in the block from the corresponding intensity information, and obtaining the mean intensity of the points in the block.
S83: comparing the intensity value of each point in the block with the mean intensity of the block.
S84: screening out the several segments of consecutive point cloud points whose intensity exceeds the mean.
Optionally, after the intensity point cloud is partitioned, the points lying on the intensity material are screened out; their intensity values are higher than those of ordinary points. The mean intensity of the points of each block is calculated, and each block is searched for runs of consecutive points whose intensity exceeds the mean. The number of runs that must be found is determined by the number of strips of intensity material.
As shown in fig. 9, the intensity values of the radar points on the three strips of intensity material are greater than the mean intensity of their block, while between the three strips lie two stretches of non-intensity material whose points have intensity values below the block mean. This yields a feature of the form "10101", where "1" stands for points on intensity material and "0" for points on non-intensity material. Testing shows that the number of radar points on each "1" or "0" stretch is stable within a certain interval; for one example laser radar, each "1" covers about 3-10 points and each "0" about 4-9 points. The specific coordinates and location of the "10101" points can therefore be found from the approximate extent of the "10101" feature and the point counts.
That is, the several segments of consecutive point cloud points are determined from the acquired "10101" consecutive point cloud.
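A sketch of locating the "10101" signature in one block by run-length encoding the above-mean intensity flags (the run-length intervals are the example figures quoted above):

```python
import numpy as np

def find_10101(intensities, hi_run=(3, 10), lo_run=(4, 9)):
    """Return the five runs of the '10101' feature, or None.
    '1' marks points above the block's mean intensity; run lengths must
    fall in hi_run for '1' runs and lo_run for '0' runs."""
    flags = (intensities > intensities.mean()).astype(int)
    runs, start = [], 0                          # (value, start, length)
    for n in range(1, len(flags) + 1):
        if n == len(flags) or flags[n] != flags[start]:
            runs.append((flags[start], start, n - start))
            start = n
    fits = lambda run, lim: lim[0] <= run[2] <= lim[1]
    for r in range(len(runs) - 4):
        w = runs[r:r + 5]
        if ([v for v, _, _ in w] == [1, 0, 1, 0, 1]
                and all(fits(x, hi_run) for x in w[::2])
                and all(fits(x, lo_run) for x in w[1::2])):
            return w
    return None
```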
S85: calculating the mean points of the segments of consecutive point cloud points and fitting a straight line L. Preferably, ten frames of third point cloud data are collected consecutively to obtain the segments of consecutive point cloud for each frame; the per-segment mean points are computed for every frame, and the straight line L is fitted. This improves the accuracy of the straight line L.
S86: calculating the final entry point from the segments of consecutive point cloud points and the straight line L.
Optionally, after the intensity point cloud is obtained, the position of each point in the Cartesian coordinate system is determined from its distance information, angle information and sequence number. A full circle of laser radar points carries different intensity values for the different objects scanned, and the points corresponding to the base station must be screened out.
Preferably, S8: obtaining a final entry point from the intensity point cloud further includes:
performing impurity point elimination on the segments of consecutive point cloud points: points scanned by the laser radar with large errors, far from the majority of points, are deleted, and the majority of points with small distance errors are retained, so as to reduce the calculation error.
In addition, there are many ways to attach the intensity material to the base station, such as placing the three strips off the centre of the base station, or dividing the material into more parts; in every case the final entry point can still be calculated from the intensity. As shown in fig. 9, after the laser radar scans the intensity material on the base station and the three segments of consecutive point cloud are obtained, the point clouds between the segments can also be determined. The point cloud between the L1 and L3 regions and the point cloud between the L2 and L4 regions are selected, and the mean centre points I and G of the corresponding regions are obtained. The slope K of the straight line L is fitted through I and G; the slope is computed about 10 to 20 times and averaged for accuracy, giving the straight line L. From the straight line L, the slope K1 of the perpendicular straight line M is calculated. The centre point O is computed from the point cloud between L2 and L3, again averaged over about 10 to 20 measurements. Finally, the position of the final entry point H is obtained from the slope K1 of the straight line M and the point O.
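A sketch of this construction (segment names, and the standoff of H in front of O, are illustrative; the 10-20-fold averaging over repeated scans is omitted):

```python
import numpy as np

def final_entry_point(seg_left, seg_mid, seg_right, standoff=0.3):
    """Compute the final entry point H from three intensity segments.
    I and G are the centres of the outer segments, the line L through
    them gives the base station face with slope K, and H lies on the
    perpendicular M (slope K1) through the centre point O."""
    i_pt = seg_left.mean(axis=0)                 # centre point I
    g_pt = seg_right.mean(axis=0)                # centre point G
    o_pt = seg_mid.mean(axis=0)                  # centre point O
    l_dir = (g_pt - i_pt) / np.linalg.norm(g_pt - i_pt)  # direction of L
    m_dir = np.array([-l_dir[1], l_dir[0]])      # perpendicular direction M
    if np.dot(m_dir, -o_pt) < 0:                 # point toward the lidar
        m_dir = -m_dir
    return o_pt + standoff * m_dir               # final entry point H
```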
S9: determining the navigation route by which the robot returns to the base station from the initial position, the initial entry point and the final entry point.
The robot determines the route back to the base station from the initial position, the initial entry point and the final entry point. It then calculates the angle from its forward direction to the midpoint of the middle strip of consecutive intensity points on the base station, and rotates its forward direction towards the centre of the base station, so that it is accurately aligned with the central axis, faces the interior of the base station along that axis, and slowly returns to the base station along the straight line at a certain speed.
Preferably, the robot first travels back to the initial position near the base station according to the pre-stored starting point data. If the robot cannot find the base station after returning to the initial position, i.e. if in S65 the descriptors of the scene point cloud fail to match the descriptors of every base station point cloud model, the base station position may have changed and it is no longer at the initial position; the robot then starts to walk along the wall while repeatedly scanning the surrounding second point cloud data and performing recognition until the base station is found.
In addition, the invention also provides a laser radar-based navigation system for automatically returning the robot to the base station, which can improve the accuracy with which the robot identifies and aligns with the base station.
As shown in fig. 10, the laser radar-based navigation system for automatically returning a robot to a base station of the present invention comprises: a model point cloud acquisition unit 1, a model building unit 2, a model initial entry point calculation unit 3, a scene point cloud acquisition unit 4, an initial entry point determination unit 5, an intensity point cloud acquisition unit 6, a final entry point determination unit 7 and a navigation route determination unit 8.
Specifically, the model point cloud acquisition unit 1 is used to collect multiple frames of first point cloud data for each orientation on the central axis of the base station.
The model building unit 2 is connected to the model point cloud acquisition unit 1 and is used to establish, for each orientation, a corresponding base station point cloud model from the multiple frames of first point cloud data of that orientation.
The model initial entry point calculation unit 3 is connected to the model building unit 2 and is used to determine a corresponding model initial entry point from the base station point cloud model.
The scene point cloud acquisition unit 4 is used to determine the initial position for the robot to travel to according to the pre-stored starting point data, and to scan the surrounding second point cloud data at the initial position to obtain the scene point cloud.
The initial entry point determination unit 5 is connected to the model building unit 2, the model initial entry point calculation unit 3 and the scene point cloud acquisition unit 4 respectively, and is used to obtain the initial entry point for the robot to continue travelling from the scene point cloud, each base station point cloud model and the corresponding model initial entry points.
The intensity point cloud acquisition unit 6 is connected to the initial entry point determination unit 5 and is used to scan the third point cloud data of the base station at the initial entry point to obtain the intensity point cloud.
The final entry point determination unit 7 is connected to the intensity point cloud acquisition unit 6 and is used to obtain the final entry point from the intensity point cloud.
The navigation route determination unit 8 is connected to the scene point cloud acquisition unit 4, the initial entry point determination unit 5 and the final entry point determination unit 7 respectively, and is used to determine the navigation route by which the robot returns to the base station from the initial position, the initial entry point and the final entry point.
Further, the first point cloud data includes distance information, angle information and the sequence number of each point.
The model building unit 2 includes: a conversion module 21, an extraction module 22 and a base station point cloud model building module 23.
The conversion module 21 is connected to the model point cloud acquisition unit 1 and is used to determine, for each frame of first point cloud data, the position information of the corresponding conversion points in a Cartesian coordinate system from the distance information, angle information and sequence number of each point in that frame.
The extraction module 22 is connected to the conversion module 21 and is used to judge, according to the sequence numbers of the conversion points of that frame, whether the point cloud in each sequence number range contains the base station point cloud, and to discard the point cloud of a sequence number range in the first point cloud data if it does not contain the base station point cloud.
The base station point cloud model building module 23 is connected to the extraction module 22 and is used to obtain the corresponding base station point cloud model from the processed multiple frames of first point cloud data.
Compared with the prior art, the laser radar-based navigation system for automatically returning the robot to the base station has the same beneficial effects as the laser radar-based navigation method described above, which are not repeated here.
Each embodiment in this specification is described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments can be referred to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
Specific examples are used herein to explain the principles and implementations of the present invention; the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may make modifications to the specific implementations and application scope in light of the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (7)

1. A laser radar-based navigation method for automatically returning a robot to a base station, characterized by comprising the following steps:
collecting multiple frames of first point cloud data for each orientation of the base station, on the central axis of the base station;
for each orientation, establishing a corresponding base station point cloud model from the multiple frames of first point cloud data of that orientation;
determining a corresponding model initial entry point from the base station point cloud model, which specifically includes: selecting two groups of points with an axisymmetric relation in the base station point cloud model to form two groups of symmetric points; fitting a straight line Y to the two groups of symmetric points; and calculating the corresponding model initial entry point from the two groups of symmetric points and the straight line Y;
determining an initial position for the robot to travel to according to pre-stored starting point data;
scanning surrounding second point cloud data at the initial position to obtain a scene point cloud;
obtaining an initial entry point for the robot to continue travelling from the scene point cloud, each base station point cloud model and the corresponding model initial entry points, which specifically includes: calculating the normal of each point in the scene point cloud and in each base station point cloud model; downsampling the scene point cloud and each base station point cloud model respectively to obtain scene point cloud key points and base station point cloud model key points; calculating the descriptors of the scene point cloud from the scene point cloud key points and the normals of the points in the scene point cloud; for each base station point cloud model, calculating the descriptors of that model from its key points and the normals of the points in the model; matching the descriptors of the scene point cloud with the descriptors of each base station point cloud model to obtain a matched scene point cloud; and obtaining the initial entry point in the scene point cloud from the matched scene point cloud and the model initial entry point corresponding to the base station point cloud model;
scanning third point cloud data of the base station at the initial entry point to obtain an intensity point cloud, the intensity point cloud comprising distance information, angle information and intensity information;
obtaining a final entry point from the intensity point cloud, which specifically includes: partitioning the intensity point cloud into a number of point cloud blocks according to the angle and distance information of each point; for each point cloud block, calculating the intensity value of each point in the block from the corresponding intensity information, and obtaining the mean intensity of the points in the block; comparing the intensity value of each point in the block with the mean intensity of the block; screening out several segments of consecutive point cloud points whose intensity exceeds the mean; calculating the mean points of the segments of consecutive point cloud points and fitting a straight line L; and calculating the final entry point from the segments of consecutive point cloud points and the straight line L;
and determining a navigation route by which the robot returns to the base station from the initial position, the initial entry point and the final entry point.
2. The laser radar-based navigation method for automatically returning a robot to a base station according to claim 1, wherein the first point cloud data includes distance information, angle information and the sequence number of each point;
for each orientation, establishing a corresponding base station point cloud model from the multiple frames of first point cloud data of that orientation specifically includes:
for each frame of first point cloud data, determining the position information of the corresponding conversion points in a Cartesian coordinate system from the distance information, angle information and sequence number of each point;
judging, according to the sequence numbers of the conversion points of that frame, whether the point cloud in each sequence number range contains the base station point cloud, and discarding the point cloud of a sequence number range in the first point cloud data if it does not contain the base station point cloud;
and obtaining the corresponding base station point cloud model from the processed multiple frames of first point cloud data.
3. The laser radar-based navigation method for automatically returning a robot to a base station according to claim 1, wherein fitting a straight line Y to the two groups of symmetric points specifically includes:
calculating the mean point of each of the two groups of symmetric points to obtain point E and point F;
and obtaining the straight line Y from point E and point F.
4. The navigation method for a robot automatically returning to a base station based on a laser radar according to claim 1, wherein the obtaining of the initial entry point in the scene point cloud according to the matched scene point cloud and the model initial entry point corresponding to the base station point cloud model specifically comprises (see the sketch after this claim):
clustering the points in the matched scene point cloud to obtain the base station point cloud in the matched scene point cloud and a rotation-translation matrix;
matching, according to the rotation-translation matrix, the base station point cloud in the matched scene point cloud to the base station point cloud model;
and obtaining the initial entry point in the scene point cloud according to the matched scene point cloud and the model initial entry point corresponding to the base station point cloud model.
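An illustrative sketch of claim 4 under assumptions: DBSCAN stands in for the unnamed clustering step, and the 4x4 rotation-translation matrix T is taken to come from the descriptor-matching stage (for example a RANSAC or correspondence-grouping estimate); the claim fixes neither choice.

import numpy as np
from sklearn.cluster import DBSCAN

def station_cluster(matched_scene_pts, eps=0.05, min_samples=10):
    # Largest DBSCAN cluster taken as the base station point cloud in the scene.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(matched_scene_pts)
    valid = labels[labels >= 0]                    # -1 marks noise points
    if valid.size == 0:
        return None
    best = np.bincount(valid).argmax()
    return matched_scene_pts[labels == best]

def entry_point_in_scene(model_entry_xyz, T):
    # Map the model initial entry point into the scene with the 4x4 transform T.
    p = np.append(model_entry_xyz, 1.0)            # homogeneous coordinates
    return (T @ p)[:3]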
5. The navigation method for a robot automatically returning to a base station based on a laser radar according to claim 1, wherein the obtaining of the final entry point according to the intensity point cloud further comprises:
performing outlier removal on the multiple segments of continuous point cloud points (included in the sketch following this claim).
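The intensity processing of claims 1 and 5 could be sketched as follows; partitioning by angle only, a fixed block count, a minimum run length, and median-distance outlier rejection are all assumptions of this sketch rather than details of the disclosure.

import numpy as np

def bright_segments(angles, dists, intens, n_blocks=36, min_len=3):
    # Sort by angle, split into blocks, and keep continuous runs of points
    # whose intensity exceeds the mean intensity of their block.
    order = np.argsort(angles)
    angles, dists, intens = angles[order], dists[order], intens[order]
    segments = []
    for blk in np.array_split(np.arange(angles.size), n_blocks):
        if blk.size == 0:
            continue
        above = intens[blk] > intens[blk].mean()   # compare to the block mean
        run = []
        for i, ok in zip(blk, above):
            if ok:
                run.append(i)
            else:
                if len(run) >= min_len:            # close a continuous run
                    segments.append(np.array(run))
                run = []
        if len(run) >= min_len:
            segments.append(np.array(run))
    return segments, angles, dists

def fit_line_L(segments, angles, dists):
    # Trim outliers within each bright segment (claim 5), average each
    # segment, and least-squares fit line L through the segment means.
    means = []
    for seg in segments:
        x = dists[seg] * np.cos(angles[seg])
        y = dists[seg] * np.sin(angles[seg])
        r = np.hypot(x - np.median(x), y - np.median(y))
        keep = r < 3.0 * (np.median(r) + 1e-9)     # crude median-based rejection
        means.append([x[keep].mean(), y[keep].mean()])
    means = np.asarray(means)
    if len(means) < 2:
        raise ValueError("need at least two segment means to fit line L")
    slope, intercept = np.polyfit(means[:, 0], means[:, 1], 1)  # L: y = kx + b
    return means, slope, intercept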
6. A navigation system for a robot automatically returning to a base station based on a laser radar, characterized in that the navigation system comprises:
a model point cloud acquisition unit, configured to acquire multiple frames of first point cloud data in each direction on the central axis of the base station;
a model building unit, connected with the model point cloud acquisition unit and configured to build, for a same direction, a corresponding base station point cloud model according to the multiple frames of first point cloud data of that direction;
a model initial entry point calculation unit, connected with the model building unit and configured to determine a corresponding model initial entry point according to the base station point cloud model, specifically by: selecting, from the base station point cloud model, two groups of points in an axisymmetric relation to form two groups of symmetric points; fitting a straight line Y according to the two groups of symmetric points; and calculating the corresponding model initial entry point according to the two groups of symmetric points and the straight line Y;
a scene point cloud acquisition unit, configured to determine an initial position for the robot to travel to according to pre-stored starting point data, and to scan surrounding second point cloud data at the initial position to obtain a scene point cloud;
an initial entry point determining unit, connected respectively with the model building unit, the model initial entry point calculation unit and the scene point cloud acquisition unit, and configured to obtain an initial entry point for the robot to continue traveling according to the scene point cloud, each base station point cloud model and the corresponding model initial entry points, specifically by: calculating normals of each point in the scene point cloud and in each base station point cloud model; downsampling the scene point cloud and each base station point cloud model respectively to obtain scene point cloud key points and base station point cloud model key points; calculating a descriptor of the scene point cloud according to the scene point cloud key points and the normals of the points in the scene point cloud; for each base station point cloud model, calculating a descriptor of that model according to its key points and the normals of its points; matching the descriptor of the scene point cloud against the descriptor of each base station point cloud model to obtain a matched scene point cloud; and obtaining the initial entry point in the scene point cloud according to the matched scene point cloud and the model initial entry point corresponding to the matched base station point cloud model;
an intensity point cloud acquisition unit, connected with the initial entry point determining unit and configured to scan third point cloud data of the base station at the initial entry point to obtain an intensity point cloud, wherein the intensity point cloud comprises distance information, angle information and intensity information;
a final entry point determining unit, connected with the intensity point cloud acquisition unit and configured to obtain a final entry point according to the intensity point cloud, specifically by: partitioning the intensity point cloud into a plurality of point cloud blocks according to the angle information and distance information of each point; for each point cloud block, calculating the intensity value of each point in the block according to the corresponding intensity information and obtaining the mean intensity of the points in the block; comparing the intensity value of each point in the block with the mean intensity of the block; screening out multiple segments of continuous point cloud points whose intensity values exceed the mean intensity; calculating the mean points of the multiple segments of continuous point cloud points and fitting a straight line L; and calculating the final entry point according to the multiple segments of continuous point cloud points and the straight line L;
and a navigation route determining unit, connected respectively with the scene point cloud acquisition unit, the initial entry point determining unit and the final entry point determining unit, and configured to determine a navigation route for the robot to return to the base station according to the initial position, the initial entry point and the final entry point (a structural sketch of this system follows this claim).
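Purely as a structural illustration of how the units of claim 6 connect (every class, method and attribute name here is invented for the sketch; the claim defines units and their connections, not an API):

class ReturnToBaseNavigator:
    def __init__(self, lidar, models, model_entry_points, start_pose):
        self.lidar = lidar                          # driver exposing scan() -> point cloud
        self.models = models                        # base station point cloud models
        self.model_entry_points = model_entry_points
        self.start_pose = start_pose                # pre-stored starting point data

    def plan_return(self):
        scene = self.lidar.scan()                   # scene point cloud acquisition unit
        initial = self.initial_entry_point(scene)   # initial entry point determining unit
        intensity = self.lidar.scan()               # intensity point cloud acquisition unit
        final = self.final_entry_point(intensity)   # final entry point determining unit
        return [self.start_pose, initial, final]    # navigation route determining unit

    def initial_entry_point(self, scene):
        raise NotImplementedError                   # descriptor matching; see sketch after claim 1

    def final_entry_point(self, intensity):
        raise NotImplementedError                   # intensity processing; see sketch after claim 5

The "connections" recited in the claim correspond to the plain data flow between the method calls in plan_return.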
7. The navigation system for a robot automatically returning to a base station based on a laser radar according to claim 6, wherein the first point cloud data comprises distance information, angle information and a sequence number of each point;
and the model building unit comprises:
a conversion module, connected with the model point cloud acquisition unit and configured to determine, for each frame of first point cloud data, position information of the corresponding converted points in a Cartesian coordinate system according to the distance information, angle information and sequence number of each point in that frame;
an extraction module, connected with the conversion module and configured to judge, according to the sequence numbers of the converted points of the frame, whether the point cloud within each sequence number range contains the base station point cloud, and to discard the point cloud of the corresponding sequence number range in the first point cloud data if it does not;
and a base station point cloud model construction module, connected with the extraction module and configured to obtain the corresponding base station point cloud model according to the processed multiple frames of first point cloud data.
CN202110598548.0A 2021-05-31 2021-05-31 Laser radar-based navigation method and system for automatically returning robot to base station Active CN113341963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110598548.0A CN113341963B (en) 2021-05-31 2021-05-31 Laser radar-based navigation method and system for automatically returning robot to base station

Publications (2)

Publication Number Publication Date
CN113341963A CN113341963A (en) 2021-09-03
CN113341963B (en) 2023-08-22

Family

ID=77472309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110598548.0A Active CN113341963B (en) 2021-05-31 2021-05-31 Laser radar-based navigation method and system for automatically returning robot to base station

Country Status (1)

Country Link
CN (1) CN113341963B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116407030A (en) * 2021-12-30 2023-07-11 追觅创新科技(苏州)有限公司 Method and system for returning self-mobile robot to base station and self-mobile robot

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105674991A (en) * 2016-03-29 2016-06-15 深圳市华讯方舟科技有限公司 Robot positioning method and device
AU2017421870A1 (en) * 2017-12-11 2019-06-27 Beijing Voyager Technology Co., Ltd. Systems and methods for identifying and positioning objects around a vehicle
CN110842940A (en) * 2019-11-19 2020-02-28 广东博智林机器人有限公司 Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN111324121A (en) * 2020-02-27 2020-06-23 四川阿泰因机器人智能装备有限公司 Mobile robot automatic charging method based on laser radar
CN111427351A (en) * 2020-03-31 2020-07-17 深圳乐动机器人有限公司 Robot recharging method and robot
CN112014857A (en) * 2020-08-31 2020-12-01 Shanghai Institute of Aerospace System Engineering Three-dimensional laser radar positioning and navigation method for an intelligent inspection robot
CN112198871A (en) * 2020-09-02 2021-01-08 创新工场(北京)企业管理股份有限公司 Method and apparatus for autonomous charging of mobile robot
CN114527755A (en) * 2022-02-21 2022-05-24 山东新一代信息产业技术研究院有限公司 Method, equipment and storage medium for automatic pile returning and charging of robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A laser point cloud clustering algorithm applied to robot navigation; Yuan Xia; Robot; Vol. 33, No. 1; full text *

Similar Documents

Publication Publication Date Title
CN106204705B (en) A kind of 3D point cloud dividing method based on multi-line laser radar
US20190079193A1 (en) Multiple Resolution, Simultaneous Localization and Mapping Based On 3-D LIDAR Measurements
US6728608B2 (en) System and method for the creation of a terrain density model
CN107632308B (en) Method for detecting contour of obstacle in front of vehicle based on recursive superposition algorithm
CN102508246B (en) Method for detecting and tracking obstacles in front of vehicle
CN109085838A (en) A kind of dynamic barrier rejecting algorithm based on laser positioning
CN111781608B (en) Moving target detection method and system based on FMCW laser radar
CN113792699B (en) Object-level rapid scene recognition method based on semantic point cloud
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN112464812B (en) Vehicle-based concave obstacle detection method
CN109598765A (en) Join combined calibrating method outside monocular camera and millimetre-wave radar based on spherical calibration object
Park et al. Radar localization and mapping for indoor disaster environments via multi-modal registration to prior LiDAR map
CN112051568B (en) Pitching angle measurement method of two-coordinate radar
CN104880160B (en) Two-dimensional-laser real-time detection method of workpiece surface profile
CN113341963B (en) Laser radar-based navigation method and system for automatically returning robot to base station
CN112505671B (en) Millimeter wave radar target positioning method and device under GNSS signal missing environment
CN111999744A (en) Unmanned aerial vehicle multi-azimuth detection and multi-angle intelligent obstacle avoidance method
CN108876862A (en) A kind of noncooperative target point cloud position and attitude calculation method
CN110907903B (en) Self-adaptive tracking processing method based on multiple sensors
CN115201849A (en) Indoor map building method based on vector map
CN113030997B (en) Method for detecting travelable area of open-pit mine area based on laser radar
CN115167435A (en) Forward-looking line laser obstacle identification method and device for mobile robot
CN114217641A (en) Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment
CN114265083A (en) Robot position identification method and device by using laser radar
CN111723797B (en) Method and system for determining bounding box of three-dimensional target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant