CN113376638A - Unmanned logistics trolley environment sensing method and system - Google Patents

Unmanned logistics trolley environment sensing method and system

Info

Publication number
CN113376638A
Authority
CN
China
Prior art keywords
point cloud
logistics trolley
environment
unmanned logistics
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110639869.0A
Other languages
Chinese (zh)
Inventor
杨波
杨涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202110639869.0A priority Critical patent/CN113376638A/en
Publication of CN113376638A publication Critical patent/CN113376638A/en
Pending legal-status Critical Current

Classifications

    • G01S 15/86: Combinations of sonar systems with lidar systems; combinations of sonar systems with systems not using wave reflection
    • G01S 15/02, 15/06: Sonar systems using reflection of acoustic waves; systems determining the position data of a target
    • G01S 15/88, 15/89: Sonar systems specially adapted for specific applications, for mapping or imaging
    • G01S 17/02, 17/06: Lidar systems using reflection of electromagnetic waves other than radio waves; systems determining position data of a target
    • G01S 17/87: Combinations of systems using electromagnetic waves other than radio waves
    • G01S 17/88, 17/89: Lidar systems specially adapted for specific applications, for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an unmanned logistics trolley environment perception method and system, wherein the method comprises the following steps: scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different; determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data; acquiring first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image; respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to at least two laser radars; and fusing at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as a coordinate value of the obstacle information. The method improves the robustness and the precision of the environment perception of the unmanned logistics trolley.

Description

Unmanned logistics trolley environment sensing method and system
Technical Field
The invention relates to the technical field of unmanned driving, in particular to an unmanned logistics trolley environment sensing method and system.
Background
In recent years, with the development of e-commerce and online shopping, more and more people choose to shop online. At the same time, as information exchange and the circulation of goods accelerate, document delivery and goods distribution increasingly depend on logistics transportation, and the workload of front-line logistics personnel keeps growing. Incidents of rough handling of parcels occur from time to time, causing public dissatisfaction with couriers and with the logistics industry as a whole. Against this background, the unmanned logistics trolley came into being.
However, unmanned logistics trolleys in the prior art suffer from a small sensing detection range and large blind areas around the trolley body, so their environment sensing is poor in robustness and low in detection precision.
Disclosure of Invention
The invention provides an unmanned logistics trolley environment sensing method and system, and aims to solve the technical problems that an unmanned logistics trolley environment sensing technology in the prior art is poor in robustness, low in detection precision and the like.
In one aspect, the invention provides an environment sensing method for an unmanned logistics trolley, which comprises the following steps:
scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
acquiring first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as the coordinate value of the obstacle information.
In a possible implementation manner of the present invention, the generating of the unmanned logistics trolley environment point cloud image by the at least two laser radars includes:
scanning surrounding environment information of the unmanned logistics trolley through a first laser radar to generate an initial point cloud image;
carrying out distortion correction on the initial point cloud image through a GPS/IMU (global positioning system/inertial measurement unit) to generate a corrected point cloud image;
and acquiring second local environment information of the unmanned logistics trolley through a second laser radar, and fusing the second local environment information and the correction point cloud image to generate the environment point cloud image.
In a possible implementation manner of the present invention, the distortion correction of the initial point cloud image by the GPS/IMU includes:
respectively acquiring first position and attitude information of a current frame and second position and attitude information of a previous frame adjacent to the current frame of the unmanned logistics trolley in a scanning period of the first laser radar through a GPS/IMU;
and carrying out distortion correction on the initial point cloud image through preset coordinate conversion, the first position and attitude information, and the second position and attitude information.
In a possible implementation manner of the present invention, the first position and attitude information includes a first yaw angle, a first pitch angle, a first course angle, and a displacement, and the second position and attitude information includes a second yaw angle, a second pitch angle, and a second course angle; the distortion correction of the initial point cloud image through the preset coordinate conversion, the first position and attitude information, and the second position and attitude information specifically comprises:
p' = R_i p_i + T_i

[α_i, β_i, γ_i]^T = [α, β, γ]^T + (t_i / C) · ([φ_x, φ_y, φ_z]^T − [α, β, γ]^T)

R_i = R_z(γ_i) R_y(β_i) R_x(α_i)

T_i = (t_i / C) · [x, y, z]^T

in the formulas, p' is the coordinate value of each point cloud datum in the initial point cloud image after distortion correction; R_i is the total rotation matrix; p_i is the coordinate value of each point cloud datum in the initial point cloud image before distortion correction; T_i is the displacement matrix; [φ_x, φ_y, φ_z]^T is the attitude matrix, in which φ_x is the first yaw angle, φ_y is the first pitch angle, and φ_z is the first course angle; C is the scanning period of the first laser radar; [·]^T denotes the transpose; t_i is the time difference between the current frame and the previous frame adjacent to the current frame; α is the second yaw angle; β is the second pitch angle; γ is the second course angle; x, y, z are the three coordinate components of the displacement; and R_x, R_y, R_z are the rotation matrices about the X-, Y- and Z-axes, respectively.
In one possible implementation manner of the present invention, the determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data includes:
determining an initial plane model;
determining a seed ground point cloud set and a seed non-ground point cloud set;
optimizing the initial plane model;
calculating an orthogonal projection distance between the point cloud data in the seed ground point cloud set and the optimized plane model, wherein if the orthogonal projection distance is smaller than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is larger than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set;
judging whether the number of optimization iterations of the initial plane model is smaller than the threshold number; if so, optimizing the initial plane model again; if not, stopping optimizing the initial plane model, and taking the seed ground point cloud set and the seed non-ground point cloud set as the target ground point cloud set and the target non-ground point cloud set, respectively.
In one possible implementation manner of the present invention, the determining the seed ground point cloud set and the seed non-ground point cloud set includes:
sequencing the plurality of point cloud data according to a preset height sequence;
calculating an average height of the plurality of point cloud data;
traversing the plurality of point cloud data, and taking the point cloud data with the height smaller than the average height as an initial ground point set;
and calculating the orthogonal projection distance between the point cloud data in the initial ground point set and the initial plane model, wherein if the orthogonal projection distance is less than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is greater than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set.
In a possible implementation manner of the present invention, the first laser radar is a sixteen-line mechanical laser radar, and the second laser radar is a four-line solid-state laser radar.
In a possible implementation manner of the present invention, the range of the sixteen-line mechanical laser radar is 20 m, and the range of the four-line solid-state laser radar is 50 m.
On the other hand, the invention also provides an environment sensing system of the unmanned logistics trolley, which comprises the following components:
the system comprises a first perception module, a second perception module and a third perception module, wherein the first perception module is used for scanning surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, the environment point cloud image comprises a plurality of point cloud data, and the distance measurement ranges of the at least two laser radars are different;
a segmentation module for determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
the second sensing module is used for obtaining first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
the target detection module is used for respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and the fusion module is used for fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fusion coordinate value, and the fusion coordinate value is used as the coordinate value of the obstacle information.
The method comprises the steps of scanning surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, obtaining first local environment information of the unmanned logistics trolley through an ultrasonic radar after determining a target ground point cloud set and a target non-ground point cloud set in a plurality of point cloud data, and fusing the first local environment information and the target non-ground point cloud set to generate the target point cloud image. The target point cloud image is generated by at least two laser radars and the ultrasonic radar together, so that the robustness of environment perception of the unmanned logistics trolley can be improved, and the detection precision is improved.
Furthermore, the method fuses the at least two coordinate values according to the optimal distributed estimation fusion algorithm to generate a fused coordinate value, which is used as the coordinate value of the obstacle information, so that the robustness and accuracy of obstacle detection can be further improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic flow chart of an embodiment of an environment sensing method for an unmanned logistics trolley according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an embodiment of S101 according to the present invention;
fig. 3 is a flowchart illustrating an embodiment of S202 according to the present invention;
fig. 4 is a schematic flowchart of an embodiment of S102 according to the present invention;
fig. 5 is a flowchart illustrating an embodiment of S402 according to the present invention;
fig. 6 is a schematic structural diagram of an embodiment of a hardware platform of an unmanned logistics trolley environment sensing method provided by the embodiment of the invention;
fig. 7 is a schematic structural diagram of an embodiment of an unmanned logistics trolley environment perception system provided by the embodiment of the invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The invention provides an unmanned logistics trolley environment sensing method and system, which are respectively explained in detail below.
Fig. 1 is a schematic flow diagram of an embodiment of an environment sensing method for an unmanned logistics trolley according to an embodiment of the present invention, and as shown in fig. 1, the environment sensing method for an unmanned logistics trolley includes:
s101, scanning surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
s102, determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
s103, obtaining first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information and the target non-ground point cloud set to generate a target point cloud image;
s104, respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to at least two laser radars;
specifically, data correlation of at least two lidar is achieved by a Global Nearest Neighbor (GNN) algorithm.
And S105, fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as a coordinate value of the obstacle information.
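As an illustrative sketch only (the patent does not give an implementation), the GNN data association of S104 can be realised with the Hungarian method on a pairwise-distance cost matrix; the function name, gating threshold and centroid inputs below are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(dets_a, dets_b, gate=2.0):
    """Global Nearest Neighbor association of obstacle centroids detected by
    two lidars, both expressed in a common vehicle frame.

    dets_a: (N, 2) array of obstacle centroids from the first lidar.
    dets_b: (M, 2) array of obstacle centroids from the second lidar.
    gate:   maximum distance (m) for two detections to be the same obstacle.
    Returns a list of matched index pairs (i, j).
    """
    # pairwise Euclidean distances form the assignment cost matrix
    cost = np.linalg.norm(dets_a[:, None, :] - dets_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # globally optimal assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]
```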
The unmanned logistics trolley environment sensing method provided by the embodiment of the invention scans the surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, obtains the first local environment information of the unmanned logistics trolley through an ultrasonic radar after determining a target ground point cloud set and a target non-ground point cloud set in a plurality of point cloud data, and fuses the first local environment information and the target non-ground point cloud set to generate the target point cloud image. Namely: the target point cloud image is generated by at least two laser radars and the ultrasonic radar with different ranging ranges, so that the robustness of environment perception of the unmanned logistics trolley can be improved, and the detection precision is improved.
Furthermore, the method fuses the at least two coordinate values according to the optimal distributed estimation fusion algorithm to generate a fused coordinate value, which is used as the coordinate value of the obstacle information, so that multi-target tracking can be realized and the robustness and accuracy of obstacle detection are further improved.
Further, in some embodiments of the present invention, the environment information around the unmanned logistics trolley is scanned by a first laser radar and a second laser radar, where the first laser radar is a sixteen-line mechanical laser radar and the second laser radar is a four-line solid-state laser radar.
Specifically, the sixteen-line mechanical laser radar has a range of 20 m, a vertical detection angle of 30° and a horizontal detection angle of 360°. The four-line solid-state laser radar has a range of 50 m, a vertical detection angle of 3.2° and a horizontal detection angle of 110°. The ultrasonic radar has a range of 5 m and a horizontal detection angle of 60°.
The ultrasonic radar compensates for the driving blind area of the sixteen-line laser radar, while the four-line solid-state laser radar addresses the sixteen-line laser radar's low vertical resolution, its inability to effectively extract road edge information, and the low effective return rate of long-range point clouds.
Specifically, fusing the at least two coordinate values according to the optimal distributed estimation fusion algorithm works as follows: at least two weights, in one-to-one correspondence with the at least two coordinate values, are calculated respectively, and the fused coordinate value is generated from the weights and the corresponding coordinate values.
It should be understood that when the distance between the obstacle and the unmanned logistics trolley is smaller than 20 m, the weight of the sixteen-line mechanical laser radar is larger than that of the four-line solid-state laser radar, and when the distance is larger than 20 m, the weight of the sixteen-line mechanical laser radar is smaller than that of the four-line solid-state laser radar.
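The patent does not spell out how the weights are computed. A minimal sketch consistent with optimal distributed estimation fusion, weighting each lidar's coordinate by its inverse measurement covariance, could look as follows; the covariance inputs are assumed stand-ins for each lidar's distance-dependent accuracy:

```python
import numpy as np

def fuse_coordinates(coords, covariances):
    """Inverse-covariance-weighted fusion of several estimates of the same
    obstacle position. Under the usual independence assumptions this is the
    minimum-variance (optimal) linear combination of the estimates.

    coords:      list of (2,) position estimates, one per lidar.
    covariances: list of (2, 2) measurement covariance matrices.
    """
    infos = [np.linalg.inv(P) for P in covariances]  # information matrices
    P_fused = np.linalg.inv(sum(infos))              # fused covariance
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, coords))
    return x_fused, P_fused
```

Assigning the sixteen-line lidar the smaller covariance within 20 m and the larger one beyond 20 m reproduces the distance-dependent weighting described above.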
Further, in some embodiments of the present invention, as shown in fig. 2, S101 includes:
s201, scanning surrounding environment information of the unmanned logistics trolley through a first laser radar to generate an initial point cloud image;
specifically, the first laser radar scans the surrounding environment information of the unmanned logistics trolley, and generates an initial point cloud image by adopting Euclidean clustering of different clustering radiuses and clustering point transportation according to the distance.
S202, distortion correction is carried out on the initial point cloud image through a GPS/IMU, and a corrected point cloud image is generated;
s203, collecting second local environment information of the unmanned logistics trolley through a second laser radar, and fusing the second local environment information and the correction point cloud image to generate an environment point cloud image.
Through this arrangement, the distortion of the initial point cloud image can be corrected, further improving the environment perception capability of the unmanned logistics trolley.
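A minimal sketch of such distance-adaptive Euclidean clustering, using DBSCAN as a stand-in for Euclidean cluster extraction; the range bands, clustering radii and minimum cluster size are assumed values, not taken from the patent:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_by_range(points):
    """Cluster a point cloud with a clustering radius that grows with range,
    compensating for lidar point clouds thinning out far from the sensor.
    points: (N, 3) array; returns per-point labels, -1 meaning unclustered."""
    rng = np.linalg.norm(points[:, :2], axis=1)
    bands = [(0.0, 10.0, 0.3), (10.0, 20.0, 0.6), (20.0, np.inf, 1.0)]
    labels = np.full(len(points), -1, dtype=int)
    offset = 0
    for near, far, eps in bands:
        mask = (rng >= near) & (rng < far)
        if not mask.any():
            continue
        sub = DBSCAN(eps=eps, min_samples=5).fit_predict(points[mask])
        sub[sub >= 0] += offset          # keep cluster ids unique across bands
        labels[mask] = sub
        offset = max(offset, labels.max() + 1)
    return labels
```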
Further, as shown in fig. 3, S202 includes:
s301, respectively acquiring, through the GPS/IMU, first position and attitude information of the current frame and second position and attitude information of the previous frame adjacent to the current frame of the unmanned logistics trolley within one scanning period of the first laser radar;
s302, carrying out distortion correction on the initial point cloud image through the preset coordinate conversion, the first position and attitude information, and the second position and attitude information.
Specifically, the first position and attitude information includes a first yaw angle, a first pitch angle, a first course angle, and a displacement, and the second position and attitude information includes a second yaw angle, a second pitch angle, and a second course angle; S302 specifically comprises the following steps:
p' = R_i p_i + T_i

[α_i, β_i, γ_i]^T = [α, β, γ]^T + (t_i / C) · ([φ_x, φ_y, φ_z]^T − [α, β, γ]^T)

R_i = R_z(γ_i) R_y(β_i) R_x(α_i)

T_i = (t_i / C) · [x, y, z]^T

in the formulas, p' is the coordinate value of each point cloud datum in the initial point cloud image after distortion correction; R_i is the total rotation matrix; p_i is the coordinate value of each point cloud datum in the initial point cloud image before distortion correction; T_i is the displacement matrix; [φ_x, φ_y, φ_z]^T is the attitude matrix, in which φ_x is the first yaw angle, φ_y is the first pitch angle, and φ_z is the first course angle; C is the scanning period of the first laser radar; [·]^T denotes the transpose; t_i is the time difference between the current frame and the previous frame adjacent to the current frame; α is the second yaw angle; β is the second pitch angle; γ is the second course angle; x, y, z are the three coordinate components of the displacement; and R_x, R_y, R_z are the rotation matrices about the X-, Y- and Z-axes, respectively.
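As an illustrative numpy sketch of this correction, assuming the linear pose interpolation given by the formulas above; the function and argument names are illustrative, not the patent's:

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

def rot_z(g):
    return np.array([[np.cos(g), -np.sin(g), 0],
                     [np.sin(g),  np.cos(g), 0],
                     [0, 0, 1]])

def undistort_point(p_i, t_i, C, pose_prev, pose_curr, xyz):
    """Correct one lidar point for motion distortion: interpolate the
    attitude between the previous frame (alpha, beta, gamma) and the current
    frame (phi_x, phi_y, phi_z) at the point's time offset t_i within the
    scan period C, then apply p' = R_i p_i + T_i."""
    s = t_i / C                              # fraction of the scan elapsed
    a, b, g = (prev + s * (curr - prev)
               for prev, curr in zip(pose_prev, pose_curr))
    R_i = rot_z(g) @ rot_y(b) @ rot_x(a)     # total rotation matrix
    T_i = s * np.asarray(xyz, dtype=float)   # displacement matrix
    return R_i @ np.asarray(p_i, dtype=float) + T_i
```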
Further, in some embodiments of the present invention, as shown in fig. 4, S102 includes:
s401, determining an initial plane model;
wherein, in some embodiments of the invention, the initial planar model is a linear model.
S402, determining a seed ground point cloud set and a seed non-ground point cloud set;
s403, optimizing the initial plane model;
s404, calculating an orthogonal projection distance between the point cloud data in the seed ground point cloud set and the optimized plane model, wherein if the orthogonal projection distance is smaller than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is larger than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set;
s405, judging whether the number of optimization iterations of the initial plane model is smaller than the threshold number; if so, repeating S403-S404; if not, stopping optimizing the initial plane model, and taking the seed ground point cloud set and the seed non-ground point cloud set as the target ground point cloud set and the target non-ground point cloud set, respectively.
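A compact sketch of the S403-S405 loop, assuming a least-squares plane fit via SVD; the iteration count and distance threshold are assumed values:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through the points: returns the unit normal n and
    offset d such that n @ x + d = 0, via SVD of the centred coordinates."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                               # direction of least variance
    return n, -n @ centroid

def segment_ground(points, seeds, n_iters=3, dist_thresh=0.2):
    """Iteratively optimise the plane model on the current ground set, then
    re-label every point by its orthogonal projection distance to the plane
    (smaller than the threshold: ground; otherwise: non-ground)."""
    ground = seeds
    for _ in range(n_iters):
        n, d = fit_plane(ground)             # S403: optimise the plane model
        dist = np.abs(points @ n + d)        # S404: orthogonal distances
        ground = points[dist < dist_thresh]
    non_ground = points[np.abs(points @ n + d) >= dist_thresh]
    return ground, non_ground
```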
Further, as shown in fig. 5, S402 includes:
s501, sequencing the plurality of point cloud data according to a preset height sequence;
s502, calculating the average height of a plurality of point cloud data;
s503, traversing a plurality of point cloud data, and taking the point cloud data with the height smaller than the average height as an initial ground point set;
s504, calculating an orthogonal projection distance between the point cloud data in the initial ground point set and the initial plane model, wherein if the orthogonal projection distance is smaller than a threshold distance, the point cloud data belongs to a seed ground point cloud set, and if the orthogonal projection distance is larger than or equal to the threshold distance, the point cloud data belongs to a seed non-ground point cloud set.
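A minimal sketch of S501-S503, assuming the z coordinate is the height axis; its output can serve as the seeds argument of the segment_ground sketch above, whose plane-distance test performs S504:

```python
import numpy as np

def select_seed_candidates(points):
    """Sort the point cloud by height, compute the average height, and keep
    the points lying below it as the initial ground point set."""
    pts = points[np.argsort(points[:, 2])]   # S501: sort by height
    avg_h = pts[:, 2].mean()                 # S502: average height
    return pts[pts[:, 2] < avg_h]            # S503: initial ground point set
```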
Through this arrangement, the drivable area of the unmanned logistics trolley can be determined for subsequent path planning.
By identifying and deleting outliers, rain and snow noise points can be effectively removed while the details of environmental features are preserved, further improving the robustness and accuracy of the environment perception of the unmanned logistics trolley.
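The patent does not detail the outlier test; a common statistical outlier-removal scheme consistent with this description is sketched below, where the neighbour count k and the standard-deviation ratio are assumed parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    unusually large; isolated rain/snow returns tend to fail this test
    while dense structure points pass it."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    cutoff = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d < cutoff]
```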
Further, as shown in fig. 6, in the hardware platform of the unmanned logistics trolley environment sensing method provided by the embodiment of the invention, an NVIDIA TX2 running the Robot Operating System (ROS) processes the sensing data from the sixteen-line mechanical lidar, the four-line solid-state lidar and the ultrasonic radar. One CAN channel of the TX2 is connected to the ultrasonic radar and the GPS/IMU, while the point cloud data of the sixteen-line mechanical lidar and the four-line solid-state lidar are transmitted to the TX2 through Ethernet. Finally, the detected environment information is output through another CAN channel of the TX2 to a CompactRIO, the lower computer responsible for path planning and motion control.
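For illustration, a skeleton of the TX2-side ROS node implied by fig. 6; the topic names, message types and node layout are assumptions, not taken from the patent:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2

class PerceptionNode:
    """Subscribes to both lidars, runs the perception pipeline, and publishes
    the fused obstacle information for the CAN bridge to the CompactRIO."""

    def __init__(self):
        rospy.init_node("environment_perception")
        rospy.Subscriber("/lidar16/points", PointCloud2, self.process)
        rospy.Subscriber("/lidar4/points", PointCloud2, self.process)
        self.pub = rospy.Publisher("/perception/obstacles", PointCloud2,
                                   queue_size=1)

    def process(self, msg):
        # distortion correction, ground segmentation, clustering and fusion
        # (sketched in the preceding listings) would run here
        self.pub.publish(msg)

if __name__ == "__main__":
    PerceptionNode()
    rospy.spin()
```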
On the other hand, in order to better implement the unmanned logistics trolley environment sensing method of the embodiment of the present invention, the embodiment of the present invention further provides, on the basis of that method and as shown in fig. 7, an unmanned logistics trolley environment sensing system, where the unmanned logistics trolley environment sensing system 700 includes:
the first perception module 701 is used for scanning surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the distance measurement ranges of the at least two laser radars are different;
a segmentation module 702 configured to determine a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
the second sensing module 703 is configured to obtain first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fuse the first local environment information with a target non-ground point cloud set to generate a target point cloud image;
a target detection module 704, configured to collect at least two coordinate values of the same obstacle information in the target point cloud image according to at least two laser radars, respectively;
the fusion module 705 is configured to fuse the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fusion coordinate value, and use the fusion coordinate value as the coordinate value of the obstacle information.
On the other hand, an embodiment of the present invention further provides a computer device, which integrates any one of the unmanned logistics trolley environment sensing systems provided by the embodiment of the present invention, where the computer device includes:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the processor to perform the steps of the unmanned logistics trolley environment perception method in any of the above embodiments.
Specifically, in this embodiment, the computer device can thus implement various functions, as follows:
scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
acquiring first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as the coordinate value of the obstacle information.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a computer-readable storage medium, which may include: Read Only Memory (ROM), Random Access Memory (RAM), a magnetic disk, an optical disk, and the like. The storage medium stores a computer program, and the computer program is loaded by a processor to execute the steps of any one of the unmanned logistics trolley environment perception methods provided by the embodiments of the present invention. For example, the computer program may be loaded by a processor to perform the steps of:
scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
acquiring first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as the coordinate value of the obstacle information.
In a specific implementation, each unit or structure may be implemented as an independent entity, or may be combined arbitrarily to be implemented as one or several entities, and the specific implementation of each unit or structure may refer to the foregoing method embodiment, which is not described herein again.
The method and the system for sensing the environment of the unmanned logistics trolley provided by the invention are described in detail, a specific example is applied in the method to explain the principle and the implementation mode of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (9)

1. An unmanned logistics trolley environment perception method is characterized by comprising the following steps:
scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
acquiring first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as the coordinate value of the obstacle information.
2. The unmanned logistics trolley environment perception method of claim 1, wherein the generating of the unmanned logistics trolley environment point cloud image by at least two laser radars comprises:
scanning surrounding environment information of the unmanned logistics trolley through a first laser radar to generate an initial point cloud image;
carrying out distortion correction on the initial point cloud image through a GPS/IMU (global positioning system/inertial measurement unit) to generate a corrected point cloud image;
and acquiring second local environment information of the unmanned logistics trolley through a second laser radar, and fusing the second local environment information and the correction point cloud image to generate the environment point cloud image.
3. The unmanned logistics trolley environment perception method of claim 2, wherein the distortion correction of the initial point cloud image by the GPS/IMU comprises:
respectively acquiring first position and attitude information of a current frame and second position and attitude information of a previous frame adjacent to the current frame of the unmanned logistics trolley in a scanning period of the first laser radar through a GPS/IMU;
and carrying out distortion correction on the initial point cloud image through preset coordinate conversion, the first position and attitude information, and the second position and attitude information.
4. The unmanned logistics trolley environment sensing method of claim 3, wherein the first position and attitude information comprises a first yaw angle, a first pitch angle, a first course angle, and a displacement, and the second position and attitude information comprises a second yaw angle, a second pitch angle, and a second course angle; the distortion correction of the initial point cloud image through the preset coordinate conversion, the first position and attitude information, and the second position and attitude information specifically comprises:
p' = R_i p_i + T_i

[α_i, β_i, γ_i]^T = [α, β, γ]^T + (t_i / C) · ([φ_x, φ_y, φ_z]^T − [α, β, γ]^T)

R_i = R_z(γ_i) R_y(β_i) R_x(α_i)

T_i = (t_i / C) · [x, y, z]^T

in the formulas, p' is the coordinate value of each point cloud datum in the initial point cloud image after distortion correction; R_i is the total rotation matrix; p_i is the coordinate value of each point cloud datum in the initial point cloud image before distortion correction; T_i is the displacement matrix; [φ_x, φ_y, φ_z]^T is the attitude matrix, in which φ_x is the first yaw angle, φ_y is the first pitch angle, and φ_z is the first course angle; C is the scanning period of the first laser radar; [·]^T denotes the transpose; t_i is the time difference between the current frame and the previous frame adjacent to the current frame; α is the second yaw angle; β is the second pitch angle; γ is the second course angle; x, y, z are the three coordinate components of the displacement; and R_x, R_y, R_z are the rotation matrices about the X-, Y- and Z-axes, respectively.
5. The unmanned logistics trolley environment perception method of claim 4, wherein the determining a target ground point cloud set and a target non-ground point cloud set of the plurality of point cloud data comprises:
determining an initial plane model;
determining a seed ground point cloud set and a seed non-ground point cloud set;
optimizing the initial plane model;
calculating an orthogonal projection distance between the point cloud data in the seed ground point cloud set and the optimized plane model, wherein if the orthogonal projection distance is smaller than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is larger than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set;
judging whether the optimization times of the initial plane model are smaller than threshold times, if so, optimizing the initial plane model again; if not, stopping optimizing the initial plane model; the seed ground point cloud set and the seed non-ground point cloud set are the target ground point cloud set and the target non-ground point cloud set respectively.
6. The unmanned logistics trolley environment perception method of claim 5, wherein determining a seed ground point cloud set and a seed non-ground point cloud set comprises:
sequencing the plurality of point cloud data according to a preset height sequence;
calculating an average height of the plurality of point cloud data;
traversing the plurality of point cloud data, and taking the point cloud data with the height smaller than the average height as an initial ground point set;
and calculating the orthogonal projection distance between the point cloud data in the initial ground point set and the initial plane model, wherein if the orthogonal projection distance is less than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is greater than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set.
7. The unmanned logistics trolley environment perception method of claim 2, wherein the first lidar is a sixteen-line mechanical lidar and the second lidar is a four-line solid state lidar.
8. The unmanned logistics trolley environment sensing method of claim 7, wherein the range of the sixteen-line mechanical laser radar is 20 m and the range of the four-line solid-state laser radar is 50 m.
9. An unmanned logistics trolley environment perception system is characterized by comprising:
the first perception module, used for scanning surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
a segmentation module for determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
the second sensing module is used for obtaining first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
the target detection module is used for respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and the fusion module is used for fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fusion coordinate value, and the fusion coordinate value is used as the coordinate value of the obstacle information.
CN202110639869.0A 2021-06-08 2021-06-08 Unmanned logistics trolley environment sensing method and system Pending CN113376638A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110639869.0A CN113376638A (en) 2021-06-08 2021-06-08 Unmanned logistics trolley environment sensing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110639869.0A CN113376638A (en) 2021-06-08 2021-06-08 Unmanned logistics trolley environment sensing method and system

Publications (1)

Publication Number Publication Date
CN113376638A true CN113376638A (en) 2021-09-10

Family

ID=77573062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110639869.0A Pending CN113376638A (en) 2021-06-08 2021-06-08 Unmanned logistics trolley environment sensing method and system

Country Status (1)

Country Link
CN (1) CN113376638A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180061929A (en) * 2016-11-30 2018-06-08 주식회사 모디엠 MOBILE 3D MAPPING SYSTEM OF RAILWAY FACILITIES EQUIPPED WITH DUAL LIDAR and 3D MAPPING METHOD USING THE SAME
WO2018205119A1 (en) * 2017-05-09 2018-11-15 深圳市速腾聚创科技有限公司 Roadside detection method and system based on laser radar scanning
CN108037515A (en) * 2017-12-27 2018-05-15 清华大学苏州汽车研究院(吴江) A kind of laser radar and ultrasonic radar information fusion system and method
CN108333589A (en) * 2018-03-13 2018-07-27 苏州青飞智能科技有限公司 A kind of automatic driving vehicle obstacle detector
CN110244321A (en) * 2019-04-22 2019-09-17 武汉理工大学 A kind of road based on three-dimensional laser radar can traffic areas detection method
DE102019123483A1 (en) * 2019-09-02 2021-03-04 Audi Ag Method and motor vehicle control unit for detecting the surroundings of a motor vehicle by merging sensor data at point cloud level
CN112731449A (en) * 2020-12-23 2021-04-30 深圳砺剑天眼科技有限公司 Laser radar obstacle identification method and system
CN112859051A (en) * 2021-01-11 2021-05-28 桂林电子科技大学 Method for correcting laser radar point cloud motion distortion

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
杨波; 程维政; 朱超: "Community detection algorithm based on biogeography-based optimization accelerated by the small-world effect" (小世界效应加速生物地理学优化的社团识别算法), Journal of Harbin Institute of Technology (哈尔滨工业大学学报), vol. 52, no. 3
潘泉: "Principles of Airborne LiDAR Systems and Point Cloud Processing Methods" (机载LiDAR***原理与点云处理方法), National Defense Industry Press, 31 October 2009, pages 148-91 *
袁金国 (ed.): "Digital Processing of Remote Sensing Images" (遥感图像数字处理), China Environmental Science Press, 28 February 2006, pages 252-253 *
赵胜强: "Research on extraction techniques for typical ground objects from vehicle-mounted laser point clouds" (车载激光点云典型地物提取技术研究), Railway Investigation and Surveying (铁道勘察), no. 4
郭鹏飞: "Building Extraction in Mining Areas by Fusing LiDAR Point Clouds and Image Data" (融合LiDAR点云与影像数据的矿区建筑物提取), Xi'an Jiaotong University Press, 31 December 2019, pages 30-31 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113759947A (en) * 2021-09-10 2021-12-07 中航空管系统装备有限公司 Airborne flight obstacle avoidance auxiliary method, device and system based on laser radar
CN113759947B (en) * 2021-09-10 2023-08-08 中航空管系统装备有限公司 Airborne flight obstacle avoidance assisting method, device and system based on laser radar
CN113607166A (en) * 2021-10-08 2021-11-05 广东省科学院智能制造研究所 Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion
CN113607166B (en) * 2021-10-08 2022-01-07 广东省科学院智能制造研究所 Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion

Similar Documents

Publication Publication Date Title
CN110645974B (en) Mobile robot indoor map construction method fusing multiple sensors
KR102581263B1 (en) Method, apparatus, computing device and computer-readable storage medium for positioning
EP3264364B1 (en) Method and apparatus for obtaining range image with uav, and uav
CN110068836B (en) Laser radar road edge sensing system of intelligent driving electric sweeper
CN110675307A (en) Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM
CN110794406B (en) Multi-source sensor data fusion system and method
CN111699410B (en) Processing method, equipment and computer readable storage medium of point cloud
CN112414417B (en) Automatic driving map generation method and device, electronic equipment and readable storage medium
CN110889808A (en) Positioning method, device, equipment and storage medium
CN103065323A (en) Subsection space aligning method based on homography transformational matrix
CN113376638A (en) Unmanned logistics trolley environment sensing method and system
CN113432533B (en) Robot positioning method and device, robot and storage medium
CN111257882B (en) Data fusion method and device, unmanned equipment and readable storage medium
CN112363158A (en) Pose estimation method for robot, and computer storage medium
CN111273312A (en) Intelligent vehicle positioning and loop-back detection method
Song et al. Cooperative vehicle localisation method based on the fusion of GPS, inter‐vehicle distance, and bearing angle measurements
CN116608847A (en) Positioning and mapping method based on area array laser sensor and image sensor
CN117389305A (en) Unmanned aerial vehicle inspection path planning method, system, equipment and medium
CN112486172A (en) Road edge detection method and robot
CN116465393A (en) Synchronous positioning and mapping method and device based on area array laser sensor
CN116429090A (en) Synchronous positioning and mapping method and device based on line laser and mobile robot
CN114398455A (en) Heterogeneous multi-robot cooperative SLAM map fusion method
Zheng et al. Integrated navigation system with monocular vision and LIDAR for indoor UAVs
Feller et al. Investigation of Lidar Data for Autonomous Driving with an Electric Bus
CN113534193B (en) Method and device for determining target reflection point, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination