CN115546522A - Moving object identification method and related device - Google Patents

Info

Publication number
CN115546522A
Authority
CN
China
Prior art keywords
point cloud
cloud data
cluster
data
clustered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210046811.XA
Other languages
Chinese (zh)
Inventor
吴伟 (Wu Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DeepRoute AI Ltd filed Critical DeepRoute AI Ltd
Priority to CN202210046811.XA
Publication of CN115546522A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02: Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06: Systems determining position data of a target
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88: Radar or analogous systems specially adapted for specific applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a moving target identification method and a related device. The method includes: merging first point cloud data and second point cloud data to obtain third point cloud data, the second point cloud data being acquired at the next moment of the first point cloud data; clustering the third point cloud data to obtain clustered third point cloud data; splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data; and determining a moving target in the point cloud data based on the fourth point cloud data and the fifth point cloud data. In this way, the clustering effect and the splitting accuracy are improved, which in turn improves the accuracy of determining the moving target and facilitates subsequent processing based on the moving target.

Description

Moving object identification method and related device
Technical Field
The present application relates to the field of point cloud processing technologies, and in particular, to a method and a device for identifying a moving object.
Background
Point cloud data is typically obtained by a radar sensor, and moving objects can be identified from the corresponding point cloud data. For example, in the field of unmanned driving, the main task of the perception system is to collect point cloud data with a radar sensor and perceive information such as the position, speed, category and predicted behavior of targets, of which the most important part is the detection and tracking of moving targets. If a moving target is perceived incorrectly, for example a static target A is identified as dynamic, the downstream trajectory planning module may plan a trajectory through position A, thereby causing a traffic accident; similarly, if a dynamic target is identified as static, sudden braking may result, degrading the riding experience.
Disclosure of Invention
In order to solve the above problems, the present application provides a moving object identification method and a related apparatus, which improve the clustering effect and the splitting accuracy, and thereby the accuracy of determining a moving target, so as to facilitate subsequent processing based on the moving target.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide a moving target identification method comprising: merging first point cloud data and second point cloud data to obtain third point cloud data, the second point cloud data being acquired at the next moment of the first point cloud data; clustering the third point cloud data to obtain clustered third point cloud data; splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data; and determining a moving target in the point cloud data based on the fourth point cloud data and the fifth point cloud data.
Wherein clustering the third point cloud data to obtain clustered third point cloud data includes: clustering the third point cloud data to obtain at least one clustered point cloud cluster, each point cloud cluster corresponding to a target object; and splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data includes: splitting each point cloud cluster to obtain a first point cloud cluster corresponding to the first point cloud data and a second point cloud cluster corresponding to the second point cloud data.
Wherein splitting each point cloud cluster to obtain a first point cloud cluster corresponding to the first point cloud data and a second point cloud cluster corresponding to the second point cloud data includes: acquiring the time sequence number of the data points in each point cloud cluster; and obtaining the first point cloud cluster and the second point cloud cluster from the point clouds with the same time sequence number.
Wherein determining the moving target in the point cloud data based on the fourth point cloud data and the fifth point cloud data includes: determining whether the corresponding target object is a moving target based on the first point cloud cluster and the second point cloud cluster.
Wherein determining whether the corresponding target object is a moving target based on the first point cloud cluster and the second point cloud cluster includes: determining the degree of coincidence between the first point cloud cluster and the second point cloud cluster; and determining whether the corresponding target object is a moving target based on the degree of coincidence.
Wherein, before clustering the third point cloud data to obtain the clustered third point cloud data, the method includes: filtering the third point cloud data to remove the ground plane point cloud from the third point cloud data; and clustering the third point cloud data to obtain the clustered third point cloud data then includes: clustering the filtered third point cloud data to obtain the clustered third point cloud data.
Wherein merging the first point cloud data and the second point cloud data to obtain the third point cloud data includes: projecting the first point cloud data onto the second point cloud data to obtain the third point cloud data.
Wherein projecting the first point cloud data onto the second point cloud data to obtain the third point cloud data includes: acquiring first position information of the unmanned vehicle corresponding to the first point cloud data and second position information of the unmanned vehicle corresponding to the second point cloud data; converting the coordinate system of the first point cloud data using the second position information and the first position information; and projecting the coordinate-converted first point cloud data onto the second point cloud data to obtain the third point cloud data.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an in-vehicle control system including: a radar sensor for acquiring first point cloud data and second point cloud data; and a processor connected with the radar sensor for implementing the method provided in the above technical solution.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an unmanned vehicle including the in-vehicle control system provided in the above technical solution.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method provided in the above technical solution.
The beneficial effects of the embodiments of the application are as follows. Unlike the prior art, the moving object identification method provided by the application merges the first point cloud data and the second point cloud data to obtain third point cloud data, the second point cloud data being acquired at the next moment of the first point cloud data; clusters the third point cloud data to obtain clustered third point cloud data; splits the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data; and determines a moving target in the point cloud data based on the fourth and fifth point cloud data. In this way, on the one hand, the point cloud data of adjacent moments are merged and the merged data are clustered, so that the point cloud data of both moments are clustered at once; since more point cloud data are available after merging, the clustering effect is improved. On the other hand, splitting the clustered third point cloud data improves the splitting accuracy, and therefore the accuracy of determining the moving target, which facilitates subsequent processing based on the moving target.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below clearly represent only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort. Wherein:
fig. 1 is a schematic flow chart of a first embodiment of a moving object identification method provided in the present application;
FIG. 2 is a flowchart illustrating a second embodiment of a method for identifying a moving object provided by the present application;
FIG. 3 is a schematic flow chart diagram illustrating an embodiment of step 23 provided herein;
FIG. 4 is a schematic flow chart diagram illustrating one embodiment of step 24 provided herein;
FIG. 5 is a flowchart illustrating a third embodiment of a method for identifying a moving object according to the present application;
FIG. 6 is a schematic flowchart of an embodiment of step 51 provided herein;
FIG. 7 is a schematic structural diagram of an embodiment of an onboard control system provided by the present application;
FIG. 8 is a schematic block diagram of an embodiment of an unmanned vehicle as provided herein;
FIG. 9 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart diagram of a first embodiment of a moving object identification method provided in the present application. The method comprises the following steps:
step 11: merging the first point cloud data and the second point cloud data to obtain third point cloud data; the second point cloud data is acquired at the next moment of the first point cloud data.
In some embodiments, the point cloud data is acquired using a radar sensor. The radar sensor may be a lidar, such as a mechanical lidar or a solid-state lidar, and can acquire corresponding point cloud data at different moments. For example, the second point cloud data is acquired at time t-1 and the first point cloud data is acquired at time t.
When the acquisition frequency of the radar sensor is high, the interval between adjacent moments is small, so the point cloud data acquired at adjacent moments can be used to determine whether the target objects corresponding to the point cloud data are in motion. A target object in a motion state is determined to be a moving target.
In step 11, the first point cloud data and the second point cloud data may be merged either by projecting the first point cloud data onto the second point cloud data to obtain the third point cloud data, or by projecting the second point cloud data onto the first point cloud data to obtain the third point cloud data.
In some embodiments, if the radar sensor is mounted at a fixed location, it produces no displacement over the time interval. In that case, the first point cloud data and the second point cloud data can be merged by projecting either one onto the other to obtain the third point cloud data.
In some embodiments, if the radar sensor is displaced over the time interval, the coordinate information before and after the displacement needs to be determined, and the first point cloud data and the second point cloud data are merged based on that coordinate information. For example, the radar sensor is mounted on an unmanned vehicle, which collects point cloud data with the radar sensor while moving.
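By way of illustration only, the following Python sketch shows one way the merge of step 11 could be realized with NumPy. The function name, the (x, y, z, t) array layout and the explicit time tags are assumptions of this presentation, not part of the original disclosure:

```python
import numpy as np

def merge_scans(cloud_a: np.ndarray, t_a: int,
                cloud_b: np.ndarray, t_b: int) -> np.ndarray:
    """Stack two (N, 3) scans into one (M, 4) array of (x, y, z, t).

    Tagging every point with its time sequence number is what later turns
    the split step into a simple lookup rather than a re-segmentation.
    """
    tagged_a = np.hstack([cloud_a, np.full((len(cloud_a), 1), float(t_a))])
    tagged_b = np.hstack([cloud_b, np.full((len(cloud_b), 1), float(t_b))])
    return np.vstack([tagged_a, tagged_b])
```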
Step 12: clustering the third point cloud data to obtain clustered third point cloud data.
When the point cloud data is acquired with a radar sensor, every target object within the sensor's acquisition range contributes data points. Because groups of these data points correspond to the same target object, the point cloud data can be clustered to partition it into the data points of each target object.
The third point cloud data may be clustered with any of K-means clustering, mean-shift clustering, density-based clustering, expectation-maximization (EM) clustering with a Gaussian mixture model (GMM), agglomerative hierarchical clustering, graph community detection, connected-component clustering, or the DBSCAN algorithm, so as to group the data points of the third point cloud data into different point cloud clusters. The goal is to cluster together the data points that belong to the same target object.
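As an illustrative sketch only, the merged cloud could be clustered with DBSCAN as follows; the eps and min_samples values are assumed placeholders that would need tuning to the sensor:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_merged_cloud(merged: np.ndarray, eps: float = 0.5,
                         min_samples: int = 5) -> np.ndarray:
    """Cluster the merged (M, 4) cloud on its spatial columns only.

    The time column is deliberately excluded from the distance metric so
    that points of the same object from both moments fall into one
    cluster. Returns one label per point; -1 marks noise.
    """
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(merged[:, :3])
```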
Step 13: splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data.
The clustered third point cloud data groups together, by shared features, data points that originally belonged to the first point cloud data and data points that originally belonged to the second point cloud data. The clustered data points therefore need to be split to separate the fourth point cloud data corresponding to the first point cloud data from the fifth point cloud data corresponding to the second point cloud data.
Because the split is performed on the already clustered third point cloud data, the resulting fourth point cloud data and fifth point cloud data retain the per-cluster grouping for their respective moments.
Step 14: determining a moving target in the point cloud data based on the fourth point cloud data and the fifth point cloud data.
The data has been split into the fourth point cloud data and the fifth point cloud data; that is, each clustered target object has been split into two point cloud clusters, one belonging to the fourth point cloud data and one to the fifth point cloud data. The two clusters occupy their respective positions in the world coordinate system, so whether the target object has moved can be judged from their positional relationship: if the target object is static, the overlap of the two clusters is very high; conversely, if the target object is moving, there is some offset between the two clusters.
Moving targets among the clustered target objects can therefore be distinguished in this manner.
In an application scenario, this embodiment is applied to an unmanned vehicle on which a corresponding radar sensor is provided. After the control system of the unmanned vehicle determines the moving target in the above manner, it can plan the driving route and driving speed of the vehicle according to the moving target.
In this embodiment, the third point cloud data is obtained by merging the first point cloud data and the second point cloud data, the second point cloud data being acquired at the next moment of the first point cloud data; the third point cloud data is clustered to obtain clustered third point cloud data; the clustered third point cloud data is split to obtain fourth point cloud data and fifth point cloud data; and the moving target in the point cloud data is determined based on the fourth and fifth point cloud data. On the one hand, merging the point cloud data of adjacent moments and clustering the merged data clusters both moments at once, and the larger amount of merged point cloud data improves the clustering effect. On the other hand, splitting the clustered third point cloud data improves the splitting accuracy and therefore the accuracy of determining the moving target, which facilitates subsequent processing based on the moving target.
Referring to fig. 2, fig. 2 is a schematic flow chart of a second embodiment of the moving object identification method provided in the present application. The method comprises the following steps:
step 21: merging the first point cloud data and the second point cloud data to obtain third point cloud data; the second point cloud data is acquired at the next moment of the first point cloud data.
Step 22: clustering the third point cloud data to obtain at least one clustered point cloud cluster; each point cloud cluster corresponds to a target object.
Step 21 and step 22 have the same or similar technical solutions as the above embodiments, and are not described herein again.
In some embodiments, the radar sensor is disposed on the unmanned vehicle and collects data points to form point cloud data while the vehicle is driving. The target objects corresponding to the point cloud data may be, for example, trees, lanes, other vehicles and buildings. During clustering, the data points are therefore grouped according to the characteristics of the target objects to form corresponding point cloud clusters; that is, each target object can be represented by one point cloud cluster.
Step 23: splitting each point cloud cluster to obtain a first point cloud cluster corresponding to the first point cloud data and a second point cloud cluster corresponding to the second point cloud data.
A point cloud cluster obtained by clustering the merged data therefore contains data points from both moments, and it needs to be split into point cloud clusters of the respective moments to facilitate the subsequent determination of the moving target.
In some embodiments, referring to fig. 3, step 23 may proceed as follows:
step 231: and acquiring the time sequence number of the data points in each point cloud cluster.
When point cloud data is collected, the data points in each point cloud data carry corresponding time sequence numbers. If the second point cloud data is acquired at time t-1, the time sequence number of its data points can be set to t-1; the first point cloud data is acquired at time t, so the time sequence number of its data points can be set to t.
The time sequence number corresponding to every data point in each clustered point cloud cluster can therefore be obtained.
Step 232: obtaining a first point cloud cluster and a second point cloud cluster based on the point clouds with the same time sequence number.
Data points with the same time sequence number are grouped together as one class; since the data points have already been clustered, each group still represents the target object at that time sequence number.
Each clustered point cloud cluster is then divided according to time sequence numbers t-1 and t, yielding the first point cloud cluster corresponding to the first point cloud data and the second point cloud cluster corresponding to the second point cloud data. For example, suppose clustering produces point cloud clusters A, B, C and D. After the time sequence numbers of the data points in clusters A, B, C and D are obtained, cluster A is split by time sequence number into clusters A1 and A2, cluster B into clusters B1 and B2, cluster C into clusters C1 and C2, and cluster D into clusters D1 and D2.
The point cloud clusters A1, B1, C1 and D1 correspond to the first point cloud data, and the point cloud clusters A2, B2, C2 and D2 correspond to the second point cloud data.
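Continuing the illustrative sketch from above (the array layout and function names are assumptions carried over from the merge example), the split by time sequence number reduces to boolean indexing:

```python
import numpy as np

def split_cluster(merged: np.ndarray, labels: np.ndarray, cluster_id: int,
                  t_first: int, t_second: int):
    """Split one clustered object back into its two per-moment clouds.

    `merged` is the (M, 4) array of (x, y, z, t) from the merge step and
    `labels` holds the per-point cluster ids; the points of `cluster_id`
    captured at t_first and at t_second are returned, e.g. A -> (A1, A2).
    """
    members = merged[labels == cluster_id]
    first_cluster = members[members[:, 3] == t_first, :3]    # e.g. A1
    second_cluster = members[members[:, 3] == t_second, :3]  # e.g. A2
    return first_cluster, second_cluster
```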
Step 24: determining whether the corresponding target object is a moving target based on the first point cloud cluster and the second point cloud cluster.
In some embodiments, an ICP (iterative closest point) algorithm may be used to determine whether the target object corresponding to the first point cloud cluster and the second point cloud cluster is a moving target.
In some embodiments, a Chamfer distance algorithm may be utilized to determine whether the target object corresponding to the first point cloud cluster and the second point cloud cluster is a moving target.
In some embodiments, referring to fig. 4, step 24 may proceed as follows:
step 241: and determining the coincidence condition of the first point cloud cluster and the second point cloud cluster.
For example, the projection directions corresponding to the first point cloud cluster and the second point cloud cluster are determined, and the two clusters are projected along those directions to obtain corresponding vectors. The coincidence between the vectors can then be determined, for example as the intersection-over-union (IoU) between them.
Step 242: determining whether the corresponding target object is a moving target based on the degree of coincidence.
A coincidence threshold is set: when the degree of coincidence is greater than the threshold, the target object corresponding to the first point cloud cluster and the second point cloud cluster is determined not to be a moving target; when the degree of coincidence is less than or equal to the threshold, the target object is determined to be a moving target.
Taking the intersection-over-union as an example: the larger the IoU, the higher the degree of coincidence and the more likely the target object has not moved; the smaller the IoU, the lower the degree of coincidence and the more likely the target object has moved. An IoU threshold can therefore be set: when the IoU obtained from the coincidence determination is greater than the threshold, the target object is determined to be a stationary object that has not moved; when the IoU is less than or equal to the threshold, the target object is determined to be a moving target.
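For illustration, a minimal version of this IoU test along a single projection direction could look as follows; the threshold of 0.8 is an assumed placeholder, and the description leaves the choice of projection direction and threshold open:

```python
import numpy as np

def interval_iou(cluster_a: np.ndarray, cluster_b: np.ndarray,
                 direction: np.ndarray) -> float:
    """Intersection-over-union of two clusters projected onto one axis.

    Each (N, 3) cluster is projected onto `direction`; the two resulting
    scalar ranges are treated as 1-D intervals and their IoU is returned.
    """
    d = direction / np.linalg.norm(direction)
    a, b = cluster_a @ d, cluster_b @ d
    inter = max(0.0, min(a.max(), b.max()) - max(a.min(), b.min()))
    union = max(a.max(), b.max()) - min(a.min(), b.min())
    return inter / union if union > 0 else 0.0

def is_moving(cluster_t0: np.ndarray, cluster_t1: np.ndarray,
              direction: np.ndarray, iou_threshold: float = 0.8) -> bool:
    """Static if the per-moment clusters overlap strongly, moving otherwise."""
    return interval_iou(cluster_t0, cluster_t1, direction) <= iou_threshold
```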
In this embodiment, the moving target is determined from the first point cloud cluster and the second point cloud cluster of the same clustered target object. Because these clusters are obtained by splitting the clustered point cloud data, the splitting accuracy is ensured, which further improves the accuracy of determining the moving target and facilitates subsequent processing based on the moving target.
Referring to fig. 5, fig. 5 is a schematic flow chart of a third embodiment of the moving object identification method provided in the present application. The method comprises the following steps:
step 51: merging the first point cloud data and the second point cloud data to obtain third point cloud data; the second point cloud data is acquired at the next moment of the first point cloud data.
In some embodiments, the first point cloud data may be projected into the second point cloud data to obtain third point cloud data.
In an application scenario, referring to fig. 6, step 51 may proceed as follows:
step 511: and acquiring first position information of the unmanned vehicle corresponding to the first point cloud data and second position information of the unmanned vehicle corresponding to the second point cloud data.
Since the unmanned vehicle itself is also moving while driving, the coordinates need to be transformed based on the pose of the unmanned vehicle, so as to eliminate the vehicle's own motion from the data.
Step 512: converting the coordinate system of the first point cloud data by using the second position information and the first position information.
Here the first point cloud data can be coordinate-converted from the first position information with the second position information as the reference.
The transformation between the first position information and the second position information is determined first, for example as a transformation matrix. The coordinate system of the first point cloud data is then converted with this matrix, yielding the coordinates of the first point cloud data under the second position information.
Step 513: projecting the first point cloud data after the coordinate system conversion to the second point cloud data to obtain third point cloud data.
Since the second point cloud data was acquired under the second position information, the coordinate-converted first point cloud data can be projected into the second point cloud data, so that both share a single coordinate system.
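As an illustrative sketch, assuming the position information is available as 4x4 homogeneous vehicle-to-world pose matrices (an assumption of this presentation, not stated in the disclosure), the conversion and projection of steps 512 and 513 could be written as:

```python
import numpy as np

def project_to_second_frame(first_pts: np.ndarray,
                            pose_first: np.ndarray,
                            pose_second: np.ndarray) -> np.ndarray:
    """Re-express the first scan in the coordinate frame of the second scan.

    The relative transform inv(pose_second) @ pose_first maps points from
    the frame of the first position into the frame of the second, which
    removes the vehicle's own motion between the two moments.
    """
    transform = np.linalg.inv(pose_second) @ pose_first
    homogeneous = np.hstack([first_pts, np.ones((len(first_pts), 1))])
    return (homogeneous @ transform.T)[:, :3]
```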
Step 52: filtering the third point cloud data to remove the ground plane point cloud in the third point cloud data.
Optionally, any ground segmentation algorithm, such as plane estimation, may be used to remove the data points belonging to the ground plane from the point cloud data, so that the remaining data points are all non-ground points.
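A minimal RANSAC-style plane estimation, given only as an illustration of such a ground filter (all thresholds are assumed placeholders), might look as follows:

```python
import numpy as np

def ground_mask(xyz: np.ndarray, dist_thresh: float = 0.2,
                iterations: int = 100, seed: int = 0) -> np.ndarray:
    """Boolean mask of likely ground points via a minimal RANSAC plane fit.

    A plane is repeatedly fitted through 3 random points and the plane
    with the most inliers wins; points within dist_thresh of that plane
    are flagged as ground.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(xyz), dtype=bool)
    for _ in range(iterations):
        p1, p2, p3 = xyz[rng.choice(len(xyz), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        mask = np.abs((xyz - p1) @ (normal / norm)) < dist_thresh
        if mask.sum() > best.sum():
            best = mask
    return best

# Applied to the merged (M, 4) cloud: drop ground rows, keep the time tags.
# filtered = merged[~ground_mask(merged[:, :3])]
```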
Step 53: clustering the filtered third point cloud data to obtain clustered third point cloud data.
Step 54: splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data.
Step 55: determining the moving target in the point cloud data based on the fourth point cloud data and the fifth point cloud data.
Steps 53 to 55 are the same as or similar to the corresponding steps of the above embodiments and are not described here again.
In this embodiment, the point cloud data of adjacent moments are merged and the merged data are clustered, so that the point cloud data of both moments are clustered at once; since more point cloud data are available after merging, the clustering effect can be improved.
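Chaining the illustrative helpers sketched in the earlier embodiments gives the following end-to-end outline. The time tags follow the description's example (the second point cloud at time t-1, here tag 0; the first at time t, here tag 1), and a fixed projection axis is assumed for brevity, whereas the description leaves the projection direction open:

```python
import numpy as np

def identify_moving_targets(first_pts: np.ndarray, pose_first: np.ndarray,
                            second_pts: np.ndarray, pose_second: np.ndarray,
                            direction: np.ndarray = np.array([1.0, 0.0, 0.0])):
    """End-to-end sketch: merge, filter ground, cluster, split, compare."""
    aligned_first = project_to_second_frame(first_pts, pose_first, pose_second)
    merged = merge_scans(aligned_first, 1, second_pts, 0)   # step 51
    merged = merged[~ground_mask(merged[:, :3])]            # step 52
    labels = cluster_merged_cloud(merged)                   # step 53
    moving_ids = []
    for cid in set(labels) - {-1}:                          # steps 54-55
        at_t, at_t_minus_1 = split_cluster(merged, labels, cid, 1, 0)
        if len(at_t) and len(at_t_minus_1) \
                and is_moving(at_t_minus_1, at_t, direction):
            moving_ids.append(cid)
    return moving_ids
```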
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of the vehicle-mounted control system provided in the present application. The onboard control system 70 includes: a radar sensor 71 and a processor 72.
The radar sensor 71 is configured to acquire first point cloud data and second point cloud data. The processor 72 is connected to the radar sensor 71 for performing the following steps:
merging the first point cloud data and the second point cloud data to obtain third point cloud data; the second point cloud data is acquired at the next moment of the first point cloud data; clustering the third point cloud data to obtain clustered third point cloud data; splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data; and determining the moving target in the point cloud data based on the fourth point cloud data and the fifth point cloud data.
It can be understood that the processor 72 is also used for implementing the technical solution of any of the above embodiments, and the detailed description is omitted here.
Based on the identified moving target, the in-vehicle control system 70 can determine the travel route, travel speed and the like for the next moment.
In other embodiments, the processor 72 is also coupled to a memory (not shown). The memory is used for storing a computer program, and the computer program is used for implementing the technical solution of any of the above embodiments when being executed by the processor 72, which is not described herein again.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of the unmanned vehicle provided in the present application. The unmanned vehicle 80 includes an onboard control system 70.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application. The computer-readable storage medium 90 stores a computer program 91 which, when executed by a processor, implements the following method:
merging the first point cloud data and the second point cloud data to obtain third point cloud data; the second point cloud data is acquired at the next moment of the first point cloud data; clustering the third point cloud data to obtain clustered third point cloud data; splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data; and determining the moving target in the point cloud data based on the fourth point cloud data and the fifth point cloud data.
It can be understood that the computer program 91 is also used for implementing the technical solution of any of the above embodiments when being executed by the processor, and is not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the circuits or units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an embodiment of the present application, and is not intended to limit the scope of the present application, and all equivalent structures or equivalent processes performed according to the contents of the specification and the drawings, or applied directly or indirectly to other related technical fields, are all included in the scope of the present application.

Claims (11)

1. A method for identifying a moving object, the method comprising:
merging the first point cloud data and the second point cloud data to obtain third point cloud data; the second point cloud data is acquired at the next moment of the first point cloud data;
clustering the third point cloud data to obtain clustered third point cloud data;
splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data;
and determining a moving target in the point cloud data based on the fourth point cloud data and the fifth point cloud data.
2. The method of claim 1, wherein the clustering the third point cloud data to obtain clustered third point cloud data comprises:
clustering the third point cloud data to obtain at least one clustered point cloud cluster; each point cloud cluster corresponds to a target object;
splitting the clustered third point cloud data to obtain fourth point cloud data and fifth point cloud data, including:
and splitting each point cloud cluster to obtain a first point cloud cluster corresponding to the first point cloud data and a second point cloud cluster corresponding to the second point cloud data.
3. The method of claim 2, wherein splitting each point cloud cluster to obtain a first point cloud cluster corresponding to the first point cloud data and a second point cloud cluster corresponding to the second point cloud data comprises:
acquiring the time sequence number of the data points in each point cloud cluster;
and obtaining the first point cloud cluster and the second point cloud cluster based on the point clouds with the same time sequence number.
4. The method of claim 2, wherein determining the moving object in the point cloud data based on the fourth point cloud data and the fifth point cloud data comprises:
and determining whether the corresponding target object is a moving target or not based on the first point cloud cluster and the second point cloud cluster.
5. The method of claim 4, wherein the determining whether the corresponding target object is a moving target based on the first point cloud cluster and the second point cloud cluster comprises:
determining the coincidence condition of the first point cloud cluster and the second point cloud cluster;
and determining whether the corresponding target object is a moving target or not based on the coincidence condition.
6. The method of claim 1, wherein, before the clustering the third point cloud data to obtain clustered third point cloud data, the method further comprises:
filtering the third point cloud data to remove the ground plane point cloud in the third point cloud data;
the clustering the third point cloud data to obtain clustered third point cloud data includes:
and clustering the third point cloud data from which the ground plane point cloud has been removed, to obtain the clustered third point cloud data.
7. The method of claim 1, wherein merging the first point cloud data and the second point cloud data to obtain third point cloud data comprises:
and projecting the first point cloud data to the second point cloud data to obtain third point cloud data.
8. The method of claim 7, wherein projecting the first point cloud data into the second point cloud data to obtain the third point cloud data comprises:
acquiring first position information of the unmanned vehicle corresponding to the first point cloud data and second position information of the unmanned vehicle corresponding to the second point cloud data;
converting a coordinate system of the first point cloud data by using the second position information and the first position information;
and projecting the first point cloud data after the coordinate system conversion to the second point cloud data to obtain the third point cloud data.
9. An in-vehicle control system, characterized by comprising:
the radar sensor is used for acquiring first point cloud data and second point cloud data;
a processor connected to the radar sensor for implementing the method of any one of claims 1-8.
10. An unmanned vehicle, comprising an on-board control system as claimed in claim 9.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program which, when being executed by a processor, is used to carry out the method according to any one of claims 1-8.
CN202210046811.XA 2022-01-14 2022-01-14 Moving object identification method and related device Pending CN115546522A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210046811.XA CN115546522A (en) 2022-01-14 2022-01-14 Moving object identification method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210046811.XA CN115546522A (en) 2022-01-14 2022-01-14 Moving object identification method and related device

Publications (1)

Publication Number Publication Date
CN115546522A 2022-12-30

Family

ID=84724679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210046811.XA Pending CN115546522A (en) 2022-01-14 2022-01-14 Moving object identification method and related device

Country Status (1)

Country Link
CN (1) CN115546522A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117934324A (en) * 2024-03-25 2024-04-26 广东电网有限责任公司中山供电局 Denoising method and device for laser point cloud data and radar scanning device
CN117934324B (en) * 2024-03-25 2024-06-11 广东电网有限责任公司中山供电局 Denoising method and device for laser point cloud data and radar scanning device

Similar Documents

Publication Publication Date Title
CN108394410B (en) ECU, autonomous vehicle including the same, and method of determining travel lane of the vehicle
CN112750150B (en) Vehicle flow statistical method based on vehicle detection and multi-target tracking
CN111382768A (en) Multi-sensor data fusion method and device
CN109703569B (en) Information processing method, device and storage medium
CN113155173B (en) Perception performance evaluation method and device, electronic device and storage medium
CN111582189A (en) Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle
CN110008891B (en) Pedestrian detection positioning method and device, vehicle-mounted computing equipment and storage medium
CN114299464A (en) Lane positioning method, device and equipment
CN111367901B (en) Ship data denoising method
CN114419601A (en) Obstacle information determination method, obstacle information determination device, electronic device, and storage medium
CN115546522A (en) Moving object identification method and related device
CN114972443A (en) Target tracking method and device and unmanned vehicle
CN104616317A (en) Video vehicle tracking validity checking method
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
US10522032B2 (en) Driving-state data storage apparatus
CN114419573A (en) Dynamic occupancy grid estimation method and device
CN116664025A (en) Loading and unloading position point generation method, device and equipment
CN111338336B (en) Automatic driving method and device
CN111127952A (en) Method, apparatus and storage medium for detecting potential traffic collision
CN116010543A (en) Lane information determination method, lane information determination device, electronic equipment and storage medium
CN114169247A (en) Method, device and equipment for generating simulated traffic flow and computer readable storage medium
CN112632304B (en) Index-based data searching method, device, server and storage medium
CN114154510A (en) Control method and device for automatic driving vehicle, electronic equipment and storage medium
US20210286078A1 (en) Apparatus for tracking object based on lidar sensor and method therefor
CN114048626A (en) Traffic flow simulation scene construction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination