CN111428692A - Method and device for determining travel trajectory of vehicle, and storage medium - Google Patents

Method and device for determining travel trajectory of vehicle, and storage medium Download PDF

Info

Publication number
CN111428692A
Authority
CN
China
Prior art keywords
point cloud
coordinates
cloud data
determining
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010329269.XA
Other languages
Chinese (zh)
Inventor
黄鹏峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaoma Huixing Technology Co ltd
Original Assignee
Beijing Xiaoma Huixing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaoma Huixing Technology Co ltd filed Critical Beijing Xiaoma Huixing Technology Co ltd
Priority to CN202010329269.XA priority Critical patent/CN111428692A/en
Publication of CN111428692A publication Critical patent/CN111428692A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/285 - Analysis of motion using a sequence of stereo image pairs
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory

Abstract

The application provides a method and a device for determining the travel trajectory of a vehicle, a storage medium, a processor, and a vehicle. The method comprises the following steps: acquiring 3D point cloud data of a target vehicle within a predetermined time, determining the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data, and determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame. The travel trajectory of the target vehicle can thus be determined from the acquired 3D point cloud data, achieving the purpose of determining a travel trajectory from 3D point cloud data. As a result, machine learning training can be performed on the vehicle's labeling tool, the vehicle's travel trajectory can subsequently be planned more accurately, and the travel trajectory can be processed accordingly; moreover, a travel trajectory determined from 3D point cloud data is more accurate than trajectories obtained by other methods in the prior art.

Description

Method and device for determining travel trajectory of vehicle, and storage medium
Technical Field
The application relates to the field of automatic driving, and in particular to a method and a device for determining the travel trajectory of a vehicle, a storage medium, a processor, and a vehicle.
Background
An unmanned vehicle is generally equipped with a lidar on its roof. The lidar can acquire 3D point cloud data while the vehicle is moving, and surrounding objects and the like are identified from the 3D point cloud data.
In the field of unmanned driving, acquiring the travel trajectory of a vehicle is very important: machine learning training can subsequently be performed on the travel trajectory, and the vehicle's travel trajectory can be planned according to the training result.
The prior art, however, lacks a method for acquiring the travel trajectory of a vehicle from 3D point cloud data.
The above information disclosed in this background section is only for enhancement of understanding of the background of the technology described herein, and therefore it may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
The application mainly aims to provide a method and a device for determining the travel trajectory of a vehicle, a storage medium, a processor, and a vehicle, so as to solve the technical problem that the prior art lacks a method for acquiring the travel trajectory of a vehicle from 3D point cloud data.
According to one aspect of the embodiments of the present application, a method for determining the travel trajectory of a vehicle is provided, including: acquiring 3D point cloud data of a target vehicle within a predetermined time; determining the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data; and determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame.
Optionally, determining the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data comprises: acquiring labels of the target vehicle for a predetermined number of frames of the 3D point cloud data to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time; and determining the coordinates of the target vehicle from the acquired labels to obtain the coordinates of the target vehicle in each frame.
Optionally, in a case where the predetermined number of frames is less than the total number of frames, determining the coordinates of the target vehicle from the acquired labels to obtain the coordinates of the target vehicle in each frame includes: labeling the target vehicle in the remaining frames according to the predetermined number of acquired labels to obtain a plurality of remaining labels, wherein the remaining frames are the frames in which the target vehicle has not yet been labeled; determining the coordinates of the target vehicle from the remaining labels to obtain at least one first coordinate; and determining the coordinates of the target vehicle from the acquired labels to obtain a plurality of second coordinates, wherein the at least one first coordinate and the plurality of second coordinates form the coordinates of the target vehicle in each frame.
Optionally, labeling the target vehicle in the remaining frames according to the predetermined number of acquired labels to obtain a plurality of remaining labels includes: determining target point cloud data in the remaining frames by using the acquired labels, wherein the target point cloud data is the 3D point cloud data corresponding to the target vehicle; and labeling the 3D point cloud data corresponding to the target vehicle in the remaining frames to obtain the remaining labels.
Optionally, determining the target point cloud data in the remaining frames by using the acquired labels includes: matching the acquired point cloud data with the 3D point cloud data in the remaining frames, wherein the acquired point cloud data is the 3D point cloud data labeled by the acquired labels; and determining the 3D point cloud data in the remaining frames that matches the acquired point cloud data as the target point cloud data.
Optionally, matching the acquired point cloud data with the 3D point cloud data in the remaining frames includes: matching the intensity data of each point in the acquired point cloud data with the intensity data of each point in the 3D point cloud data in the remaining frames; matching the height data of each point in the acquired point cloud data with the height data of each point in the 3D point cloud data in the remaining frames; and matching the shape data of each point in the acquired point cloud data with the shape data of each point in the 3D point cloud data in the remaining frames.
Optionally, determining the 3D point cloud data in the remaining frames that matches the acquired point cloud data as the target point cloud data includes: determining, as the target point cloud data, the 3D point cloud data in the remaining frames whose shape data, height data, and intensity data respectively match the shape data, height data, and intensity data of the acquired point cloud data.
Optionally, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame comprises: determining the coordinates of the target vehicle between any two adjacent frames by using the coordinates of the two adjacent frames; and determining the travel trajectory from the coordinates of each frame and the coordinates between adjacent frames.
Optionally, determining the coordinates of the target vehicle between any two adjacent frames by using the coordinates of the two adjacent frames comprises: inputting the coordinates of the two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames.
According to another aspect of the embodiments of the present application, there is also provided a method for acquiring the travel trajectory of a vehicle, including: acquiring labels of a target vehicle for a predetermined number of frames of 3D point cloud data within a predetermined time to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time; determining the coordinates of the target vehicle in each frame within the predetermined time from the acquired labels; and determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame.
Optionally, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame comprises: determining the coordinates of the target vehicle between any two adjacent frames by using the coordinates of the two adjacent frames; and determining the travel trajectory from the coordinates of each frame and the coordinates between adjacent frames.
Optionally, determining the coordinates of the target vehicle between any two adjacent frames by using the coordinates of the two adjacent frames comprises: inputting the coordinates of the two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames.
According to another aspect of the embodiments of the present application, there is also provided a device for determining the travel trajectory of a vehicle, including: an acquisition unit configured to acquire 3D point cloud data of a target vehicle within a predetermined time; a first determining unit configured to determine the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data; and a second determining unit configured to determine the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame.
According to still another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program, wherein, when the program runs, it performs any one of the above methods for determining the travel trajectory of a vehicle.
According to still another aspect of the embodiments of the present application, there is provided a processor for running a program, wherein, when the program runs, it performs any one of the above methods for determining the travel trajectory of a vehicle.
According to another aspect of the embodiments of the present application, there is also provided a vehicle including: one or more processors, memory, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of determining a travel trajectory of a vehicle.
In the embodiments of the present application, the method for determining the travel trajectory of a vehicle acquires the 3D point cloud data of a target vehicle within a predetermined time, determines the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data, and determines the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame, thereby achieving the purpose of determining a travel trajectory from 3D point cloud data. Machine learning training can therefore be performed on the vehicle's labeling tool, the vehicle's travel trajectory can subsequently be planned more accurately, and the travel trajectory of the vehicle can be processed accordingly. Moreover, compared with other methods in the prior art, a travel trajectory determined from 3D point cloud data is more accurate, which solves the technical problem that the prior art lacks a method for acquiring the travel trajectory of a vehicle from 3D point cloud data.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 shows a schematic flow diagram of a method for determining a travel trajectory of a vehicle according to an embodiment of the present application;
FIG. 2 illustrates a flow diagram of another method of determining a travel trajectory of a vehicle according to an embodiment of the present application; and
fig. 3 is a schematic structural diagram illustrating a device for determining a travel track of a vehicle according to an embodiment of the present application.
Detailed Description
It should be noted that the embodiments of the present application and the features of those embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments and the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without making any creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein. Furthermore, the terms "comprises", "comprising", and "having", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" another element, it can be directly on the other element or intervening elements may also be present. Also, in the specification and claims, when an element is described as being "connected" to another element, the element may be "directly connected" to the other element or "connected" to the other element through a third element.
As described in the background, the prior art lacks a method for acquiring the travel trajectory of a vehicle from 3D point cloud data. To solve this problem, a method and a device for determining the travel trajectory of a vehicle, a storage medium, a processor, and a vehicle are provided.
According to an embodiment of the present application, a method of determining a travel trajectory of a vehicle is provided. Fig. 1 is a flowchart illustrating a method for determining a travel track of a vehicle according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, acquiring 3D point cloud data of a target vehicle within a predetermined time;
step S102, determining the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data;
step S103, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame.
In this scheme, the 3D point cloud data of the target vehicle within the predetermined time is acquired first; the coordinates of the target vehicle in each frame within the predetermined time are then determined from the 3D point cloud data; and the travel trajectory of the target vehicle within the predetermined time is finally determined from the coordinates of each frame. With this method, the travel trajectory of the target vehicle can be determined from the acquired 3D point cloud data, achieving the purpose of determining a travel trajectory from 3D point cloud data. As a result, machine learning training can be performed on the vehicle's labeling tool, the vehicle's travel trajectory can subsequently be planned more accurately, and the travel trajectory of the vehicle can be processed accordingly; moreover, a travel trajectory determined from 3D point cloud data is more accurate than trajectories obtained by other methods in the prior art.
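As a rough illustration of steps S101 to S103, a minimal Python sketch is given below. The `Frame` structure, its fields, and the centroid-based placeholder for the per-frame coordinate are assumptions made only for this example; the patent itself does not prescribe any particular data representation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Assumed minimal data structure; the patent does not fix how a frame of
# 3D point cloud data is represented.
@dataclass
class Frame:
    timestamp: float                            # acquisition time of the frame (s)
    points: List[Tuple[float, float, float]]    # 3D points (x, y, z) of the target vehicle

def coordinate_of_target(frame: Frame) -> Tuple[float, float]:
    # Placeholder for step S102: in the patent the per-frame coordinate comes
    # from labels on the point cloud (see the labeling and matching steps
    # below); here the centroid of the frame's points is used for illustration.
    xs = [p[0] for p in frame.points]
    ys = [p[1] for p in frame.points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def determine_travel_trajectory(frames: List[Frame]) -> List[Tuple[float, float]]:
    # S101: `frames` is the 3D point cloud data acquired within the predetermined time.
    per_frame_coords = [coordinate_of_target(f) for f in frames]   # S102
    # S103: connect the per-frame coordinates into a travel trajectory;
    # densification between adjacent frames (Bezier interpolation) is
    # sketched further below.
    return per_frame_coords
```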
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
In an embodiment of the present application, determining the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data includes: acquiring labels of the target vehicle for a predetermined number of frames of the 3D point cloud data to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time, and determining the coordinates of the target vehicle from the acquired labels to obtain the coordinates of the target vehicle in each frame. In this way, the coordinates of the vehicle in each of the predetermined number of frames can be determined accurately, and the travel trajectory of the vehicle can be determined more efficiently from these per-frame coordinates.
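As an illustration only, the sketch below assumes that an acquired label is a 3D bounding box around the target vehicle and reads the per-frame coordinate off the box center; the patent speaks only of "labels" and does not fix their structure, so the `BoxLabel` format is hypothetical.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BoxLabel:
    # Hypothetical label format: a 3D bounding box around the target vehicle.
    cx: float       # box center, x
    cy: float       # box center, y
    cz: float       # box center, z
    length: float
    width: float
    height: float

def coordinate_from_label(label: BoxLabel) -> Tuple[float, float]:
    """Take the box center as the target vehicle's coordinate in this frame."""
    return (label.cx, label.cy)
```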
It should be noted that the acquired labels in the present application may be produced by a labeling tool or by manual labeling; those skilled in the art may use manually produced labels or labels produced by a labeling tool according to the actual situation.
It should be noted that the labeling tool labels each frame. The predetermined time may be 10 seconds, 15 seconds, 20 seconds, or another duration, and those skilled in the art may select an appropriate predetermined time according to the actual situation; the labeled data may be, for example, 100 frames labeled within 10 seconds or 150 frames labeled within 15 seconds, and those skilled in the art may select an appropriate predetermined time for labeling the data of each frame.
It should be further noted that the predetermined number of frames in the present application may be equal to or smaller than the total number of frames, and those skilled in the art may choose an appropriate number of frames according to the actual situation.
In another embodiment of the present application, in a case where the predetermined number of frames is less than the total number of frames, determining the coordinates of the target vehicle from the acquired labels to obtain the coordinates of the target vehicle in each frame includes: labeling the target vehicle in the remaining frames according to the predetermined number of acquired labels to obtain a plurality of remaining labels, wherein the remaining frames are the frames in which the target vehicle has not yet been labeled; determining the coordinates of the target vehicle from the remaining labels to obtain at least one first coordinate; and determining the coordinates of the target vehicle from the acquired labels to obtain a plurality of second coordinates, the at least one first coordinate and the plurality of second coordinates together forming the coordinates of the target vehicle in each frame. That is, the remaining unlabeled frames are labeled automatically, so that the specific coordinate positions of the vehicle in the remaining frames can be determined, and the coordinates of the vehicle in each frame within the predetermined time can then be determined more accurately from the two kinds of coordinates. For example, while the vehicle is traveling, the point cloud within a predetermined area is scanned every 0.1 second, and points spaced farther apart can be labeled automatically in a certain number of intermediate frames, so that the travel trajectory of the vehicle can be determined more accurately.
It should be noted that the positions of the labeled frames, the acquisition times of the labeled frames, and the instantaneous speed of the vehicle can serve as auxiliary data, and as input, for labeling the remaining frames, so that the travel trajectory of the vehicle can be determined more accurately.
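Read this way, the labeled-frame positions, their acquisition times, and the instantaneous speed give a rough prior for where the vehicle should be in an unlabeled frame. The sketch below assumes constant speed along the direction between two labeled frames; this extrapolation formula is an assumption for illustration, not something specified in the patent.

```python
from typing import Optional, Tuple

def predict_position(p0: Tuple[float, float], t0: float,
                     p1: Tuple[float, float], t1: float,
                     t_query: float,
                     speed: Optional[float] = None) -> Tuple[float, float]:
    """Rough prior for the target vehicle's position at time t_query.

    p0, p1: coordinates of two labeled frames acquired at times t0 < t1.
    speed:  instantaneous speed of the vehicle near t1; if not supplied it is
            estimated from the two labeled frames themselves.
    The direction of travel is taken from p0 -> p1 and the speed is assumed
    constant after t1; the result only narrows the search when labeling the
    remaining frames.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0   # avoid division by zero
    if speed is None:
        speed = norm / (t1 - t0)               # average speed between the labeled frames
    ux, uy = dx / norm, dy / norm              # unit direction of travel
    dist = speed * (t_query - t1)              # distance travelled after t1
    return (p1[0] + ux * dist, p1[1] + uy * dist)
```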
Specifically, in an embodiment of the present application, labeling the target vehicle in the remaining frames according to the predetermined number of acquired labels to obtain a plurality of remaining labels includes: determining the target point cloud data in the remaining frames by using the acquired labels, wherein the target point cloud data is the 3D point cloud data corresponding to the target vehicle, and labeling the 3D point cloud data corresponding to the target vehicle in the remaining frames to obtain the remaining labels. In this scheme, the 3D point cloud data corresponding to the vehicle in the remaining frames is determined, so labeling can be carried out more efficiently and the travel trajectory of the vehicle can be determined more accurately and efficiently.
It should be noted that when coordinate positions are labeled, the labeling time interval needs to be calculated from the interval at which the point cloud of the scene is collected: labeling is performed at certain time intervals, the positions in several time periods are labeled, and the labeling time interval is calculated accordingly.
In another embodiment of the present application, determining the target point cloud data in the remaining frames by using the acquired labels includes: matching the acquired point cloud data with the 3D point cloud data in the remaining frames, wherein the acquired point cloud data is the 3D point cloud data labeled by the acquired labels, and determining the 3D point cloud data in the remaining frames that matches the acquired point cloud data as the target point cloud data. This method can determine the target point cloud data more accurately, thereby further ensuring that the subsequently obtained coordinates are more accurate.
In order to determine the target point cloud data more accurately, in an embodiment of the present application, matching the acquired point cloud data with the 3D point cloud data in the remaining frames includes: matching the intensity data of each point in the acquired point cloud data with the intensity data of each point in the 3D point cloud data in the remaining frames, matching the height data of each point in the acquired point cloud data with the height data of each point in the 3D point cloud data in the remaining frames, and matching the shape data of each point in the acquired point cloud data with the shape data of each point in the 3D point cloud data in the remaining frames. The point cloud data of these frames is selected to match intensity, height, and shape because the point cloud data contains the height, coordinates, and reflection intensity of obstacles, and while the shape information in the point cloud changes over a continuous period of time, it is fixed within any single frame; therefore the previously labeled point cloud data is matched against the data in each frame, and the content being matched is the intensity, height, and shape of the points.
In another embodiment of the present application, determining the 3D point cloud data in the remaining frames that matches the acquired point cloud data as the target point cloud data includes: determining, as the target point cloud data, the 3D point cloud data in the remaining frames whose shape data, height data, and intensity data respectively match the shape data, height data, and intensity data of the acquired point cloud data. This scheme can accurately determine, among the remaining frames, the 3D point cloud data that matches the acquired point cloud data as the target point cloud data.
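One conceivable way to implement this matching is to summarize the labeled target cluster and each candidate cluster in a remaining frame by coarse intensity, height, and shape descriptors and compare them within tolerances. The descriptor choices and tolerance values below are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

def descriptor(points: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """Summarize a point cluster by intensity, height, and shape.

    points:    (N, 3) array of x, y, z coordinates of the cluster.
    intensity: (N,) array of per-point reflection intensities.
    The shape is represented crudely by the extents of the cluster's
    axis-aligned bounding box.
    """
    extent = points.max(axis=0) - points.min(axis=0)   # shape proxy (dx, dy, dz)
    return np.array([
        intensity.mean(),        # intensity
        points[:, 2].mean(),     # height
        extent[0], extent[1], extent[2],
    ])

def matches(labeled: np.ndarray, candidate: np.ndarray) -> bool:
    """A candidate cluster matches the labeled target if every descriptor
    component agrees within its tolerance (hand-picked example values)."""
    tol = np.array([5.0, 0.3, 0.5, 0.5, 0.5])
    return bool(np.all(np.abs(labeled - candidate) <= tol))
```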
Specifically, in another embodiment of the present application, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame includes: determining the coordinates of the target vehicle between two adjacent frames by using the coordinates of any two adjacent frames, and determining the travel trajectory from the coordinates of each frame and the coordinates between adjacent frames. Determining the coordinates of the vehicle between two adjacent frames allows the travel trajectory of the vehicle between adjacent frames to be determined more accurately, so the path planning of the vehicle can be handled more accurately on the basis of the travel trajectory, for example how the vehicle should behave in different scenarios: when the vehicle ahead decelerates, the vehicle is controlled to decelerate, and the deceleration rate and the arrival time are controlled.
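As one example of the downstream processing mentioned above (reacting to a vehicle ahead that is decelerating), the trajectory coordinates can be turned into per-segment speeds. The sketch below assumes the 0.1 s scanning interval used as an example earlier in this description; the deceleration margin is a made-up value.

```python
from typing import List, Tuple

def segment_speeds(coords: List[Tuple[float, float]],
                   frame_interval: float = 0.1) -> List[float]:
    """Speed of the target vehicle over each pair of consecutive coordinates.

    frame_interval: time between consecutive frames in seconds (0.1 s is the
    example scanning interval from the description above).
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(coords, coords[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(dist / frame_interval)
    return speeds

def is_decelerating(coords: List[Tuple[float, float]],
                    frame_interval: float = 0.1,
                    margin: float = 0.5) -> bool:
    """Crude check: the vehicle ahead is taken to be decelerating when its
    speed over the last segment is noticeably lower than over the previous one."""
    s = segment_speeds(coords, frame_interval)
    return len(s) >= 2 and s[-1] < s[-2] - margin
```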
In order to determine the coordinates of the vehicle between two adjacent frames more effectively, in another embodiment of the present application, determining the coordinates of the target vehicle between two adjacent frames by using the coordinates of any two adjacent frames includes: inputting the coordinates of the two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames.
It should be noted that a Bezier curve is defined by four points: a starting point, an ending point (also called anchor points), and two mutually independent intermediate points; sliding the intermediate points changes the shape of the curve. The operation is simple and the resulting curve is clean, so the coordinates of the vehicle between two adjacent frames can be determined more accurately.
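A minimal sketch of feeding the coordinates of two adjacent frames into a cubic Bezier model: the two frame coordinates serve as the start and end anchor points, and the two intermediate control points default to points on the chord between them. How the intermediate points are actually chosen is not specified in the patent, so that default is an assumption of this sketch.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def bezier_between_frames(p0: Point, p3: Point, steps: int = 10,
                          c1: Optional[Point] = None,
                          c2: Optional[Point] = None) -> List[Point]:
    """Coordinates of the target vehicle between two adjacent frames.

    p0, p3: coordinates of the two adjacent frames (start and end anchors).
    c1, c2: the two intermediate control points; if omitted they are placed
            at 1/3 and 2/3 of the chord, which degenerates to a straight
            segment (an assumption made only for this sketch).
    """
    if c1 is None:
        c1 = (p0[0] + (p3[0] - p0[0]) / 3.0, p0[1] + (p3[1] - p0[1]) / 3.0)
    if c2 is None:
        c2 = (p0[0] + 2.0 * (p3[0] - p0[0]) / 3.0, p0[1] + 2.0 * (p3[1] - p0[1]) / 3.0)
    curve = []
    for i in range(steps + 1):
        t = i / steps
        # Cubic Bezier: B(t) = (1-t)^3 P0 + 3(1-t)^2 t C1 + 3(1-t) t^2 C2 + t^3 P3
        x = ((1 - t) ** 3 * p0[0] + 3 * (1 - t) ** 2 * t * c1[0]
             + 3 * (1 - t) * t ** 2 * c2[0] + t ** 3 * p3[0])
        y = ((1 - t) ** 3 * p0[1] + 3 * (1 - t) ** 2 * t * c1[1]
             + 3 * (1 - t) * t ** 2 * c2[1] + t ** 3 * p3[1])
        curve.append((x, y))
    return curve
```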
Fig. 2 is a flowchart illustrating another method for determining a travel track of a vehicle according to an embodiment of the present application. As shown in fig. 2, the method comprises the steps of:
step S201, acquiring labels of a target vehicle for a predetermined number of frames of 3D point cloud data within a predetermined time to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time;
step S202, determining the coordinates of the target vehicle in each frame within the predetermined time from the acquired labels;
step S203, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame.
In this scheme, the travel trajectory of the target vehicle can be determined from the labels of the predetermined number of frames in the acquired 3D point cloud data, achieving the purpose of determining a travel trajectory from 3D point cloud data. As a result, machine learning training can be performed on the vehicle's labeling tool, the vehicle's travel trajectory can subsequently be planned more accurately, and the travel trajectory of the vehicle can be processed accordingly; moreover, a travel trajectory determined from the labels of the 3D point cloud data is more accurate than trajectories obtained by other methods in the prior art.
It should be noted that the acquired labels in the present application may be produced by a labeling tool or by manual labeling; those skilled in the art may use manually produced labels or labels produced by a labeling tool according to the actual situation.
It should be noted that the labeling tool labels each frame. The predetermined time may be 10 seconds, 15 seconds, 20 seconds, or another duration, and those skilled in the art may select an appropriate predetermined time according to the actual situation; the labeled data may be, for example, 100 frames labeled within 10 seconds or 150 frames labeled within 15 seconds, and those skilled in the art may select an appropriate predetermined time for labeling the data of each frame.
It should be further noted that the predetermined number of frames in the present application may be equal to or smaller than the total number of frames, and those skilled in the art may choose an appropriate number of frames according to the actual situation.
In another embodiment of the present application, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame includes: determining the coordinates of the target vehicle between two adjacent frames by using the coordinates of any two adjacent frames, and determining the travel trajectory from the coordinates of each frame and the coordinates between adjacent frames. Determining the coordinates of the vehicle between two adjacent frames allows the travel trajectory of the vehicle between adjacent frames to be determined more accurately, so the path planning of the vehicle can be handled more accurately on the basis of the travel trajectory, for example how the vehicle should behave in different scenarios: when the vehicle ahead decelerates, the vehicle is controlled to decelerate, and the deceleration rate and the arrival time are controlled.
In another embodiment of the present application, determining the coordinates of the target vehicle between two adjacent frames by using the coordinates of any two adjacent frames includes: inputting the coordinates of the two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames. The coordinates of the vehicle between two adjacent frames can thus be determined more efficiently.
It should be noted that a Bezier curve is defined by four points: a starting point, an ending point (also called anchor points), and two mutually independent intermediate points; sliding the intermediate points changes the shape of the curve. The operation is simple and the resulting curve is clean, so the coordinates of the vehicle between two adjacent frames can be determined more accurately.
In this scheme, the specific manner of acquiring the labels of the predetermined number of frames in the 3D point cloud data of the target vehicle within the predetermined time to obtain the predetermined number of acquired labels, of determining the coordinates of the target vehicle in each frame within the predetermined time from the acquired labels, of determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame, of determining the coordinates of the target vehicle between two adjacent frames by using the coordinates of any two adjacent frames, of determining the travel trajectory from the coordinates of each frame and the coordinates between adjacent frames, and of inputting the coordinates of any two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames can all be found in the description above and are not repeated here.
The embodiment of the present application further provides a device for determining a travel track of a vehicle, and it should be noted that the device for determining a travel track of a vehicle according to the embodiment of the present application may be used to execute the method for determining a travel track of a vehicle according to the embodiment of the present application. The following describes a device for determining a travel trajectory of a vehicle according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a device for determining a travel track of a vehicle according to an embodiment of the present application. As shown in fig. 3, the apparatus includes:
an acquisition unit 10 for acquiring 3D point cloud data of a target vehicle within a predetermined time;
a first determining unit 20, configured to determine coordinates of each frame of the target vehicle within the predetermined time according to the 3D point cloud data;
a second determining unit 30, configured to determine a travel track of the target vehicle within a predetermined time according to the coordinates of each frame.
In this device, the acquisition unit acquires the 3D point cloud data of the target vehicle within the predetermined time, the first determining unit determines the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data, and the second determining unit determines the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame. With this device, the travel trajectory of the target vehicle can be determined from the acquired 3D point cloud data, achieving the purpose of determining a travel trajectory from 3D point cloud data. As a result, machine learning training can be performed on the vehicle's labeling tool, the vehicle's travel trajectory can subsequently be planned more accurately, and the travel trajectory of the vehicle can be processed accordingly; moreover, a travel trajectory determined from 3D point cloud data is more accurate than trajectories obtained by other devices in the prior art.
In an embodiment of the application, the first determining unit includes an acquisition module and a first determining module. The acquisition module is configured to acquire labels of the target vehicle for a predetermined number of frames of the 3D point cloud data to obtain a predetermined number of acquired labels, the predetermined number of frames being less than or equal to the total number of frames of the 3D point cloud data within the predetermined time. The first determining module is configured to determine the coordinates of the target vehicle from the acquired labels to obtain the coordinates of the target vehicle in each frame. In this way, the coordinates of the vehicle in each of the predetermined number of frames can be determined accurately, and the travel trajectory of the vehicle can be determined more efficiently from these per-frame coordinates.
It should be noted that the acquired labels in the present application may be produced by a labeling tool or by manual labeling; those skilled in the art may use manually produced labels or labels produced by a labeling tool according to the actual situation.
It should be noted that the labeling tool labels each frame. The predetermined time may be 10 seconds, 15 seconds, 20 seconds, or another duration, and those skilled in the art may select an appropriate predetermined time according to the actual situation; the labeled data may be, for example, 100 frames labeled within 10 seconds or 150 frames labeled within 15 seconds, and those skilled in the art may select an appropriate predetermined time for labeling the data of each frame.
It should be further noted that the predetermined number of frames in the present application may be equal to or smaller than the total number of frames, and those skilled in the art may choose an appropriate number of frames according to the actual situation.
In another embodiment of the application, the first determining module includes a first labeling submodule, a first determining submodule, and a second determining submodule. The first labeling submodule is configured to label the target vehicle in the remaining frames according to the predetermined number of acquired labels to obtain a plurality of remaining labels, the remaining frames being the frames in which the target vehicle has not yet been labeled. The first determining submodule is configured to determine the coordinates of the target vehicle from the remaining labels to obtain at least one first coordinate. The second determining submodule is configured to determine the coordinates of the target vehicle from the acquired labels to obtain a plurality of second coordinates, the at least one first coordinate and the plurality of second coordinates together forming the coordinates of the target vehicle in each frame. That is, the remaining unlabeled frames are labeled automatically, so that the specific coordinate positions of the vehicle in the remaining frames can be determined, and the coordinates of the vehicle in each frame within the predetermined time can then be determined more accurately from the two kinds of coordinates. For example, while the vehicle is traveling, the point cloud within a predetermined area is scanned every 0.1 second, and points spaced farther apart can be labeled automatically in a certain number of intermediate frames, so that the travel trajectory of the vehicle can be determined more accurately.
It should be noted that the positions of the labeled frames, the acquisition times of the labeled frames, and the instantaneous speed of the vehicle can serve as auxiliary data, and as input, for labeling the remaining frames, so that the travel trajectory of the vehicle can be determined more accurately.
Specifically, in an embodiment of the present application, the first labeling submodule includes a third determining submodule and a second labeling submodule. The third determining submodule is configured to determine the target point cloud data in the remaining frames by using the acquired labels, the target point cloud data being the 3D point cloud data corresponding to the target vehicle. The second labeling submodule is configured to label the 3D point cloud data corresponding to the target vehicle in the remaining frames to obtain the remaining labels. In this device, the 3D point cloud data corresponding to the vehicle in the remaining frames can be determined and labeled more efficiently, so the travel trajectory of the vehicle can be determined more accurately.
It should be noted that when coordinate positions are labeled, the labeling time interval needs to be calculated from the interval at which the point cloud of the scene is collected: labeling is performed at certain time intervals, the positions in several time periods are labeled, and the labeling time interval is calculated accordingly.
In yet another embodiment of the present application, the third determining submodule includes a first matching submodule and a fourth determining submodule. The first matching submodule is configured to match the acquired point cloud data with the 3D point cloud data in the remaining frames, the acquired point cloud data being the 3D point cloud data labeled by the acquired labels. The fourth determining submodule is configured to determine the 3D point cloud data in the remaining frames that matches the acquired point cloud data as the target point cloud data. This device can determine the target point cloud data more accurately, thereby further ensuring that the subsequently obtained coordinates are more accurate.
In order to match the acquired point cloud data with the 3D point cloud data in the remaining frames more efficiently, in an embodiment of the application, the first matching submodule includes a second matching submodule, a third matching submodule, and a fourth matching submodule. The second matching submodule is configured to match the intensity data of each point in the acquired point cloud data with the intensity data of each point in the 3D point cloud data in the remaining frames; the third matching submodule is configured to match the height data of each point in the acquired point cloud data with the height data of each point in the 3D point cloud data in the remaining frames; and the fourth matching submodule is configured to match the shape data of each point in the acquired point cloud data with the shape data of each point in the 3D point cloud data in the remaining frames. The point cloud data of these frames is selected to match intensity, height, and shape because the point cloud data contains the height, coordinates, and reflection intensity of obstacles, and while the shape information in the point cloud changes over a continuous period of time, it is fixed within any single frame; therefore the previously labeled point cloud data is matched against the data in each frame, and the content being matched is the intensity, height, and shape of the points.
In yet another embodiment of the present application, the fourth determining submodule includes a fifth determining submodule configured to determine, as the target point cloud data, the 3D point cloud data in the remaining frames whose shape data, height data, and intensity data respectively match the shape data, height data, and intensity data of the acquired point cloud data. This device can accurately determine, among the remaining frames, the 3D point cloud data that matches the acquired point cloud data as the target point cloud data.
Specifically, in still another embodiment of the present application, the second determining unit includes a second determining module configured to determine the coordinates of the target vehicle between two adjacent frames by using the coordinates of any two adjacent frames, and a third determining module configured to determine the travel trajectory from the coordinates of each frame and the coordinates between adjacent frames. By selecting the coordinates of any two adjacent frames, the coordinates of the vehicle between the two adjacent frames can be determined more accurately, so the travel trajectory of the vehicle between adjacent frames can be determined more accurately and the path planning of the vehicle can be handled more accurately on the basis of the travel trajectory, for example how the vehicle should behave in different scenarios: when the vehicle ahead decelerates, the vehicle is controlled to decelerate, and the deceleration rate and the arrival time are controlled.
In order to determine the coordinates of the vehicle between two adjacent frames more effectively, in another embodiment of the present application, the second determining module includes an input submodule configured to input the coordinates of any two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames.
It should be noted that a Bezier curve is defined by four points: a starting point, an ending point (also called anchor points), and two mutually independent intermediate points; sliding the intermediate points changes the shape of the curve. The operation is simple and the resulting curve is clean, so the coordinates of the vehicle between two adjacent frames can be determined more accurately.
The embodiment of the present application further provides another device for determining a travel track of a vehicle, and it should be noted that the device for determining a travel track of a vehicle according to the embodiment of the present application may be used to execute the method for determining a travel track of a vehicle according to the embodiment of the present application. The following describes a device for determining a travel trajectory of a vehicle according to an embodiment of the present application. The device includes:
an acquisition unit configured to acquire labels of a target vehicle for a predetermined number of frames of 3D point cloud data within a predetermined time to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time;
a first determining unit configured to determine the coordinates of the target vehicle in each frame within the predetermined time from the acquired labels;
and a second determining unit configured to determine the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame.
In this device, the travel trajectory of the target vehicle can be determined from the acquired 3D point cloud data, achieving the purpose of determining a travel trajectory from 3D point cloud data. As a result, machine learning training can be performed on the vehicle's labeling tool, the vehicle's travel trajectory can subsequently be planned more accurately, and the travel trajectory of the vehicle can be processed accordingly; moreover, a travel trajectory determined from 3D point cloud data is more accurate than trajectories obtained by other devices in the prior art.
It should be noted that the acquired labels in the present application may be produced by a labeling tool or by manual labeling; those skilled in the art may use manually produced labels or labels produced by a labeling tool according to the actual situation.
It should be noted that the labeling tool labels each frame. The predetermined time may be 10 seconds, 15 seconds, 20 seconds, or another duration, and those skilled in the art may select an appropriate predetermined time according to the actual situation; the labeled data may be, for example, 100 frames labeled within 10 seconds or 150 frames labeled within 15 seconds, and those skilled in the art may select an appropriate predetermined time for labeling the data of each frame.
It should be further noted that the predetermined number of frames in the present application may be equal to or smaller than the total number of frames, and those skilled in the art may choose an appropriate number of frames according to the actual situation.
In one embodiment of the present application, the second determining unit includes a first determining module configured to determine the coordinates of the target vehicle between two adjacent frames by using the coordinates of any two adjacent frames, and a second determining module configured to determine the travel trajectory from the coordinates of each frame and the coordinates between adjacent frames. Determining the coordinates of the vehicle between two adjacent frames allows the travel trajectory of the vehicle between adjacent frames to be determined more accurately, so the path planning of the vehicle can be handled more accurately on the basis of the travel trajectory, for example how the vehicle should behave in different scenarios: when the vehicle ahead decelerates, the vehicle is controlled to decelerate, and the deceleration rate and the arrival time are controlled.
In yet another embodiment of the present application, the first determining module includes an input module configured to input the coordinates of any two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames. The coordinates of the vehicle between two adjacent frames can thus be determined more effectively. It should be noted that a Bezier curve is defined by four points: a starting point, an ending point (also called anchor points), and two mutually independent intermediate points; sliding the intermediate points changes the shape of the curve. The operation is simple and the resulting curve is clean, so the coordinates of the vehicle between two adjacent frames can be determined more accurately.
The specific manner in which this device acquires the labels of the predetermined number of frames in the 3D point cloud data of the target vehicle within the predetermined time to obtain the predetermined number of acquired labels, determines the coordinates of the target vehicle in each frame within the predetermined time from the acquired labels, determines the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame, determines the coordinates of the target vehicle between two adjacent frames by using the coordinates of any two adjacent frames, determines the travel trajectory from the coordinates of each frame and the coordinates between adjacent frames, and inputs the coordinates of any two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames can be found in the description above and is not repeated here.
The device for determining the travel trajectory of a vehicle comprises a processor and a memory; the acquisition unit, the first determining unit, the second determining unit, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize the corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided, and the travel trajectory of the vehicle is obtained from the 3D point cloud data by adjusting the kernel parameters.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory such as read-only memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
An embodiment of the present invention provides a storage medium on which a program is stored, the program implementing the method for determining a travel trajectory of a vehicle described above when executed by a processor.
The embodiment of the invention provides a processor, wherein the processor is used for running a program, and when the program runs, the method for determining the running track of the vehicle is executed.
An embodiment of the invention provides a device comprising a processor, a memory, and a program that is stored on the memory and can run on the processor, wherein, when the processor executes the program, at least the following steps are realized:
step S101, acquiring 3D point cloud data of a target vehicle within a predetermined time;
step S102, determining the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data;
step S103, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame; or, when the processor executes the program, at least the following steps are realized:
step S201, acquiring labels of a target vehicle for a predetermined number of frames of 3D point cloud data within a predetermined time to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time;
step S202, determining the coordinates of the target vehicle in each frame within the predetermined time from the acquired labels;
step S203, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame.
The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application further provides a computer program product which, when executed on a data processing device, is adapted to execute a program that initializes at least the following method steps:
step S101, acquiring 3D point cloud data of a target vehicle within a predetermined time;
step S102, determining the coordinates of the target vehicle in each frame within the predetermined time from the 3D point cloud data;
step S103, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame; or is adapted to execute a program that initializes at least the following method steps:
step S201, acquiring labels of a target vehicle for a predetermined number of frames of 3D point cloud data within a predetermined time to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time;
step S202, determining the coordinates of the target vehicle in each frame within the predetermined time from the acquired labels;
step S203, determining the travel trajectory of the target vehicle within the predetermined time from the coordinates of each frame.
In yet another embodiment of the present application, a vehicle is provided, the vehicle comprising one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising a method for performing any of the above-described vehicle travel trajectory determinations.
Since the vehicle is provided with the processor, the memory, the display device, and the program, the travel trajectory can be accurately determined when the vehicle is determined to be running, safe driving of the vehicle is ensured, and the technical problem that the prior art lacks a method for acquiring the travel trajectory of a vehicle using 3D point cloud data is solved.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
From the above description, it can be seen that the above-described embodiments of the present application achieve the following technical effects:
1) In the method for determining the travel trajectory of a vehicle, 3D point cloud data of a target vehicle within a predetermined time is first acquired; the coordinates of the target vehicle in each frame within the predetermined time are then determined according to the 3D point cloud data; and the travel trajectory of the target vehicle within the predetermined time is finally determined according to the coordinates of each frame. In this method, the travel trajectory of the target vehicle can be determined from the acquired 3D point cloud data, achieving the purpose of determining the travel trajectory from 3D point cloud data, so that machine learning training can be performed on the labeling tool for the vehicle, the travel trajectory of the vehicle can be planned more accurately subsequently and processed according to the determined trajectory, and the travel trajectory determined from 3D point cloud data is more accurate than that obtained by other methods in the prior art.
2) The other method for determining the travel trajectory of a vehicle obtains labels of a predetermined number of frames in the 3D point cloud data of a target vehicle within a predetermined time to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time; determines the coordinates of the target vehicle in each frame within the predetermined time according to the acquired labels; and determines the travel trajectory of the target vehicle within the predetermined time according to the coordinates of each frame. The travel trajectory of the target vehicle can thus be determined from the acquired 3D point cloud data, achieving the purpose of determining the travel trajectory from 3D point cloud data, so that machine learning training can be performed on the labeling tool for the vehicle, the travel trajectory of the vehicle can be planned more accurately subsequently and processed according to the determined trajectory, and the travel trajectory determined from 3D point cloud data is more accurate than that obtained by other methods in the prior art.
3) In the apparatus for determining the travel trajectory of a vehicle, an acquisition unit acquires 3D point cloud data of a target vehicle within a predetermined time, a first determining unit determines the coordinates of the target vehicle in each frame within the predetermined time according to the 3D point cloud data, and a second determining unit determines the travel trajectory of the target vehicle within the predetermined time according to the coordinates of each frame. In this apparatus, the travel trajectory of the target vehicle can be determined from the acquired 3D point cloud data, achieving the purpose of determining the travel trajectory from 3D point cloud data, so that machine learning training can be performed on the labeling tool for the vehicle, the travel trajectory of the vehicle can be planned more accurately subsequently and processed according to the determined trajectory, and the travel trajectory determined from 3D point cloud data is more accurate than that obtained by other methods in the prior art.
4) In the other apparatus for determining the travel trajectory of a vehicle, an acquisition unit obtains labels of a predetermined number of frames in the 3D point cloud data of a target vehicle within a predetermined time to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time; a first determining unit determines the coordinates of the target vehicle in each frame within the predetermined time according to the acquired labels; and a second determining unit determines the travel trajectory of the target vehicle within the predetermined time according to the coordinates of each frame. The travel trajectory of the target vehicle can thus be determined from the acquired 3D point cloud data, achieving the purpose of determining the travel trajectory from 3D point cloud data, so that machine learning training can be performed on the labeling tool for the vehicle, the travel trajectory of the vehicle can be planned more accurately subsequently and processed according to the determined trajectory, and the travel trajectory determined from 3D point cloud data is more accurate than that obtained by other devices in the prior art.
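Claims 9 and 12 below recite inputting the coordinates of two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between those frames. A minimal sketch follows, assuming a quadratic Bezier curve and an externally supplied control point (for instance derived from the vehicle heading); the curve degree and the control-point choice are assumptions of this example, not of the application.

import numpy as np

def bezier_between(p0, p1, control, num=10):
    # Quadratic Bezier between the coordinates of two adjacent frames p0 and p1.
    # `control` is an assumed intermediate control point; choosing the segment
    # midpoint would reduce this to straight-line interpolation.
    t = np.linspace(0.0, 1.0, num)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * control + t ** 2 * p1

# Usage: densify the trajectory piece between two adjacent frame coordinates.
p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0, 0.0])
control = np.array([1.0, 1.5, 0.0])  # hypothetical control point, e.g. derived from heading
segment = bezier_between(p0, p1, control)
print(segment.shape)  # (10, 3): ten coordinates between the two frames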
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (16)

1. A method of determining a travel trajectory of a vehicle, comprising:
acquiring 3D point cloud data of a target vehicle within a predetermined time;
determining coordinates of the target vehicle in each frame within the predetermined time according to the 3D point cloud data;
and determining a travel trajectory of the target vehicle within the predetermined time according to the coordinates of each frame.
2. The method of claim 1, wherein determining the coordinates of the target vehicle in each frame within the predetermined time according to the 3D point cloud data comprises:
obtaining labels of the target vehicle in a predetermined number of frames of the 3D point cloud data to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time;
and determining the coordinates of the target vehicle according to the acquired labels to obtain the coordinates of the target vehicle in each frame.
3. The method of claim 2, wherein, in a case that the predetermined number of frames is less than the total number of frames, determining the coordinates of the target vehicle according to the acquired labels to obtain the coordinates of the target vehicle in each frame comprises:
labeling the target vehicle in a remaining number of frames according to the predetermined number of acquired labels to obtain a plurality of remaining labels, wherein the remaining number of frames is the total number of unlabeled frames of the target vehicle;
determining coordinates corresponding to the target vehicle according to the remaining labels to obtain at least one first coordinate;
and determining coordinates corresponding to the target vehicle according to the acquired labels to obtain a plurality of second coordinates, wherein the at least one first coordinate and the plurality of second coordinates form the coordinates of the target vehicle in each frame.
4. The method of claim 3, wherein labeling the target vehicle in the remaining number of frames according to the predetermined number of acquired labels to obtain the plurality of remaining labels comprises:
determining target point cloud data in the remaining number of frames by using the acquired labels, wherein the target point cloud data is the 3D point cloud data corresponding to the target vehicle;
and labeling the 3D point cloud data corresponding to the target vehicle in the remaining number of frames to obtain the plurality of remaining labels.
5. The method of claim 4, wherein determining the target point cloud data in the remaining number of frames by using the acquired labels comprises:
matching the acquired point cloud data with the 3D point cloud data in the remaining number of frames, wherein the acquired point cloud data is the 3D point cloud data correspondingly labeled by the acquired labels;
and determining the 3D point cloud data in the remaining number of frames that matches the acquired point cloud data as the target point cloud data.
6. The method of claim 5, wherein matching the acquired point cloud data with the 3D point cloud data in the remaining number of frames comprises:
matching the intensity data of each point in the acquired point cloud data with the intensity data of each point in the 3D point cloud data in the remaining number of frames;
matching the height data of each point in the acquired point cloud data with the height data of each point in the 3D point cloud data in the remaining number of frames;
and matching the shape data of each point in the acquired point cloud data with the shape data of each point in the 3D point cloud data in the remaining number of frames.
7. The method of claim 6, wherein determining the 3D point cloud data in the remaining number of frames that matches the acquired point cloud data as the target point cloud data comprises:
determining, as the target point cloud data, the 3D point cloud data in the remaining number of frames whose shape data, height data, and intensity data respectively match the shape data, height data, and intensity data of the acquired point cloud data.
8. The method according to any one of claims 1 to 7, wherein determining a travel trajectory of the target vehicle over a predetermined time from the coordinates of each frame comprises:
determining coordinates of the target vehicle between any two adjacent frames by using the coordinates of any two adjacent frames;
and determining the driving track according to the coordinates of each frame and the coordinates between two adjacent frames.
9. The method of claim 8, wherein determining coordinates of the target vehicle between any two adjacent frames using the coordinates of the two adjacent frames comprises:
and inputting the coordinates of any two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames.
10. A method of obtaining a travel trajectory of a vehicle, comprising:
obtaining labels of a predetermined number of frames in 3D point cloud data of a target vehicle within a predetermined time to obtain a predetermined number of acquired labels, wherein the predetermined number of frames is less than or equal to the total number of frames of the 3D point cloud data within the predetermined time;
determining coordinates of the target vehicle in each frame within the predetermined time according to the acquired labels;
and determining a travel trajectory of the target vehicle within the predetermined time according to the coordinates of each frame.
11. The method of claim 10, wherein determining a travel trajectory of the target vehicle for a predetermined time from the coordinates of each frame comprises:
determining coordinates of the target vehicle between any two adjacent frames by using the coordinates of any two adjacent frames;
and determining the driving track according to the coordinates of each frame and the coordinates between two adjacent frames.
12. The method of claim 11, wherein determining coordinates of the target vehicle between any two adjacent frames using the coordinates of the two adjacent frames comprises:
and inputting the coordinates of any two adjacent frames into a Bezier curve model to obtain the coordinates of the target vehicle between the two adjacent frames.
13. An apparatus for determining a travel trajectory of a vehicle, comprising:
an acquisition unit for acquiring 3D point cloud data of a target vehicle within a predetermined time;
a first determining unit for determining coordinates of the target vehicle in each frame within the predetermined time according to the 3D point cloud data;
and a second determining unit for determining a travel trajectory of the target vehicle within the predetermined time according to the coordinates of each frame.
14. A storage medium, characterized by comprising a stored program, wherein the program, when run, executes the method for determining the travel trajectory of a vehicle according to any one of claims 1 to 12.
15. A processor, characterized in that the processor is configured to run a program, wherein the program, when run, performs the method for determining the travel trajectory of a vehicle according to any one of claims 1 to 12.
16. A vehicle, comprising: one or more processors, memory, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of determining a travel trajectory of a vehicle of any of claims 1-12.
CN202010329269.XA 2020-04-23 2020-04-23 Method and device for determining travel trajectory of vehicle, and storage medium Pending CN111428692A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010329269.XA CN111428692A (en) 2020-04-23 2020-04-23 Method and device for determining travel trajectory of vehicle, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010329269.XA CN111428692A (en) 2020-04-23 2020-04-23 Method and device for determining travel trajectory of vehicle, and storage medium

Publications (1)

Publication Number Publication Date
CN111428692A true CN111428692A (en) 2020-07-17

Family

ID=71554499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010329269.XA Pending CN111428692A (en) 2020-04-23 2020-04-23 Method and device for determining travel trajectory of vehicle, and storage medium

Country Status (1)

Country Link
CN (1) CN111428692A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239216A (en) * 2016-03-28 2017-10-10 北大方正集团有限公司 Drawing modification method and apparatus based on touch-screen
CN108228798A (en) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 The method and apparatus for determining the matching relationship between point cloud data
CN109214248A (en) * 2017-07-04 2019-01-15 百度在线网络技术(北京)有限公司 The method and apparatus of the laser point cloud data of automatic driving vehicle for identification
CN110103956A (en) * 2019-05-16 2019-08-09 北方工业大学 Automatic overtaking track planning method for unmanned vehicle
CN110211388A (en) * 2019-05-27 2019-09-06 武汉万集信息技术有限公司 Multilane free-flow vehicle matching process and system based on 3D laser radar
CN110717918A (en) * 2019-10-11 2020-01-21 北京百度网讯科技有限公司 Pedestrian detection method and device
CN110758381A (en) * 2019-09-18 2020-02-07 北京汽车集团有限公司 Method and device for generating steering track, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200717