CN114863089A - Automatic acquisition method, device, medium and equipment for automatic driving perception data

Automatic acquisition method, device, medium and equipment for automatic driving perception data

Info

Publication number
CN114863089A
Authority
CN
China
Prior art keywords
vehicle
information
bounding box
point cloud
surrounding
Prior art date
Legal status
Pending
Application number
CN202210350488.5A
Other languages
Chinese (zh)
Inventor
樊东升
王晓东
李广敬
冯思渊
杨荣
高延辉
曲明
易应强
Current Assignee
Tianjin Port No2 Container Terminal Co ltd
Beijing Zhuxian Technology Co Ltd
Original Assignee
Tianjin Port No2 Container Terminal Co ltd
Beijing Zhuxian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Port No2 Container Terminal Co ltd and Beijing Zhuxian Technology Co Ltd
Priority to CN202210350488.5A
Publication of CN114863089A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The application discloses a method, device, medium and equipment for automatically collecting automatic driving perception data, belonging to the technical field of automatic driving. The method includes: determining first surrounding-vehicle information of surrounding vehicles from the acquired current-frame point cloud information of the surrounding vehicles; determining corresponding first bounding boxes according to the first surrounding-vehicle information; obtaining second surrounding-vehicle information of the surrounding vehicles from a vehicle information database; converting the second surrounding-vehicle information into the ego-vehicle coordinate system and determining the second bounding boxes corresponding to the surrounding vehicles; and matching the first bounding boxes with the second bounding boxes and recording point cloud information according to the matching result. By extracting surrounding-vehicle information from the pre-established vehicle information database and comparing it with the surrounding-vehicle information perceived by the ego vehicle, the method records or discards the ego vehicle's perception data according to the comparison result, improving both the coverage of the perception data and the effectiveness of data collection.

Description

Automatic acquisition method, device, medium and equipment for automatic driving perception data
Technical Field
The present application relates to the field of automatic driving technologies, and in particular to a method, an apparatus, a medium, and a device for automatically collecting automatic driving perception data.
Background
In the field of automatic driving, safe driving is a prerequisite, which requires the vehicle to have strong perception capability. Perception, in turn, requires collecting a large amount of data for labeling, training, analysis and processing, which is costly. A conventional perception system can stably detect and identify most scenes, but cannot cover rare extreme scenes such as a port environment, which poses a hidden danger to safe driving. Given the sheer volume of automatic driving data, it is important to collect genuinely meaningful data, to capture the extreme scenes that the perception system cannot cover, and to avoid collecting repeated and invalid data.
Disclosure of Invention
The present application provides a method, device, medium and equipment for automatically collecting automatic driving perception data, addressing the problem in the prior art that, when perceiving automatic driving road information in certain special scenes, the perception data cannot fully cover the driving scene and data are missed or collected repeatedly.
In one aspect of the present application, a method for automatically collecting automatic driving perception data is provided, including: determining first surrounding-vehicle information of surrounding vehicles in a pre-established ego-vehicle coordinate system from the current-frame point cloud information of the surrounding vehicles acquired by a lidar on the current automatic driving vehicle; determining the first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information; obtaining, from a pre-established vehicle information database, second surrounding-vehicle information of the surrounding vehicles of the current automatic driving vehicle in the current area; converting the second surrounding-vehicle information into the ego-vehicle coordinate system and determining the second bounding boxes corresponding to the surrounding vehicles; and matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result.
Optionally, determining the first surrounding-vehicle information of the surrounding vehicles in the pre-established ego-vehicle coordinate system includes: preprocessing the current-frame point cloud information; and identifying the preprocessed point cloud information through a deep-learning detection network to determine the first surrounding-vehicle information, where the first surrounding-vehicle information includes position information, orientation information and size information of the surrounding vehicles in the ego-vehicle coordinate system.
Optionally, determining the first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information includes: determining, by an on-board computing unit, the position of each first bounding box according to the position information; determining the direction of the first bounding box according to the orientation information; and determining the size of the first bounding box according to the size information.
Optionally, the pre-establishment process of the vehicle information database includes: acquiring the position and attitude of an automatic driving vehicle through an integrated navigation device carried on the vehicle; integrating the position and attitude to obtain position information, orientation information and speed information of the vehicle in a world coordinate system; combining the position information, orientation information and speed information with the size information of the vehicle to obtain the vehicle information of that vehicle; and uploading the vehicle information to form the vehicle information database.
Optionally, matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result includes: calculating, for each pair of a first bounding box and a second bounding box, the distance between their center points and their intersection-over-union (IoU); and if the center-point distance is less than a preset distance threshold and the IoU is greater than a preset threshold, matching the first bounding box with the corresponding second bounding box.
Optionally, matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result further includes: if a first bounding box matches a corresponding second bounding box, evaluating the difference between a first speed corresponding to the first bounding box and a second speed corresponding to the second bounding box; if the difference is greater than a preset speed threshold, recording the point cloud information within a preset time period before and after the moment corresponding to the current-frame point cloud information; and if the difference is not greater than the preset speed threshold, not recording the current-frame point cloud information.
Optionally, matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result further includes: if a first bounding box has no matching second bounding box, recording the current-frame point cloud information and marking it as a false detection; and if a second bounding box has no matching first bounding box, recording the current-frame point cloud information and marking it as a false detection or a missed detection.
In one aspect of the present application, an automatic acquisition device for automatic driving perception data is provided, including: a module for determining first surrounding-vehicle information of surrounding vehicles in a pre-established ego-vehicle coordinate system from the current-frame point cloud information of the surrounding vehicles acquired by a lidar on the current automatic driving vehicle; a module for determining the first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information; a module for obtaining, from a pre-established vehicle information database, second surrounding-vehicle information of the surrounding vehicles of the current automatic driving vehicle in the current area; a module for converting the second surrounding-vehicle information into the ego-vehicle coordinate system and determining the second bounding boxes corresponding to the surrounding vehicles; and a module for matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result.
In one aspect of the present application, a computer-readable storage medium is provided, which stores computer instructions, wherein the computer instructions are operable to perform the method for automatically collecting automatic driving perception data in the first aspect.
In one aspect of the present application, a computer device is provided, which includes a processor and a memory, where the memory stores computer instructions, and the processor operates the computer instructions to perform the method for automatically collecting automatic driving perception data in the first aspect.
The beneficial effects of this application are as follows: by extracting surrounding-vehicle information from the pre-established vehicle information database and comparing it with the surrounding-vehicle information perceived by the ego vehicle, the ego vehicle's perception data are recorded or discarded according to the comparison result, which avoids missed data collection in special environments, improves the coverage of the perception data, and improves the effectiveness of data collection.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a schematic flow chart illustrating an embodiment of the method for automatically collecting automatic driving perception data according to the present application;
FIG. 2 is a schematic structural diagram of an embodiment of the automatic acquisition device for automatic driving perception data according to the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of steps or elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
In the field of automatic driving, safe driving is a prerequisite, which requires the vehicle to have strong perception capability. Perception, in turn, requires collecting a large amount of data for labeling, training, analysis and processing, which is costly. A conventional perception system can stably detect and identify most scenes, but cannot cover rare extreme scenes such as a port environment, which poses a hidden danger to safe driving. Given the sheer volume of automatic driving data, it is important to collect genuinely meaningful data, to capture the extreme scenes that the perception system cannot cover, and to avoid collecting repeated and invalid data.
To address the problems that, in some special scenes, the perception data cannot fully cover the driving scene when perceiving automatic driving road information, and that data are easily missed or collected repeatedly, the present application not only collects surrounding-vehicle information with the lidar on the automatic driving vehicle, but also obtains, from a pre-established vehicle information database, the surrounding-vehicle information uploaded by the surrounding vehicles themselves. By comparing the relationship between the two, the surrounding-vehicle information perceived by the ego vehicle is recorded accordingly, which avoids missed data collection in special environments, improves the coverage of the perception data, and improves the effectiveness of data collection.
Accordingly, the present application provides a method, device, medium and equipment for automatically collecting automatic driving perception data. The method includes: determining first surrounding-vehicle information of surrounding vehicles in a pre-established ego-vehicle coordinate system from the current-frame point cloud information of the surrounding vehicles acquired by a lidar on the automatic driving vehicle; determining the first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information; obtaining, from a pre-established vehicle information database, second surrounding-vehicle information of the surrounding vehicles of the automatic driving vehicle in the current area; converting the second surrounding-vehicle information into the ego-vehicle coordinate system and determining the second bounding boxes corresponding to the surrounding vehicles; and matching the first bounding boxes with the second bounding boxes and recording point cloud information according to the matching result.
The technical solutions of the present application, and how they solve the above technical problems, are described below through specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 illustrates an embodiment of the method for automatically collecting automatic driving perception data according to the present application.
In the embodiment shown in FIG. 1, the method for automatically collecting automatic driving perception data includes a process S101: determining first surrounding-vehicle information of surrounding vehicles in a pre-established ego-vehicle coordinate system from the current-frame point cloud information of the surrounding vehicles acquired by the lidar on the automatic driving vehicle.
In this embodiment, while the current automatic driving vehicle is running, the lidar device mounted on it scans the surrounding environment to obtain the current-frame point cloud information of the vehicles around it; the first surrounding-vehicle information is then obtained in the pre-established ego-vehicle coordinate system by determining positions and sizes from this point cloud information. The current automatic driving vehicle is the vehicle currently performing data collection and judgment.
Optionally, determining the first surrounding-vehicle information of the surrounding vehicles in the pre-established ego-vehicle coordinate system includes: preprocessing the current-frame point cloud information; and identifying the preprocessed point cloud information through a deep-learning detection network to determine the first surrounding-vehicle information, where the first surrounding-vehicle information includes position information, orientation information and size information of the surrounding vehicles in the ego-vehicle coordinate system.
In this optional embodiment, when determining the first surrounding-vehicle information from the current-frame point cloud of the surrounding vehicles, the current-frame point cloud is first preprocessed; for example, the preprocessing includes noise removal, which ensures the accuracy of the result, and segmentation of the point cloud into ground points and non-ground points, which helps determine the first surrounding-vehicle information. After preprocessing, the point cloud is identified by a deep-learning detection network to obtain the position information, the orientation information, and the size information (vehicle length, width and height) of the surrounding vehicles in the ego-vehicle coordinate system.
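As a concrete illustration of this preprocessing step, the following Python sketch performs range-based noise filtering and a crude ground/non-ground split. The function name and thresholds are assumptions for illustration and are not taken from the patent; a production system would typically fit a ground plane (for example with RANSAC) rather than use a flat height threshold.

    import numpy as np

    def preprocess_point_cloud(points: np.ndarray,
                               z_ground: float = 0.2,
                               max_range: float = 100.0) -> dict:
        """Illustrative preprocessing: drop far-away returns as noise, then
        split the cloud into ground and non-ground points. The flat-ground
        height threshold is an assumption, not the patent's method."""
        dist = np.linalg.norm(points[:, :3], axis=1)
        points = points[dist < max_range]          # crude noise/range filter
        ground_mask = points[:, 2] < z_ground      # flat-ground approximation
        return {"ground": points[ground_mask], "non_ground": points[~ground_mask]}

    # Usage: 1000 random points (x, y, z) in the ego-vehicle frame.
    cloud = np.random.uniform(-50.0, 50.0, size=(1000, 3))
    parts = preprocess_point_cloud(cloud)
    print(len(parts["ground"]), len(parts["non_ground"]))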
In the embodiment shown in FIG. 1, the method for automatically collecting automatic driving perception data includes a process S102: determining the first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information.
In this embodiment, after the position, orientation and size information of the surrounding vehicles in the ego-vehicle coordinate system is obtained, the on-board computing unit mounted on the automatic driving vehicle derives from the first surrounding-vehicle information the first bounding box corresponding to each surrounding vehicle, where a first bounding box represents the corresponding surrounding vehicle.
Optionally, determining the first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information includes: determining, by the on-board computing unit, the position of each first bounding box according to the position information; determining the direction of the first bounding box according to the orientation information; and determining the size of the first bounding box according to the size information.
In this optional embodiment, when determining a first bounding box, the on-board computing unit determines the position of the box from the position information in the first surrounding-vehicle information, its direction from the orientation information, and its size from the size information, thereby finally determining the first bounding boxes corresponding to the surrounding vehicles.
Specifically, the first bounding box is, for example, a rectangular box: the position information of the surrounding vehicle (its coordinates in the ego-vehicle coordinate system) is used as the center of the rectangle, the orientation information of the surrounding vehicle determines the direction (i.e., the angle) of the rectangle, and the length and width of the rectangle are determined by the size information of the vehicle.
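To make the construction above concrete, the following sketch builds the four corners of such a rectangle from the center position, heading and size; the 2D simplification and the corner ordering are assumptions for illustration.

    import numpy as np

    def make_bounding_box(position, heading, size):
        """Corners of an oriented 2D rectangle in the ego-vehicle frame.
        position: (x, y) center; heading: yaw in radians; size: (length, width)."""
        length, width = size
        # Axis-aligned corners around the origin, then rotate and translate.
        corners = np.array([[ length / 2.0,  width / 2.0],
                            [ length / 2.0, -width / 2.0],
                            [-length / 2.0, -width / 2.0],
                            [-length / 2.0,  width / 2.0]])
        c, s = np.cos(heading), np.sin(heading)
        rot = np.array([[c, -s], [s, c]])
        return corners @ rot.T + np.asarray(position)

    # Usage: a 12 m x 3 m vehicle 10 m ahead, 2 m to the left, yawed 30 degrees.
    print(make_bounding_box((10.0, 2.0), np.pi / 6.0, (12.0, 3.0)).round(2))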
In the embodiment shown in FIG. 1, the method for automatically collecting automatic driving perception data includes a process S103: obtaining, from a pre-established vehicle information database, second surrounding-vehicle information of the surrounding vehicles of the automatic driving vehicle in the current area.
In this embodiment, after the ego vehicle acquires the point cloud information of the surrounding vehicles with its mounted lidar and determines the first surrounding-vehicle information, the second surrounding-vehicle information of those vehicles is obtained from the pre-established vehicle information database, where the second surrounding-vehicle information includes the position information, orientation information and size information of the corresponding surrounding vehicles.
Optionally, the pre-establishment process of the vehicle information database includes: acquiring the position and attitude of an automatic driving vehicle through an integrated navigation device carried on the vehicle; integrating the position and attitude to obtain position information, orientation information and speed information of the vehicle in a world coordinate system; combining the position information, orientation information and speed information with the size information of the vehicle to obtain the vehicle information of that vehicle; and uploading the vehicle information to form the vehicle information database.
In this optional embodiment, when the vehicle information database is created, each vehicle measures its own position and attitude with an integrated navigation device carried on board, such as a GPS receiver and an IMU (inertial measurement unit); the data are processed and fused to obtain the vehicle's position information, orientation information representing its attitude, and speed information representing its speed. These are combined with the vehicle's dimensions to form the vehicle information, which is uploaded to the vehicle information database over a communication module, for example 5G. The vehicle information database can be built on the gRPC framework to exchange data between different vehicles.
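For illustration, one record of such a database might look like the sketch below, with an in-memory dict standing in for the gRPC-backed service; the field names and units are assumptions, not the patent's schema.

    from dataclasses import dataclass, asdict

    @dataclass
    class VehicleInfo:
        """Hypothetical record in the vehicle information database."""
        vehicle_id: str
        x: float          # world-frame position (m), from integrated navigation
        y: float
        heading: float    # world-frame yaw (rad)
        speed: float      # m/s
        length: float     # vehicle dimensions (m)
        width: float
        height: float
        timestamp: float  # s

    # Toy stand-in for the shared database; the description uses a
    # gRPC-based service reached over a communication link such as 5G.
    vehicle_db = {}

    def upload_vehicle_info(info: VehicleInfo) -> None:
        vehicle_db[info.vehicle_id] = asdict(info)

    upload_vehicle_info(VehicleInfo("truck_07", 120.5, 44.2, 1.57, 6.3,
                                    16.0, 2.9, 4.1, 1650000000.0))
    print(vehicle_db["truck_07"]["speed"])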
In the embodiment shown in FIG. 1, the method for automatically collecting automatic driving perception data includes a process S104: converting the second surrounding-vehicle information into the ego-vehicle coordinate system and determining the second bounding boxes corresponding to the surrounding vehicles.
In this embodiment, after the second surrounding-vehicle information is obtained from the vehicle information database, it must be transformed into the ego-vehicle coordinate system of the current vehicle, since it was uploaded in the world coordinate system; the corresponding second bounding box is then determined from the transformed information. The position of a second bounding box is determined by the position information in the second surrounding-vehicle information, its size by the size information, and its direction by the orientation information.
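A 2D sketch of this world-to-ego transformation follows; the ego pose inputs would come from the current vehicle's own navigation solution, and reducing the problem to the ground plane is an assumption for illustration.

    import numpy as np

    def world_to_ego(pt_world, ego_xy, ego_yaw):
        """Convert a world-frame 2D point into the current ego-vehicle frame
        by applying the inverse of the ego pose: translate, then rotate by -yaw."""
        d = np.asarray(pt_world, dtype=float) - np.asarray(ego_xy, dtype=float)
        c, s = np.cos(ego_yaw), np.sin(ego_yaw)
        return np.array([[c, s], [-s, c]]) @ d

    def heading_world_to_ego(heading_world, ego_yaw):
        # Relative heading, wrapped to [-pi, pi).
        return (heading_world - ego_yaw + np.pi) % (2.0 * np.pi) - np.pi

    # Usage: a database vehicle at (130, 50) heading 1.60 rad;
    # ego vehicle at (120, 44) with yaw 1.57 rad.
    print(world_to_ego((130.0, 50.0), (120.0, 44.0), 1.57).round(2))
    print(round(heading_world_to_ego(1.60, 1.57), 3))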
In the embodiment shown in FIG. 1, the method for automatically collecting automatic driving perception data includes a process S105: matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result.
In this embodiment, the current automatic driving vehicle obtains the first surrounding-vehicle information of the surrounding vehicles through its mounted lidar device and derives the corresponding first bounding boxes; it also extracts, from the vehicle information database, the second surrounding-vehicle information collected by the surrounding vehicles in the driving area through their own navigation devices, and derives the corresponding second bounding boxes. By comparing the first bounding boxes perceived by the ego vehicle with the second bounding boxes obtained from the information uploaded by the surrounding vehicles, it is determined whether the data obtained by the ego vehicle are correct and whether they are redundant, and a recording decision is then made, which guarantees the accuracy of the measured data and avoids duplicate recording in special environments.
Optionally, matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result includes: calculating, for each pair of a first bounding box and a second bounding box, the center-point distance and the intersection-over-union; and if the center-point distance is less than the preset distance threshold and the IoU is greater than the preset threshold, matching the first bounding box with the corresponding second bounding box.
In this optional embodiment, when judging whether a first bounding box and a second bounding box match, the distance between the center of the first bounding box and the center of the second bounding box, and the intersection-over-union of the two boxes, are calculated. If the distance between the two center points is smaller than the preset distance threshold and the IoU is larger than the preset threshold, the first bounding box is determined to match the second bounding box. Because the two boxes match, the current automatic driving vehicle can rely entirely on the second bounding box, obtained from the second surrounding-vehicle information extracted from the vehicle information database, for its automatic driving control; no further judgment of the first surrounding-vehicle information perceived by the vehicle itself is needed, and the perceived point cloud information therefore does not need to be recorded.
Specifically, the preset distance threshold may range from 1.35 to 1.55 meters, preferably 1.45 meters; the preset IoU threshold may range from 0.25 to 0.35, preferably 0.3. Both the ranges and the preferred values are only preferences; in actual operation a suitable choice can be made according to the processing requirements, and the present application does not specifically limit them.
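The matching test can be sketched as follows, using the preferred thresholds above (1.45 m and 0.3). For simplicity the IoU here is computed on axis-aligned boxes; since the boxes in this application are oriented rectangles, a faithful implementation would use a polygon-intersection IoU instead.

    import numpy as np

    def center_distance(b1, b2):
        """b = (cx, cy, length, width); Euclidean distance between box centers."""
        return float(np.hypot(b1[0] - b2[0], b1[1] - b2[1]))

    def iou_axis_aligned(b1, b2):
        """Axis-aligned IoU as a simplification of oriented-box IoU."""
        def corners(b):
            cx, cy, l, w = b
            return cx - l / 2, cy - w / 2, cx + l / 2, cy + w / 2
        ax1, ay1, ax2, ay2 = corners(b1)
        bx1, by1, bx2, by2 = corners(b2)
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = b1[2] * b1[3] + b2[2] * b2[3] - inter
        return inter / union if union > 0.0 else 0.0

    def boxes_match(b1, b2, dist_thresh=1.45, iou_thresh=0.3):
        # Thresholds follow the preferred values given in the description.
        return center_distance(b1, b2) < dist_thresh and iou_axis_aligned(b1, b2) > iou_thresh

    perceived = (10.0, 2.0, 12.0, 3.0)   # first bounding box (lidar detection)
    reported  = (10.5, 2.2, 12.0, 3.0)   # second bounding box (from the database)
    print(boxes_match(perceived, reported))   # True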
Optionally, matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result further includes: if a first bounding box matches a corresponding second bounding box, evaluating the difference between a first speed corresponding to the first bounding box and a second speed corresponding to the second bounding box; if the difference is greater than a preset speed threshold, recording the point cloud information within a preset time period before and after the moment corresponding to the current-frame point cloud information; and if the difference is not greater than the preset speed threshold, not recording the current-frame point cloud information.
In this optional embodiment, if a first bounding box matches a second bounding box, the first speed corresponding to the first bounding box is compared with the second speed corresponding to the second bounding box. If the difference between the two speeds is greater than the preset speed threshold, the point cloud information within a preset time period before and after the moment corresponding to the current frame acquired by the current automatic driving vehicle is recorded; if the difference is not greater than the preset speed threshold, the current-frame point cloud information is not recorded. The recorded data are later used to analyze the perception system on the current automatic driving vehicle and to determine why the tracking module's speed estimate for the obstacle is inaccurate, so that the perception system can be improved and optimized.
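The recording rule for matched pairs can be sketched as a small decision function; the 1.0 m/s threshold and the return labels are assumptions for illustration, since the description does not state a preferred speed threshold.

    def decide_recording(matched: bool, v_perceived: float, v_reported: float,
                         speed_thresh: float = 1.0) -> str:
        """Recording decision for one first/second bounding-box pair."""
        if not matched:
            return "record_and_flag"   # unmatched boxes are handled separately
        if abs(v_perceived - v_reported) > speed_thresh:
            # Record a window of frames around the current frame so that the
            # tracker's speed-estimation error can be analysed offline.
            return "record_window"
        return "skip"                  # redundant data, not recorded

    print(decide_recording(True, 5.2, 3.8))   # speed mismatch -> record_window
    print(decide_recording(True, 5.2, 5.1))   # consistent     -> skip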
Optionally, matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result further includes: if a first bounding box has no matching second bounding box, recording the current-frame point cloud information and marking it as a false detection; and if a second bounding box has no matching first bounding box, recording the current-frame point cloud information and marking it as a false detection or a missed detection.
In this optional embodiment, the first bounding boxes are perceived by the current automatic driving vehicle, while the second bounding boxes come from the vehicle information database; the second bounding boxes can therefore be regarded as ground truth and the first bounding boxes as detections. After the matching judgment, if a first bounding box has no matching second bounding box, the current-frame point cloud information perceived by the vehicle is recorded: this indicates that the vehicle falsely detected a non-vehicle object, so the frame is marked as a false detection. If a second bounding box has no matching first bounding box, there are fewer first bounding boxes than second bounding boxes, or the boxes do not correspond to each other; a detection may have been missed or made incorrectly, so the frame is recorded and marked as a false detection or a missed detection. A missed or false detection indicates that the perception equipment of the current automatic driving vehicle is abnormal and needs subsequent adjustment.
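The false-detection/missed-detection labeling can be sketched with a greedy one-to-one matcher; the greedy strategy and the toy center-distance criterion in the usage example are assumptions for illustration (the description's own criterion combines center distance and IoU, as above).

    import numpy as np

    def label_unmatched(first_boxes, second_boxes, match_fn):
        """Greedy one-to-one matching. Returns indices of first boxes with no
        match (possible false detections) and of second boxes with no match
        (possible missed detections)."""
        used_second = set()
        unmatched_first = []
        for i, fb in enumerate(first_boxes):
            hit = next((j for j, sb in enumerate(second_boxes)
                        if j not in used_second and match_fn(fb, sb)), None)
            if hit is None:
                unmatched_first.append(i)
            else:
                used_second.add(hit)
        unmatched_second = [j for j in range(len(second_boxes)) if j not in used_second]
        return unmatched_first, unmatched_second

    # Usage with (cx, cy) centers only; match if centers are within 1.45 m.
    close = lambda a, b: np.hypot(a[0] - b[0], a[1] - b[1]) < 1.45
    fps, fns = label_unmatched([(0, 0), (30, 5)], [(0.3, 0.1), (60, -4)], close)
    print("possible false detections:", fps, "possible missed detections:", fns)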
According to the method for automatically collecting automatic driving perception data of the present application, surrounding-vehicle information is extracted from the pre-established vehicle information database and compared with the surrounding-vehicle information perceived by the ego vehicle, and the ego vehicle's perception data are recorded or discarded according to the comparison result, which avoids missed data collection in special environments, improves the coverage of the perception data, and improves the effectiveness of data collection. In addition, while the automatic driving vehicle is driving, the vehicle information acquired in real time by its own integrated navigation device is uploaded to the vehicle information database in real time, which keeps the data in the database up to date and ensures the accuracy of the decision on whether to record data.
FIG. 2 illustrates an embodiment of the automatic acquisition device for automatic driving perception data according to the present application.
In the embodiment shown in FIG. 2, the automatic acquisition device for automatic driving perception data of the present application includes: a module 201 for determining, from the current-frame point cloud information of surrounding vehicles obtained by a lidar on the current automatic driving vehicle, first surrounding-vehicle information of the surrounding vehicles in a pre-established ego-vehicle coordinate system; a module 202 for determining the first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information; a module 203 for obtaining, from a pre-established vehicle information database, second surrounding-vehicle information of the surrounding vehicles of the current automatic driving vehicle in the current area; a module 204 for converting the second surrounding-vehicle information into the ego-vehicle coordinate system and determining the second bounding boxes corresponding to the surrounding vehicles; and a module 205 for matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result.
Optionally, in module 201, the current-frame point cloud information is preprocessed, and the preprocessed point cloud information is identified through a deep-learning detection network to determine the first surrounding-vehicle information, where the first surrounding-vehicle information includes position information, orientation information and size information of the surrounding vehicles in the ego-vehicle coordinate system.
Optionally, in module 202, the position of each first bounding box is determined from the position information by the on-board computing unit, its direction from the orientation information, and its size from the size information.
Optionally, the pre-establishment process of the vehicle information database includes: acquiring the position and attitude of an automatic driving vehicle through an integrated navigation device carried on the vehicle; integrating the position and attitude to obtain position information, orientation information and speed information of the vehicle in a world coordinate system; combining the position information, orientation information and speed information with the size information of the vehicle to obtain the vehicle information of that vehicle; and uploading the vehicle information to form the vehicle information database.
Optionally, in module 205, the center-point distance and the intersection-over-union are calculated for each pair of a first bounding box and a second bounding box; if the center-point distance is less than the preset distance threshold and the IoU is greater than the preset threshold, the first bounding box is matched with the corresponding second bounding box.
Optionally, in module 205, if a first bounding box matches a corresponding second bounding box, the difference between the first speed corresponding to the first bounding box and the second speed corresponding to the second bounding box is evaluated; if the difference is greater than the preset speed threshold, the point cloud information within a preset time period before and after the moment corresponding to the current-frame point cloud information is recorded; if the difference is not greater than the preset speed threshold, the current-frame point cloud information is not recorded.
Optionally, in module 205, if a first bounding box has no matching second bounding box, the current-frame point cloud information is recorded and marked as a false detection; if a second bounding box has no matching first bounding box, the current-frame point cloud information is recorded and marked as a false detection or a missed detection.
The automatic acquisition device for automatic driving perception data of the present application extracts surrounding-vehicle information from the pre-established vehicle information database and compares it with the surrounding-vehicle information perceived by the ego vehicle, and records or discards the ego vehicle's perception data according to the comparison result, thereby avoiding missed data collection in special environments, improving the coverage of the perception data, and improving the effectiveness of data collection.
In a particular embodiment of the present application, a computer-readable storage medium stores computer instructions, where the computer instructions are operable to perform the method for automatically collecting automatic driving perception data described in any of the embodiments. The storage medium may reside directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one embodiment of the present application, a computer device includes a processor and a memory, the memory storing computer instructions, where the processor operates the computer instructions to perform the method for automatically collecting automatic driving perception data described in any of the embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are merely examples, which are not intended to limit the scope of the present disclosure, and all equivalent structural changes made by using the contents of the specification and the drawings, or any other related technical fields, are also included in the scope of the present disclosure.

Claims (10)

1. A method for automatically collecting automatic driving perception data, characterized by comprising the following steps:
determining first surrounding-vehicle information of surrounding vehicles in a pre-established ego-vehicle coordinate system from current-frame point cloud information of the surrounding vehicles acquired by a lidar on a current automatic driving vehicle;
determining first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information;
obtaining, from a pre-established vehicle information database, second surrounding-vehicle information of the surrounding vehicles of the current automatic driving vehicle in a current area;
converting the second surrounding-vehicle information into the ego-vehicle coordinate system, and determining second bounding boxes corresponding to the surrounding vehicles; and
matching the first bounding boxes with the second bounding boxes, and recording the point cloud information according to a matching result.
2. The method for automatically collecting automatic driving perception data according to claim 1, wherein determining the first surrounding-vehicle information of the surrounding vehicles in the pre-established ego-vehicle coordinate system comprises:
preprocessing the current-frame point cloud information; and
identifying the preprocessed point cloud information through a deep-learning detection network, and determining the first surrounding-vehicle information, wherein the first surrounding-vehicle information comprises position information, orientation information and size information of the surrounding vehicles in the ego-vehicle coordinate system.
3. The method for automatically collecting automatic driving perception data according to claim 2, wherein determining the first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information comprises:
determining, by an on-board computing unit, a position of each first bounding box according to the position information;
determining, by the on-board computing unit, a direction of the first bounding box according to the orientation information; and
determining, by the on-board computing unit, a size of the first bounding box according to the size information.
4. The method for automatically collecting automatic driving perception data according to claim 1, wherein the pre-establishment process of the vehicle information database comprises:
acquiring a position and an attitude of an automatic driving vehicle through an integrated navigation device carried on the automatic driving vehicle;
integrating the position and the attitude, and determining position information, orientation information and speed information of the automatic driving vehicle in a world coordinate system;
combining the position information, the orientation information and the speed information with size information of the automatic driving vehicle to obtain vehicle information of the automatic driving vehicle; and
uploading the vehicle information to obtain the vehicle information database.
5. The method for automatically collecting automatic driving perception data according to claim 1, wherein matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result comprises:
calculating, for each pair of a first bounding box and a second bounding box, a center-point distance and an intersection-over-union; and
if the center-point distance is smaller than a preset distance threshold and the intersection-over-union is larger than a preset threshold, matching the first bounding box with the corresponding second bounding box.
6. The method for automatically collecting automatic driving perception data according to claim 5, wherein matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result further comprises:
if a first bounding box matches a corresponding second bounding box, determining a difference between a first speed corresponding to the first bounding box and a second speed corresponding to the second bounding box;
if the difference is larger than a preset speed threshold, recording point cloud information collected within preset time periods before and after a moment corresponding to the current-frame point cloud information; and
if the difference is not larger than the preset speed threshold, not recording the current-frame point cloud information.
7. The method for automatically collecting automatic driving perception data according to claim 5, wherein matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to the matching result further comprises:
if a first bounding box has no matching second bounding box, recording the current-frame point cloud information and marking it as a false detection; and
if a second bounding box has no matching first bounding box, recording the current-frame point cloud information and marking it as a false detection or a missed detection.
8. An automatic acquisition device for automatic driving perception data, comprising:
a module for determining first surrounding-vehicle information of surrounding vehicles in a pre-established ego-vehicle coordinate system from current-frame point cloud information of the surrounding vehicles acquired by a lidar on a current automatic driving vehicle;
a module for determining first bounding boxes corresponding to the surrounding vehicles according to the first surrounding-vehicle information;
a module for obtaining, from a pre-established vehicle information database, second surrounding-vehicle information of the surrounding vehicles of the current automatic driving vehicle in a current area;
a module for converting the second surrounding-vehicle information into the ego-vehicle coordinate system and determining second bounding boxes corresponding to the surrounding vehicles; and
a module for matching the first bounding boxes with the second bounding boxes and recording the point cloud information according to a matching result.
9. A computer-readable storage medium storing computer instructions, wherein the computer instructions are operable to perform the method for automatically collecting automatic driving perception data according to any one of claims 1-7.
10. A computer device comprising a processor and a memory, the memory storing computer instructions, wherein the processor operates the computer instructions to perform the method for automatically collecting automatic driving perception data according to any one of claims 1-7.
CN202210350488.5A 2022-04-02 2022-04-02 Automatic acquisition method, device, medium and equipment for automatic driving perception data Pending CN114863089A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210350488.5A CN114863089A (en) 2022-04-02 2022-04-02 Automatic acquisition method, device, medium and equipment for automatic driving perception data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210350488.5A CN114863089A (en) 2022-04-02 2022-04-02 Automatic acquisition method, device, medium and equipment for automatic driving perception data

Publications (1)

Publication Number Publication Date
CN114863089A true CN114863089A (en) 2022-08-05

Family

ID=82630314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210350488.5A Pending CN114863089A (en) 2022-04-02 2022-04-02 Automatic acquisition method, device, medium and equipment for automatic driving perception data

Country Status (1)

Country Link
CN (1) CN114863089A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110364023A (en) * 2018-04-10 2019-10-22 奥迪股份公司 Driving assistance system and method
TWI726278B (en) * 2019-01-30 2021-05-01 宏碁股份有限公司 Driving detection method, vehicle and driving processing device
CN113468922A (en) * 2020-03-31 2021-10-01 郑州宇通客车股份有限公司 Road boundary identification method and device based on radar point cloud
CN111619560A (en) * 2020-07-29 2020-09-04 北京三快在线科技有限公司 Vehicle control method and device
CN112101092A (en) * 2020-07-31 2020-12-18 北京智行者科技有限公司 Automatic driving environment sensing method and system
WO2022022694A1 (en) * 2020-07-31 2022-02-03 北京智行者科技有限公司 Method and system for sensing automated driving environment
CN113442950A (en) * 2021-08-31 2021-09-28 国汽智控(北京)科技有限公司 Automatic driving control method, device and equipment based on multiple vehicles
CN113945219A (en) * 2021-09-28 2022-01-18 武汉万集光电技术有限公司 Dynamic map generation method, system, readable storage medium and terminal equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115662168A (en) * 2022-10-18 2023-01-31 浙江吉利控股集团有限公司 Environment sensing method and device and electronic equipment

Similar Documents

Publication Title
CN110705458B (en) Boundary detection method and device
CN103150786B (en) Non-contact type unmanned vehicle driving state measuring system and measuring method
US11294387B2 (en) Systems and methods for training a vehicle to autonomously drive a route
CN110728650B (en) Well lid depression detection method based on intelligent terminal and related equipment
CN111179300A (en) Method, apparatus, system, device and storage medium for obstacle detection
EP3842735B1 (en) Position coordinates estimation device, position coordinates estimation method, and program
CN111275960A (en) Traffic road condition analysis method, system and camera
CN108364476B (en) Method and device for acquiring Internet of vehicles information
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN112884892A (en) Unmanned mine car position information processing system and method based on road side device
CN114863089A (en) Automatic acquisition method, device, medium and equipment for automatic driving perception data
CN113945219B (en) Dynamic map generation method, system, readable storage medium and terminal device
CN114779276A (en) Obstacle detection method and device
WO2021138372A1 (en) Feature coverage analysis
CN108981729A (en) Vehicle positioning method and device
CN109887124B (en) Vehicle motion data processing method and device, computer equipment and storage medium
CN115841660A (en) Distance prediction method, device, equipment, storage medium and vehicle
US20220101025A1 (en) Temporary stop detection device, temporary stop detection system, and recording medium
CN117859041A (en) Method and auxiliary device for supporting vehicle functions in a parking space and motor vehicle
CN111126336B (en) Sample collection method, device and equipment
CN115402347A (en) Method for identifying a drivable region of a vehicle and driving assistance method
CN109655073B (en) Map drawing method and device in no-signal or weak-signal area and vehicle
CN113220805A (en) Map generation device, recording medium, and map generation method
CN109389643A (en) Parking stall principal direction judgment method, system and storage medium
US20230025579A1 (en) High-definition mapping

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination