CN111626314B - Classification method and device for point cloud data, computer equipment and storage medium - Google Patents

Classification method and device for point cloud data, computer equipment and storage medium

Info

Publication number
CN111626314B
CN111626314B (application number CN201910151657.0A)
Authority
CN
China
Prior art keywords
point cloud
target
training
target obstacle
global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910151657.0A
Other languages
Chinese (zh)
Other versions
CN111626314A (en)
Inventor
李忠蓬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suteng Innovation Technology Co Ltd
Original Assignee
Suteng Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suteng Innovation Technology Co Ltd filed Critical Suteng Innovation Technology Co Ltd
Priority to CN201910151657.0A priority Critical patent/CN111626314B/en
Publication of CN111626314A publication Critical patent/CN111626314A/en
Application granted granted Critical
Publication of CN111626314B publication Critical patent/CN111626314B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a classification method and device of point cloud data, a computer device, and a storage medium. The method comprises the following steps: acquiring point cloud data corresponding to a plurality of target obstacles in an outdoor environment; identifying a bounding box corresponding to each target obstacle according to the point cloud data, and extracting global feature information corresponding to each target obstacle according to the bounding box; invoking a trained classifier, and inputting the global feature information corresponding to each target obstacle into the classifier; and performing a prediction operation on the feature information through the classifier, and outputting the category corresponding to each target obstacle. By adopting the method, the classification accuracy of target obstacles in an outdoor environment can be effectively improved.

Description

Classification method and device for point cloud data, computer equipment and storage medium
Technical Field
The present application relates to the technical field of lidar, and in particular, to a method and apparatus for classifying point cloud data, a computer device, and a storage medium.
Background
The lidar sensor can provide real-time, accurate three-dimensional scene information and has inherent advantages for environmental perception, offering a long ranging distance and high accuracy, so it is widely applied in fields such as autonomous driving, security monitoring, and surveying and mapping. The information acquired when the lidar scans a target obstacle is usually presented in the form of a point cloud, and the type of the target obstacle can be effectively identified by classifying the point cloud data.
In conventional point cloud classification methods, local features of each dimension are extracted from the point cloud data to classify target obstacles. However, for a target obstacle in an outdoor environment, occlusion and similar conditions prevent the lidar from scanning every part of the target obstacle, so not all local features of the target obstacle can be extracted from the point cloud data. If a conventional point cloud classification method is applied to point cloud data collected in an outdoor environment, the accuracy of classifying target obstacles is therefore reduced.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a classification method, apparatus, computer device, and storage medium for point cloud data that can effectively improve accuracy of classification of target obstacles in an outdoor environment.
A method of classifying point cloud data, the method comprising:
acquiring point cloud data corresponding to a plurality of target obstacles in an outdoor environment;
identifying bounding boxes corresponding to each target obstacle according to the point cloud data, and extracting global characteristic information corresponding to each target obstacle according to the bounding boxes;
invoking a trained classifier, and inputting global characteristic information corresponding to each target obstacle into the classifier;
and performing a prediction operation on the feature information through the classifier, and outputting the category corresponding to each target obstacle.
In one embodiment, the identifying the bounding box corresponding to each target obstacle according to the point cloud data includes:
calculating a three-dimensional frame corresponding to the target obstacle by utilizing the point cloud data;
and extracting a target point cloud in the three-dimensional frame, and performing bounding box fitting operation on the target point cloud to obtain a bounding box corresponding to the target obstacle.
In one embodiment, the method further comprises:
acquiring training data and labels corresponding to the training data, wherein the training data comprises point cloud data acquired in an outdoor environment in advance;
extracting training global feature information from the training data, wherein the training global feature information corresponds to a label;
and generating a training feature vector by using the extracted training global feature information, and training the classifier through the training feature vector and the label.
In one embodiment, before the global feature information corresponding to each target obstacle is input to the classifier, the method further includes:
acquiring a corresponding feature sequence of the classifier in the training process;
and converting the global feature information into feature vectors corresponding to the target obstacle according to the feature sequence.
In one embodiment, the extracting global feature information corresponding to each target obstacle includes: calling a plurality of threads, and extracting the global feature information corresponding to each target obstacle through the plurality of threads;
the converting the global feature information into feature vectors corresponding to the target obstacle according to the feature sequence includes: converting the global feature information corresponding to each target obstacle through the plurality of threads according to the feature sequence to obtain the feature vector corresponding to each target obstacle.
In one embodiment, the global feature information includes a point cloud bounding box position, pose, and size, a point cloud inertia tensor, a point cloud covariance matrix, point cloud feature values, a point cloud height range, and point cloud reflection intensity statistics.
A classification apparatus for point cloud data, the apparatus comprising:
the data acquisition module is used for acquiring point cloud data corresponding to a plurality of target obstacles in an outdoor environment;
the feature extraction module is used for identifying a bounding box corresponding to each target obstacle according to the point cloud data and extracting global feature information corresponding to each target obstacle according to the bounding box;
the classification module is used for calling the trained classifier and inputting the global feature information corresponding to each target obstacle into the classifier; and performing a prediction operation on the feature information through the classifier, and outputting the category corresponding to each target obstacle.
In one embodiment, the feature extraction module is further configured to calculate a three-dimensional frame corresponding to the target obstacle using the point cloud data; and extracting a target point cloud in the three-dimensional frame, and performing bounding box fitting operation on the target point cloud to obtain a bounding box corresponding to the target obstacle.
In one embodiment, the data acquisition module is further configured to acquire training data and a tag corresponding to the training data, where the training data includes point cloud data acquired in advance in an outdoor environment; the feature extraction module is also used for extracting training global feature information from the training data, wherein the training global feature information corresponds to the tag;
the apparatus further comprises:
and the training module is used for generating training feature vectors by using the extracted training global feature information, and training the classifier through the training feature vectors and the labels.
A computer device comprising a memory storing a computer program and a processor that implements the steps of the method embodiments described above when executing the computer program.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the various method embodiments described above.
According to the classification method and device of point cloud data, the computer device, and the storage medium, point cloud data corresponding to a plurality of target obstacles in an outdoor environment are collected by the lidar. After the computer device acquires the point cloud data, it identifies the bounding box of each target obstacle according to the point cloud data, so that the plurality of target obstacles in the outdoor environment can be preliminarily identified. Global feature information corresponding to each target obstacle is then extracted using the point cloud data. The global feature information corresponding to each target obstacle is taken as the input of a classifier, the classifier performs a prediction operation, and the category corresponding to each target obstacle is output. Because the classifier has been trained, each target obstacle in the outdoor environment can be accurately classified, which effectively improves the accuracy of classifying target obstacles in an outdoor environment.
Drawings
FIG. 1 is a flow chart of a method for classifying point cloud data according to one embodiment;
FIG. 2 is a block diagram of a classification apparatus for point cloud data according to one embodiment;
FIG. 3 is an internal block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in FIG. 1, a method for classifying point cloud data is provided. The method is described as applied to a computer device by way of illustration, and includes the following steps:
Step 102: acquire point cloud data corresponding to a plurality of target obstacles in an outdoor environment.
Step 104: identify a bounding box corresponding to each target obstacle according to the point cloud data, and extract global feature information corresponding to each target obstacle according to the bounding box.
Step 106: invoke the trained classifier, and input the global feature information corresponding to each target obstacle into the classifier.
Step 108: perform a prediction operation on the feature information through the classifier, and output the category corresponding to each target obstacle.
During travel of the vehicle, a plurality of target obstacles in the outdoor environment can be scanned by the lidar to obtain corresponding point cloud data. The outdoor environment may be the environment in which the vehicle travels, and the vehicle may be in an automatic driving mode when traveling in the outdoor environment. Each target obstacle may correspond to one or more target point clouds, i.e., to one or more sets of point cloud data. The lidar transmits the point cloud data corresponding to the target point clouds to the computer device.
In order to accurately identify the category of each target obstacle, the computer device needs to identify the area in which each target obstacle is located. That is, the computer device needs to segment the whole scanned area in the outdoor environment to obtain the area corresponding to each target obstacle. The computer device calculates a three-dimensional frame corresponding to each target obstacle according to the point cloud data of each target point cloud, where the three-dimensional frame includes a center position, a size, a direction, and the like, so that the area corresponding to each target obstacle can be determined from the three-dimensional frame. However, the three-dimensional frame does not fit the target obstacle closely enough, which adversely affects classification of the target obstacle. In order to improve classification accuracy, the computer device therefore also extracts the target point cloud within the three-dimensional frame and performs bounding box fitting on the target point cloud.
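For illustration only, the following is a minimal sketch (not taken from the patent) of extracting the target point cloud that falls inside a three-dimensional frame described by a center position, size, and heading direction; the function name and the exact frame representation are assumptions.

```python
import numpy as np

def crop_points_in_frame(points: np.ndarray, center, size, yaw: float) -> np.ndarray:
    """Return the subset of `points` (N, 3) lying inside an oriented 3D frame.

    `center` is (cx, cy, cz), `size` is (length, width, height), and `yaw`
    is the frame's heading about the vertical axis, in radians.
    """
    local = points - np.asarray(center, dtype=float)
    c, s = np.cos(yaw), np.sin(yaw)
    # Rotate the horizontal coordinates by -yaw into the frame's local axes.
    local_xy = local[:, :2] @ np.array([[c, -s], [s, c]])
    half = np.asarray(size, dtype=float) / 2.0
    inside = (np.abs(local_xy[:, 0]) <= half[0]) & \
             (np.abs(local_xy[:, 1]) <= half[1]) & \
             (np.abs(local[:, 2]) <= half[2])
    return points[inside]
```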
Bounding box fitting of the target point cloud may be performed in a variety of ways, including geometric fitting, which is taken as the example here. After the computer device extracts the target point cloud within the three-dimensional frame, it projects the target point cloud onto a two-dimensional plane to generate two-dimensional point cloud data. In the two-dimensional plane, the main direction of the target point cloud is calculated, and a horizontal range is delimited by the extreme points (the maximum and minimum points) of the target point cloud along the main direction and the direction perpendicular to it, thereby determining the length and width of the target point cloud. The computer device then determines the height range of the target point cloud in the vertical direction, and generates a fitting frame corresponding to the target point cloud according to the horizontal range and the height range. Because each frame of point cloud data contains a plurality of target obstacles, the computer device needs to perform the fitting operation multiple times, and each target point cloud can yield a plurality of fitting frames, which form a fitting frame set. The computer device takes the center of the fitting frame set as the geometric center of the target point cloud, delimits a rectangular frame according to the geometric center, divides the rectangular frame into grid cells, adjusts the grid according to the points enclosed by each cell, and thereby fits a corresponding bounding box. The bounding box fits the target point cloud more closely, so that the classification of the point cloud data is more accurate, and the accuracy of classifying the target obstacle can be effectively improved.
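A simplified sketch of this geometric fitting is given below, assuming NumPy and a single target point cloud. The padding, the grid cell size, and the way the grid is "adjusted" (here, trimming unoccupied border cells) are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def fit_bounding_box(points: np.ndarray, cell: float = 0.1, pad: float = 0.5) -> dict:
    """Geometric bounding box fitting for one target point cloud of shape (N, 3)."""
    xy = points[:, :2]
    center = xy.mean(axis=0)          # stand-in for the fitting-frame-set center

    # Main direction on the horizontal plane from the covariance eigenvectors.
    cov = np.cov(xy - center, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    u = vecs[:, -1]                   # main direction
    v = np.array([-u[1], u[0]])       # perpendicular direction

    # Project onto the main and perpendicular directions.
    a = (xy - center) @ u
    b = (xy - center) @ v

    # Rectangular frame around the geometric center, padded and divided into grid cells.
    a0, a1 = a.min() - pad, a.max() + pad
    b0, b1 = b.min() - pad, b.max() + pad
    na = int(np.ceil((a1 - a0) / cell))
    nb = int(np.ceil((b1 - b0) / cell))
    grid = np.zeros((na, nb), dtype=bool)
    grid[((a - a0) / cell).astype(int), ((b - b0) / cell).astype(int)] = True

    # Adjust the frame to the occupied cells so the box hugs the target point cloud.
    rows = np.where(grid.any(axis=1))[0]
    cols = np.where(grid.any(axis=0))[0]
    range_u = (a0 + rows[0] * cell, a0 + (rows[-1] + 1) * cell)
    range_v = (b0 + cols[0] * cell, b0 + (cols[-1] + 1) * cell)
    range_z = (points[:, 2].min(), points[:, 2].max())   # height range

    return {"center": center, "axes": (u, v),
            "range_u": range_u, "range_v": range_v, "range_z": range_z}
```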
After identifying the bounding box of each target obstacle, the computer device extracts global feature information corresponding to each target obstacle according to the bounding box. Global feature information is feature information that reflects the overall characteristics of a target obstacle. The global feature information includes: the position and pose of the point cloud bounding box, the size of the point cloud bounding box, the point cloud inertia tensor, the point cloud covariance matrix, point cloud feature values, the point cloud height range, point cloud reflection intensity statistics, and the like. Because of occlusion and similar conditions in an outdoor environment, the lidar cannot scan all local features of the target obstacle. Moreover, in an outdoor application scenario such as automatic driving, the computer device only needs to identify the target obstacles around the vehicle and does not need to identify every specific part of each target obstacle. Therefore, by identifying the bounding box of each target obstacle, the plurality of target obstacles scanned in the outdoor environment can be effectively segmented, which facilitates classification of each target obstacle.
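To make the feature list concrete, here is a minimal sketch of computing these global features from one target point cloud. The per-point layout (x, y, z, reflection intensity), the flat `box` layout, and the exact statistical definitions are assumptions, not the patent's prescriptions.

```python
import numpy as np

def extract_global_features(points: np.ndarray, box: np.ndarray) -> dict:
    """points: (N, 4) array of x, y, z, reflection intensity for one obstacle;
    box: flat array [cx, cy, cz, length, width, height, yaw] of its bounding box."""
    xyz, intensity = points[:, :3], points[:, 3]
    centered = xyz - xyz.mean(axis=0)
    x, y, z = centered[:, 0], centered[:, 1], centered[:, 2]

    # Point cloud inertia tensor (treating every point as unit mass).
    inertia = np.array([
        [np.sum(y**2 + z**2), -np.sum(x * y),       -np.sum(x * z)],
        [-np.sum(x * y),       np.sum(x**2 + z**2), -np.sum(y * z)],
        [-np.sum(x * z),      -np.sum(y * z),        np.sum(x**2 + y**2)],
    ])

    cov = np.cov(centered, rowvar=False)      # point cloud covariance matrix
    eigvals = np.linalg.eigvalsh(cov)         # point cloud feature (eigen)values

    return {
        "box_pose_size": np.asarray(box, dtype=float),   # position, pose and size
        "inertia_tensor": inertia,
        "covariance": cov,
        "eigenvalues": eigvals,
        "height_range": np.array([xyz[:, 2].min(), xyz[:, 2].max()]),
        "intensity_stats": np.array([intensity.mean(), intensity.std(),
                                     intensity.min(), intensity.max()]),
    }
```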
A pre-trained classifier is stored on the computer device. The classifier is obtained by training on point cloud data (also called training data) corresponding to various obstacles in an outdoor environment. A variety of machine learning classifiers may be employed, such as AdaBoost, random forest, XGBoost, and the like.
The computer device converts the global feature information corresponding to each target obstacle into a feature vector corresponding to that target obstacle, and inputs the feature vectors corresponding to the target obstacles into the classifier. The computer device can acquire the feature sequence used for the global features during classifier training and convert the global feature information into the feature vector corresponding to the target obstacle according to that feature sequence. The computer device may generate a corresponding feature vector for each target obstacle, i.e., it may calculate a feature vector separately for each of the plurality of target obstacles scanned in the outdoor environment. The computer device may also concatenate the feature vectors corresponding to the individual target obstacles to generate a total feature vector corresponding to the plurality of target obstacles.
The computer device may input the feature vector corresponding to each target obstacle into the trained classifier and perform the prediction operation with the classifier, thereby outputting the category corresponding to each target obstacle. The computer device may alternatively input the total feature vector corresponding to the plurality of target obstacles into the trained classifier, perform the prediction operation with the classifier, and sequentially output the category corresponding to each target obstacle. Categories include background, pedestrians, motor vehicles, non-motor vehicles, and the like. Classification of the plurality of target obstacles in the outdoor environment is thereby completed.
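The following sketch shows one way this could look, assuming the feature dictionary from the earlier sketch, a fixed feature sequence recorded at training time, and a trained scikit-learn style classifier exposing `predict`. The category list is taken from the text above, while the names and ordering are illustrative.

```python
import numpy as np

# Feature sequence recorded when the classifier was trained (illustrative order).
FEATURE_SEQUENCE = ["box_pose_size", "inertia_tensor", "covariance",
                    "eigenvalues", "height_range", "intensity_stats"]

CATEGORIES = ["background", "pedestrian", "motor vehicle", "non-motor vehicle"]

def to_feature_vector(features: dict, sequence=FEATURE_SEQUENCE) -> np.ndarray:
    """Flatten one obstacle's global features in the training-time feature sequence."""
    return np.concatenate([np.ravel(features[name]) for name in sequence])

def classify_obstacles(classifier, per_obstacle_features: list) -> list:
    """Predict one category per target obstacle from its global feature dictionaries."""
    vectors = np.stack([to_feature_vector(f) for f in per_obstacle_features])
    return [CATEGORIES[int(i)] for i in classifier.predict(vectors)]
```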
In this embodiment, point cloud data corresponding to a plurality of target obstacles in an outdoor environment are collected by the lidar. After the computer device acquires the point cloud data, it identifies the bounding box of each target obstacle according to the point cloud data, so that the plurality of target obstacles in the outdoor environment can be preliminarily identified. Global feature information corresponding to each target obstacle is extracted using the point cloud data, taken as the input of the classifier, and subjected to the classifier's prediction operation, which outputs the category corresponding to each target obstacle. Because the classifier has been trained, each target obstacle in the outdoor environment can be accurately classified.
Further, in conventional point cloud data classification methods, because local features of each dimension of the target obstacle need to be extracted, the required local feature information must be obtained by traversing the point cloud data three times, giving a time complexity of O(n³). In this embodiment, the required global feature information can be extracted by traversing the point cloud data only once, giving a time complexity of O(n). The time complexity is effectively reduced, the classification efficiency of the point cloud data is improved, and the classification efficiency of target obstacles in the outdoor environment is improved accordingly.
It should be understood that, although the steps in the flowchart of FIG. 1 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages need not be performed in sequence; they may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
In one embodiment, the method further comprises: acquiring training data and labels corresponding to the training data, wherein the training data comprises point cloud data acquired in an outdoor environment in advance; extracting training global feature information from the training data, wherein the training global feature information corresponds to the label; and generating a training feature vector by using the extracted training global feature information, and training the classifier by using the training feature vector and the label.
The computer device collects in advance point cloud data of a large number of obstacles (which may also be referred to as training obstacles) in an outdoor environment as training data. The training data have corresponding labels. A label may represent the category of the point cloud data, i.e., the category of the training obstacle. Categories include background, pedestrians, motor vehicles, non-motor vehicles, and the like. A label may be stored in the form of the bounding box of the training obstacle. The computer device extracts a set of training global feature information for each item of point cloud data (training data). The training global features include the position, pose, and size of the point cloud bounding box, the point cloud inertia tensor, the point cloud covariance matrix, point cloud feature values, the point cloud height range, point cloud reflection intensity statistics, and the like.
The computer device may arrange the training global features according to a preset feature sequence and convert the arranged training global features to obtain the training feature vector corresponding to each training obstacle. The computer device may generate a corresponding training feature vector for each training obstacle separately, or it may concatenate the feature vectors corresponding to the individual training obstacles to generate a total training feature vector corresponding to the plurality of training obstacles.
The computer device may input the training feature vector corresponding to each training obstacle into the classifier and train the classifier so that the classifier outputs the category corresponding to each training obstacle. The computer device may alternatively input the total training feature vector corresponding to the plurality of training obstacles into the classifier and train the classifier so that it sequentially outputs the category corresponding to each training obstacle.
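As a sketch of this training flow, assuming a scikit-learn random forest as one of the classifier choices mentioned above and reusing the `to_feature_vector` helper from the earlier sketch (both are assumptions, not the patent's prescribed implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_classifier(training_features: list, training_labels: list):
    """training_features: one global-feature dict per labelled training obstacle;
    training_labels: the matching category indices (e.g. 0 = background)."""
    X = np.stack([to_feature_vector(f) for f in training_features])
    y = np.asarray(training_labels)
    classifier = RandomForestClassifier(n_estimators=200)
    classifier.fit(X, y)        # learn category boundaries from the labelled vectors
    return classifier
```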
By training the classifier with a large amount of point cloud data corresponding to training obstacles, the trained classifier can classify point cloud data accurately, thereby improving the accuracy of obstacle classification.
In one embodiment, extracting global feature information corresponding to each target obstacle includes: calling a plurality of threads, and extracting the global feature information corresponding to each target obstacle through the plurality of threads. Converting the global feature information into the feature vector corresponding to the target obstacle according to the feature sequence includes: converting the global feature information corresponding to each target obstacle through the plurality of threads according to the feature sequence to obtain the feature vector corresponding to each target obstacle.
In order to effectively improve the classification efficiency of the point cloud data and of the target obstacles, the computer device may also invoke multiple threads for concurrent processing. Specifically, after identifying the bounding box of each target obstacle, the computer device may generate a feature extraction task corresponding to each target obstacle. The computer device calls a plurality of threads, which execute the feature extraction tasks in parallel, i.e., the threads extract the global feature information corresponding to the plurality of target obstacles in parallel. If the number of target obstacles is greater than the number of threads, the threads first extract in parallel the global feature information corresponding to some of the target obstacles and, after that work finishes, extract in parallel the global feature information corresponding to the remaining target obstacles. After all feature extraction tasks have been executed, the threads are released.
After extracting the global feature information corresponding to each target obstacle, the computer device may also generate a feature conversion task corresponding to each target obstacle. The computer device may invoke multiple threads, which execute the feature conversion tasks concurrently. The threads acquire the feature sequence corresponding to the global feature information in the classifier training process and concurrently convert the global feature information corresponding to the plurality of target obstacles according to the feature sequence to obtain the feature vector corresponding to each target obstacle.
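A minimal sketch of this concurrent processing, assuming Python's standard thread pool and the extraction and conversion helpers sketched earlier:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_and_convert_concurrently(target_clouds, boxes, max_workers=8):
    """target_clouds and boxes hold one point array and one fitted box per obstacle."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Feature extraction tasks run in parallel, one per target obstacle.
        features = list(pool.map(extract_global_features, target_clouds, boxes))
        # Feature conversion tasks likewise run in parallel, in the feature sequence.
        vectors = list(pool.map(to_feature_vector, features))
    return vectors
```

In CPython the benefit depends on how much of the per-obstacle work releases the interpreter lock (NumPy's native routines largely do); a process pool could be substituted if the per-obstacle workload is heavier.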
Extracting the global feature information of the plurality of target obstacles concurrently with multiple threads, and converting it concurrently as well, effectively improves the classification efficiency of the point cloud data and thus of the target obstacles.
In one embodiment, as shown in fig. 2, there is provided a classification apparatus of point cloud data, including: a data acquisition module 202, a feature extraction module 204, a classification module 206, wherein:
the data acquisition module 202 is configured to acquire point cloud data corresponding to a plurality of target obstacles in an outdoor environment.
The feature extraction module 204 is configured to identify bounding boxes corresponding to each target obstacle according to the point cloud data, and extract global feature information corresponding to each target obstacle according to the bounding boxes.
The classification module 206 is configured to invoke the trained classifier, input the global feature information corresponding to each target obstacle into the classifier, perform a prediction operation on the feature information through the classifier, and output the category corresponding to each target obstacle.
In one embodiment, the feature extraction module is further configured to calculate a three-dimensional frame corresponding to the target obstacle using the point cloud data; and extracting a target point cloud in the three-dimensional frame, and performing bounding box fitting operation on the target point cloud to obtain a bounding box corresponding to the target obstacle.
In one embodiment, the data acquisition module is further configured to acquire training data and a tag corresponding to the training data, where the training data includes point cloud data acquired in advance in an outdoor environment; the feature extraction module is further configured to extract training global feature information from the training data, wherein the training global feature information corresponds to the tag; the apparatus further comprises: a training module configured to generate training feature vectors using the extracted training global feature information, and to train the classifier with the training feature vectors and the labels.
In one embodiment, the feature extraction module is further configured to obtain a feature sequence corresponding to the classifier in the training process; the global feature information is converted into feature vectors corresponding to the target obstacle in the feature order.
In one embodiment, the feature extraction module is further configured to invoke a plurality of threads, and extract global feature information corresponding to each target obstacle through multithreading; and converting global characteristic information corresponding to each target obstacle by utilizing multithreading according to the characteristic sequence to obtain a characteristic vector corresponding to each target obstacle.
In one embodiment, the global feature information includes a point cloud bounding box position, pose, and size, a point cloud inertia tensor, a point cloud covariance matrix, point cloud feature values, a point cloud height range, and point cloud reflection intensity statistics.
For specific limitations of the classification device of point cloud data, reference may be made to the above limitations of the classification method of point cloud data, which are not repeated here. Each of the above modules in the classification device of point cloud data may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 3. The computer device includes a processor, a memory, a communication interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing point cloud data and the like. The communication interface of the computer device is used for connecting and communicating with the laser radar. The computer program, when executed by a processor, implements a method of classifying point cloud data.
It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of a portion of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory storing a computer program and a processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the respective method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above examples express only a few embodiments of the application, which are described in detail but are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (10)

1. A method of classifying point cloud data, the method comprising:
acquiring point cloud data corresponding to a plurality of target obstacles in an outdoor environment;
identifying a three-dimensional frame corresponding to each target obstacle according to the point cloud data;
extracting a target point cloud in the three-dimensional frame, determining a horizontal range and a height range of the target point cloud, and generating a fitting frame set corresponding to the target point cloud according to the horizontal range and the height range of the target point cloud;
taking the fitting frame set center as a geometric center of the target point cloud, and defining a rectangular frame according to the geometric center;
dividing the rectangular frame into grid cells, adjusting the grid according to the points enclosed by the cells, fitting the adjusted grid to obtain a bounding box corresponding to the target obstacle, and extracting global characteristic information corresponding to each target obstacle according to the bounding box; the global characteristic information comprises a point cloud inertia tensor;
invoking a trained classifier, and inputting global characteristic information corresponding to each target obstacle into the classifier;
and performing a prediction operation on the global characteristic information through the classifier, and outputting the category corresponding to each target obstacle.
2. The method according to claim 1, wherein the method further comprises:
acquiring training data and labels corresponding to the training data, wherein the training data comprises point cloud data acquired in an outdoor environment in advance;
extracting training global feature information from the training data, wherein the training global feature information corresponds to a label;
and generating a training feature vector by using the extracted training global feature information, and training the classifier through the training feature vector and the label.
3. The method of claim 1, wherein prior to said inputting global characteristic information corresponding to each target obstacle into the classifier, the method further comprises:
acquiring a corresponding feature sequence of the classifier in the training process;
and converting the global feature information into feature vectors corresponding to the target obstacle according to the feature sequence.
4. A method according to claim 3, wherein extracting global feature information corresponding to each target obstacle comprises: calling a plurality of threads, and extracting global characteristic information corresponding to each target obstacle through the plurality of threads;
the converting the global feature information into feature vectors corresponding to the target obstacle according to the feature sequence includes: and converting global characteristic information corresponding to each target obstacle by utilizing the threads according to the characteristic sequence to obtain a characteristic vector corresponding to each target obstacle.
5. The method of any of claims 1-4, wherein the global feature information further comprises a point cloud bounding box position, pose, and size, a point cloud covariance matrix, point cloud feature values, a point cloud height range, and point cloud reflection intensity statistics.
6. A classification device for point cloud data, the device comprising:
the data acquisition module is used for acquiring point cloud data corresponding to a plurality of target obstacles in an outdoor environment;
the feature extraction module is used for identifying a three-dimensional frame corresponding to each target obstacle according to the point cloud data; extracting a target point cloud in the three-dimensional frame, determining a horizontal range and a height range of the target point cloud, and generating a fitting frame set corresponding to the target point cloud according to the horizontal range and the height range of the target point cloud; taking the fitting frame set center as a geometric center of the target point cloud, and defining a rectangular frame according to the geometric center; dividing the rectangular frame into grid cells, adjusting the grid according to the points enclosed by the cells, fitting the adjusted grid to obtain a bounding box corresponding to the target obstacle, and extracting global characteristic information corresponding to each target obstacle according to the bounding box; the global characteristic information comprises a point cloud inertia tensor;
the classification module is used for calling the trained classifier and inputting the global characteristic information corresponding to each target obstacle into the classifier; and performing a prediction operation on the global characteristic information through the classifier, and outputting the category corresponding to each target obstacle.
7. The apparatus of claim 6, wherein the feature extraction module is further to: acquiring a corresponding feature sequence of the classifier in the training process; and converting the global feature information into feature vectors corresponding to the target obstacle according to the feature sequence.
8. The apparatus of claim 6, wherein the data acquisition module is further configured to acquire training data and a tag corresponding to the training data, the training data including point cloud data acquired in advance in an outdoor environment; the feature extraction module is also used for extracting training global feature information from the training data, wherein the training global feature information corresponds to the tag;
the apparatus further comprises:
and the training module is used for generating training feature vectors by using the extracted training global feature information, and training the classifier through the training feature vectors and the labels.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 5 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 5.
CN201910151657.0A 2019-02-28 2019-02-28 Classification method and device for point cloud data, computer equipment and storage medium Active CN111626314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910151657.0A CN111626314B (en) 2019-02-28 2019-02-28 Classification method and device for point cloud data, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910151657.0A CN111626314B (en) 2019-02-28 2019-02-28 Classification method and device for point cloud data, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111626314A CN111626314A (en) 2020-09-04
CN111626314B true CN111626314B (en) 2023-11-07

Family

ID=72272429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910151657.0A Active CN111626314B (en) 2019-02-28 2019-02-28 Classification method and device for point cloud data, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111626314B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347999B (en) * 2021-01-07 2021-05-14 深圳市速腾聚创科技有限公司 Obstacle recognition model training method, obstacle recognition method, device and system
CN113239719B (en) * 2021-03-29 2024-04-19 深圳元戎启行科技有限公司 Trajectory prediction method and device based on abnormal information identification and computer equipment
CN113359758A (en) * 2021-06-30 2021-09-07 山东新一代信息产业技术研究院有限公司 Environment cost map generation method and system based on artificial potential field method
CN113807184A (en) * 2021-08-17 2021-12-17 北京百度网讯科技有限公司 Obstacle detection method and device, electronic equipment and automatic driving vehicle
CN114485700A (en) * 2021-12-29 2022-05-13 网络通信与安全紫金山实验室 High-precision dynamic map generation method and device
CN115457496B (en) * 2022-09-09 2023-12-08 北京百度网讯科技有限公司 Automatic driving retaining wall detection method and device and vehicle
CN115980702B (en) * 2023-03-10 2023-06-06 安徽蔚来智驾科技有限公司 Target false detection prevention method, device, driving device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709475A (en) * 2017-01-22 2017-05-24 百度在线网络技术(北京)有限公司 Obstacle recognition method and device, computer equipment and readable storage medium
CN106845412A (en) * 2017-01-20 2017-06-13 百度在线网络技术(北京)有限公司 Obstacle recognition method and device, computer equipment and computer-readable recording medium
CN106951847A (en) * 2017-03-13 2017-07-14 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium
CN107451602A (en) * 2017-07-06 2017-12-08 浙江工业大学 A kind of fruits and vegetables detection method based on deep learning
CN108931983A (en) * 2018-09-07 2018-12-04 深圳市银星智能科技股份有限公司 Map constructing method and its robot
CN109344804A (en) * 2018-10-30 2019-02-15 百度在线网络技术(北京)有限公司 A kind of recognition methods of laser point cloud data, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109101861A (en) * 2017-06-20 2018-12-28 百度在线网络技术(北京)有限公司 Obstacle identity recognition methods, device, equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845412A (en) * 2017-01-20 2017-06-13 百度在线网络技术(北京)有限公司 Obstacle recognition method and device, computer equipment and computer-readable recording medium
CN106709475A (en) * 2017-01-22 2017-05-24 百度在线网络技术(北京)有限公司 Obstacle recognition method and device, computer equipment and readable storage medium
CN106951847A (en) * 2017-03-13 2017-07-14 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium
CN107451602A (en) * 2017-07-06 2017-12-08 浙江工业大学 A kind of fruits and vegetables detection method based on deep learning
CN108931983A (en) * 2018-09-07 2018-12-04 深圳市银星智能科技股份有限公司 Map constructing method and its robot
CN109344804A (en) * 2018-10-30 2019-02-15 百度在线网络技术(北京)有限公司 A kind of recognition methods of laser point cloud data, device, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张穗华 et al. Research on obstacle detection methods based on 3D lidar. Development & Innovation of Machinery & Electrical Products. 2016, pp. 14-17. *
曾钰廷. Research on object detection and tracking methods based on deep learning. China Master's Theses Full-text Database, Information Science and Technology. 2018, pp. 7-8, 51-65. *

Also Published As

Publication number Publication date
CN111626314A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111626314B (en) Classification method and device for point cloud data, computer equipment and storage medium
CN111160302B (en) Obstacle information identification method and device based on automatic driving environment
Akyon et al. Slicing aided hyper inference and fine-tuning for small object detection
Simonelli et al. Disentangling monocular 3d object detection
CN111353512B (en) Obstacle classification method, obstacle classification device, storage medium and computer equipment
US11151734B2 (en) Method and system for generating synthetic point cloud data using a generative model
CN112232293B (en) Image processing model training method, image processing method and related equipment
WO2021134296A1 (en) Obstacle detection method and apparatus, and computer device and storage medium
US11556745B2 (en) System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
US9355320B2 (en) Blur object tracker using group lasso method and apparatus
US20160210530A1 (en) Fast object detection method based on deformable part model (dpm)
CN106709475B (en) Obstacle recognition method and device, computer equipment and readable storage medium
CN111191600A (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
US10430691B1 (en) Learning method and learning device for object detector based on CNN, adaptable to customers' requirements such as key performance index, using target object merging network and target region estimating network, and testing method and testing device using the same to be used for multi-camera or surround view monitoring
JP2020080141A (en) Vehicle classification from laser scanner using fisher and profile signature
WO2021134285A1 (en) Image tracking processing method and apparatus, and computer device and storage medium
CN110298281B (en) Video structuring method and device, electronic equipment and storage medium
US11410388B1 (en) Devices, systems, methods, and media for adaptive augmentation for a point cloud dataset used for training
WO2021134258A1 (en) Point cloud-based target tracking method and apparatus, computer device and storage medium
US20230386242A1 (en) Information processing apparatus, control method, and non-transitory storage medium
CN115004259B (en) Object recognition method, device, computer equipment and storage medium
CN114830177A (en) Electronic device and method for controlling the same
Wu et al. Realtime single-shot refinement neural network with adaptive receptive field for 3D object detection from LiDAR point cloud
US20220270327A1 (en) Systems and methods for bounding box proposal generation
US20210357763A1 (en) Method and device for performing behavior prediction by using explainable self-focused attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant