CN112733885A - Point cloud identification model determining method and point cloud identification method and device - Google Patents

Point cloud identification model determining method and point cloud identification method and device

Info

Publication number
CN112733885A
CN112733885A (application CN202011541308.9A)
Authority
CN
China
Prior art keywords
point cloud
identification model
cloud identification
feature map
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011541308.9A
Other languages
Chinese (zh)
Inventor
聂泳忠 (Nie Yongzhong)
杨素伟 (Yang Suwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiren Ma Diyan Beijing Technology Co ltd
Original Assignee
Xiren Ma Diyan Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiren Ma Diyan Beijing Technology Co., Ltd.

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/231Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

The application discloses a point cloud identification model determining method, a point cloud identification method and a point cloud identification device, wherein the determining method comprises the following steps: clustering acquired point cloud sample data by using a preset clustering algorithm to obtain category information of the point cloud; determining convolution kernel information of a neural network according to the category information; constructing an initial point cloud identification model according to the convolution kernel information of the neural network; and training the initial point cloud identification model by using the point cloud sample data to obtain a target point cloud identification model. With the method and the device, the point cloud identification model can learn point cloud information in a more targeted manner, thereby improving the identification efficiency and accuracy of the point cloud identification model.

Description

Point cloud identification model determining method and point cloud identification method and device
Technical Field
The application belongs to the technical field of computers, and particularly relates to a point cloud identification model determining method, a point cloud identification method and device, and a computer storage medium.
Background
A point cloud generally refers to a set of data points on the appearance surface of an object obtained by a measuring instrument. Point cloud data is widely applied in scenarios such as automatic driving, security monitoring and intelligent traffic.
In order to obtain high-quality point cloud features, the point cloud sample data needs to be accurately identified and analyzed, removing useless information while keeping the original point cloud information as complete as possible.
In the related art, when a model based on the KCNet neural network is used to identify and analyze point cloud sample data, although the extraction of local features can be optimized, the problems of low identification efficiency and low accuracy still exist.
Disclosure of Invention
The point cloud identification model determining method, point cloud identification method, apparatus, device and computer storage medium provided by the embodiments of the application enable the point cloud identification model to learn point cloud information in a more targeted manner, thereby improving the identification efficiency and accuracy of the point cloud identification model.
In a first aspect, an embodiment of the present application provides a method for determining a point cloud identification model, including:
clustering the acquired point cloud sample data by using a preset clustering algorithm to obtain the category information of the point cloud;
determining convolution kernel information of the neural network according to the category information;
constructing an initial point cloud identification model according to the convolution kernel information of the neural network;
and training the initial point cloud identification model by using the point cloud sample data to obtain a target point cloud identification model.
Optionally, the training the initial point cloud identification model by using the point cloud sample data to obtain a target point cloud identification model includes:
performing feature extraction on the point cloud sample data by using a feature extraction network to obtain a first feature map;
performing feature fusion on the first feature map and original map information of the point cloud sample data to obtain a second feature map;
determining point cloud feature map information corresponding to the second feature map according to the second feature map by using an attention network unit;
and training the point cloud identification model by using the point cloud characteristic diagram information to obtain a target point cloud identification model.
Optionally, the extracting the features of the point cloud sample data by using a feature extraction network to obtain a first feature map includes:
calculating K nearest neighbor points corresponding to the point cloud sample data;
and calculating to obtain a first feature map according to the K nearest neighbor point and the convolution kernel information.
Optionally, the determining, by the attention network unit, point cloud feature map information corresponding to the second feature map according to the second feature map includes:
acquiring weight information corresponding to the second characteristic diagram;
and according to the weight information corresponding to the second feature map, performing self-attention calculation on the second feature map to obtain point cloud feature map information corresponding to the second feature map.
Optionally, the training the point cloud identification model by using the point cloud feature map information to obtain a target point cloud identification model includes:
performing image up-sampling calculation on the point cloud characteristic map information to obtain first point cloud characteristic map information;
performing image down-sampling calculation on the first point cloud feature map information to obtain a loss function value of the point cloud identification model;
adjusting model parameters of the point cloud identification model to be trained according to the loss function values;
and performing iterative training on the adjusted point cloud identification model by using the point cloud sample data until a preset training stopping condition is met to obtain the target point cloud identification model.
In a second aspect, an embodiment of the present application provides a method for point cloud identification, where the method includes:
acquiring point cloud data to be processed;
and inputting the point cloud data into a target point cloud identification model trained by using the method of the first aspect or any optional implementation of the first aspect, and outputting an identification result of the point cloud data.
In a third aspect, an embodiment of the present application provides an apparatus for determining a point cloud identification model, where the apparatus includes:
the clustering module is used for clustering the acquired point cloud sample data by using a preset clustering algorithm to obtain the category information of the point cloud;
the determining module is used for determining convolution kernel information of the neural network according to the category information;
the building module is used for building an initial point cloud identification model according to the convolution kernel information of the neural network;
and the training module is used for training the initial point cloud identification model by using the point cloud sample data to obtain a target point cloud identification model.
In a fourth aspect, an embodiment of the present application provides an apparatus for point cloud identification, where the apparatus includes:
the acquisition module is used for acquiring point cloud data to be processed;
and the identification module is used for inputting the point cloud data into a target point cloud identification model trained by using the method of the first aspect or any optional implementation of the first aspect, and outputting an identification result of the point cloud data.
In a fifth aspect, an embodiment of the present application provides a device for determining a point cloud identification model, where the device includes:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the method of determining a point cloud identification model as described in the first aspect and optional aspects of the first aspect.
In a sixth aspect, an embodiment of the present application provides a device for point cloud identification, where the device includes:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the method of point cloud identification as described in the second aspect.
In a seventh aspect, the present application provides a computer storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the method for determining a point cloud identification model according to the first aspect or any optional implementation of the first aspect, and the method of point cloud identification according to the second aspect.
The point cloud identification model determining method, point cloud identification method, apparatus, device and computer storage medium can perform cluster analysis on point cloud data by using a preset clustering algorithm, determine the convolution kernel information of the point cloud identification model based on the clustering result, and train the target point cloud identification model. In this way, the point cloud identification model can learn point cloud information in a more targeted manner, and the identification efficiency and accuracy of the point cloud identification model are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic view of an implementation scenario of a method for determining a point cloud identification model according to some embodiments of the present application;
FIG. 2 is a schematic flow chart diagram of a method for determining a point cloud identification model according to some embodiments of the present disclosure;
FIG. 3 is a schematic flow chart of training a target point cloud identification model provided by some embodiments of the present application;
FIG. 4 is a schematic diagram of an attention network element provided by some embodiments of the present application;
FIG. 5 is a schematic diagram of a network structure of a point cloud identification model provided by some embodiments of the present application;
FIG. 6 is a schematic flow chart diagram of a method of point cloud identification provided by some embodiments of the present application;
FIG. 7 is a schematic structural diagram of an apparatus for determining a point cloud identification model according to some embodiments of the present disclosure;
FIG. 8 is a schematic structural diagram of an apparatus for point cloud identification provided by some embodiments of the present application;
fig. 9 is a schematic hardware structure diagram of a point cloud identification model determining device or a point cloud identification device provided by some embodiments of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
With the development of science and technology, point cloud data has been widely applied to scenes such as automatic driving, security monitoring, intelligent traffic and the like.
In an automatic driving scenario, pre-recognition and semantic understanding of obstacles in and alongside the road, based on point cloud data acquired by measuring equipment such as radar sensors, are very important. The quality of the target detection and semantic segmentation algorithms applied to traffic-obstacle point cloud data directly affects the subsequent multi-sensor fusion processing. Identification of point cloud data may include, for example, image segmentation, object recognition and edge extraction. To better identify point cloud data, high-quality local point cloud features need to be acquired and multi-layer neural network feature map fusion needs to be performed on the point cloud sample data, keeping the original information (i.e., the main features) as complete as possible while removing useless information from the signal.
In the related art, point cloud data is analyzed using recognition models based on the PointNet and PointNet++ neural network structures. PointNet avoids result differences caused by the unordered nature of point clouds through a symmetric max-pooling (max_pooling) function, and learns feature vectors through T-Net to maintain rotation invariance of the point cloud, where the MLP feature extraction of each point is weight-shared. In addition, in order to fully learn the point cloud information, the point cloud data is expanded to 64 dimensions through local embedding, and feature fusion among the multi-layer network feature maps is realized. The PointNet++ algorithm model optimizes neighborhood local feature extraction on this basis: it proposes a multi-level feature extraction structure, realizes multi-scale and multi-resolution feature combination, and solves the problem of inconsistent density in downsampling.
A point cloud identification model with the KCNet neural network structure attempts to improve PointNet, while keeping the network structure simple, by mimicking the convolution process used on images and by a learnable local feature extraction method with a geometric interpretation. A kernel correlation algorithm is used to mimic image convolution, constructing K-nearest-neighbor graphs that are equivalent to convolution kernels, where each kernel has M points and is used to extract the kernel correlation of the local geometric structure, with max-pooling performed recursively in each point neighborhood.
However, the related-art method of identifying point cloud data based on the KCNet point cloud identification model still has shortcomings: for example, the learning efficiency is not high, and the convolution kernels chosen for learning are determined somewhat blindly, so the accuracy of point cloud identification is not high.
To solve these problems in the prior art, the embodiments of the application provide a point cloud identification model determining method, a point cloud identification method, an apparatus, a device and a computer storage medium, so that the point cloud identification model can learn point cloud information in a more targeted manner, improving its identification efficiency and accuracy.
The following describes a method for determining a point cloud identification model, a method, an apparatus, a device and a computer storage medium provided by an embodiment of the present application, with reference to the accompanying drawings. It should be noted that these examples are not intended to limit the scope of the present disclosure.
Fig. 1 is a schematic view of an implementation scenario of a method for determining a point cloud identification model according to some embodiments of the present application. As shown in fig. 1, a training sample set, i.e., point cloud sample data, is formed by using historical point cloud data. Then, the computing platform trains the point cloud identification model with the training sample set. After the point cloud identification model is obtained through training, the point cloud data to be identified can be sent to a computing platform, the point cloud data is automatically identified through the point cloud identification model, the results of target detection and semantic segmentation corresponding to the point cloud data are obtained, and the object represented by the point cloud is effectively identified.
The method for determining the point cloud identification model provided by the embodiment of the application is described below.
Fig. 2 is a schematic flow chart of a method for determining a point cloud identification model according to some embodiments of the present disclosure. In some embodiments of the present application, as shown in fig. 2, the method may be embodied as the following steps:
s101: and clustering the acquired point cloud sample data by using a preset clustering algorithm to obtain the category information of the point cloud.
The point cloud sample data can be historical point cloud data, and the historical point cloud data is used as a training sample data set for determining the point cloud identification model.
In some embodiments of the present application, the point cloud sample data may comprise annotated point cloud data.
The preset clustering algorithm can comprise K-means clustering, hierarchical clustering algorithm, SOM clustering algorithm, FCM clustering algorithm and the like. It is understood that, in practical applications, other existing clustering algorithms can be selected and used according to specific requirements.
Illustratively, a K-means clustering algorithm is used to perform cluster analysis on N × 3 point cloud sample data, where N is the number of points and each point is represented by its three coordinates (x, y, z). From the cluster analysis result, the category information of the point cloud can be obtained, for example, S categories.
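As a minimal sketch of this clustering step, assuming plain NumPy and K-means (the function and parameter names are illustrative, and in practice any of the clustering algorithms listed above could be substituted):

```python
import numpy as np

def kmeans_categories(points, s, iters=20, seed=0):
    """Cluster an (N, 3) point cloud into S categories with plain K-means.

    Returns (labels, centroids). Names and defaults are illustrative; the
    text also allows hierarchical, SOM or FCM clustering here.
    """
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=s, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old centroid if a cluster is empty.
        for j in range(s):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

pts = np.random.default_rng(1).normal(size=(300, 3))  # N x 3 sample data
labels, cents = kmeans_categories(pts, s=4)           # S = 4 categories
L = 2 * len(cents)                                    # formula (1): L = 2*S
```

The number of categories S found here then fixes the convolution kernel count L via formula (1) in the next step.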
S102: and determining convolution kernel information of the neural network according to the category information.
On the basis of the category information, the convolution kernel information of the neural network can be obtained through calculation.
In some embodiments of the present application, the convolution kernel information may be the number of convolution kernels. The number of convolution kernels is a hyper-parameter of the neural network; in machine learning, a hyper-parameter is a parameter whose value is set before the learning process starts.
For example, when the number of point cloud categories is S, the number of convolution kernels L can be calculated using the following formula (1):
L=2*S (1)
where L and S are positive integers.
S103: and constructing an initial point cloud identification model according to the convolution kernel information of the neural network.
S104: and training the initial point cloud identification model by using point cloud sample data to obtain a target point cloud identification model.
In some embodiments of the present application, first, as shown in fig. 3, fig. 3 is a schematic flow chart of a training target point cloud identification model provided in some embodiments of the present application, and the training target point cloud identification model may be implemented as the following steps:
s1041: and performing feature extraction on the point cloud sample data by using a feature extraction network to obtain a first feature map.
The feature extraction network may be a neural network based on the K-Nearest-Neighbor Graph (KNNG) algorithm.
First, the K nearest neighbor points corresponding to the point cloud sample data are calculated. Then, the first feature map is calculated according to the K nearest neighbor points and the convolution kernel information.
In some embodiments of the present application, for each N x 3 point cloud sample, the K nearest neighbors of every point are found, yielding K x N x 3 points, and all points of one sample share one identical graph built from the Euclidean neighborhood of each point.
L different kernel correlations between the K nearest neighbors of each point in the point cloud sample data and the L convolution kernels are then calculated, where each kernel contains M learnable three-dimensional points. The specific calculation, formula (2), is as follows:
KC(i) = (1/|N(i)|) · Σ_{m=1..M} Σ_{n∈N(i)} K_σ(k_m, x_n − x_i) (2)
where k_m is the m-th learnable point in the convolution kernel, N(i) is the neighbor set of the current point x_i, x_n is one of the K neighbors of x_i, and x_n − x_i denotes the offset between the two points. K_σ(x, y) represents any valid kernel function. To store the local neighborhoods of points efficiently, the K-nearest-neighbor graph KNNG, whose edges connect only adjacent vertices, is pre-computed by treating each point as a vertex.
In some embodiments of the present application, K_σ(x, y) may be a Gaussian kernel, as shown in formula (3):
K_σ(x, y) = exp( −‖x − y‖² / (2σ²) ) (3)
where ‖x − y‖ represents the Euclidean distance between the two points, and σ represents the kernel width. The Gaussian kernel function decays exponentially with the distance between the two points.
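Under the assumptions that formula (2) uses the Gaussian kernel of formula (3) and that the kernel points are held fixed rather than learned by backpropagation, this kernel correlation computation might be sketched as follows (all names, shapes and defaults are illustrative):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Formula (3): K_sigma(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2.0 * sigma ** 2))

def kernel_correlation(points, kernels, k=8, sigma=1.0):
    """Formula (2) for every point: averaged kernel response over K neighbors.

    points: (N, 3) cloud; kernels: (L, M, 3) kernel points (fixed here,
    learnable in the actual model). Returns an (N, L) response map.
    """
    n = len(points)
    # Brute-force K-nearest-neighbor graph: edges connect adjacent vertices.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    knn = np.argsort(dists, axis=1)[:, :k]          # (N, K) neighbor indices
    out = np.zeros((n, len(kernels)))
    for i in range(n):
        offsets = points[knn[i]] - points[i]        # x_n - x_i for n in N(i)
        for l, kern in enumerate(kernels):          # kern: (M, 3)
            # Sum K_sigma(k_m, x_n - x_i) over m and n, normalized by |N(i)|.
            resp = gaussian_kernel(kern[:, None, :], offsets[None, :, :], sigma)
            out[i, l] = resp.sum() / k
    return out

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))                      # small example cloud
kers = rng.normal(scale=0.1, size=(6, 4, 3))        # L=6 kernels, M=4 points
feat = kernel_correlation(pts, kers)                # first-feature-map sketch
```

The brute-force pairwise distance matrix stands in for the pre-computed KNNG; a real implementation would index the graph once and reuse it.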
S1042: and performing feature fusion on the first feature map and the original map information of the point cloud sample data to obtain a second feature map.
S1043: and determining point cloud characteristic map information corresponding to the second characteristic map according to the second characteristic map by using an attention network unit.
The attention network unit may comprise the Scaled Dot-Product Attention of a Self-Attention mechanism, see fig. 4, where fig. 4 is a schematic diagram of an attention network unit provided in some embodiments of the present application. Here q(x) represents a Query vector, k(x) represents a Key vector, and v(x) represents a Value vector. First, matrix multiplication (MatMul) is performed to calculate the similarity or correlation between q(x) and each k(x); the result is passed through a softmax layer and then matrix-multiplied (MatMul) with v(x); finally, a weighted (Add) calculation combined with the point cloud sample data yields the final attention weight.
In some embodiments of the present application, first, weight information corresponding to the second feature map may be obtained. And then, according to the weight information corresponding to the second feature map, performing self-attention calculation on the second feature map to obtain point cloud feature map information corresponding to the second feature map.
Illustratively, a plurality of different (W_Q, W_K, W_V) weight matrices may be obtained. The second feature map is linearly transformed according to the different weights to obtain n different (Q, K, V) dense matrices, and attention weights from a variety of angles are obtained through self-attention calculation. The point cloud feature map information corresponding to the second feature map is then calculated according to the attention weights.
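A minimal NumPy sketch of the scaled dot-product self-attention step described above (the projection sizes and variable names are assumptions; the real model would learn W_Q, W_K, W_V during training):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(x, w_q, w_k, w_v):
    """softmax(Q K^T / sqrt(d_k)) V applied to a feature map x of shape (N, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])   # MatMul between queries and keys
    weights = softmax(scores, axis=-1)        # softmax layer
    return weights @ v, weights               # MatMul with the value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))                 # second feature map: 32 points, 16-dim
w_q, w_k, w_v = (rng.normal(size=(16, 8)) for _ in range(3))
attended, attn = scaled_dot_product_attention(x, w_q, w_k, w_v)
```

Running this with n different (W_Q, W_K, W_V) triples, as the text suggests, would yield n attention maps over the same second feature map.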
S1044: and training the point cloud identification model by using the point cloud characteristic diagram information to obtain a target point cloud identification model.
In some embodiments of the present application, first, an image up-sampling calculation may be performed on the point cloud feature map information to obtain first point cloud feature map information. An image down-sampling calculation is then performed on the first point cloud feature map information to obtain a loss function value of the point cloud identification model, and the model parameters of the point cloud identification model to be trained are adjusted according to the loss function value.
Finally, the adjusted point cloud identification model is iteratively trained with the point cloud sample data until a preset stop-training condition is met, obtaining the target point cloud identification model.
In some embodiments of the present application, the image up-sampling calculation may refer to a dimension-expansion calculation in the network layer, and the image down-sampling calculation to a dimension-reduction calculation.
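As a schematic of the training step described above (up-sample, down-sample, compute a loss value, adjust parameters, iterate until a stop condition is met), treating up-sampling and down-sampling as linear dimension expansion and reduction and using a reconstruction error as a stand-in loss, since the text does not spell out the actual loss function:

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.normal(size=(64, 8))               # point cloud feature map info
w_up = rng.normal(scale=0.1, size=(8, 32))    # up-sampling: expand dimensions
w_down = rng.normal(scale=0.1, size=(32, 8))  # down-sampling: reduce dimensions

lr, losses = 0.5, []
for step in range(200):
    up = feat @ w_up                          # first point cloud feature map info
    rec = up @ w_down                         # back down to the original dimension
    err = rec - feat
    loss = float(np.mean(err ** 2))           # loss function value (stand-in)
    losses.append(loss)
    if loss < 1e-4:                           # preset stop-training condition
        break
    # Adjust model parameters: gradients of the MSE through both linear layers.
    g_rec = 2.0 * err / err.size
    g_down = up.T @ g_rec
    g_up = feat.T @ (g_rec @ w_down.T)
    w_down -= lr * g_down
    w_up -= lr * g_up
```

The loss decreases over the iterations, mirroring the iterate-until-stop-condition loop the embodiment describes; the real model would instead backpropagate through the full feature extraction and attention network.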
It is understood that the target point cloud identification model may be a point cloud identification model with an improved KCNet neural network structure.
Fig. 5 is a schematic network structure diagram of a point cloud identification model according to some embodiments of the present application.
In some embodiments of the application, the point cloud identification model clusters the input point cloud data with K-means or another clustering algorithm to obtain the number of convolution kernels and to determine the number of learnable points in each convolution kernel. When the kernel-correlation KNNG computation is performed, the local feature information of the point cloud sample data's local geometric structure can thus be better extracted. Second, an attention unit mechanism is added to the point cloud identification model on top of the original feature fusion; that is, the model is a neural network based on feature fusion and an attention mechanism. The KCNet-based recognition model is thereby improved, mitigating the loss of key features in the feature maps caused by the different receptive-field capabilities of different network layers. The point cloud identification model can effectively capture more of the key information fused between layers and stably achieves better performance on mainstream datasets.
In some embodiments of the present application, the point cloud identification model is a neural network based on feature fusion and an attention mechanism. Referring to fig. 5, a corresponding attention mechanism may be added at each network layer involved in feature fusion.
In summary, the method for determining a point cloud identification model in the embodiments of the present application performs cluster analysis on point cloud data using a preset clustering algorithm, determines the convolution kernel information of the point cloud identification model based on the clustering result, and trains the target point cloud identification model. The point cloud identification model can therefore learn point cloud information in a more targeted manner, improving the identification efficiency and accuracy of the point cloud identification model.
In addition, because the point cloud identification model includes the attention network unit, attention weights from different angles can be obtained through the attention calculation, which provides a certain data-enhancement effect. The stitching transformation of the feature maps may offset the relative importance of the layers and suppress critical information in the deep layers of the network. With the structure based on feature fusion and an attention mechanism, the related features in all channel maps can be integrated, the channel maps in which deep and shallow layers are correlated can be selectively emphasized, and the deep-layer semantics can help the attention unit find useful information in the shallow layers of the network. Feature fusion and the attention mechanism complement each other, so that information from more angles can be learned, further improving the identification accuracy of the point cloud identification model.
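The feature-fusion-plus-attention idea can be illustrated with a minimal channel-attention sketch. Channel maps are assumed to be plain lists of activations, and the scoring rule (mean activation followed by a softmax) is an illustrative stand-in for the attention unit's actual computation, which the text does not spell out.

```python
import math

def channel_attention(shallow, deep):
    """Simplified sketch: concatenate shallow and deep channel maps,
    score each channel, and reweight channels with softmax attention
    weights so more informative channels are emphasized. The mean-
    activation score is an illustrative assumption."""
    fused = shallow + deep                        # channel-wise concatenation
    scores = [sum(ch) / len(ch) for ch in fused]  # one score per channel map
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]           # attention weights, sum to 1
    return [[w * v for v in ch] for ch, w in zip(fused, weights)]
```

Channels with higher scores receive larger weights, so the fused output selectively emphasizes them, in the spirit of the mechanism described above.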
Fig. 6 is a schematic flow chart of a point cloud identification method provided by some embodiments of the present application. As shown in fig. 6, in some embodiments, the point cloud identification model of the above embodiments may be used to better identify the point cloud data to be analyzed. The identification method may include:
S601: acquiring point cloud data to be processed.
It can be understood that the point cloud data may be data to be detected and semantically segmented. The point cloud data may include newly acquired data and historical data, and may include unlabeled data.
S602: and inputting the point cloud data into the point cloud identification model, and outputting an identification result of the point cloud data.
By automatically identifying the point cloud data with the point cloud identification model, an identification result corresponding to the point cloud data is obtained and the object represented by the point cloud is effectively identified. For example, in an autonomous driving scenario, the automatically identified point cloud data results may include pedestrians, cars, bicycles, roads, road accessories, and the like.
In summary, the point cloud identification method in the embodiments of the present application automatically identifies point cloud data using the target point cloud identification model of the above embodiments; because the point cloud identification model can learn point cloud information in a more targeted manner, the identification efficiency and accuracy of the point cloud identification model are further improved.
Based on the method for determining a point cloud identification model provided by the above embodiments, the present application correspondingly further provides a specific implementation of a device for determining a point cloud identification model. See the following embodiments.
Fig. 7 is a schematic structural diagram of a device for determining a point cloud identification model according to some embodiments of the present application. As shown in fig. 7, the apparatus for determining a point cloud identification model includes:
the clustering module 701 is configured to cluster the acquired point cloud sample data by using a preset clustering algorithm to obtain category information of the point cloud;
a determining module 702, configured to determine convolution kernel information of the neural network according to the category information;
a building module 703, configured to build an initial point cloud identification model according to the convolution kernel information of the neural network;
a training module 704, configured to train the initial point cloud identification model by using the point cloud sample data to obtain a target point cloud identification model.
Therefore, the device for determining a point cloud identification model in the embodiments of the present application can be used to execute the method for determining a point cloud identification model in the above embodiments: performing cluster analysis on point cloud data using a preset clustering algorithm, determining convolution kernel information of the point cloud identification model based on the clustering result, and training the target point cloud identification model. The point cloud identification model can therefore learn point cloud information in a more targeted manner, improving its identification efficiency and accuracy.
Each module/unit in the device for determining a point cloud identification model shown in fig. 7 has the function of implementing each step in figs. 2 and 3 and can achieve the corresponding technical effect; for brevity, details are not repeated here.
Based on the method for point cloud identification provided by the embodiment, correspondingly, the application further provides a specific implementation mode of the device for point cloud identification. Please see the examples below.
Fig. 8 is a schematic structural diagram of an apparatus for point cloud identification according to some embodiments of the present application. As shown in fig. 8, the apparatus for point cloud identification includes:
an obtaining module 801, configured to obtain point cloud data to be processed;
the identification module 802 is configured to input the point cloud data into the point cloud identification model, and output an identification result of the point cloud data.
Fig. 9 is a schematic hardware structure diagram of a point cloud identification model determining device or a point cloud identification device provided by some embodiments of the present application.
The apparatus for determining a point cloud identification model or the apparatus for point cloud identification may comprise a processor 901 and a memory 902 storing computer program instructions.
Specifically, the processor 901 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 902 may include mass storage for data or instructions. By way of example and not limitation, memory 902 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 902 may include removable or non-removable (or fixed) media, where appropriate. The memory 902 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 902 is a non-volatile solid-state memory. In a particular embodiment, the memory 902 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), flash memory, or a combination of two or more of these.
The processor 901 reads and executes the computer program instructions stored in the memory 902 to implement the determination method of the point cloud identification model or the point cloud identification method in any one of the above embodiments.
In one example, the determining device of the point cloud identification model or the device of the point cloud identification may further include a communication interface 903 and a bus 910. As shown in fig. 9, the processor 901, the memory 902, and the communication interface 903 are connected via a bus 910 to complete communication with each other.
The communication interface 903 is mainly used for implementing communication between modules, apparatuses, units and/or devices in this embodiment of the application.
Bus 910 includes hardware, software, or both to couple the components of the device for determining the point cloud identification model or the device for point cloud identification to each other. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a VESA Local Bus (VLB), another suitable bus, or a combination of two or more of these. Bus 910 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The point cloud identification model determining device or the point cloud identification device may perform the point cloud identification model determining method or the point cloud identification method in the embodiment of the present application, so as to implement the point cloud identification model determining method described in conjunction with fig. 2 and 3 or implement the point cloud identification method described in conjunction with fig. 6.
In addition, in combination with the method for determining a point cloud identification model or the method for point cloud identification in the above embodiments, the embodiments of the present application may provide a computer storage medium. The computer storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement the method for determining a point cloud identification model or the method for point cloud identification in any of the above embodiments.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As described above, only the specific embodiments of the present application are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.

Claims (11)

1. A method for determining a point cloud identification model is characterized by comprising the following steps:
clustering the acquired point cloud sample data by using a preset clustering algorithm to obtain the category information of the point cloud;
determining convolution kernel information of the neural network according to the category information;
constructing an initial point cloud identification model according to the convolution kernel information of the neural network;
and training the initial point cloud identification model by using the point cloud sample data to obtain a target point cloud identification model.
2. The method of claim 1, wherein training the initial point cloud identification model using the point cloud sample data to obtain a target point cloud identification model comprises:
performing feature extraction on the point cloud sample data by using a feature extraction network to obtain a first feature map;
performing feature fusion on the first feature map and original map information of the point cloud sample data to obtain a second feature map;
determining point cloud feature map information corresponding to the second feature map according to the second feature map by using an attention network unit;
and training the point cloud identification model by using the point cloud characteristic diagram information to obtain a target point cloud identification model.
3. The method of claim 2, wherein the extracting the features of the point cloud sample data by using the feature extraction network to obtain a first feature map comprises:
calculating K nearest neighbor points corresponding to the point cloud sample data;
and calculating to obtain a first feature map according to the K nearest neighbor point and the convolution kernel information.
4. The method of claim 2, wherein the determining, by the attention network unit, point cloud feature map information corresponding to the second feature map according to the second feature map comprises:
acquiring weight information corresponding to the second characteristic diagram;
and according to the weight information corresponding to the second feature map, performing self-attention calculation on the second feature map to obtain point cloud feature map information corresponding to the second feature map.
5. The method of claim 2, wherein training the point cloud identification model using the point cloud feature map information to obtain a target point cloud identification model comprises:
performing image up-sampling calculation on the point cloud characteristic map information to obtain first point cloud characteristic map information;
performing image down-sampling calculation on the first point cloud feature map information to obtain a loss function value of the point cloud identification model;
adjusting model parameters of the point cloud identification model to be trained according to the loss function values;
and performing iterative training on the adjusted point cloud identification model by using the point cloud sample data until a preset training stopping condition is met to obtain the target point cloud identification model.
6. A method of point cloud identification, the method comprising:
acquiring point cloud data to be processed;
inputting the point cloud data into a target point cloud identification model obtained by training according to the method of any one of claims 1 to 5, and outputting the identification result of the point cloud data.
7. An apparatus for determining a point cloud identification model, the apparatus comprising:
the clustering module is used for clustering the acquired point cloud sample data by using a preset clustering algorithm to obtain the category information of the point cloud;
the determining module is used for determining convolution kernel information of the neural network according to the category information;
the building module is used for building an initial point cloud identification model according to the convolution kernel information of the neural network;
and the training module is used for training the initial point cloud identification model by using the point cloud sample data to obtain a target point cloud identification model.
8. An apparatus for point cloud identification, the apparatus comprising:
the acquisition module is used for acquiring point cloud data to be processed;
a recognition module, configured to input the point cloud data into a target point cloud recognition model trained by the method according to any one of claims 1 to 5, and output a recognition result of the point cloud data.
9. An apparatus for determining a point cloud identification model, the apparatus comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the method of determining a point cloud identification model of any of claims 1 to 5.
10. An apparatus for point cloud identification, the apparatus comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the method of point cloud identification of claim 6.
11. A computer storage medium, characterized in that the computer storage medium has stored thereon computer program instructions which, when executed by a processor, implement the method of determining a point cloud identification model according to any one of claims 1 to 5 and the method of point cloud identification according to claim 6.
CN202011541308.9A 2020-12-23 2020-12-23 Point cloud identification model determining method and point cloud identification method and device Pending CN112733885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011541308.9A CN112733885A (en) 2020-12-23 2020-12-23 Point cloud identification model determining method and point cloud identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011541308.9A CN112733885A (en) 2020-12-23 2020-12-23 Point cloud identification model determining method and point cloud identification method and device

Publications (1)

Publication Number Publication Date
CN112733885A true CN112733885A (en) 2021-04-30

Family

ID=75604685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011541308.9A Pending CN112733885A (en) 2020-12-23 2020-12-23 Point cloud identification model determining method and point cloud identification method and device

Country Status (1)

Country Link
CN (1) CN112733885A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723468A (en) * 2021-08-06 2021-11-30 西南科技大学 Object detection method of three-dimensional point cloud
CN113723468B (en) * 2021-08-06 2023-08-04 西南科技大学 Object detection method of three-dimensional point cloud
CN115249349A (en) * 2021-11-18 2022-10-28 上海仙途智能科技有限公司 Point cloud denoising method, electronic device and storage medium
CN114677322A (en) * 2021-12-30 2022-06-28 东北农业大学 Milk cow body condition automatic scoring method based on attention-guided point cloud feature learning
CN115019105A (en) * 2022-06-24 2022-09-06 厦门大学 Latent semantic analysis method, device, medium and equipment of point cloud classification model
CN114882020A (en) * 2022-07-06 2022-08-09 深圳市信润富联数字科技有限公司 Method, device and equipment for detecting defects of product and computer readable medium
CN114882020B (en) * 2022-07-06 2022-11-11 深圳市信润富联数字科技有限公司 Product defect detection method, device, equipment and computer readable medium
CN115294343A (en) * 2022-07-13 2022-11-04 苏州驾驶宝智能科技有限公司 Point cloud feature enhancement method based on cross-position and channel attention mechanism
CN116091777A (en) * 2023-02-27 2023-05-09 阿里巴巴达摩院(杭州)科技有限公司 Point Yun Quanjing segmentation and model training method thereof and electronic equipment

Similar Documents

Publication Publication Date Title
CN112733885A (en) Point cloud identification model determining method and point cloud identification method and device
Jin Kim et al. Learned contextual feature reweighting for image geo-localization
US10984659B2 (en) Vehicle parking availability map systems and methods
Tao et al. Scene context-driven vehicle detection in high-resolution aerial images
CN110263920B (en) Convolutional neural network model, training method and device thereof, and routing inspection method and device thereof
Zhang et al. Road recognition from remote sensing imagery using incremental learning
CN111738032B (en) Vehicle driving information determination method and device and vehicle-mounted terminal
CN116704431A (en) On-line monitoring system and method for water pollution
CN111199558A (en) Image matching method based on deep learning
CN111062347B (en) Traffic element segmentation method in automatic driving, electronic equipment and storage medium
CN112149612A (en) Marine organism recognition system and recognition method based on deep neural network
CN115457288A (en) Multi-target tracking method and device based on aerial view angle, storage medium and equipment
Ding et al. Efficient vanishing point detection method in unstructured road environments based on dark channel prior
Liang et al. Car detection and classification using cascade model
CN116861262B (en) Perception model training method and device, electronic equipment and storage medium
Rani et al. ShortYOLO-CSP: a decisive incremental improvement for real-time vehicle detection
CN113129336A (en) End-to-end multi-vehicle tracking method, system and computer readable medium
EP3764335A1 (en) Vehicle parking availability map systems and methods
Al Mamun et al. Efficient lane marking detection using deep learning technique with differential and cross-entropy loss.
CN114972492A (en) Position and pose determination method and device based on aerial view and computer storage medium
CN111832463A (en) Deep learning-based traffic sign detection method
CN116468895A (en) Similarity matrix guided few-sample semantic segmentation method and system
CN115311652A (en) Object detection method and device, electronic equipment and readable storage medium
CN114972737A (en) Remote sensing image target detection system and method based on prototype comparison learning
CN115345806A (en) Object detection method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination