CN112330680A - Lookup table-based method for accelerating point cloud segmentation - Google Patents

Lookup table-based method for accelerating point cloud segmentation

Info

Publication number
CN112330680A
CN112330680A (application CN202011218060.2A)
Authority
CN
China
Prior art keywords
point cloud
point
pointnet
seg
basic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011218060.2A
Other languages
Chinese (zh)
Other versions
CN112330680B (en)
Inventor
李仁杰
朝红阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202011218060.2A priority Critical patent/CN112330680B/en
Publication of CN112330680A publication Critical patent/CN112330680A/en
Application granted granted Critical
Publication of CN112330680B publication Critical patent/CN112330680B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/901Indexing; Data structures therefor; Storage structures
    • G06F16/9017Indexing; Data structures therefor; Storage structures using directory or table look-up
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of 3D point cloud segmentation within computer vision, and particularly relates to a method for accelerating point cloud segmentation based on a lookup table. In the invention, principal component analysis is innovatively applied to point cloud segmentation processing, so that the segmentation network no longer depends on a spatial transformation network module; the point cloud segmentation network thereby gains rotation invariance while the amount of computation is reduced.

Description

Lookup table-based method for accelerating point cloud segmentation
Technical Field
The invention belongs to the field of 3D point cloud segmentation within computer vision, and particularly relates to a method for accelerating point cloud segmentation based on a lookup table.
Background
In unmanned systems, completing real-time segmentation of the acquired point cloud data is critical to forming real-time decisions. Over the past few years, much progress has been made on compressing models and accelerating inference for point cloud segmentation networks. For example, by assuming that the network parameters of each layer in the segmentation network are independent of one another, the influence of a given layer's parameter settings on the performance of the whole network can be explored with a controlled-variable method, so that unimportant connections in the network are pruned and the number of layers is reduced. Meanwhile, network parameters are quantized from floating-point to integer representations, so that the operations of the convolution, pooling and fully-connected layers of the deep neural network can be completed quickly. Such methods accelerate the segmentation process by reducing the number of network parameters and improving the network's operation speed. However, because they prune the segmentation network, both the number and the precision of the network parameters are reduced, which significantly degrades network performance and segmentation accuracy.
Disclosure of Invention
The invention provides a method for accelerating point cloud segmentation based on a lookup table, aiming to overcome at least one defect of the prior art: a lookup table is used to improve the efficiency of point cloud segmentation, greatly reducing the time spent on segmentation.
In order to solve the above technical problems, the invention adopts the following technical scheme: a method for accelerating point cloud segmentation based on a lookup table, comprising the following steps:
S1, carrying out point cloud data normalization processing to obtain a size-normalized point cloud;
S2, building and training the point cloud segmentation network PointNet; the point cloud segmentation problem can be regarded as a classification problem for each point in the point cloud; the input of the network is N×3 point cloud data, where N is the number of points in the point cloud and each point is represented by a three-dimensional coordinate; the network output is N×K, where K is the classification label of each point in the point cloud; after training, the network parameters are saved;
S3, establishing a feature lookup table;
S4, fine-tuning the PointNet_Seg_Basic_Cls part to further improve the segmentation accuracy of the segmentation network;
S5, rapidly segmenting the point cloud: the features of each point in the point cloud are obtained through the feature lookup table and input into the fine-tuned PointNet_Seg_Basic_Cls network to obtain the point's classification result; integrating the classification of every point finally yields the segmentation result of the point cloud.
Further, the step S1 specifically includes:
S11, point cloud translation normalization is achieved through center-of-gravity translation. First, the center of gravity of the point cloud is calculated according to the formula

P_center = (1/N) Σ_{i=1}^{N} P_i

and then each point of the point cloud is translated by the center of gravity, the translated coordinate of each point being P'_i = P_i - P_center;
S12, realizing point cloud rotation normalization by using a principal component analysis method;
and S13, utilizing an axis-aligned bounding box to realize point cloud size normalization.
Further, the step S12 specifically includes:
S121, calculating the 3×3 covariance matrix of the point cloud, solving its three eigenvalues by eigendecomposition, arranging them from large to small as λ_1 ≥ λ_2 ≥ λ_3, and forming the rotation matrix R = (e_1, e_2, e_3) from the corresponding eigenvectors e_1, e_2, e_3;
S122, projecting each point of the point cloud into the spatial coordinate system formed by e_1, e_2, e_3; each point of the point cloud is projected as P_new = R^{-1} P_i = R^T P_i.
Further, the step S13 specifically includes:
S131, finding the two extreme points of the point cloud in the x, y and z directions, P_min(x_min, y_min, z_min) and P_max(x_max, y_max, z_max), and using the three length components obtained by subtracting the minimum point from the maximum point as the length, width and height of the axis-aligned bounding box;
S132, dividing the three components of each point of the point cloud by the length, width and height of the bounding box respectively, obtaining a size-normalized point cloud whose coordinates are distributed in the range [-1, 1].
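The three normalization steps above (center-of-gravity translation, PCA rotation, bounding-box scaling) can be sketched in NumPy as follows. This is an illustrative sketch under the stated steps, not the patented implementation; the function name is chosen for illustration:

```python
import numpy as np

def normalize_point_cloud(points):
    """Normalize an (N, 3) point cloud per steps S11-S13: translate to
    the center of gravity, rotate into the PCA frame, and scale by the
    axis-aligned bounding box. Illustrative sketch only."""
    # S11: translation normalization via the center of gravity
    pts = points - points.mean(axis=0)
    # S12: rotation normalization via principal component analysis;
    # eigenvectors of the 3x3 covariance matrix, sorted by descending
    # eigenvalue, form the rotation matrix R = (e1, e2, e3)
    cov = np.cov(pts.T)                     # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    R = eigvecs[:, ::-1]                    # columns e1, e2, e3, descending
    pts = pts @ R                           # P_new = R^T P_i for each point
    # S13: size normalization by the axis-aligned bounding box
    extent = pts.max(axis=0) - pts.min(axis=0)
    pts = pts / extent                      # coordinates now lie in [-1, 1]
    return pts
```

Because the cloud is centered before scaling, every coordinate magnitude is bounded by the bounding-box extent, so dividing by the extent places all coordinates in [-1, 1] as step S132 requires.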
Further, the step S2 specifically includes:
S21, removing the two T-nets in the PointNet segmentation network PointNet_Seg and keeping only the simplest PointNet_Seg_Basic network structure; because principal component analysis has already rotation-normalized the point cloud during normalization, the point cloud representation has rotation invariance, so a T-net structure is not needed in the point cloud segmentation network; using the structurally simple PointNet_Seg_Basic as the segmentation network has the advantage of saving computation;
and S22, training the PointNet_Seg_Basic network with the training set and saving the network parameters with the best segmentation accuracy.
Further, in step S22, the network parameters are divided into two parts: the parameters for extracting the features of each point of the point cloud, denoted PointNet_Seg_Basic_Features_Parameters, and the parameters for classifying the features of each point, denoted PointNet_Seg_Basic_Cls_Parameters.
Further, the step S3 specifically includes:
S31, after size normalization, the input point cloud is fixed in the space V = [-1, 1]^3; V is divided into S^3 disjoint, independent voxels, each with edge length δ = 2/S and volume 8/S^3; S is a variable, and different values of S can be selected as required to divide the space V;
S32, numbering the S^3 voxels obtained from V with a three-dimensional coordinate (i, j, k) ∈ [0, S]^3 to identify each voxel, where (0, 0, 0) represents the lower-left corner of V and (S, S, S) the upper-right corner;
S33, constructing a feature lookup table T[i][j][k][f], where i, j, k are voxel numbers and f represents the feature vector corresponding to the voxel numbered (i, j, k).
Further, the step S33 specifically includes:
S331, constructing a feature extraction network: the PointNet_Seg_Basic point cloud segmentation network is split into two parts, PointNet_Seg_Basic_Features and PointNet_Seg_Basic_Cls, where PointNet_Seg_Basic_Features comprises only the feature-extraction part of PointNet_Seg_Basic, and PointNet_Seg_Basic_Cls comprises only the network structure that classifies each point according to its features;
S332, restoring the saved PointNet_Seg_Basic_Features_Parameters into the parameters of the PointNet_Seg_Basic_Features network structure;
S333, generating a three-dimensional coordinate point for each voxel (i, j, k) in V, the coordinates of the point being (i×δ, j×δ, k×δ); passing the point through the PointNet_Seg_Basic_Features network structure to obtain its feature vector f, which serves as the feature vector corresponding to the voxel numbered (i, j, k); expressed as a formula: T[i][j][k][f] = PointNet_Seg_Basic_Features(iδ, jδ, kδ);
and S334, storing the obtained feature lookup table T[i][j][k][f].
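Steps S31–S334 can be sketched as follows: the feature network is evaluated once per voxel of V = [-1, 1]^3 and the results are stored as the table T[i][j][k][f]. This is a hedged sketch — `feature_fn` is a stand-in for PointNet_Seg_Basic_Features, and mapping the index (i, j, k) to the coordinate (i·δ − 1, j·δ − 1, k·δ − 1) inside V is an assumption of the sketch:

```python
import numpy as np

def build_feature_table(feature_fn, S):
    """Build the feature lookup table T[i][j][k][f] by running the
    feature-extraction network once for every voxel of V = [-1, 1]^3.
    `feature_fn` maps an (M, 3) array of points to (M, F) features, so
    any batched function (e.g. a trained network) can be plugged in."""
    delta = 2.0 / S                               # voxel edge length
    idx = np.arange(S)
    grid = np.stack(np.meshgrid(idx, idx, idx, indexing="ij"), axis=-1)
    coords = grid.reshape(-1, 3) * delta - 1.0    # representative point per voxel
    feats = np.asarray(feature_fn(coords))        # (S^3, F) feature vectors
    return feats.reshape(S, S, S, -1)             # T[i][j][k][f]
```

Plugging in the identity function as a dummy feature extractor produces a table whose entry at (i, j, k) is simply that voxel's representative coordinate, which makes the indexing easy to verify.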
Further, the step S4 specifically includes:
S41, after reading the feature lookup table into memory, the feature vector of each point of the point cloud is looked up through the table; for a point (u, v, w) in the point cloud, the number of its corresponding voxel in V is

i = ⌊(u + 1)/δ⌋, j = ⌊(v + 1)/δ⌋, k = ⌊(w + 1)/δ⌋

and the feature vector corresponding to [i, j, k] is found from the feature lookup table T[i][j][k][f] and used as the feature vector of the point (u, v, w);
S42, restoring the saved PointNet_Seg_Basic_Cls_Parameters into the parameters of the PointNet_Seg_Basic_Cls network structure;
S43, fine-tuning the parameters in PointNet_Seg_Basic_Cls with the normalized point cloud training data; the PointNet_Seg_Basic_Features part of the PointNet_Seg_Basic network is replaced with the feature lookup table T[i][j][k][f]; the training data query their feature vectors in the lookup table and are input into the PointNet_Seg_Basic_Cls network structure, and the parameters in PointNet_Seg_Basic_Cls are fine-tuned using the ground truth of the training data as the supervision signal;
S44, saving the fine-tuned parameters of PointNet_Seg_Basic_Cls into PointNet_Seg_Basic_Cls_Parameters.
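The table access of step S41 can be sketched as follows; the floor-based index formula is assumed, and points on the boundary u = 1 are clipped into the last voxel (a detail the text does not specify):

```python
import numpy as np

def lookup_features(points, T):
    """Replace the feature-extraction forward pass with lookup table
    accesses: map each normalized point (u, v, w) in [-1, 1]^3 to its
    voxel number (i, j, k) and read the stored feature vector."""
    S = T.shape[0]
    delta = 2.0 / S
    # i = floor((u + 1) / delta); clip so the boundary value 1 stays in range
    idx = np.clip(np.floor((points + 1.0) / delta).astype(int), 0, S - 1)
    return T[idx[:, 0], idx[:, 1], idx[:, 2]]
```

Because this is a pure array access, featurizing a point costs one index computation and one memory read, which is the source of the speedup the invention claims.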
Further, the step S5 specifically includes:
S51, reading the stored feature lookup table T[i][j][k][f] into memory;
S52, constructing the PointNet_Seg_Basic_Cls network structure and reading the fine-tuned PointNet_Seg_Basic_Cls_Parameters into PointNet_Seg_Basic_Cls;
S53, carrying out translation, rotation and size normalization on the test point cloud data;
S54, for each point of the test point cloud, acquiring its features from the feature lookup table and inputting them into the PointNet_Seg_Basic_Cls network to obtain the point's classification result; integrating the classification result of every point finally yields the segmentation result of the test point cloud.
In the invention, the idea of a lookup table is applied to the point cloud segmentation problem for the first time: the forward computation of a neural network is replaced by lookup table accesses, greatly accelerating the point cloud segmentation process. Principal component analysis is innovatively applied to point cloud segmentation processing, so that the segmentation network no longer depends on a spatial transformation network module; the network gains rotation invariance while the amount of computation is reduced.
Compared with the prior art, the beneficial effects are:
1. The invention replaces the feature-extraction structure of the point cloud segmentation network with a feature lookup table, so the time for extracting point cloud features depends entirely on memory-access time; the amount of computation is greatly reduced, since only the lookup table index needs to be computed. Compared with other point cloud segmentation acceleration techniques that still require complex network computation, the speed of point cloud segmentation is greatly improved. On an Intel Xeon E5-2673 v4 CPU, the existing faster point cloud segmentation network PointNet (with T-Net removed) needs 240 seconds on average to segment a point cloud of 500,000 points, while the segmentation technique provided by the invention needs only 5.8 seconds, a speedup of about 40 times.
2. The invention uses the feature lookup table to replace the feature-extraction structure of the point cloud segmentation network, saving the computation of the feature-extraction part, greatly simplifying the network structure and greatly reducing the computing resources occupied by point cloud classification, which frees the deep point cloud segmentation network from its dependence on the GPU. The method can be applied to mobile terminals or embedded devices with limited computing power, enabling real-time point cloud segmentation on such devices.
3. After the feature lookup table is constructed, the segmentation network is fine-tuned, so that while segmenting 40 times faster than the conventional point cloud segmentation network, the segmentation accuracy of the method remains essentially consistent with that of PointNet (with T-net removed). The method greatly accelerates the point cloud segmentation network without the severe drop in segmentation accuracy seen in other acceleration methods.
Drawings
FIG. 1 is a schematic view of the overall process of the present invention.
FIG. 2 is a flow chart of the lookup table and the segmentation network of the present invention.
FIG. 3 is a schematic diagram of point cloud normalization processing according to the present invention.
Fig. 4 is a schematic diagram of the network structure of the present invention.
FIG. 5 is a schematic diagram of a feature lookup table structure according to the present invention.
Fig. 6 is a schematic diagram of network feature extraction according to the present invention.
FIG. 7 is a schematic diagram of a point cloud segmentation structure according to the present invention.
Detailed Description
The drawings are for illustration purposes only and are not to be construed as limiting the invention; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the invention.
As shown in fig. 1 and 2, a method for accelerating point cloud segmentation based on a lookup table specifically includes the following steps:
step 1: as shown in fig. 3, the point cloud data is normalized, and the left graph in fig. 3 is the un-normalized point cloud and the right graph is the normalized point cloud. The normalization procedure was as follows:
S11, point cloud translation normalization is achieved through center-of-gravity translation. First, the center of gravity of the point cloud is calculated according to the formula

P_center = (1/N) Σ_{i=1}^{N} P_i

and then each point of the point cloud is translated by the center of gravity, the translated coordinate of each point being P'_i = P_i - P_center;
S12, realizing point cloud rotation normalization by using a principal component analysis method;
S121, calculating the 3×3 covariance matrix of the point cloud, solving its three eigenvalues by eigendecomposition, arranging them from large to small as λ_1 ≥ λ_2 ≥ λ_3, and forming the rotation matrix R = (e_1, e_2, e_3) from the corresponding eigenvectors e_1, e_2, e_3;
S122, projecting each point of the point cloud into the spatial coordinate system formed by e_1, e_2, e_3; each point of the point cloud is projected as P_new = R^{-1} P_i = R^T P_i.
S13, point cloud size normalization is achieved by using an axis-aligned bounding box;
S131, finding the two extreme points of the point cloud in the x, y and z directions, P_min(x_min, y_min, z_min) and P_max(x_max, y_max, z_max), and using the three length components obtained by subtracting the minimum point from the maximum point as the length, width and height of the axis-aligned bounding box;
S132, dividing the three components of each point of the point cloud by the length, width and height of the bounding box respectively, obtaining a size-normalized point cloud whose coordinates are distributed in the range [-1, 1].
Step 2: according to the network structure diagram shown in FIG. 4, a point cloud segmentation network is built and trained; the method specifically comprises the following steps:
S21, building and training the point cloud segmentation network. The point cloud segmentation problem can be viewed as a classification problem for each point in the point cloud. Therefore, the input of the network is N×3 point cloud data, where N is the number of points in the point cloud; the output of the network is N×K, where K is the classification label of each point. After training, the network parameters are saved.
S22, removing the two T-nets in the PointNet segmentation network and keeping only the simplest PointNet_Seg_Basic network structure. Because principal component analysis has already rotation-normalized the point cloud during normalization, the point cloud representation has rotation invariance, so a T-net structure is not required in the segmentation network, and the structurally simple PointNet_Seg_Basic (shown in fig. 4) is used as the point cloud segmentation network.
S23, training PointNet_Seg_Basic with the training set and storing the network parameters with the best segmentation accuracy; the network parameters are divided into two parts, the parameters for extracting the features of each point of the point cloud, denoted PointNet_Seg_Basic_Features_Parameters, and the parameters for classifying the features of each point, denoted PointNet_Seg_Basic_Cls_Parameters.
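The split of saved parameters into a feature-extraction part and a classification part (step S23) can be sketched as a simple key-prefix partition; the "features."/"cls." prefix names here are illustrative assumptions, not the checkpoint keys an actual framework would produce:

```python
def split_parameters(all_params):
    """Partition a saved parameter dict into PointNet_Seg_Basic_Features
    parameters and PointNet_Seg_Basic_Cls parameters by key prefix.
    The "features."/"cls." prefixes are hypothetical naming."""
    features = {k: v for k, v in all_params.items() if k.startswith("features.")}
    cls_params = {k: v for k, v in all_params.items() if k.startswith("cls.")}
    return features, cls_params
```

Keeping the two parts separate is what later allows PointNet_Seg_Basic_Features to be replaced by the lookup table while PointNet_Seg_Basic_Cls is restored and fine-tuned on its own.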
Step 3: a feature lookup table as shown in fig. 5 is established; each small grid in fig. 5 represents a voxel, in which the feature vector corresponding to that voxel is recorded. The detailed steps are as follows:
S31, after size normalization, the input point cloud is fixed in the space V = [-1, 1]^3; V is divided into S^3 disjoint, independent voxels, each with edge length δ = 2/S and volume 8/S^3; S is a variable, and different values of S can be selected as required to divide the space V;
S32, numbering the S^3 voxels obtained from V with a three-dimensional coordinate (i, j, k) ∈ [0, S]^3 to identify each voxel, where (0, 0, 0) represents the lower-left corner of V and (S, S, S) the upper-right corner;
S33, constructing a feature lookup table T[i][j][k][f], where i, j, k are voxel numbers and f represents the feature vector corresponding to the voxel numbered (i, j, k); the method specifically comprises the following steps:
S331, constructing a feature extraction network: the PointNet_Seg_Basic point cloud segmentation network is split into two parts, PointNet_Seg_Basic_Features and PointNet_Seg_Basic_Cls, where PointNet_Seg_Basic_Features comprises only the feature-extraction part of PointNet_Seg_Basic, and PointNet_Seg_Basic_Cls comprises only the network structure that classifies each point according to its features;
S332, restoring the saved PointNet_Seg_Basic_Features_Parameters into the parameters of the PointNet_Seg_Basic_Features network structure;
S333, generating a three-dimensional coordinate point for each voxel (i, j, k) in V, the coordinates of the point being (i×δ, j×δ, k×δ); passing the point through the PointNet_Seg_Basic_Features network structure to obtain its feature vector f, which serves as the feature vector corresponding to the voxel numbered (i, j, k); expressed as a formula: T[i][j][k][f] = PointNet_Seg_Basic_Features(iδ, jδ, kδ);
and S334, storing the obtained feature lookup table T[i][j][k][f].
Step 4: fine-tuning the PointNet_Seg_Basic_Cls part to avoid a drop in the segmentation accuracy of the segmentation network; the method specifically comprises the following steps:
S41, after reading the feature lookup table into memory, the feature vector of each point of the point cloud is looked up through the table; for a point (u, v, w) in the point cloud, the number of its corresponding voxel in V is

i = ⌊(u + 1)/δ⌋, j = ⌊(v + 1)/δ⌋, k = ⌊(w + 1)/δ⌋

and the feature vector corresponding to [i, j, k] is taken as the feature vector of the point;
S42, restoring the saved PointNet_Seg_Basic_Cls_Parameters into the parameters of the PointNet_Seg_Basic_Cls network structure;
S43, fine-tuning the parameters in PointNet_Seg_Basic_Cls with the normalized point cloud training data; as shown by the black arrows in fig. 6, the operations extracting the 64-, 128-, 256-, 512-, 1024- and 2048-dimensional feature vectors in the PointNet_Seg_Basic_Features part of the PointNet_Seg_Basic network are replaced by 64-, 128-, 256-, 512-, 1024- and 2048-dimensional feature lookup tables; after the training data query their feature vectors in each feature lookup table, all feature vectors are concatenated and input into the PointNet_Seg_Basic_Cls network, and the parameters in PointNet_Seg_Basic_Cls are fine-tuned using the ground truth of the training data as the supervision signal;
S44, saving the fine-tuned parameters of PointNet_Seg_Basic_Cls into PointNet_Seg_Basic_Cls_Parameters.
Step 5: as shown in fig. 7, the point cloud is rapidly segmented; the method specifically comprises the following steps:
S51, reading the feature lookup table T[i][j][k][f] of each dimension into memory;
S52, constructing the PointNet_Seg_Basic_Cls network structure and reading the fine-tuned PointNet_Seg_Basic_Cls_Parameters into PointNet_Seg_Basic_Cls;
S53, carrying out normalization processing on the test point cloud data;
S54, for each point of the test point cloud, acquiring feature vectors of different dimensions from the feature lookup table of each dimension, then concatenating all the feature vectors and inputting them into the PointNet_Seg_Basic_Cls network to obtain the point's classification result; integrating the classification result of every point finally yields the segmentation result of the test point cloud.
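Steps S51–S54 taken together amount to the following sketch: features come from the lookup table rather than a network forward pass, and a per-point classifier (standing in for the fine-tuned PointNet_Seg_Basic_Cls) produces each point's label. `classifier_fn` is a placeholder for illustration, not the patented network:

```python
import numpy as np

def segment_point_cloud(points, T, classifier_fn):
    """Fast segmentation of a normalized (N, 3) point cloud: a table
    lookup replaces feature extraction, then `classifier_fn` maps the
    (N, F) features to (N, K) class scores and argmax picks a label."""
    S = T.shape[0]
    delta = 2.0 / S
    idx = np.clip(np.floor((points + 1.0) / delta).astype(int), 0, S - 1)
    feats = T[idx[:, 0], idx[:, 1], idx[:, 2]]   # no network forward pass here
    scores = classifier_fn(feats)                # per-point class scores
    return scores.argmax(axis=1)                 # per-point classification labels
```

Integrating the per-point labels returned here is exactly the final aggregation of step S54: the array of labels is the segmentation result of the cloud.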
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A method for accelerating point cloud segmentation based on a lookup table, characterized by comprising the following steps:
S1, carrying out point cloud data normalization processing to obtain a size-normalized point cloud;
S2, building and training the point cloud segmentation network PointNet; the point cloud segmentation problem can be regarded as a classification problem for each point in the point cloud; the input of the network is N×3 point cloud data, where N is the number of points in the point cloud and each point is represented by a three-dimensional coordinate; the network output is N×K, where K is the classification label of each point in the point cloud; after training, the network parameters are saved;
S3, establishing a feature lookup table;
S4, fine-tuning the PointNet_Seg_Basic_Cls part to further improve the segmentation accuracy of the segmentation network;
S5, rapidly segmenting the point cloud: the features of each point in the point cloud are obtained through the feature lookup table and input into the fine-tuned PointNet_Seg_Basic_Cls network to obtain the point's classification result; integrating the classification of every point finally yields the segmentation result of the point cloud.
2. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 1, wherein the step S1 specifically comprises:
S11, point cloud translation normalization is achieved through center-of-gravity translation. First, the center of gravity of the point cloud is calculated according to the formula

P_center = (1/N) Σ_{i=1}^{N} P_i

and then each point of the point cloud is translated by the center of gravity, the translated coordinate of each point being P'_i = P_i - P_center;
S12, realizing point cloud rotation normalization by using a principal component analysis method;
and S13, utilizing an axis-aligned bounding box to realize point cloud size normalization.
3. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 2, wherein the step S12 specifically comprises:
S121, computing the 3×3 covariance matrix of the point cloud, solving for its three eigenvalues by eigen-decomposition, arranging them in descending order as λ_1 ≥ λ_2 ≥ λ_3, and forming the rotation matrix R = (e_1, e_2, e_3) from the corresponding eigenvectors e_1, e_2, e_3;
S122, projecting each point of the point cloud into the space coordinate system formed by e_1, e_2, e_3; each point P_i of the cloud is projected as P_new = R^{-1} P_i = R^T P_i.
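Steps S121 and S122 can be sketched with NumPy's eigen-decomposition (a minimal sketch; `rotate_normalize` is an illustrative name, not from the patent):

```python
import numpy as np

def rotate_normalize(points):
    """Align a point cloud with its principal axes (steps S121-S122).

    Eigen-decompose the 3x3 covariance matrix, sort eigenvalues in
    descending order, and project each point with R^T (= R^-1 since R
    is orthogonal). Points are row vectors, so P_new = points @ R.
    """
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)               # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov) # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]      # lambda_1 >= lambda_2 >= lambda_3
    R = eigvecs[:, order]                  # R = (e_1, e_2, e_3)
    return points @ R                      # row-vector form of R^T P_i
```

After projection the first coordinate axis carries the largest variance of the cloud, the third the smallest.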
4. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 3, wherein the step S13 specifically comprises:
S131, finding the extreme points of the point cloud in the x, y and z directions, P_min = (x_min, y_min, z_min) and P_max = (x_max, y_max, z_max), and using the three components of P_max − P_min as the length, width and height of the axis-aligned bounding box;
S132, dividing the three components of each point of the cloud by the length, width and height of the bounding box respectively to obtain the size-normalized point cloud, whose coordinates are distributed in the range [−1, 1].
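Steps S131 and S132 amount to a component-wise division by the bounding-box extents (a minimal sketch; `scale_normalize` is an illustrative name, and the cloud is assumed already centered by step S11, so the result stays within [−1, 1] as the claim states):

```python
import numpy as np

def scale_normalize(points):
    """Scale a centered point cloud into [-1, 1] using its axis-aligned
    bounding box (steps S131-S132)."""
    p_min = points.min(axis=0)   # (x_min, y_min, z_min)
    p_max = points.max(axis=0)   # (x_max, y_max, z_max)
    extent = p_max - p_min       # length, width, height of the box
    return points / extent       # divide each component by the box size
```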
5. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 4, wherein the step S2 specifically comprises:
S21, removing the two T-Nets from the PointNet segmentation network PointNet_Seg, keeping only the simplest PointNet_Seg_Basic network structure;
S22, training the PointNet_Seg_Basic network on the training set and saving the network parameters that achieve the best segmentation accuracy.
6. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 5, wherein in step S22 the saved network parameters are divided into two parts: the parameters for extracting the feature of each point in the point cloud, denoted PointNet_Seg_Basic_Features_Parameters, and the parameters for classifying the feature of each point, denoted PointNet_Seg_Basic_Cls_Parameters.
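The two-part split of the saved parameters in claim 6 can be sketched as a simple partition of a parameter dictionary (the prefixes "features." and "cls." are illustrative assumptions; the patent names the two groups but not a storage layout):

```python
def split_parameters(state):
    """Split saved network parameters into the two groups of claim 6:
    feature-extraction parameters vs. per-point classification parameters.
    """
    features = {k: v for k, v in state.items() if k.startswith("features.")}
    cls = {k: v for k, v in state.items() if k.startswith("cls.")}
    return features, cls
```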
7. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 6, wherein the step S3 specifically comprises:
S31, after size normalization the input point cloud is fixed in a space V, V = [−1, 1]³; V is divided into S³ disjoint, independent voxels, each with edge length δ = 2/S and volume 8/S³; S is a variable, and different values of S can be chosen as required to divide V: the accuracy improves slightly as S increases, while the memory occupied by the table grows accordingly;
S32, for the S³ voxels into which V is divided, a three-dimensional coordinate (i, j, k) ∈ [0, S]³ is used to identify each voxel, where (0, 0, 0) represents the lower-left corner of V and (S, S, S) the upper-right corner;
S33, constructing a feature lookup table T[i][j][k][f], where i, j, k are the voxel indices and f is the feature vector corresponding to the voxel (i, j, k).
8. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 7, wherein the step S33 specifically comprises:
S331, constructing the feature extraction network: the PointNet_Seg_Basic segmentation network is split into two parts, PointNet_Seg_Basic_Features and PointNet_Seg_Basic_Cls, where PointNet_Seg_Basic_Features contains only the feature extraction part of PointNet_Seg_Basic and PointNet_Seg_Basic_Cls contains only the structure that classifies each point according to its feature;
S332, restoring the saved PointNet_Seg_Basic_Features_Parameters as the parameters of the PointNet_Seg_Basic_Features network structure;
S333, generating a three-dimensional point for each voxel (i, j, k) in V, with coordinates (i·δ, j·δ, k·δ), δ being the voxel edge length; passing the generated point through the PointNet_Seg_Basic_Features network to obtain its feature vector f, which is taken as the feature vector of voxel (i, j, k); expressed as a formula:
T[i][j][k][f]=pointnet_seg_basic_features(iδ,jδ,kδ);
S334, storing the obtained feature lookup table T[i][j][k][f].
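Steps S331–S334 can be sketched as one vectorized pass over all voxels (a minimal sketch; `feature_fn` is a stand-in for the PointNet_Seg_Basic_Features network, which the patent defines but whose code is not given here):

```python
import numpy as np

def build_lookup_table(feature_fn, S, feat_dim):
    """Build the feature lookup table T[i][j][k] (steps S331-S334).

    feature_fn maps an (M, 3) array of points to (M, feat_dim) features.
    delta = 2/S is the voxel edge length; each voxel (i, j, k) contributes
    the generated point (i*delta, j*delta, k*delta) per the claim's formula.
    """
    delta = 2.0 / S
    # Enumerate all S^3 voxel indices (i, j, k) in row-major order.
    idx = np.stack(np.meshgrid(np.arange(S), np.arange(S), np.arange(S),
                               indexing="ij"), axis=-1).reshape(-1, 3)
    # T[i][j][k] = feature_fn(i*delta, j*delta, k*delta)
    feats = feature_fn(idx * delta)
    return feats.reshape(S, S, S, feat_dim)
```

The table is computed once offline and stored, which is what makes the later per-point inference a constant-time lookup instead of a forward pass through the feature extractor.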
9. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 8, wherein the step S4 specifically comprises:
S41, after reading the stored feature lookup table into memory, the feature vector of each point of the point cloud is found through the table; for a point (u, v, w) of the point cloud, the index of its corresponding voxel in V is

(i, j, k) = (⌊(u + 1)/δ⌋, ⌊(v + 1)/δ⌋, ⌊(w + 1)/δ⌋)

where δ is the voxel edge length; the entry [i, j, k] is then looked up in the feature table T[i][j][k][f] and its feature vector is taken as the feature vector of the point (u, v, w);
S42, restoring the saved PointNet_Seg_Basic_Cls_Parameters as the parameters of the PointNet_Seg_Basic_Cls network structure;
S43, fine-tuning the parameters of PointNet_Seg_Basic_Cls on the normalized point cloud training data: the PointNet_Seg_Basic_Features part of PointNet_Seg_Basic is replaced by the feature lookup table T[i][j][k][f], the training data queries its feature vectors from the lookup table, the feature vectors are input into the PointNet_Seg_Basic_Cls network structure, and the ground truth of the training data serves as the supervision signal for fine-tuning the parameters of PointNet_Seg_Basic_Cls;
S44, saving the parameters of the fine-tuned PointNet_Seg_Basic_Cls as PointNet_Seg_Basic_Cls_Parameters.
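The per-point table lookup of step S41 can be sketched as follows (a minimal sketch; `lookup_features` is an illustrative name, and the index formula follows the claim's voxel indexing over V = [−1, 1]³ with δ = 2/S):

```python
import numpy as np

def lookup_features(points, table):
    """Fetch per-point feature vectors from the lookup table (step S41).

    points: (N, 3) array with coordinates in [-1, 1].
    table:  (S, S, S, F) feature lookup table T[i][j][k][f].
    A point (u, v, w) maps to voxel (floor((u+1)/delta), ...).
    """
    S = table.shape[0]
    delta = 2.0 / S
    idx = np.floor((points + 1.0) / delta).astype(int)
    idx = np.clip(idx, 0, S - 1)   # keep boundary points (coordinate +1) in range
    return table[idx[:, 0], idx[:, 1], idx[:, 2]]
```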
10. The lookup table-based method for accelerating point cloud segmentation as claimed in claim 9, wherein the step S5 specifically comprises:
S51, reading the stored feature lookup table T[i][j][k][f] into memory;
S52, constructing the PointNet_Seg_Basic_Cls network structure and reading the fine-tuned PointNet_Seg_Basic_Cls_Parameters into it;
S53, performing translation, rotation and size normalization on the point cloud data;
S54, obtaining the feature of each point of the point cloud from the feature lookup table and inputting it into the PointNet_Seg_Basic_Cls network to obtain that point's classification result; the per-point classification results are combined to give the final segmentation result of the point cloud.
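The fast inference pipeline of claim 10 can be sketched end to end (a minimal sketch; `cls_fn` is a stand-in for the fine-tuned PointNet_Seg_Basic_Cls network mapping (N, F) features to (N, K) class scores, and the normalization of step S53 is assumed done upstream so that points lie in [−1, 1]):

```python
import numpy as np

def segment(points, table, cls_fn):
    """Fast point cloud segmentation (steps S51-S54), with stand-ins.

    points: (N, 3) normalized coordinates in [-1, 1].
    table:  (S, S, S, F) feature lookup table read into memory (S51).
    cls_fn: placeholder for the fine-tuned PointNet_Seg_Basic_Cls (S52).
    """
    S = table.shape[0]
    delta = 2.0 / S
    # S54: replace the feature extractor by a constant-time table lookup.
    idx = np.clip(np.floor((points + 1.0) / delta).astype(int), 0, S - 1)
    feats = table[idx[:, 0], idx[:, 1], idx[:, 2]]
    scores = cls_fn(feats)             # per-point class scores (N, K)
    return scores.argmax(axis=1)       # classification label of each point
```

This is where the speedup claimed in the title comes from: the expensive feature extraction is amortized into the offline table, and only the small classifier runs at inference time.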
CN202011218060.2A 2020-11-04 2020-11-04 Method for accelerating point cloud segmentation based on lookup table Active CN112330680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011218060.2A CN112330680B (en) 2020-11-04 2020-11-04 Method for accelerating point cloud segmentation based on lookup table


Publications (2)

Publication Number Publication Date
CN112330680A true CN112330680A (en) 2021-02-05
CN112330680B CN112330680B (en) 2023-07-21

Family

ID=74323638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011218060.2A Active CN112330680B (en) 2020-11-04 2020-11-04 Method for accelerating point cloud segmentation based on lookup table

Country Status (1)

Country Link
CN (1) CN112330680B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019019680A1 (en) * 2017-07-28 2019-01-31 北京大学深圳研究生院 Point cloud attribute compression method based on kd tree and optimized graph transformation
US10650278B1 (en) * 2017-07-21 2020-05-12 Apple Inc. Semantic labeling of point clouds using images
CN111462275A (en) * 2019-01-22 2020-07-28 北京京东尚科信息技术有限公司 Map production method and device based on laser point cloud
US20200342250A1 (en) * 2019-04-26 2020-10-29 Unikie Oy Method for extracting uniform features from point cloud and system therefor
CN111862101A (en) * 2020-07-15 2020-10-30 西安交通大学 3D point cloud semantic segmentation method under aerial view coding visual angle


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HONGYANG CHAO ET AL.: "Justlookup: One Millisecond Deep Feature Extraction for Point Clouds By Lookup Tables", 2019 IEEE International Conference on Multimedia and Expo (ICME), pages 1-5 *
WANG XINLONG; SUN WENLEI; ZHANG JIANJIE; HUANG YONG; HUANG HAIBO: "Review of Reverse Engineering Technology Research Based on Point Cloud Data", Manufacturing Technology & Machine Tool, no. 02, pages 1-2 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant