CN110738194A - three-dimensional object identification method based on point cloud ordered coding - Google Patents

three-dimensional object identification method based on point cloud ordered coding

Info

Publication number
CN110738194A
Authority
CN
China
Prior art keywords
point cloud
ordered
dimensional
matrix
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911068121.9A
Other languages
Chinese (zh)
Other versions
CN110738194B (en)
Inventor
李文生
董帅
李悦乔
谷俊霖
张文强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China Zhongshan Institute
Original Assignee
University of Electronic Science and Technology of China Zhongshan Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China Zhongshan Institute filed Critical University of Electronic Science and Technology of China Zhongshan Institute
Priority to CN201911068121.9A priority Critical patent/CN110738194B/en
Publication of CN110738194A publication Critical patent/CN110738194A/en
Application granted granted Critical
Publication of CN110738194B publication Critical patent/CN110738194B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects
    • G06V20/647: Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B21/00: Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B21/20: Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant, for measuring contours or curvatures, e.g. determining profile

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional object identification method based on point cloud ordered coding. The method comprises the steps of: performing spherical coordinate ordering on an unordered point cloud to obtain a sparse multi-channel array; extracting spatial feature information from the sparse multi-channel array by convolution; and identifying the three-dimensional object using the spatial feature information. Because the data are first ordered by coding and then processed with two-dimensional convolution, the computational complexity is greatly reduced and the speed is increased.

Description

three-dimensional object identification method based on point cloud ordered coding
Technical Field
The invention relates to the technical field of object identification, in particular to a three-dimensional object identification method based on point cloud ordered coding.
Background
A two-dimensional plane image is only the projection of an object in a real environment at a certain viewing angle and cannot contain the three-dimensional spatial information of every point on the object's surface. Identification methods based on three-dimensional spatial information therefore remain worth researching. At present, the three-dimensional spatial information of an object is mostly expressed as point cloud data, which record the position of each surface point in a spatial rectangular coordinate system together with its RGB pixel values and can express an object in a real environment almost completely; however, point cloud data are dense and unordered. A convolutional neural network can process image data because of the local correlation of images: adjacent pixels are strongly correlated while distant pixels are only weakly correlated, so adjacent pixels can be processed with local connections and parameter sharing, which greatly reduces the number of training parameters and gives the network translation invariance. This property of the convolution operation, however, cannot be applied directly to unordered point cloud data.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for identifying three-dimensional objects based on point cloud ordered coding: the unordered point cloud is first ordered to obtain a sparse multi-channel array, spatial feature information is then extracted by two-dimensional convolution, and the identification task is completed based on these features.
In order to solve the above problems, the present invention adopts the following technical solutions.
A three-dimensional object recognition method based on point cloud ordered coding comprises the following steps:
carrying out spherical coordinate ordering on the unordered point cloud to obtain a sparse multi-channel array;
extracting the spatial feature information of the sparse multi-channel array by using convolution operation;
and carrying out three-dimensional object identification by using the spatial feature information.
As a further improvement of the invention, the ordering of the unordered point cloud includes spherical coordinate ordering or cylindrical coordinate ordering.
As a further improvement of the invention, the point cloud data set must first be converted from the spatial rectangular coordinate system (x, y, z) to the spherical coordinate system (r, θ, φ); that is, each point (x, y, z) is converted to (r, θ, φ), where r ≥ 0, 0 ≤ θ ≤ π and 0 ≤ φ < 2π.
As a further improvement of the invention, for any point with coordinates (x, y, z) in the spatial rectangular coordinate system, the corresponding spherical coordinates (r, θ, φ) satisfy the following transformation:
r = √(x² + y² + z²), θ = arccos(z/r), φ = arctan(y/x)
As a further improvement of the invention, after the spherical coordinate representation (r, θ, φ) of the point cloud data is obtained, the value range of θ and the value range of φ are discretized by angle with a step length of 2°, giving a sequence of θ values and a sequence of φ values. Taking the two sequences as the vertical and horizontal axes respectively, an ordered matrix is established whose elements are the sampling points falling in the corresponding angular intervals (the row and column indices being integers); the newly created matrix is a sparse four-channel two-dimensional array. For the same θ and φ, the points can be divided according to their different r values onto different surfaces of the object, corresponding respectively to different matrices. Only two of these matrices are taken, so that the final ordered matrix is a sparse eight-channel two-dimensional array.
As a further improvement of the invention, the point cloud data set must first be converted from the spatial rectangular coordinate system (x, y, z) to the cylindrical coordinate system (ρ, θ, z); that is, each point (x, y, z) is converted to (ρ, θ, z), where ρ ≥ 0 and 0 ≤ θ < 2π.
As a further improvement of the invention, for any point with coordinates (x, y, z) in the spatial rectangular coordinate system, the corresponding cylindrical coordinates (ρ, θ, z) satisfy the following transformation:
ρ = √(x² + y²), θ = arctan(y/x), z = z
As a further improvement of the invention, after the cylindrical coordinate representation (ρ, θ, z) of the point cloud data is obtained, the values of z between their minimum and maximum are equally divided into 90 intervals to obtain a sequence of z values, and the value range of θ is discretized by angle with a step length of 2° to obtain a sequence of θ values. Taking the two sequences as the vertical and horizontal axes respectively, an ordered matrix is established whose elements are the sampling points falling in the corresponding intervals (the row and column indices being integers); the newly created matrix is a sparse four-channel two-dimensional array. For the same z and θ, the points can be divided according to their different ρ values onto different surfaces of the object, corresponding respectively to different matrices. Only two of these matrices are taken, so that the final ordered matrix is a sparse eight-channel two-dimensional array.
Advantages of the invention
Compared with the prior art, the invention has the following advantages:
the unordered point cloud is first ordered to obtain a sparse multi-channel array; spatial feature information is then extracted by two-dimensional convolution, and the identification task is completed based on these features. Because the computation uses two-dimensional convolution after ordered coding, the computational complexity is greatly reduced and the speed is increased.
Drawings
FIG. 1 shows the transformation relationship between the spatial rectangular coordinates and the spherical coordinates in the present invention.
Fig. 2 is a representation of points on the surface of an object in spherical coordinates in accordance with the present invention.
Fig. 3 is a conversion relationship between the spatial rectangular coordinates and the cylindrical coordinates in the present invention.
Fig. 4 is a representation of points on the surface of an object in cylindrical coordinates in accordance with the present invention.
Fig. 5 is a convolutional network structure employed in the present invention.
FIG. 6 is a similarity matching model employed in the present invention.
FIG. 7 is a process of searching three-dimensional objects based on point cloud data according to the present invention.
Fig. 8 is a sample of a portion of a two-dimensional view dataset according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are obviously only some embodiments of the present invention, rather than all of them; all other embodiments obtained by those skilled in the art without creative work based on these embodiments fall within the protection scope of the present invention.
Referring to figs. 1 to 8, a three-dimensional object identification method based on point cloud ordered coding comprises the steps of: ordering the unordered point cloud in spherical coordinates to obtain a sparse multi-channel array; extracting the spatial feature information of the sparse multi-channel array by convolution; and identifying the three-dimensional object using the spatial feature information.
As shown in fig. 1, to perform spherical coordinate ordering on the unordered point cloud, the point cloud data set must first be converted from the spatial rectangular coordinate system (x, y, z) to the spherical coordinate system (r, θ, φ), where r ≥ 0, 0 ≤ θ ≤ π and 0 ≤ φ < 2π. For any point (x, y, z) in the spatial rectangular coordinate system, the corresponding spherical coordinates (r, θ, φ) satisfy:
r = √(x² + y² + z²), θ = arccos(z/r), φ = arctan(y/x)
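These transformation formulas are rendered as images in the published text; a minimal sketch of the standard rectangular-to-spherical conversion they denote follows (the function name and the atan2-based azimuth convention are illustrative assumptions):

```python
import math

def to_spherical(x, y, z):
    """Convert one point from rectangular (x, y, z) to spherical (r, theta, phi).

    r is the radial distance, theta the polar angle measured from the +z axis,
    phi the azimuth in the x-y plane; atan2 keeps phi correct in all quadrants.
    """
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0  # polar angle in [0, pi]
    phi = math.atan2(y, x)                      # azimuth in (-pi, pi]
    return r, theta, phi
```

Applying this to every point of the cloud yields the (r, θ, φ) representation that the subsequent discretization step operates on.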
As shown in fig. 2, after the spherical coordinate representation (r, θ, φ) of the point cloud data is obtained, the value range of θ and the value range of φ are discretized by angle with a step length of 2°, giving a sequence of θ values and a sequence of φ values. Taking the two sequences as the vertical and horizontal axes respectively, an ordered matrix is established whose elements are the sampling points falling in the corresponding angular intervals (the row and column indices being integers); the newly created matrix is a sparse four-channel two-dimensional array. For the same θ and φ, the points can be divided according to their different r values onto different surfaces of the object, corresponding respectively to different matrices. Considering the completeness of the object information and the size of the data volume, only two of these matrices are kept, so that the final ordered matrix is a sparse eight-channel two-dimensional array.
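The ordering step can be sketched as below. The 2° step (a 90 × 180 grid) follows the description above; the choice of (r, x, y, z) as the four channels and of the nearest and farthest point per cell as the two retained matrices are illustrative assumptions, since the patent's channel definitions appear only as images:

```python
import numpy as np

def order_spherical(points, theta_step=2.0, phi_step=2.0):
    """Bin spherical-coordinate points (r, theta, phi, angles in degrees) onto
    a 2-D grid and keep, per cell, the nearest and the farthest point.

    Returns an (n_theta, n_phi, 8) array: channels 0-3 hold (r, x, y, z) of
    the nearest point in the cell, channels 4-7 those of the farthest one.
    """
    n_theta = int(180 / theta_step)   # 90 rows
    n_phi = int(360 / phi_step)       # 180 columns
    grid = np.zeros((n_theta, n_phi, 8))
    near_r = np.full((n_theta, n_phi), np.inf)
    far_r = np.full((n_theta, n_phi), -np.inf)
    for r, theta, phi in points:
        i = min(int(theta / theta_step), n_theta - 1)
        j = min(int(phi / phi_step), n_phi - 1)
        # back-project to rectangular coordinates for the channel values
        t, p = np.radians(theta), np.radians(phi)
        xyz = r * np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
        if r < near_r[i, j]:
            near_r[i, j] = r
            grid[i, j, :4] = [r, *xyz]
        if r > far_r[i, j]:
            far_r[i, j] = r
            grid[i, j, 4:] = [r, *xyz]
    return grid
```

Empty cells remain zero, which is what makes the resulting eight-channel array sparse.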
As shown in fig. 3, to perform cylindrical coordinate ordering on the unordered point cloud, the point cloud data set must first be converted from the spatial rectangular coordinate system (x, y, z) to the cylindrical coordinate system (ρ, θ, z), where ρ ≥ 0 and 0 ≤ θ < 2π. For any point (x, y, z) in the spatial rectangular coordinate system, the corresponding cylindrical coordinates (ρ, θ, z) satisfy:
ρ = √(x² + y²), θ = arctan(y/x), z = z
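As with the spherical case, the cylindrical conversion formulas survive only as images; the standard conversion they denote can be sketched as (function name assumed):

```python
import math

def to_cylindrical(x, y, z):
    """Rectangular (x, y, z) -> cylindrical (rho, theta, z): rho is the
    distance from the z axis, theta the azimuth, and z is unchanged."""
    rho = math.hypot(x, y)
    theta = math.atan2(y, x)
    return rho, theta, z
```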
As shown in fig. 4, after the cylindrical coordinate representation (ρ, θ, z) of the point cloud data is obtained, the values of z between their minimum and maximum are equally divided into 90 intervals to obtain a sequence of z values, and the value range of θ is discretized by angle with a step length of 2° to obtain a sequence of θ values. Taking the two sequences as the vertical and horizontal axes respectively, an ordered matrix is established whose elements are the sampling points falling in the corresponding intervals (the row and column indices being integers); the newly created matrix is a sparse four-channel two-dimensional array. For the same z and θ, the points can be divided according to their different ρ values onto different surfaces of the object, corresponding respectively to different matrices. Considering the completeness of the object information and the size of the data volume, only two of these matrices are kept, so that the final ordered matrix is a sparse eight-channel two-dimensional array.
After the unordered point cloud data are ordered in either of the two ways above, a sparse, ordered eight-channel two-dimensional array is obtained, on which two-dimensional convolution can be used directly. To test the effectiveness of three-dimensional object classification on these data, this design uses the convolutional network structure shown in fig. 5.
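The payoff of the ordering is that an ordinary two-dimensional convolution now applies directly to the (H, W, 8) array. A naive NumPy convolution with stride 1 and no padding (illustrative only; in practice a deep-learning framework would be used) makes the shape arithmetic concrete:

```python
import numpy as np

def conv2d(x, kernels):
    """Plain 2-D convolution over a multi-channel array x of shape
    (H, W, C_in) with kernels of shape (C_out, kh, kw, C_in).

    This is the operation that becomes applicable once the point cloud has
    been ordered into an (H, W, 8) array.
    """
    H, W, _ = x.shape
    c_out, kh, kw, _ = kernels.shape
    out = np.zeros((H - kh + 1, W - kw + 1, c_out))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]        # local receptive field
            for k in range(c_out):
                out[i, j, k] = np.sum(patch * kernels[k])
    return out
```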
After a series of operations on the input data, the network model produces prediction scores at its last layer, and the output of the last hidden layer can be used as the feature vector of the input data. Let x be the input, f the model's mapping from input data to category scores, and s the category score; then s = f(x). The purpose of training is to find an f that accurately predicts the class label of the input data. The objective function of this problem takes the form:
L = (1/N) Σᵢ H(yᵢ, sᵢ) + λ‖W‖₂²   (3-1)
where H(yᵢ, sᵢ) denotes the cross-entropy loss between the category label yᵢ and its predicted score sᵢ; N is the number of samples in a batch; ‖W‖₂² denotes L2 regularization over the trainable parameters W of the model; and λ is the penalty coefficient, taken as 5 × 10⁻⁴.
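The objective of equation (3-1) is rendered as an image in the published text; a plain-NumPy sketch of an objective of the described form, mean cross-entropy plus an L2 penalty with the stated λ = 5 × 10⁻⁴, could look as follows. The softmax over the raw scores is an assumption, since the patent does not state how scores become probabilities:

```python
import numpy as np

def objective(scores, labels, weights, lam=5e-4):
    """Mean cross-entropy between softmaxed class scores and integer labels,
    plus L2 regularization over the trainable parameters, weighted by lam."""
    # numerically stable softmax over the class axis
    shifted = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    n = scores.shape[0]
    cross_entropy = -np.log(probs[np.arange(n), labels]).mean()
    l2 = sum((w ** 2).sum() for w in weights)
    return cross_entropy + lam * l2
```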
The convolutional network model shown in fig. 5 is trained separately on the ordered point cloud data and on ordinary two-dimensional plane images using the objective function of equation (3-1). The two trained networks are then used to extract, respectively, the spatial features and the two-dimensional plane features of a three-dimensional object, and the feature pair formed by the two is used as the input of the similarity matching model in the retrieval task. Similarity matching uses the network shown in fig. 6.
Suppose the two-dimensional plane feature and the three-dimensional spatial feature are u and v respectively; the input of the similarity matching model is then their concatenation [u, v]. The similarity matching model is trained on this concatenated feature to determine whether the two-dimensional plane feature and the three-dimensional spatial feature represent the same object. Let g be the classification decision function and p the predicted positive/negative score; then p = g([u, v]). The objective function of the model takes the form of equation (3-1).
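A toy rendering of the matching input described above; the concatenation order and the single linear layer standing in for the decision function g are illustrative assumptions, not the patent's actual matching network:

```python
import numpy as np

def pair_input(f2d, f3d):
    """Concatenate a 2-D planar feature and a 3-D spatial feature into the
    single vector fed to the similarity matching model."""
    return np.concatenate([f2d, f3d])

def match_score(pair, W, b):
    """A minimal stand-in for the decision function g: one linear layer
    followed by softmax over (negative, positive)."""
    logits = W @ pair + b
    e = np.exp(logits - logits.max())
    return e / e.sum()   # [p_negative, p_positive], sums to 1
```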
After the convolutional network models for extracting two-dimensional plane features and three-dimensional spatial features and the similarity matching model have been trained, testing follows the process shown in fig. 7.
The process comprises the following steps:
1) Building a sample feature database. Point cloud data of objects in the real environment are collected, and a sparse, ordered eight-channel two-dimensional array is obtained after spherical or cylindrical coordinate ordering. Passing this array through the convolutional network yields a feature vector containing the three-dimensional spatial information of the object; the three-dimensional spatial features of all sample objects form the sample feature database.
2) Similarity matching. At test time, a newly input ordinary two-dimensional plane image of an object is passed through the convolutional network to obtain its two-dimensional plane feature. This feature is paired with the three-dimensional spatial feature of each sample object in the database, the similarity matching model computes the similarity of the two objects represented by each feature vector pair, and the retrieval result is given by the feature pair with the highest similarity.
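The matching step above condenses to a nearest-match search over the database; here `score_fn` stands in for the trained similarity matching model, whose internals the patent does not fix:

```python
import numpy as np

def retrieve(query_2d_feature, database_3d_features, score_fn):
    """Pair the query's planar feature with every sample's spatial feature,
    score each pair with the similarity model, and return the index of the
    best-scoring sample."""
    scores = [score_fn(np.concatenate([query_2d_feature, f3d]))
              for f3d in database_3d_features]
    return int(np.argmax(scores))
```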
The point cloud data and two-dimensional view data used in this embodiment were generated with the Unity tool from 95 three-dimensional object models. The dataset contains 95 categories of three-dimensional object models; each category includes 100 three-dimensional point clouds rotated and displaced at different angles and 100 two-dimensional views projected from different viewing angles. The training, validation and test sets are divided in a 6:1:3 ratio. Fig. 8 shows some samples from the two-dimensional view dataset.
In order to verify the effectiveness of the three-dimensional object identification method based on point cloud ordered coding, two groups of experiments are arranged: 1) carrying out three-dimensional object identification based on the ordered spherical coordinates; 2) and carrying out three-dimensional object identification based on the ordered cylindrical coordinates.
To train and evaluate the model reasonably, the data sets required at each stage are allocated as follows:
(1) The training set of the two-dimensional view data is used to train the network; the trained convolutional network then extracts 2850 (95 × 30) single-view features on the test set.
(2) The point cloud dataset contains 95 categories of objects, each with 100 rotated and displaced point clouds, divided into training, validation and test sets in a 6:1:3 ratio. After ordering, the training set is used to train the convolutional network shown in fig. 5, and the trained network extracts 2850 (95 × 30) three-dimensional spatial features on the ordered test set.
(3) The 2850 single-view features from step (1) are paired with the 2850 three-dimensional spatial features from step (2); a pair is labelled 1 if the two features belong to the same category and 0 otherwise. The resulting positive and negative sample pairs are divided into training, validation and test sets and used to train and evaluate the similarity matching model.
The same training method is used at every stage of both experiments: batch_size is set to 50, the Nesterov accelerated gradient algorithm is used, the initial learning rate is set to 10⁻² and reduced to 10⁻³ and then 10⁻⁴ after stabilization, the momentum is set to 0.9, and the Dropout rate is set to 0.3.
The experimental results of experiments 1) and 2) are similar, and both clearly outperform the single-view identification method on the classification and retrieval tasks, which demonstrates the effectiveness of the method; computing with two-dimensional convolution after ordered coding also greatly reduces computational complexity and increases speed. However, although the three-dimensional spatial information of the object is taken into account, both experiments lose part of that information during interval division and sampling when ordering the point cloud data in spherical or cylindrical coordinates. For the converted spherical coordinates, interval division and sampling cannot be carried out uniformly; the sampling density differs between the parts near the two poles of the sphere and the part near the equator, so the three-dimensional information of the object expressed by the ordered sparse matrix is distorted. The converted cylindrical coordinates, by contrast, can be sampled uniformly.
TABLE 1 Experimental results of a three-dimensional object recognition method based on point cloud ordered coding
The foregoing is only a preferred embodiment of the present invention, and the scope of the invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A three-dimensional object identification method based on point cloud ordered coding, characterized by comprising:
carrying out spherical coordinate ordering on the unordered point cloud to obtain a sparse multi-channel array;
extracting the spatial feature information of the sparse multi-channel array by using convolution operation;
and carrying out three-dimensional object identification by using the spatial feature information.
2. The three-dimensional object identification method based on point cloud ordered coding of claim 1, wherein:
the ordering of the unordered point cloud comprises spherical coordinate ordering or cylindrical coordinate ordering.
3. The three-dimensional object identification method based on point cloud ordered coding of claim 2, wherein:
the point cloud data set is first converted from the spatial rectangular coordinate system (x, y, z) to the spherical coordinate system (r, θ, φ); that is, each point (x, y, z) is converted to (r, θ, φ), where r ≥ 0, 0 ≤ θ ≤ π and 0 ≤ φ < 2π.
4. The three-dimensional object identification method based on point cloud ordered coding of claim 3, wherein:
for any point with coordinates (x, y, z) in the spatial rectangular coordinate system, the corresponding spherical coordinates (r, θ, φ) satisfy the following transformation:
r = √(x² + y² + z²), θ = arccos(z/r), φ = arctan(y/x)
5. The three-dimensional object identification method based on point cloud ordered coding of claim 4, wherein:
after the spherical coordinate representation (r, θ, φ) of the point cloud data is obtained, the value range of θ and the value range of φ are discretized by angle with a step length of 2°, giving a sequence of θ values and a sequence of φ values; an ordered matrix is then established with the two sequences as the vertical and horizontal axes, whose elements are the sampling points falling in the corresponding angular intervals (the row and column indices being integers); the newly created matrix is a sparse four-channel two-dimensional array.
6. The three-dimensional object identification method based on point cloud ordered coding of claim 5, wherein:
for the same θ and φ, the points can be divided according to their different r values onto different surfaces of the object, corresponding respectively to different matrices; only two of these matrices are taken, i.e. the final ordered matrix is a sparse eight-channel two-dimensional array.
7. The three-dimensional object identification method based on point cloud ordered coding of claim 2, wherein:
the point cloud data set is first converted from the spatial rectangular coordinate system (x, y, z) to the cylindrical coordinate system (ρ, θ, z); that is, each point (x, y, z) is converted to (ρ, θ, z), where ρ ≥ 0 and 0 ≤ θ < 2π.
8. The three-dimensional object identification method based on point cloud ordered coding of claim 1, wherein:
for any point with coordinates (x, y, z) in the spatial rectangular coordinate system, the corresponding cylindrical coordinates (ρ, θ, z) satisfy the following transformation:
ρ = √(x² + y²), θ = arctan(y/x), z = z
9. The method for recognizing three-dimensional objects based on point cloud ordered coding of claim 1, wherein: after the cylindrical coordinate representation (ρ, θ, z) of the point cloud data is obtained, the z values are equally divided into 90 intervals between the minimum and the maximum value to obtain a value sequence Z; the θ values are discretized by angle with a step length of 2 to obtain a value sequence Θ; an ordered matrix M is then established with the two sequences as the vertical and horizontal axes respectively, each element M(i, j) of the matrix being the sampling point falling in interval (i, j), where 1 ≤ i ≤ 90 and i is an integer, and 1 ≤ j ≤ 180 and j is an integer; the newly created matrix M is a sparse four-channel two-dimensional array.
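The binning step of claim 9 can be sketched as follows. The 90 z-intervals and 2-degree θ step come from the claim; the per-cell channel layout (ρ, θ, z, occupancy) and the last-point-wins collision rule are assumptions for illustration, since the claim only states that the result is a sparse four-channel two-dimensional array.

```python
import numpy as np

def ordered_matrix(cyl, z_bins=90, theta_step_deg=2.0):
    """Bin z into z_bins equal intervals and theta into cells of
    theta_step_deg degrees (90 x 180 grid), dropping each point of the
    cylindrical-coordinate cloud `cyl` (rows of (rho, theta, z)) into
    its cell. Channel layout (rho, theta, z, occupancy) is assumed.
    """
    rho, theta, z = cyl[:, 0], cyl[:, 1], cyl[:, 2]
    theta_bins = int(round(360.0 / theta_step_deg))              # 180
    z_span = z.max() - z.min() + 1e-12                           # avoid /0
    zi = np.clip(((z - z.min()) / z_span * z_bins).astype(int), 0, z_bins - 1)
    tj = np.clip((np.degrees(theta) / theta_step_deg).astype(int), 0, theta_bins - 1)
    M = np.zeros((z_bins, theta_bins, 4))                        # sparse 4-channel array
    for k in range(len(cyl)):                                    # last point in a cell wins
        M[zi[k], tj[k]] = (rho[k], theta[k], z[k], 1.0)
    return M

cyl = np.array([[1.0, 0.0, 0.0],
                [2.0, np.pi, 1.0]])                              # rows of (rho, theta, z)
M = ordered_matrix(cyl)
```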
10. For the same z and θ, according to the difference in ρ values, the points can be divided onto different surfaces of the object, each surface corresponding to a different ordered matrix Mₖ; taking only the cases k = 1 and k = 2, the final ordered matrix M is a sparse eight-channel two-dimensional array.
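The eight-channel encoding of claim 10 can be sketched by keeping two ρ-separated surfaces per cell. The near/far split (smallest ρ in channels 0-3, largest ρ in channels 4-7) and the (ρ, θ, z, occupancy) layout are assumptions about how the two matrices are chosen; the claim itself only says that two of the ρ-separated matrices are taken.

```python
import numpy as np

def two_surface_matrix(cyl, z_bins=90, theta_bins=180):
    """Points sharing a (z, theta) cell but differing in rho lie on
    different surfaces of the object. Keep the smallest-rho point in
    channels 0-3 (near surface) and the largest-rho point in channels
    4-7 (far surface); both layout choices are assumptions.
    """
    rho, theta, z = cyl[:, 0], cyl[:, 1], cyl[:, 2]
    z_span = z.max() - z.min() + 1e-12
    zi = np.clip(((z - z.min()) / z_span * z_bins).astype(int), 0, z_bins - 1)
    tj = np.clip((theta / (2 * np.pi) * theta_bins).astype(int), 0, theta_bins - 1)
    M = np.zeros((z_bins, theta_bins, 8))
    for k in range(len(cyl)):
        i, j = zi[k], tj[k]
        feat = (rho[k], theta[k], z[k], 1.0)
        if M[i, j, 3] == 0.0 or rho[k] < M[i, j, 0]:   # near surface: min rho
            M[i, j, :4] = feat
        if M[i, j, 7] == 0.0 or rho[k] > M[i, j, 4]:   # far surface: max rho
            M[i, j, 4:] = feat
    return M

cyl = np.array([[1.0, 0.1, 0.5],                       # inner-surface point
                [3.0, 0.1, 0.5]])                      # outer-surface point, same cell
M8 = two_surface_matrix(cyl)
```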
CN201911068121.9A 2019-11-05 2019-11-05 Three-dimensional object identification method based on point cloud ordered coding Active CN110738194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911068121.9A CN110738194B (en) 2019-11-05 2019-11-05 Three-dimensional object identification method based on point cloud ordered coding

Publications (2)

Publication Number Publication Date
CN110738194A true CN110738194A (en) 2020-01-31
CN110738194B CN110738194B (en) 2023-04-07

Family

ID=69272147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911068121.9A Active CN110738194B (en) 2019-11-05 2019-11-05 Three-dimensional object identification method based on point cloud ordered coding

Country Status (1)

Country Link
CN (1) CN110738194B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365577A (en) * 2020-11-09 2021-02-12 重庆邮电大学 Mechanical part augmented reality tracking registration method based on convolutional neural network
WO2022017129A1 (en) * 2020-07-22 2022-01-27 上海商汤临港智能科技有限公司 Target object detection method and apparatus, electronic device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063753A (en) * 2018-07-18 2018-12-21 北方民族大学 A kind of three-dimensional point cloud model classification method based on convolutional neural networks
CN109816714A (en) * 2019-01-15 2019-05-28 西北大学 A kind of point cloud object type recognition methods based on Three dimensional convolution neural network
CN109886206A (en) * 2019-02-21 2019-06-14 电子科技大学中山学院 Three-dimensional object identification method and equipment




Similar Documents

Publication Publication Date Title
JP7058669B2 (en) Vehicle appearance feature identification and vehicle search methods, devices, storage media, electronic devices
CN108564129B (en) Trajectory data classification method based on generation countermeasure network
CN110032925B (en) Gesture image segmentation and recognition method based on improved capsule network and algorithm
WO2019015246A1 (en) Image feature acquisition
CN111767882A (en) Multi-mode pedestrian detection method based on improved YOLO model
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
Obinata et al. Temporal extension module for skeleton-based action recognition
CN111680678B (en) Target area identification method, device, equipment and readable storage medium
CN111709313A (en) Pedestrian re-identification method based on local and channel combination characteristics
CN112905828B (en) Image retriever, database and retrieval method combining significant features
CN111414875B (en) Three-dimensional point cloud head posture estimation system based on depth regression forest
CN110968734A (en) Pedestrian re-identification method and device based on depth measurement learning
Zelener et al. Cnn-based object segmentation in urban lidar with missing points
Pratama et al. Face recognition for presence system by using residual networks-50 architecture
CN111125396B (en) Image retrieval method of single-model multi-branch structure
CN110738194A (en) three-dimensional object identification method based on point cloud ordered coding
CN112084895A (en) Pedestrian re-identification method based on deep learning
CN114627424A (en) Gait recognition method and system based on visual angle transformation
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN116935411A (en) Radical-level ancient character recognition method based on character decomposition and reconstruction
CN116758419A (en) Multi-scale target detection method, device and equipment for remote sensing image
CN115359304B (en) Single image feature grouping-oriented causal invariance learning method and system
CN113723468B (en) Object detection method of three-dimensional point cloud
CN116311504A (en) Small sample behavior recognition method, system and equipment
CN115018886A (en) Motion trajectory identification method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant