CN110738194A - three-dimensional object identification method based on point cloud ordered coding - Google Patents
- Publication number
- CN110738194A CN110738194A CN201911068121.9A CN201911068121A CN110738194A CN 110738194 A CN110738194 A CN 110738194A CN 201911068121 A CN201911068121 A CN 201911068121A CN 110738194 A CN110738194 A CN 110738194A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B21/00—Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
- G01B21/20—Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring contours or curvatures, e.g. determining profile
Abstract
The invention discloses a three-dimensional object identification method based on point cloud ordered coding, which comprises the steps of performing spherical coordinate ordering on an unordered point cloud to obtain a sparse multi-channel array, extracting the spatial feature information of the sparse multi-channel array by convolution operation, and performing three-dimensional object identification with the spatial feature information. By operating with two-dimensional convolution after ordered coding, the method greatly reduces computational complexity and increases speed.
Description
Technical Field
The invention relates to the technical field of object identification, in particular to a three-dimensional object identification method based on point cloud ordered coding.
Background
A two-dimensional plane image is only the projection of an object in a real environment under a certain viewing angle and cannot contain the three-dimensional spatial information of each point on the object's surface. Identification methods based on three-dimensional spatial information therefore remain worth researching. At present, the three-dimensional spatial information of an object is mostly expressed by point cloud data, which includes the positions of points on the object's surface in a spatial rectangular coordinate system together with their RGB pixel values, and can approximately and completely express the object in a real environment; however, point cloud data is dense and unordered. A convolutional neural network can be used to process image data because of the local correlation of image data: adjacent pixels on an image are strongly correlated while pixels farther apart are less correlated, so processing by local connection and parameter sharing greatly reduces the number of training parameters and gives the network translation invariance. However, this property of convolution operations cannot be applied directly to unordered point cloud data.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for identifying three-dimensional objects based on point cloud ordered coding, which first orders the unordered point cloud to obtain a sparse multi-channel array, then extracts spatial feature information by two-dimensional convolution operation, and completes the identification task based on these features.
In order to solve the above problems, the present invention adopts the following technical solutions.
A three-dimensional object recognition method based on point cloud ordered coding comprises the following steps:
carrying out spherical coordinate ordering on the unordered point cloud to obtain a sparse multi-channel array;
extracting the spatial feature information of the sparse multi-channel array by using convolution operation;
and carrying out three-dimensional object identification by using the spatial characteristic information.
As a further improvement of the invention, the coordinate ordering of the unordered point cloud includes spherical coordinate ordering or cylindrical coordinate ordering.
As a further improvement of the present invention, the collected point cloud data set P must first be converted from the spatial rectangular coordinate system O-xyz to the spherical coordinate system O-rθφ, that is, (x, y, z) is converted into (r, θ, φ), where r ≥ 0, 0° ≤ θ ≤ 180°, and 0° ≤ φ < 360°.
As a further improvement of the invention, a space rectangular coordinate systemCoordinates of any pointsCorresponding to a spherical coordinate systemThe following transformation relationship exists:
As a further improvement of the present invention, after the spherical coordinate representation (r, θ, φ) of the point cloud data is obtained, the value range of θ (0° to 180°) and the value range of φ (0° to 360°) are discretized by angle with a step length of 2°, yielding a sequence of 90 θ values and a sequence of 180 φ values. An ordered matrix M is then established with the two sequences as the vertical and horizontal axes; the element M(i, j) of the matrix is the sampling point falling in the corresponding (θ, φ) interval, where 1 ≤ i ≤ 90, 1 ≤ j ≤ 180, and i and j are integers. The newly created matrix M is a sparse four-channel two-dimensional array. Points with the same θ and φ can be divided, according to their different r values, into points on different surfaces of the object, each corresponding to a different matrix; taking only the minimum-r and maximum-r cases, the final ordered matrix is a sparse eight-channel two-dimensional array.
As a further improvement of the present invention, the collected point cloud data set P must first be converted from the spatial rectangular coordinate system O-xyz to the cylindrical coordinate system O-ρφz, that is, (x, y, z) is converted into (ρ, φ, z), where ρ ≥ 0 and 0° ≤ φ < 360°.
As a further improvement of the invention, the coordinates of any points in a space rectangular coordinate systemCorresponding to a cylindrical coordinate systemThe following transformation relationship exists:
As a further improvement of the present invention, after the cylindrical coordinate representation (ρ, φ, z) of the point cloud data is obtained, the value of z is equally divided into 90 intervals between its minimum and maximum to yield a sequence of 90 z values, and the value range of φ (0° to 360°) is discretized by angle with a step length of 2° to yield a sequence of 180 φ values. An ordered matrix M is then established with the two sequences as the vertical and horizontal axes; the element M(i, j) of the matrix is the sampling point falling in the corresponding (z, φ) interval, where 1 ≤ i ≤ 90, 1 ≤ j ≤ 180, and i and j are integers. The newly created matrix M is a sparse four-channel two-dimensional array. Points with the same z and φ can be divided, according to their different ρ values, into points on different surfaces of the object, each corresponding to a different matrix; taking only the minimum-ρ and maximum-ρ cases, the final ordered matrix is a sparse eight-channel two-dimensional array.
Advantages of the invention
Compared with the prior art, the invention has the following advantages:
According to the invention, the unordered point cloud is first ordered to obtain a sparse multi-channel array, spatial feature information is then extracted by two-dimensional convolution operation, and the identification task is completed based on these features; operating with two-dimensional convolution after ordered coding greatly reduces computational complexity and increases speed.
Drawings
FIG. 1 shows the transformation relationship between the spatial rectangular coordinates and the spherical coordinates in the present invention.
Fig. 2 is a representation of points on the surface of an object in spherical coordinates in accordance with the present invention.
Fig. 3 is a conversion relationship between the spatial rectangular coordinates and the cylindrical coordinates in the present invention.
Fig. 4 is a representation of points on the surface of an object in cylindrical coordinates in accordance with the present invention.
Fig. 5 is a convolutional network structure employed in the present invention.
FIG. 6 is a similarity matching model employed in the present invention.
FIG. 7 is a process of searching three-dimensional objects based on point cloud data according to the present invention.
Fig. 8 is a sample of a portion of a two-dimensional view dataset according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some embodiments of the present invention rather than all of them; all other embodiments obtained by those skilled in the art without creative work based on these embodiments fall within the protection scope of the present invention.
Referring to figs. 1 to 8, a method for identifying a three-dimensional object based on point cloud ordered coding includes the steps of performing coordinate ordering on the unordered point cloud to obtain a sparse multi-channel array, extracting the spatial feature information of the sparse multi-channel array by convolution operation, and identifying the three-dimensional object with the spatial feature information.
As shown in FIG. 1, to perform spherical coordinate ordering on the unordered point cloud, the point cloud data set P must first be converted from the spatial rectangular coordinate system O-xyz to the spherical coordinate system O-rθφ, that is, (x, y, z) is converted into (r, θ, φ), where r ≥ 0, 0° ≤ θ ≤ 180°, and 0° ≤ φ < 360°. The coordinates (x, y, z) of a point in the spatial rectangular coordinate system and the corresponding spherical coordinates (r, θ, φ) satisfy the following transformation relationship: r = √(x² + y² + z²), θ = arccos(z/r), φ = arctan(y/x).
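The coordinate conversion above can be sketched in a few lines of NumPy. This is an illustrative implementation rather than the patent's code, and it assumes angles are expressed in degrees to match the 2° discretization step used later:

```python
import numpy as np

def cartesian_to_spherical(points):
    """Convert an (N, 3) array of (x, y, z) points to (r, theta, phi).

    theta is the polar angle in [0, 180] degrees and phi the azimuth in
    [0, 360) degrees, matching the value ranges used for discretization.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    # Avoid division by zero for a point at the origin.
    cos_theta = np.divide(z, r, out=np.zeros_like(r), where=r > 0)
    theta = np.degrees(np.arccos(cos_theta))
    # arctan2 resolves the quadrant; fold the result into [0, 360).
    phi = np.degrees(np.arctan2(y, x)) % 360.0
    return np.stack([r, theta, phi], axis=1)
```

For example, (0, 0, 1) maps to r = 1, θ = 0°, φ = 0°, and (0, 1, 0) maps to r = 1, θ = 90°, φ = 90°.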
As shown in FIG. 2, after the spherical coordinate representation (r, θ, φ) of the point cloud data is obtained, the value range of θ (0° to 180°) and the value range of φ (0° to 360°) are discretized by angle with a step length of 2°, yielding a sequence of 90 θ values and a sequence of 180 φ values. An ordered matrix M is then established with the two sequences as the vertical and horizontal axes; the element M(i, j) of the matrix is the sampling point falling in the corresponding (θ, φ) interval, where 1 ≤ i ≤ 90, 1 ≤ j ≤ 180, and i and j are integers. The newly created matrix M is a sparse four-channel two-dimensional array. Points with the same θ and φ can be divided, according to their different r values, into points on different surfaces of the object, each corresponding to a different matrix. Considering the completeness of the object information and the size of the data volume, only the minimum-r and maximum-r cases are taken, so the final ordered matrix is a sparse eight-channel two-dimensional array.
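The binning step can be made concrete with the following sketch. It rests on assumptions not spelled out in the text: the two retained surfaces are taken to be the nearest and farthest point per cell, and only the r value is stored per surface (the full encoding would carry four channels per surface):

```python
import numpy as np

def spherical_ordered_encoding(points_sph, n_theta=90, n_phi=180):
    """Bin (r, theta, phi) points into a 90 x 180 ordered grid.

    Per cell, keep the smallest and largest r (assumed to correspond to
    the two retained surfaces); empty cells stay 0, so the array is sparse.
    """
    near = np.full((n_theta, n_phi), np.inf)
    far = np.zeros((n_theta, n_phi))
    # 2-degree steps along both axes, clipped to the last bin.
    ti = np.clip((points_sph[:, 1] // 2).astype(int), 0, n_theta - 1)
    pi = np.clip((points_sph[:, 2] // 2).astype(int), 0, n_phi - 1)
    for r, i, j in zip(points_sph[:, 0], ti, pi):
        near[i, j] = min(near[i, j], r)
        far[i, j] = max(far[i, j], r)
    near[np.isinf(near)] = 0.0  # mark cells that received no point
    return np.stack([near, far])  # shape (2, 90, 180)
```

Two points falling into the same (θ, φ) cell thus populate the "near" and "far" channels separately, giving the surface separation the text describes.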
As shown in FIG. 3, to perform cylindrical coordinate ordering on the unordered point cloud, the point cloud data set P must first be converted from the spatial rectangular coordinate system O-xyz to the cylindrical coordinate system O-ρφz, that is, (x, y, z) is converted into (ρ, φ, z), where ρ ≥ 0 and 0° ≤ φ < 360°. The coordinates (x, y, z) of a point in the spatial rectangular coordinate system and the corresponding cylindrical coordinates (ρ, φ, z) satisfy the following transformation relationship: ρ = √(x² + y²), φ = arctan(y/x), z = z.
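The cylindrical conversion mirrors the spherical one; an illustrative NumPy sketch, again assuming angles in degrees:

```python
import numpy as np

def cartesian_to_cylindrical(points):
    """Convert an (N, 3) array of (x, y, z) points to (rho, phi, z).

    rho >= 0 is the distance to the z axis and phi the azimuth in
    [0, 360) degrees; the z coordinate is carried over unchanged.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)
    phi = np.degrees(np.arctan2(y, x)) % 360.0  # quadrant-safe azimuth
    return np.stack([rho, phi, z], axis=1)
```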
As shown in FIG. 4, after the cylindrical coordinate representation (ρ, φ, z) of the point cloud data is obtained, the value of z is equally divided into 90 intervals between its minimum and maximum to yield a sequence of 90 z values, and the value range of φ (0° to 360°) is discretized by angle with a step length of 2° to yield a sequence of 180 φ values. An ordered matrix M is then established with the two sequences as the vertical and horizontal axes; the element M(i, j) of the matrix is the sampling point falling in the corresponding (z, φ) interval, where 1 ≤ i ≤ 90, 1 ≤ j ≤ 180, and i and j are integers. The newly created matrix M is a sparse four-channel two-dimensional array. Points with the same z and φ can be divided, according to their different ρ values, into points on different surfaces of the object, each corresponding to a different matrix. Considering the completeness of the object information and the size of the data volume, only the minimum-ρ and maximum-ρ cases are taken, so the final ordered matrix is a sparse eight-channel two-dimensional array.
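The cylindrical binning can be sketched similarly. As before, this is illustrative: it assumes the two retained surfaces are the nearest and farthest point per cell and stores only ρ per surface:

```python
import numpy as np

def cylindrical_ordered_encoding(points_cyl, n_z=90, n_phi=180):
    """Bin (rho, phi, z) points into a 90 x 180 ordered grid.

    z is split into 90 equal intervals between its min and max; phi uses
    2-degree steps. Per cell the smallest and largest rho are kept
    (assumed to be the two retained surfaces).
    """
    rho, phi, z = points_cyl[:, 0], points_cyl[:, 1], points_cyl[:, 2]
    z_min, z_max = z.min(), z.max()
    span = max(z_max - z_min, 1e-12)  # guard against a flat cloud
    zi = np.clip(((z - z_min) / span * n_z).astype(int), 0, n_z - 1)
    pi = np.clip((phi // 2).astype(int), 0, n_phi - 1)
    near = np.full((n_z, n_phi), np.inf)
    far = np.zeros((n_z, n_phi))
    for r, i, j in zip(rho, zi, pi):
        near[i, j] = min(near[i, j], r)
        far[i, j] = max(far[i, j], r)
    near[np.isinf(near)] = 0.0  # empty cells stay 0
    return np.stack([near, far])
```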
After the unordered point cloud data is ordered in either of the two ways above, a sparse, ordered eight-channel two-dimensional array is obtained, and two-dimensional convolution operations can be used directly on this data. To test the effectiveness of three-dimensional object classification based on this data, the convolutional network structure shown in FIG. 5 is used.
After a series of convolution operations are performed on the input data in the network model, prediction scores are obtained at the last layer, and the output of the last hidden layer can be used as the feature vector of the input data. Let x be the input, f denote the model's mapping from input data to category scores, and s denote the category scores; then s = f(x). The purpose of training is to find f such that it accurately predicts the class label of the input data. The objective function of this problem takes the form:
L = (1/N) Σᵢ₌₁ᴺ CE(yᵢ, sᵢ) + λ‖W‖₂²    (3-1)
In the formula, CE(yᵢ, sᵢ) denotes the cross-entropy loss between the category label yᵢ and its predicted score sᵢ, and N is the number of samples in a batch; ‖W‖₂² denotes the L2 regularization over the trainable parameters W of the model; λ is the penalty coefficient, taken as 5 × 10⁻⁴.
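A minimal NumPy sketch of equation (3-1), assuming softmax cross-entropy over raw category scores (the function and argument names are illustrative):

```python
import numpy as np

def objective(scores, labels, params, lam=5e-4):
    """Batch cross-entropy plus an L2 penalty on trainable parameters.

    scores: (N, C) raw category scores; labels: (N,) integer classes;
    params: iterable of weight arrays; lam: the penalty coefficient.
    """
    n = scores.shape[0]
    # Numerically stable log-softmax.
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    cross_entropy = -log_probs[np.arange(n), labels].mean()
    l2 = lam * sum(float((w ** 2).sum()) for w in params)
    return cross_entropy + l2
```

With uniform scores over C classes the cross-entropy term reduces to log C, which gives a quick sanity check on the implementation.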
The convolutional network model shown in fig. 5 is trained on the ordered point cloud data and on ordinary two-dimensional plane images respectively, using the objective function of equation (3-1). The two trained networks can then be used to extract, respectively, the three-dimensional spatial features and the two-dimensional plane features of a three-dimensional object, and the feature pair formed by the two serves as the input of the similarity matching model in the retrieval task. Similarity matching uses the network shown in fig. 6.
Assuming that the two-dimensional planar feature and the three-dimensional spatial feature are u and v respectively, the input of the similarity matching model is the concatenated feature [u, v]. The similarity matching model is trained with the concatenated feature as input to determine whether the two-dimensional planar feature and the three-dimensional spatial feature represent the same object. Let g be the classification decision function and p be the finally predicted positive/negative-case score; then p = g([u, v]). The objective function of the model takes the form of equation (3-1).
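The matching step reduces to concatenating the two features and scoring the pair. In the sketch below a single logistic unit stands in for the trained similarity network g; it is an illustration of the input layout, not the patent's model:

```python
import numpy as np

def match_score(u, v, w, b):
    """Score a (2-D view feature, 3-D spatial feature) pair.

    The concatenation [u, v] is the model input; a logistic unit stands
    in for the trained network, returning a score in (0, 1).
    """
    pair = np.concatenate([u, v])  # the model input [u, v]
    return 1.0 / (1.0 + np.exp(-(w @ pair + b)))
```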
After the convolutional network models for extracting two-dimensional plane features and three-dimensional spatial features and the similarity matching model are obtained through training, testing is performed according to the process shown in fig. 7.
The test process comprises the following steps. 1) Constructing a sample feature database: point cloud data of objects in the real environment are collected; after spherical or cylindrical coordinate ordering, a sparse, ordered eight-channel two-dimensional array is obtained; passing this array through the convolutional network yields a feature vector containing the three-dimensional spatial information of the object; finally, the three-dimensional spatial features of all sample objects form the sample feature database. 2) Similarity matching: during actual testing, a newly input ordinary two-dimensional plane image of an object is passed through the convolutional network to obtain its two-dimensional plane feature; this feature is paired with the three-dimensional spatial feature of each sample object in the database to form feature vector pairs; the similarity matching model computes the similarity of the two objects expressed by each pair, and the retrieval result is the sample of the feature pair with the highest similarity.
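Step 2) amounts to a linear scan over the feature database; a minimal sketch, with the similarity function passed in and all names illustrative:

```python
import numpy as np

def retrieve(view_feat, database, similarity):
    """Return the database entry whose feature pair scores highest.

    database maps object ids to stored 3-D spatial features; similarity
    scores a concatenated (view, spatial) feature pair.
    """
    best_id, best_score = None, -np.inf
    for obj_id, spatial_feat in database.items():
        score = similarity(np.concatenate([view_feat, spatial_feat]))
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id, best_score
```

In practice the similarity function would be the trained matching network; here any callable on the concatenated pair works.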
The point cloud data and the two-dimensional view data used in this embodiment are generated with the Unity tool from 95 three-dimensional object models. The dataset contains 95 categories of three-dimensional object models; each category includes 100 three-dimensional point clouds rotated and displaced at different angles and 100 two-dimensional views projected from different viewing angles. The training, verification, and test sets are divided in a 6:1:3 ratio. Fig. 8 shows some samples from the two-dimensional view dataset.
In order to verify the effectiveness of the three-dimensional object identification method based on point cloud ordered coding, two groups of experiments are set up: 1) three-dimensional object identification based on ordered spherical coordinates; 2) three-dimensional object identification based on ordered cylindrical coordinates.
In order to reasonably train and evaluate the models, the data sets required by each stage are allocated as follows. (1) The training set of the two-dimensional view data is used to train the network; the trained convolutional network extracts 2850 (95 × 30) single-view features on the test set. (2) The point cloud data set contains objects of 95 categories, each category comprising 100 rotated and displaced point clouds; the training, verification, and test sets are divided in a 6:1:3 ratio. After ordering, the training set is used to train the convolutional network shown in fig. 5, and the trained network extracts 2850 (95 × 30) three-dimensional spatial features on the ordered test set. (3) These three-dimensional spatial features are paired with the 2850 single-view features extracted in (1); if the two features of a pair belong to the same category the label is set to 1, otherwise 0, and the labeled positive and negative pairs are used to train the similarity matching model.
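The pairing in step (3) can be sketched as follows; this illustration pairs features exhaustively and labels a pair 1 when the categories match, 0 otherwise:

```python
from itertools import product

def build_pairs(view_feats, spatial_feats):
    """Pair single-view and 3-D spatial features with match labels.

    Each input is a list of (category, feature) tuples; a pair is
    positive (label 1) when both features share a category.
    """
    pairs = []
    for (c1, f1), (c2, f2) in product(view_feats, spatial_feats):
        pairs.append((f1, f2, 1 if c1 == c2 else 0))
    return pairs
```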
The same training method was used in each stage of both experiments: batch_size was set to 50, the Nesterov accelerated gradient algorithm was used, the initial learning rate was set to 10⁻² and reduced to 10⁻³ and then 10⁻⁴ after the loss stabilized, the momentum was set to 0.9, and the dropout rate was set to 0.3.
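One step of the Nesterov accelerated gradient method named above can be sketched as follows; the hyperparameter defaults come from the text, while the function itself is an illustrative scalar/array update rule, not the patent's training code:

```python
def nesterov_step(w, v, grad_fn, lr=1e-2, momentum=0.9):
    """One Nesterov step: evaluate the gradient at the look-ahead point.

    w: parameters, v: velocity, grad_fn: gradient of the loss.
    Returns the updated (w, v).
    """
    lookahead = w + momentum * v          # look ahead along the velocity
    v_new = momentum * v - lr * grad_fn(lookahead)
    return w + v_new, v_new
```

A full training loop would additionally drop the learning rate to 10⁻³ and 10⁻⁴ once the loss stabilizes, as described above.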
The experimental results of experiments 1) and 2) are similar, and both are clearly better than the single-view-based identification method on the classification and retrieval tasks, proving the effectiveness of the method; operating with two-dimensional convolution after ordered coding also greatly reduces computational complexity and increases speed. However, although the three-dimensional spatial information of the object is considered, both experiments lose part of the three-dimensional information during interval division and sampling when ordering the point cloud data by spherical or cylindrical coordinates. For the converted spherical coordinates, interval division cannot sample uniformly: the cells near the two poles of the sphere cover much less surface area than the cells near the equator, so the three-dimensional information of the object expressed by the ordered sparse matrix is distorted. The converted cylindrical coordinates, by contrast, can be sampled uniformly.
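The non-uniformity noted above can be made concrete: on the unit sphere, the area of a 2° × 2° cell is proportional to the integral of sin θ over the cell, so cells near the poles cover far less area than cells near the equator, whereas every (z, φ) cell on a cylinder of fixed radius covers the same area. A quick check of the spherical case:

```python
import numpy as np

def spherical_cell_area(theta_deg, step_deg=2.0):
    """Area of a step x step cell on the unit sphere starting at polar
    angle theta: (cos t0 - cos t1) times the azimuth step in radians."""
    t0 = np.radians(theta_deg)
    t1 = np.radians(theta_deg + step_deg)
    return (np.cos(t0) - np.cos(t1)) * np.radians(step_deg)

# A near-pole cell (theta = 0) is tiny compared with a near-equator
# cell (theta = 88), which quantifies the distortion discussed above.
ratio = spherical_cell_area(88.0) / spherical_cell_area(0.0)
```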
TABLE 1 Experimental results of a three-dimensional object recognition method based on point cloud ordered coding
The foregoing is only a preferred embodiment of the present invention, and the scope of the invention is not limited thereto. Any equivalent replacement or modification that a person skilled in the art can conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A three-dimensional object recognition method based on point cloud ordered coding, characterized by comprising:
carrying out spherical coordinate ordering on the unordered point cloud to obtain a sparse multi-channel array;
extracting the spatial feature information of the sparse multi-channel array by using convolution operation;
and carrying out three-dimensional object identification by using the spatial characteristic information.
2. The method for recognizing three-dimensional objects based on point cloud ordered coding of claim 1, wherein:
the coordinate ordering of the unordered point cloud includes spherical coordinate ordering or cylindrical coordinate ordering.
5. the method for recognizing three-dimensional objects based on point cloud ordered coding of claim 4, wherein:
after the spherical coordinate representation (r, θ, φ) of the point cloud data is obtained, the value range of θ (0° to 180°) and the value range of φ (0° to 360°) are discretized by angle with a step length of 2°, yielding a sequence of 90 θ values and a sequence of 180 φ values; an ordered matrix M is then established with the two sequences as the vertical and horizontal axes, the element M(i, j) of the matrix being the sampling point falling in the corresponding (θ, φ) interval, where 1 ≤ i ≤ 90, 1 ≤ j ≤ 180, and i and j are integers; the newly created matrix M is a sparse four-channel two-dimensional array.
9. the method for recognizing three-dimensional objects based on point cloud ordered coding of claim 1, wherein:
after the cylindrical coordinate representation (ρ, φ, z) of the point cloud data is obtained, the value of z is equally divided into 90 intervals between its minimum and maximum to yield a sequence of 90 z values, and the value range of φ (0° to 360°) is discretized by angle with a step length of 2° to yield a sequence of 180 φ values; an ordered matrix M is then established with the two sequences as the vertical and horizontal axes, the element M(i, j) of the matrix being the sampling point falling in the corresponding (z, φ) interval, where 1 ≤ i ≤ 90, 1 ≤ j ≤ 180, and i and j are integers; the newly created matrix M is a sparse four-channel two-dimensional array.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911068121.9A CN110738194B (en) | 2019-11-05 | 2019-11-05 | Three-dimensional object identification method based on point cloud ordered coding |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110738194A true CN110738194A (en) | 2020-01-31 |
CN110738194B CN110738194B (en) | 2023-04-07 |
Family
ID=69272147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911068121.9A Active CN110738194B (en) | 2019-11-05 | 2019-11-05 | Three-dimensional object identification method based on point cloud ordered coding |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110738194B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112365577A (en) * | 2020-11-09 | 2021-02-12 | 重庆邮电大学 | Mechanical part augmented reality tracking registration method based on convolutional neural network |
WO2022017129A1 (en) * | 2020-07-22 | 2022-01-27 | 上海商汤临港智能科技有限公司 | Target object detection method and apparatus, electronic device, and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063753A (en) * | 2018-07-18 | 2018-12-21 | 北方民族大学 | A kind of three-dimensional point cloud model classification method based on convolutional neural networks |
CN109816714A (en) * | 2019-01-15 | 2019-05-28 | 西北大学 | A kind of point cloud object type recognition methods based on Three dimensional convolution neural network |
CN109886206A (en) * | 2019-02-21 | 2019-06-14 | 电子科技大学中山学院 | Three-dimensional object identification method and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110738194B (en) | 2023-04-07 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||