CN110400349A - Random-forest-based robot navigation tracking recovery method for small scenes - Google Patents
Random-forest-based robot navigation tracking recovery method for small scenes
- Publication number
- CN110400349A (application number CN201910593421.2A)
- Authority
- CN
- China
- Prior art keywords
- random
- training
- pixel
- training features
- layers
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Abstract
The invention discloses a random-forest-based robot navigation tracking recovery method for small scenes, comprising the following steps: (1) select a scene to be tracked and photograph it repeatedly with an RGB-D camera; (2) obtain the transformation matrix of each group of images with a three-dimensional reconstruction algorithm, yielding M transformation matrices; (3) obtain training labels and training features; (4) train a random forest model T with the training labels and training features; (5) if the robot loses tracking, take one shot of the scene with the RGB-D camera; (6) randomly sample several pixels from the RGB image of this group, compute the random feature corresponding to each pixel, feed all random features into T, and output the 3D world coordinates corresponding to all random features. Finally, the robot determines its own pose from the obtained transformation matrix, and navigation tracking recovery is complete.
Description
Technical field
The present invention relates to a method for robot tracking recovery, and more particularly to a random-forest-based robot navigation tracking recovery method for small scenes.
Background art
With the development of robot technology, automatic path-finding and mapping play an important role in robotics. During these tasks a robot needs to know its position relative to the real world and its own attitude, so robot tracking has always been a hot issue in robotics.
With the development of vSLAM (Visual Simultaneous Localization and Mapping) technology, recovering tracking after a tracking failure has become extremely important, and many kinds of tracking recovery methods have been studied. The key to tracking recovery is computing the robot's correct 6D pose after the failure; once the correct 6D pose is obtained, the robot knows its correct pose relative to the real world and can resume tracking.
The local-feature-based method proposed by S. Se, D. G. Lowe et al. first extracts image feature points from the images of known pose and the image of unknown pose, computes their descriptors, and then estimates and optimizes the pose with the P3P and RANSAC algorithms. Its drawback is that it usually uses SIFT or SURF features, whose computation (together with that of their descriptors) is time-consuming, so real-time performance cannot be guaranteed.
The keyframe-based method proposed by G. Klein and D. Murray infers a hypothesis pose by querying the similarity between the known images and keyframes. It is fast, but when the unknown image differs greatly from the known images, the hypothesis pose may deviate considerably from the true pose.
PoseNet, proposed by Alex Kendall, Matthew Grimes and Roberto Cipolla of the University of Cambridge, uses deep learning to predict the image pose directly from a single RGB image, but its high computational complexity, long runtime and lower accuracy cannot satisfy the requirements of real-time tracking recovery.
Summary of the invention
The object of the present invention is to provide a random-forest-based robot navigation tracking recovery method for small scenes that solves the above problems: it computes training features and training labels from RGB-D images, trains a random forest to predict 3D scene coordinates, and then recovers tracking through the coordinate relationship, making the recovery simple, fast and accurate.
To achieve the above object, the technical solution adopted by the present invention is as follows. A random-forest-based robot navigation tracking recovery method for small scenes comprises the following steps:
(1) Select the scene to be tracked and take M random shots of it with an RGB-D camera, obtaining M groups of images; each group comprises one RGB image and one depth map.
(2) Obtain the transformation matrix of each group of images with a three-dimensional reconstruction algorithm, yielding M transformation matrices.
(3) Obtain training labels and training features:
(31) Randomly sample several pixels from the RGB image of the first group and compute the 3D world coordinates and the random feature of each pixel; the 3D world coordinates and random feature of a pixel correspond one-to-one.
(32) Apply the method of step (31) to the remaining M-1 groups of images to obtain randomly sampled pixels together with their corresponding 3D world coordinates and random features; use the 3D world coordinates as the training labels of the model and the random features as the training features of the model.
(4) Train a random forest model T with the training labels and training features:
(41) Sort all training features, randomly select several training features from the sorted list to form a random set, and preset the number of layers using the tree-balance method to A and the depth threshold of the trees to B, with A < B.
(42) Obtain the splitting parameters of the nodes of layers 1 to A with the tree-balance method, and the splitting parameters of the nodes of layers A+1 to B with the minimized-spatial-variance method.
(43) The split nodes of all layers constitute the random forest model T.
(5) The robot starts tracking; if tracking fails, take one shot of the scene with the RGB-D camera, obtaining one group of images comprising one RGB image and one depth map.
(6) Randomly sample several pixels from the RGB image of this group, compute the random feature of each pixel, feed all random features into T, and output the 3D world coordinates corresponding to all random features.
(7) Obtain a predicted transformation matrix from the 3D world coordinates obtained in step (6).
(8) The robot obtains the predicted transformation matrix of step (7) and determines its own pose; navigation tracking recovery is complete.
Preferably, in step (2), the transformation matrix of each group of images is obtained with the KinectFusion three-dimensional reconstruction method.
Preferably, step (31) is specifically:
Choose a pixel and compute its 3D coordinates X with the following formula, then compute the world coordinates W1 of the pixel from X:
X = K⁻¹ · P · D(P);
W1 = H · X;
where K is the intrinsic matrix of the camera, K⁻¹ the inverse of the intrinsic matrix, P the vector of the pixel, D(P) the depth value of the pixel in the depth map, and H the transformation matrix corresponding to the RGB image containing the pixel.
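The back-projection just described (X = K⁻¹·P·D(P), then W1 = H·X) can be sketched as below. The intrinsic values in K and the identity pose H are illustrative placeholders, not values from the patent:

```python
import numpy as np

def pixel_to_world(K, H, px, py, depth):
    """X = K^-1 * P * D(P) back-projects pixel P with depth D(P) into the
    camera frame; W1 = H * X then maps X into world coordinates via the
    group's transformation matrix H (homogeneous coordinates throughout)."""
    P = np.array([px, py, 1.0])              # homogeneous pixel vector
    X = np.linalg.inv(K) @ P * depth         # 3D point in the camera frame
    W1 = H @ np.append(X, 1.0)               # world coordinates
    return W1[:3]

# Illustrative intrinsics for a 640x480 RGB-D image and an identity pose H
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
H = np.eye(4)
w = pixel_to_world(K, H, 320, 240, 2.0)      # principal point at 2 m depth
```

At the principal point the ray is the optical axis, so the world point sits straight ahead at the measured depth.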
The random feature Fp of the pixel is computed with the following formula,
where c1 and c2 denote two randomly chosen channels among the 3 image channels of the RGB image, I(P, c1) denotes the pixel value of channel c1 at vector P, and δ denotes a 2D offset in pixel coordinates.
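The Fp formula image itself is not reproduced in the text, so the sketch below uses the common pixel-comparison form that matches the description of c1, c2 and the 2D offset δ; it is an assumed reconstruction, not necessarily the patent's exact formula:

```python
import numpy as np

def random_feature(rgb, px, py, c1, c2, delta):
    """Pixel-comparison feature: channel c1 at pixel P minus channel c2 at
    the offset pixel P + delta (delta a random 2D offset, c1/c2 two randomly
    chosen RGB channels). Offset coordinates are clamped to the image."""
    h, w, _ = rgb.shape
    qx = int(np.clip(px + delta[0], 0, w - 1))
    qy = int(np.clip(py + delta[1], 0, h - 1))
    return int(rgb[py, px, c1]) - int(rgb[qy, qx, c2])

# Tiny example image: channel 0 at (1, 1) is 100, channel 2 at (2, 1) is 40
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 1, 0] = 100
img[1, 2, 2] = 40
f = random_feature(img, 1, 1, 0, 2, (1, 0))
```

Such features are cheap to evaluate, which is why the method avoids SIFT/SURF descriptor computation altogether.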
Preferably, in step (42):
The tree-balance method obtains the splitting parameters of the nodes of layers 1 to A as follows:
A1. Construct the node of the first layer: for each training feature in the random set, compute its balance parameter Qb with the following formula, and select the training feature with the smallest Qb as the splitting parameter of the first-layer node,
where SL is the number of training features to the left of the current training feature and SR the number of training features to its right.
A2. The training features on either side of the feature used as the splitting parameter of the previous layer's node each form a data set; by the same method as in A1, find the training feature with the smallest Qb in each data set as the splitting parameter of the next-layer node.
A3. Repeat step A2 until the nodes of layer A are found.
The minimized-spatial-variance method obtains the splitting parameters of the nodes of layers A+1 to B as follows:
B1. The training features at both ends of all split nodes of layer A each form a data set; for each training feature in a data set, compute the spatial variance Qv with the following formula, and take the training feature with the smallest Qv in that data set as the splitting parameter of the layer-A+1 node,
where n denotes the node index of the tree, m the computed training label, S the set of randomly selected labelled pixels (p, m), Sn the full set at node n, L the left subtree, R the right subtree, SnL or SnR the set of the left or right subtree, and m̄ the mean of m over S.
B2. By the method of B1, compute the splitting parameters of the nodes of layers A+2 to B.
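The Qv formula image is likewise missing; the surrounding text (left/right subtree sets, mean m̄ of the labels m) describes a summed spatial variance over the two child sets, which this sketch follows using 1D feature thresholds and 3D coordinate labels:

```python
import numpy as np

def spatial_variance(m):
    """Sum of squared distances of 3D labels m to their mean (the m-bar term)."""
    m = np.asarray(m, dtype=float)
    return float(np.sum((m - m.mean(axis=0)) ** 2))

def variance_split(features, labels):
    """Choose the split whose left/right partition minimises
    Qv = V(left) + V(right) over the 3D scene-coordinate labels."""
    order = np.argsort(features)
    f = np.asarray(features)[order]
    m = np.asarray(labels, dtype=float)[order]
    best_qv, best_f = None, None
    for i in range(1, len(f)):                   # split between i-1 and i
        qv = spatial_variance(m[:i]) + spatial_variance(m[i:])
        if best_qv is None or qv < best_qv:
            best_qv, best_f = qv, float(f[i])
    return best_f

# Two tight 3D clusters: the best split separates them exactly (Qv = 0)
t = variance_split([0.0, 1.0, 10.0, 11.0],
                   [[0, 0, 0], [0, 0, 0], [5, 5, 5], [5, 5, 5]])
```

Minimising Qv drives each leaf toward a compact cluster of scene coordinates, which is what makes the leaf predictions usable as 3D world coordinates.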
Preferably, in step (7), the predicted transformation matrix is obtained specifically as follows: compute the pose matrix from the 3D world coordinates with the Kabsch method, optimize the pose matrix with the RANSAC algorithm, and finally obtain the predicted transformation matrix.
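Step (7) — Kabsch for the rigid pose, RANSAC for robustness — can be sketched as below; the iteration count and inlier threshold are illustrative choices, not values from the patent:

```python
import numpy as np

def kabsch(A, B):
    """Rigid transform (R, t) aligning point set A onto B via the Kabsch
    algorithm: SVD of the cross-covariance of the centred point sets."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def ransac_pose(src, dst, iters=100, thresh=0.05, seed=0):
    """Fit poses with Kabsch on random 3-point samples, keep the sample with
    the most inliers, then refit on all inliers (hyperparameters illustrative)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        inl = np.linalg.norm(src @ R.T + t - dst, axis=1) < thresh
        if best is None or inl.sum() > best.sum():
            best = inl
    return kabsch(src[best], dst[best])

# Demo: recover a known 90-degree rotation about z plus a translation
rng = np.random.default_rng(1)
src = rng.standard_normal((10, 3))
R0 = np.array([[0.0, -1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
t0 = np.array([1.0, 2.0, 3.0])
dst = src @ R0.T + t0
R, t = ransac_pose(src, dst)
```

With exact correspondences every point is an inlier and the recovered (R, t) matches the ground truth.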
Compared with the prior art, the advantages of the present invention are as follows: it needs neither feature-point and descriptor computation nor image matching; only a random forest model needs to be trained in advance. Moreover, the way the random forest model is constructed in the present invention is special: the tree-balance method and the minimized-spatial-variance method are used jointly, and in the tree-balance method Qb is obtained with our own formula and method. As a result the method is faster in actual use and, compared with deep-learning methods, more accurate, with shorter training and testing times.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 compares the ground-truth and estimated angles in Embodiment 2;
Fig. 3 compares the ground-truth and estimated displacements in Embodiment 2;
Fig. 4 shows the errors between the ground-truth and estimated angles of Fig. 2;
Fig. 5 shows the errors between the ground-truth and estimated displacements of Fig. 3.
Detailed description of the embodiments
The present invention will be further described below with reference to the accompanying drawings.
Embodiment 1: referring to Fig. 1, a random-forest-based robot navigation tracking recovery method for small scenes comprises the following steps:
(1) Select the scene to be tracked and take M random shots of it with an RGB-D camera, obtaining M groups of images; each group comprises one RGB image and one depth map.
(2) Obtain the transformation matrix of each group of images with a three-dimensional reconstruction algorithm, yielding M transformation matrices.
(3) Obtain training labels and training features:
(31) Randomly sample several pixels from the RGB image of the first group and compute the 3D world coordinates and the random feature of each pixel; the 3D world coordinates and random feature of a pixel correspond one-to-one.
(32) Apply the method of step (31) to the remaining M-1 groups of images to obtain randomly sampled pixels together with their corresponding 3D world coordinates and random features; use the 3D world coordinates as the training labels of the model and the random features as the training features of the model.
(4) Train a random forest model T with the training labels and training features:
(41) Sort all training features, randomly select several training features from the sorted list to form a random set, and preset the number of layers using the tree-balance method to A and the depth threshold of the trees to B, with A < B.
(42) Obtain the splitting parameters of the nodes of layers 1 to A with the tree-balance method, and the splitting parameters of the nodes of layers A+1 to B with the minimized-spatial-variance method.
(43) The split nodes of all layers constitute the random forest model T.
(5) The robot starts tracking; if tracking fails, take one shot of the scene with the RGB-D camera, obtaining one group of images comprising one RGB image and one depth map.
(6) Randomly sample several pixels from the RGB image of this group, compute the random feature of each pixel, feed all random features into T, and output the 3D world coordinates corresponding to all random features.
(7) Obtain a predicted transformation matrix from the 3D world coordinates obtained in step (6).
(8) The robot obtains the predicted transformation matrix of step (7) and determines its own pose; navigation tracking recovery is complete.
In the present embodiment, in step (2), the transformation matrix of each group of images is obtained with the KinectFusion three-dimensional reconstruction method.
Step (31) is specifically:
Choose a pixel and compute its 3D coordinates X with the following formula, then compute the world coordinates W1 of the pixel from X:
X = K⁻¹ · P · D(P);
W1 = H · X;
where K is the intrinsic matrix of the camera, K⁻¹ the inverse of the intrinsic matrix, P the vector of the pixel, D(P) the depth value of the pixel in the depth map, and H the transformation matrix corresponding to the RGB image containing the pixel.
The random feature Fp of the pixel is computed with the following formula:
where c1 and c2 denote two randomly chosen channels among the 3 image channels of the RGB image, I(P, c1) denotes the pixel value of channel c1 at vector P, and δ denotes a 2D offset in pixel coordinates.
In step (42):
The tree-balance method obtains the splitting parameters of the nodes of layers 1 to A as follows:
A1. Construct the node of the first layer: for each training feature in the random set, compute its balance parameter Qb with the following formula, and select the training feature with the smallest Qb as the splitting parameter of the first-layer node,
where SL is the number of training features to the left of the current training feature and SR the number of training features to its right.
A2. The training features on either side of the feature used as the splitting parameter of the previous layer's node each form a data set; by the same method as in A1, find the training feature with the smallest Qb in each data set as the splitting parameter of the next-layer node.
A3. Repeat step A2 until the nodes of layer A are found.
The minimized-spatial-variance method obtains the splitting parameters of the nodes of layers A+1 to B as follows:
B1. The training features at both ends of all split nodes of layer A each form a data set; for each training feature in a data set, compute the spatial variance Qv with the following formula, and take the training feature with the smallest Qv in that data set as the splitting parameter of the layer-A+1 node,
where n denotes the node index of the tree, m the computed training label, S the set of randomly selected labelled pixels (p, m), Sn the full set at node n, L the left subtree, R the right subtree, SnL or SnR the set of the left or right subtree, and m̄ the mean of m over S.
B2. By the method of B1, compute the splitting parameters of the nodes of layers A+2 to B.
In step (7), the predicted transformation matrix is obtained specifically by: computing the pose matrix from the 3D world coordinates with the Kabsch method, optimizing the pose matrix with the RANSAC algorithm, and finally obtaining the predicted transformation matrix.
In the present invention, all coordinates use the notation of homogeneous coordinates. Moreover, this embodiment does not compute features for every pixel; only the random features of the randomly selected pixels are computed, and these computed random features are paired one-to-one with the training labels for training the random forest.
This embodiment introduces a balance parameter Qb. Its advantage is that whether the left and right subtrees are balanced can be determined without actually counting the nodes of the two subtrees, which reduces the time complexity of the computation and shortens the training time. Moreover, when the left and right subtrees are roughly balanced, over-fitting or under-fitting of part of the data caused by an excessive depth difference between the subtrees is avoided. Finally, the random feature with the smallest computed value is selected as the splitting parameter of the node split. It is worth noting that the node counts of both the left and right subtrees should be greater than a preset minimum.
In this embodiment, the minimized-spatial-variance method is used for layers A+1 to B: when the layer number exceeds A, we obtain the splitting parameter with this method. Generating the entire random forest by minimizing the spatial variance is the traditional way of building random forests; the present invention improves on this by using it only for layers A+1 to B, the tree-balance method being used for the remaining layers.
In this embodiment, to better illustrate the tree-balance method, we give the following example:
Suppose that in step (41) many training features are obtained, 10 of which are randomly selected and sorted, and the preset number of tree-balance layers is 2; then for layers up to 2 the tree-balance method is used, and the remaining layers use the minimized-spatial-variance method. The training features are listed in Table 1 below:
Table 1: the 10 randomly selected training features after sorting
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
(1) Applying the Qb formula to the 10 training features, feature 6 is found to have the smallest Qb, so 6 is taken as the splitting parameter; the data set on the left is {1, 2, 3, 4, 5} and the data set on the right is {7, 8, 9, 10}; the left data set goes to the left subtree and the right one to the right subtree. Because the first layer has only one node, the first layer is now built.
(2) Build the second layer: the layer number is still within 2, so the left and right subtrees each apply the same operation to the feature set handed down from the previous layer and select their own nodes. Suppose the left subtree selects 3 and the right subtree selects 8; this yields 4 sets, which are handed to the left and right subtrees of the respective nodes, and the third layer is then built.
(3) Third layer: the layer number exceeds 2, so Qv is found with the minimized-spatial-variance method, and the training feature with the smallest Qv becomes the splitting parameter.
This is repeated until the depth threshold B is reached.
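The worked example above can be reproduced with a toy build routine. The balance rule is assumed to be "most balanced split" (the Qb formula is not reproduced in the text), and the variance stage is sketched as a 1D sum of squares on the feature values themselves, standing in for the full 3D spatial variance:

```python
def sse(xs):
    """Sum of squared deviations from the mean (a 1D spatial variance)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def build_layers(features, depth=1, A=2, B=3):
    """Toy hybrid build: layers <= A pick the most balanced split (standing
    in for the Qb rule), deeper layers pick the variance-minimising split."""
    if depth > B or len(features) < 3:
        return features                          # leaf: remaining features
    candidates = range(1, len(features) - 1)
    if depth <= A:                               # tree-balance stage
        i = min(candidates, key=lambda k: abs(k - (len(features) - 1 - k)))
    else:                                        # minimized-variance stage
        i = min(candidates,
                key=lambda k: sse(features[:k]) + sse(features[k + 1:]))
    return {"split": features[i],
            "left": build_layers(features[:i], depth + 1, A, B),
            "right": build_layers(features[i + 1:], depth + 1, A, B)}

tree = build_layers(list(range(1, 11)))          # the 10 sorted features of Table 1
```

In this toy run the balanced rule picks near-median splits layer by layer; the subsets become leaves before the variance stage triggers, but the branch is in place for deeper trees.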
Embodiment 2: referring to Figs. 1 to 5, the random-forest-based robot navigation tracking recovery method for small scenes of this embodiment comprises the following steps:
(1) Select the scene to be tracked and take M random shots of it with an RGB-D camera, obtaining M groups of images, each comprising one RGB image and one depth map. This embodiment directly adopts the indoor image set kitchen of the public 4-Scenes data set of Stanford University; the image size is 640*480, with 550 groups of RGB-D images as training pictures and 520 groups as test pictures; the indoor image set kitchen includes transformation-matrix information, which we use as ground truth.
(2)(3)(4) are the same as in Embodiment 1, finally obtaining a random forest model T.
(5) The robot starts tracking; if tracking fails, one shot of the scene is taken with the RGB-D camera, obtaining one group of images comprising one RGB image and one depth map. Here, to demonstrate the effect of this embodiment, we adopt the following means:
randomly select multiple test pictures from the indoor image set kitchen and feed them into the random forest model T obtained in step (4) for testing, outputting the predicted transformation matrix of each group of pictures as the estimate of the image transformation matrix.
Analysis of experimental results:
For ease of comparison, the transformation matrix is decomposed into six values: pitch angle, yaw angle, roll angle, and the X, Y and Z directions. Fig. 2 compares the ground-truth and estimated angles computed from the 520 groups of pictures in Embodiment 2 of the present invention, and Fig. 3 compares the ground-truth and estimated displacements. Since the similarity is especially high, we supplement Figs. 4 and 5, where Fig. 4 shows the angle errors of Fig. 2 and Fig. 5 the displacement errors of Fig. 3.
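Decomposing the estimated 4x4 transformation into the six values compared in the figures can be sketched as follows; the ZYX Euler convention is an assumption, since the patent does not state which convention it uses:

```python
import numpy as np

def decompose(T):
    """Split a 4x4 homogeneous transform into (pitch, yaw, roll) in degrees
    and the (X, Y, Z) translation, using the ZYX Euler convention
    (an assumed choice, not stated in the patent)."""
    R, t = T[:3, :3], T[:3, 3]
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    pitch = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return (pitch, yaw, roll), tuple(t)

# Demo: a pose rotated 30 degrees about z and shifted by (1, 2, 3)
a = np.radians(30.0)
T = np.eye(4)
T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
T[:3, 3] = [1.0, 2.0, 3.0]
(pitch, yaw, roll), xyz = decompose(T)
```

Comparing each of the six values separately, as the figures do, makes small angular and displacement errors visible even when the full matrices look nearly identical.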
As can be seen from Figs. 2 to 5, the estimate curves coincide closely with the ground-truth curves, showing that the estimated poses are highly similar to the true poses. The angle errors mostly lie within -5° to +5° and the displacement errors within -0.05 m to +0.05 m; the estimates differ little from the ground truth and the error range is small, indicating that the algorithm is robust and its output stable, and that recovering tracking with these estimates gives good results.
Embodiment 3: compared with other classical algorithms, the method of the present invention shows a significant application effect.
Referring to Table 2, with a 5° angle threshold and a 5 cm displacement threshold, the test results of this method reach an accuracy of 92.7%, a significant effect.
Table 2: Comparison of results of different methods
In Table 2, the keyframe-based method uses global keyframe matching, and the local-feature-based method uses the two image features ORB and SIFT respectively; the deep-learning method has angle errors greater than 5° and displacement errors greater than 50 cm, so it is not listed in the table.
The foregoing is merely a description of the preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall all fall within its protection scope.
Claims (5)
1. A random-forest-based robot navigation tracking recovery method for small scenes, characterized by comprising the following steps:
(1) selecting the scene to be tracked and taking M random shots of it with an RGB-D camera to obtain M groups of images, each group comprising one RGB image and one depth map;
(2) obtaining the transformation matrix of each group of images with a three-dimensional reconstruction algorithm, yielding M transformation matrices;
(3) obtaining training labels and training features:
(31) randomly sampling several pixels from the RGB image of the first group and computing the 3D world coordinates and the random feature of each pixel, the 3D world coordinates and random feature of a pixel corresponding one-to-one;
(32) applying the method of step (31) to the remaining M-1 groups of images to obtain randomly sampled pixels together with their corresponding 3D world coordinates and random features, and using the 3D world coordinates as the training labels of the model and the random features as the training features of the model;
(4) training a random forest model T with the training labels and training features:
(41) sorting all training features, randomly selecting several training features from the sorted list to form a random set, and presetting the number of layers using the tree-balance method to A and the depth threshold of the trees to B, with A < B;
(42) obtaining the splitting parameters of the nodes of layers 1 to A with the tree-balance method and the splitting parameters of the nodes of layers A+1 to B with the minimized-spatial-variance method;
(43) the split nodes of all layers constituting the random forest model T;
(5) the robot starting to track and, if tracking fails, taking one shot of the scene with the RGB-D camera to obtain one group of images comprising one RGB image and one depth map;
(6) randomly sampling several pixels from the RGB image of this group, computing the random feature of each pixel, feeding all random features into T, and outputting the 3D world coordinates corresponding to all random features;
(7) obtaining a predicted transformation matrix from the 3D world coordinates obtained in step (6);
(8) the robot obtaining the predicted transformation matrix of step (7) and determining its own pose, whereupon navigation tracking recovery is complete.
2. The random-forest-based robot navigation tracking recovery method for small scenes according to claim 1, characterized in that in step (2) the transformation matrix of each group of images is obtained with the KinectFusion three-dimensional reconstruction method.
3. The random-forest-based robot navigation tracking recovery method for small scenes according to claim 1, characterized in that step (31) is specifically:
choosing a pixel and computing its 3D coordinates X with the following formula, then computing the world coordinates W1 of the pixel from X:
X = K⁻¹ · P · D(P);
W1 = H · X;
where K is the intrinsic matrix of the camera, K⁻¹ the inverse of the intrinsic matrix, P the vector of the pixel, D(P) the depth value of the pixel in the depth map, and H the transformation matrix corresponding to the RGB image containing the pixel;
and computing the random feature Fp of the pixel with the following formula,
where c1 and c2 denote two randomly chosen channels among the 3 image channels of the RGB image, I(P, c1) denotes the pixel value of channel c1 at vector P, and δ denotes a 2D offset in pixel coordinates.
4. robot navigation tracks restoration methods, feature under the small scene according to claim 1 based on random forest
It is: in the step (42):
the tree-balancing method obtains the splitting parameters of the nodes of layers 1 to A as follows:
A1. constructing the node of the first layer: computing, for each training feature in the random set, its balance parameter Qb using the following formula, and selecting the training feature with the smallest Qb as the splitting parameter of the first-layer node;
Qb = |SL − SR|;
where SL is the number of training features to the left of the current training feature, and SR is the number of training features to the right of the current training feature;
A2. forming a data set from the training features on each of the two sides of the training feature serving as the splitting parameter of the previous layer's node, and finding in each data set, by the same method as in A1, the training feature with the smallest Qb as the splitting parameter of the next-layer node;
A3. repeating step A2 until the nodes of layer A are found;
the minimum-spatial-variance method obtains the splitting parameters of the nodes of layers A+1 to B as follows:
B1. forming a data set from the training features at each of the two ends of every split node of layer A; computing, for each training feature in a data set, the spatial variance Qv using the following formula, and selecting the training feature with the smallest Qv in the data set as the splitting parameter of the corresponding layer A+1 node;
Qv = Σ_{d∈{L,R}} (|Sn^d| / |Sn|) · V(Sn^d),
where V(S) = (1/|S|) · Σ_{(p,m)∈S} ||m − m̄||²;
where n denotes the node index of the tree, m denotes the computed training label, S denotes the set of randomly selected labeled pixels (p, m), Sn denotes the full set at node n, L denotes the left subtree, R denotes the right subtree, Sn^d denotes the set of the left or right subtree, and m̄ denotes the mean of m over S;
B2. computing the splitting parameters of the nodes of layers A+2 to B by the method of B1.
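The two split criteria of claim 4 can be sketched as score functions in NumPy. The exact Qb and Qv formulas are not reproduced in the source, so the balance score below assumes Qb = |SL − SR| and the variance score assumes the size-weighted sum of child spatial variances:

```python
import numpy as np

def balance_score(sorted_features, i):
    """Assumed Qb = |SL - SR|: how evenly a split at feature index i
    divides the (sorted) training-feature set."""
    return abs(i - (len(sorted_features) - 1 - i))

def spatial_variance(labels):
    """V(S): mean squared distance of the 3D labels m from their mean."""
    labels = np.asarray(labels, dtype=float)
    m_bar = labels.mean(axis=0)
    return float(np.mean(np.sum((labels - m_bar) ** 2, axis=1)))

def variance_score(left_labels, right_labels):
    """Assumed Qv: size-weighted sum of the spatial variances of the two
    child sets produced by a candidate split."""
    n = len(left_labels) + len(right_labels)
    q = 0.0
    for part in (left_labels, right_labels):
        if len(part):
            q += len(part) / n * spatial_variance(part)
    return q
```

A node in layers 1 to A would pick the candidate minimizing `balance_score`; a node in layers A+1 to B would pick the one minimizing `variance_score`.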
5. The robot navigation tracking recovery method in a small scene based on random forest according to claim 1, characterized in that, in step (7), the predicted transformation matrix is specifically obtained by: computing the pose matrix from the 3D world coordinates with the Kabsch method, and optimizing the pose matrix with the RANSAC algorithm to obtain the final predicted transformation matrix.
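The rigid alignment of claim 5 can be sketched with the standard Kabsch algorithm (SVD of the cross-covariance of the centered point sets); the surrounding RANSAC loop, which would repeatedly run this on random correspondence subsets and keep the inlier-maximizing pose, is omitted for brevity:

```python
import numpy as np

def kabsch(A, B):
    """Kabsch: optimal rotation R and translation t aligning point set A onto B,
    where rows of A and B are corresponding 3D points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    Hm = (A - ca).T @ (B - cb)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(Hm)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # correct for reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```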
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910593421.2A CN110400349B (en) | 2019-07-03 | 2019-07-03 | Robot navigation tracking recovery method in small scene based on random forest |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110400349A true CN110400349A (en) | 2019-11-01 |
CN110400349B CN110400349B (en) | 2022-04-15 |
Family
ID=68322729
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910593421.2A Expired - Fee Related CN110400349B (en) | 2019-07-03 | 2019-07-03 | Robot navigation tracking recovery method in small scene based on random forest |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110400349B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113255490A (en) * | 2021-05-15 | 2021-08-13 | 成都理工大学 | Unsupervised pedestrian re-identification method based on k-means clustering and merging |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140241617A1 (en) * | 2013-02-22 | 2014-08-28 | Microsoft Corporation | Camera/object pose from predicted coordinates |
US20150248765A1 (en) * | 2014-02-28 | 2015-09-03 | Microsoft Corporation | Depth sensing using an rgb camera |
US20150347846A1 (en) * | 2014-06-02 | 2015-12-03 | Microsoft Corporation | Tracking using sensor data |
CN108898623A (en) * | 2018-05-24 | 2018-11-27 | 北京飞搜科技有限公司 | Method for tracking target and equipment |
Non-Patent Citations (2)
Title |
---|
LIU YUANYUAN ET AL.: "Head Pose Estimation Based on Tree-Structured Hierarchical Random Forests in Unconstrained Environments", Journal of Electronics & Information Technology *
MA JUANJUAN ET AL.: "Object Detection Based on Depth-First Random Forest Classifier", Journal of Chinese Inertial Technology *
Also Published As
Publication number | Publication date |
---|---|
CN110400349B (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109631855B (en) | ORB-SLAM-based high-precision vehicle positioning method | |
CN114782691B (en) | Robot target identification and motion detection method based on deep learning, storage medium and equipment | |
CN110659727B (en) | Sketch-based image generation method | |
CN109658445A (en) | Network training method, increment build drawing method, localization method, device and equipment | |
CN108648274B (en) | Cognitive point cloud map creating system of visual SLAM | |
CN113205595B (en) | Construction method and application of 3D human body posture estimation model | |
CN110223382B (en) | Single-frame image free viewpoint three-dimensional model reconstruction method based on deep learning | |
CN108171249B (en) | RGBD data-based local descriptor learning method | |
Dudek et al. | Vision-based robot localization without explicit object models | |
CN109389156B (en) | Training method and device of image positioning model and image positioning method | |
KR102608473B1 (en) | Method and apparatus for aligning 3d model | |
CN111881716A (en) | Pedestrian re-identification method based on multi-view-angle generation countermeasure network | |
CN110705566A (en) | Multi-mode fusion significance detection method based on spatial pyramid pool | |
CN111507184B (en) | Human body posture detection method based on parallel cavity convolution and body structure constraint | |
CN109977827A (en) | A kind of more people's 3 d pose estimation methods using multi-view matching method | |
CN113673354A (en) | Human body key point detection method based on context information and combined embedding | |
KR20140143310A (en) | Estimator learning method and pose estimation mehtod using a depth image | |
CN114663880A (en) | Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism | |
CN114723784A (en) | Pedestrian motion trajectory prediction method based on domain adaptation technology | |
CN110400349A (en) | Robot navigation tracks restoration methods under small scene based on random forest | |
KR102186764B1 (en) | Apparatus and method for estimating optical flow and disparity via cycle consistency | |
CN112396167A (en) | Loop detection method for fusing appearance similarity and spatial position information | |
CN116797830A (en) | Image risk classification method and device based on YOLOv7 | |
CN106408654A (en) | Three-dimensional map creation method and system | |
CN116612513A (en) | Head posture estimation method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20220415 |