The specific embodiments are as follows:
the present invention will be described in further detail with reference to the drawings and the technical scheme, in order to make the objects, technical schemes and advantages of the present invention more apparent.
At present, deep learning unsupervised point cloud registration methods still use point correspondence as the constraint in the loss function; when noise is present and the point clouds do not correspond one to one, this constraint reduces registration accuracy. Referring to fig. 1, an embodiment of the present invention provides an unsupervised deep learning point cloud registration method, including:
S101, constructing and training a point cloud registration network, wherein the point cloud registration network comprises: a twin network for extracting features of the clean source point cloud and the clean target point cloud; a similarity matrix convolution for iteratively updating and correcting the clean source point cloud by tensor concatenation of the clean target point cloud features and the clean source point cloud features; a potential surface prediction network for predicting the point cloud noise of the clean target point cloud and of the updated and corrected clean source point cloud; and a point cloud registration output that obtains the spatial position of each point from the noise prediction result in a noise-reduction manner and places the source point cloud and the target point cloud in one-to-one correspondence according to those positions;
S102, collecting a target point cloud and a source point cloud of an object to be measured, inputting them into the trained point cloud registration network, and realizing point cloud registration of the target point cloud and the source point cloud through the network.
In the case of one-to-one point correspondence, a point p in the source point cloud P and a point q in the target point cloud Q are corresponding points, and p becomes p_Rt after rigid transformation by the rotation matrix R and the translation vector t. The calculation process is shown in formula (1).
p_Rt = R·p + t    (1)
In theory, when the R and t parameters are computed accurately, the coordinates of the transformed source point p_Rt coincide exactly with the target point q.
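As a minimal illustration of formula (1), a NumPy sketch (not part of the claimed network) applies the rigid transformation to every point of a cloud at once:

```python
import numpy as np

def rigid_transform(P, R, t):
    """Apply formula (1), p_Rt = R p + t, to every row (point) of P."""
    return P @ R.T + t

# Example: a 90-degree rotation about the z-axis plus a unit translation in x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
p = np.array([[1.0, 0.0, 0.0]])
print(rigid_transform(p, R, t))  # [[1. 1. 0.]]
```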
In practice the source point cloud P is affected by noise; the noisy source point cloud is denoted P′, and its transformed version is denoted P′_Rt. Similarly, the target point cloud Q affected by noise is denoted Q′. If the noise intensity at point p is n_p, the actual position p′_Rt can be calculated as shown in formula (2).
p′_Rt = R(p + n_p) + t    (2)
The actual position of point q is likewise affected by a similar noise n_q; the affected position is denoted q′, and its relationship to q is given by formula (3).
q′ = q + n_q    (3)
Previous unsupervised losses take reducing the distance between p′ and q′ as the constraint, i.e. they minimize the distance between the whole of P′_Rt and Q′. The influence of noise is not considered, resulting in a decrease in registration accuracy.
In a real scene, because the sensors observe from different positions and angles, the source point cloud P and the target point cloud Q are not in one-to-one correspondence, so nearest-neighbor search is generally used to pair the point clouds for registration. The approximate corresponding point q is obtained by transforming p and searching for its nearest neighbor, as shown in formulas (4) and (5).
p_Rt = R·p + t    (4)
q = KNN(p_Rt, Q)    (5)
However, this pairing introduces an additional error d, which is calculated as shown in formula (6).
d = q − (R·p + t)    (6)
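Formulas (4) to (6) can be sketched together in NumPy; the brute-force nearest-neighbor search below is an illustrative stand-in for the KNN step, not the embodiment's implementation:

```python
import numpy as np

def nn_correspondence(P, Q, R, t):
    """Formulas (4)-(6): transform P with (R, t), pair each transformed
    point with its nearest neighbour in Q, and return the pairing error d."""
    P_rt = P @ R.T + t                                   # formula (4)
    d2 = ((P_rt[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)                              # formula (5), k=1 KNN
    d = Q[idx] - P_rt                                    # formula (6)
    return idx, d

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.1, 0.0, 0.0], [1.0, 0.0, 0.0]])         # Q is P with one point shifted
idx, d = nn_correspondence(P, Q, np.eye(3), np.zeros(3))
print(d[0])  # [0.1 0.  0. ]  the shifted point carries a residual error d
```

Even with a perfect (R, t) (here the identity), the shifted point produces a nonzero d, which is exactly the extra error the text describes.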
Due to the introduction of the error d, the registration loss of the unsupervised method no longer reflects the true relationship between the corrected point p_Rt and q, resulting in reduced registration accuracy.
This case of noise and imperfect point correspondence can be generalized to the problem of inconsistent descriptions of the two point clouds. Point cloud registration is generally constrained with point pairs; however, when the coordinates of the two point clouds are not consistent, forced pairing of points reduces registration accuracy. In fig. 2, the coordinates of the source point cloud a and the target point cloud a' do not completely correspond, and if point pairs are adopted as the constraint, registration accuracy is reduced.
However, every point in a point cloud lies on an underlying surface; although this surface is not represented in the data, it can be predicted computationally and is called the potential surface, as shown in fig. 3. Fig. 4 shows the relationship between the predicted potential surface and the real surface, where black represents the real surface and grey the predicted potential surface.
Therefore, to improve the accuracy of deep learning unsupervised point cloud registration, one should start from this description-consistency problem. Inspired by the idea of predicting a potential surface for point cloud noise reduction, in the embodiment of the present disclosure the potential surface of the point cloud is predicted in a noise-reduction manner to describe the position of any point of the point cloud in space, realizing one-to-one correspondence between the source and target point clouds. In addition, introducing the potential surface prediction network and its loss function alleviates noise to a certain extent during prediction, further improving registration accuracy.
As a preferred embodiment, the twin network adopts a GNN structure to extract features from the source and target point clouds, uses a multi-layer perceptron to obtain feature saliency scores, ranks the saliency scores of all points in each cloud, and selects the points whose scores meet a preset threshold as the clean source point cloud and the clean target point cloud. In the similarity matrix convolution, the source and target point cloud features are combined and jointly encoded along the tensor feature dimension to obtain the distance-enhanced features of the two clouds; the similarity score of each point is extracted and fused with the distance-enhanced features, a softmax function converts the similarity matrix values into probability values, and an argmax function finds the best correspondence between the source and the target point cloud.
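The softmax-then-argmax step at the end of the similarity matrix convolution can be sketched as follows; this is a generic NumPy illustration of those two standard operations, not the embodiment's network code:

```python
import numpy as np

def best_correspondence(sim):
    """Row-wise softmax turns similarity values into probabilities;
    argmax then picks the most likely target index per source point."""
    e = np.exp(sim - sim.max(axis=1, keepdims=True))    # numerically stable softmax
    prob = e / e.sum(axis=1, keepdims=True)
    return prob, prob.argmax(axis=1)

sim = np.array([[0.1, 2.0, 0.3],
                [1.5, 0.2, 0.1]])
prob, idx = best_correspondence(sim)
print(idx)  # [1 0]
```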
In the point cloud registration network shown in fig. 5: first, feature extraction is performed on the point clouds (N×3) to form point cloud features (N×K); second, difficulty points are removed on the basis of the extracted features, eliminating obviously erroneous points and forming a clean point cloud (M×3); third, the clean point cloud pairwise features (M×M×4) are calculated and concatenated along the feature dimension to form a spliced feature tensor (M×M×(2K+4)); then one-dimensional convolution fuses information across the feature dimension, producing fused features (M×M×32); the similarity score (M×M) is computed by one-dimensional convolution while a weight matrix is computed through hybrid elimination, and the transformation matrices R and t are obtained by SVD decomposition; finally, the clean source point cloud is corrected according to R and t and the process iterates until finished, after which the potential surface prediction network makes predictions for the source and target point clouds, both are transformed with the generated R and t, and the surface information is used as the constraint.
For the feature extraction of the twin network, the GNN has strong feature extraction capability, high accuracy and relatively fast computation, so it is selected as the feature extraction network.
In the similarity matrix convolution, the correspondence between the source and target point clouds must be found when solving the rigid transformation matrices R and t. The current mainstream approach uses the inner product of point features or the L2 distance as the similarity measure and seeks the optimal solution through repeated training feedback. However, this approach has two disadvantages. On the one hand, a source point may have multiple candidate correspondences in the target point cloud, and the randomness of the registration process can cause misregistration, so a single registration pass is not ideal; registration is therefore performed step by step in an iterative manner. On the other hand, two points alone have limited power to evaluate similarity, so the point clouds are evaluated through a group of feature combinations.
To address these two problems, corresponding points are found with the distance-aware similarity matrix convolution. Suppose the geometric feature f_s extracted from point p_i of the source point cloud and the geometric feature f_t extracted from point q_j of the target point cloud are both K-dimensional. In the nth iteration, the feature tensor value for point p_i of the source point cloud and point q_j of the target point cloud is described by formula (7).
T^(n)(i, j) = [f_s(i); f_t(j); ||p_i − q_j||; (p_i − q_j)/||p_i − q_j||]    (7)
In the formula, [·; …] denotes concatenation along the tensor feature dimension, i.e. joint encoding of the features; f_s(i) represents the source point cloud feature at point i; f_t(j) represents the target point cloud feature at point j; ||p_i − q_j|| represents the Euclidean distance between p_i and q_j; (p_i − q_j)/||p_i − q_j|| represents the unit vector pointing from q_j to p_i.
T^(n) is named the distance enhancement feature; its final combined dimension is 2K+4. To fuse the features further, a similarity score for each point is extracted; after the tensor dimensions are spliced, one-dimensional convolution fuses the distance enhancement features along the feature dimension, and a softmax function then converts the similarity matrix values into probability values.
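Construction of the (M, M, 2K+4) distance enhancement tensor of formula (7) can be sketched in NumPy; the function name and the random test data are illustrative only:

```python
import numpy as np

def distance_enhanced_features(fs, ft, P, Q):
    """Build the (M, M, 2K+4) tensor of formula (7): for every pair (i, j),
    concatenate f_s(i), f_t(j), the Euclidean distance ||p_i - q_j|| and
    the unit vector pointing from q_j to p_i."""
    M, K = fs.shape
    diff = P[:, None, :] - Q[None, :, :]                 # p_i - q_j
    dist = np.linalg.norm(diff, axis=-1, keepdims=True)  # ||p_i - q_j||
    unit = diff / np.maximum(dist, 1e-12)                # (p_i - q_j)/||p_i - q_j||
    return np.concatenate([
        np.broadcast_to(fs[:, None, :], (M, M, K)),      # f_s(i) repeated over j
        np.broadcast_to(ft[None, :, :], (M, M, K)),      # f_t(j) repeated over i
        dist,
        unit,
    ], axis=-1)

rng = np.random.default_rng(0)
T = distance_enhanced_features(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)),
                               rng.normal(size=(5, 3)), rng.normal(size=(5, 3)))
print(T.shape)  # (5, 5, 20): M x M x (2K + 4) with K = 8
```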
The entry in row i, column j of the fused distance feature tensor T can be understood as the probability that source point p_i corresponds to target point q_j, and the optimal correspondence can be found with the argmax function. Finally, the optimization process transforms as shown in equation (8).
(R^(n), t^(n)) = argmin_{R,t} Σ_i ||R·p_i + t − q_{j(i)}||²    (8)
where j(i) = argmax_j T(i, j).
This is the classical absolute orientation problem, which can be solved effectively with singular value decomposition (SVD); the decomposed R^(n) and t^(n) are then used to update the source point cloud positions before entering the next iteration.
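The weighted SVD solve of the absolute orientation problem can be sketched as below (the Kabsch closed form under per-pair weights; in the embodiment the weights would come from hybrid elimination, here they are uniform for illustration):

```python
import numpy as np

def weighted_svd(P, Q, w):
    """Solve min_{R,t} sum_i w_i ||R p_i + t - q_i||^2 in closed form."""
    w = w / w.sum()
    mp, mq = w @ P, w @ Q                                # weighted centroids
    H = (P - mp).T @ (w[:, None] * (Q - mq))             # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = mq - R @ mp
    return R, t

# Recover a known transform from exact correspondences.
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T + t_true
R_est, t_est = weighted_svd(P, Q, np.ones(len(P)))
```

With noise-free correspondences the solve recovers R and t exactly, matching the theoretical claim around formula (1).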
Although the similarity matrix has a strong registration function, the distance enhancement feature tensor depends on the number of source points N_s, the number of target points N_t and the number of neighboring points K; as the number of points increases, the tensor volume N_s×N_t×(2K+4) grows rapidly, causing huge computational overhead. Meanwhile, a small number of registration points suffices to complete point cloud registration, so in practice the point cloud is downsampled. However, downsampling may leave many points of the source and target point clouds without correspondences, which drastically reduces registration accuracy. To solve this problem, elimination is performed in two stages: registration difficulty point elimination and hybrid elimination.
Registration difficulty point elimination effectively reduces the burden on the similarity matrix convolution. The process first extracts the local shape feature of each point and then obtains a saliency score through a multi-layer perceptron; the higher the score, the more distinctive the point feature, for example a corner point. The whole process operates on single points and does not consider point pairs. The saliency scores of all points are sorted, the top M points with the highest saliency are selected as the clean point cloud, and the remaining points are regarded as registration difficulty points and eliminated. In this embodiment, M can be selected as required.
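The top-M selection step can be sketched in a few lines of NumPy; the function name and toy scores are illustrative:

```python
import numpy as np

def eliminate_difficulty_points(points, saliency, M):
    """Keep the M points with the highest saliency scores as the clean
    point cloud; the rest are treated as registration difficulty points."""
    keep = np.argsort(-saliency)[:M]          # indices of the top-M scores
    return points[keep], keep

pts = np.arange(15, dtype=float).reshape(5, 3)
scores = np.array([0.1, 0.9, 0.5, 0.8, 0.2])
clean, keep = eliminate_difficulty_points(pts, scores, 3)
print(sorted(keep.tolist()))  # [1, 2, 3]
```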
Although registration difficulty points are eliminated, the process may negatively affect the model: points that could be registered correctly may be eliminated by mistake, so that the similarity matrix convolution can never find the correct correspondence.
During registration, the network tries to find the maximum similarity relationship between the source and target point clouds; however, when a point has no corresponding point, the result of formula (8) becomes inaccurate. The situation is particularly severe for partially overlapping point clouds, where points in the non-overlapping areas have no correspondence even if difficulty point elimination is not applied to them.
To address this problem, the present embodiment operates on point pairs with a hybrid point elimination technique. Specifically, all possible correspondence information for a given point in the source point cloud is aggregated with a permutation-invariant pooling operation, and a validity score is output. The higher the score, the greater the likelihood of a correct correspondence.
The validity calculation can be described by formula (9).
v(i) = σ(F(⊕_j T^(n)(i, j)))    (9)
where σ(·) represents the sigmoid function; ⊕ represents a permutation-invariant merging method, typically averaging or maximization, the maximum being used in this case; F represents a multi-layer perceptron.
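Formula (9) can be sketched as follows; the stand-in `mlp` callable plays the role of F and is a placeholder, not the embodiment's trained perceptron:

```python
import numpy as np

def validity_scores(T, mlp):
    """Formula (9) sketch: max-pool each source point's pair features over
    all target candidates (permutation-invariant), apply an MLP F, then a
    sigmoid. `mlp` is a stand-in callable for F."""
    pooled = T.max(axis=1)                    # pool over the target dimension j
    return 1.0 / (1.0 + np.exp(-mlp(pooled)))

rng = np.random.default_rng(1)
T = rng.normal(size=(6, 10, 12))              # (source, target, feature)
scores = validity_scores(T, mlp=lambda x: x.sum(axis=-1))  # toy F
print(scores.shape)  # (6,)
```

Because the pooling is a max over the target axis, shuffling the target candidates leaves the scores unchanged, which is exactly the permutation invariance the text requires.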
The hybrid elimination weights are calculated from the validity score, and the weight of the ith point pair can be defined by formula (10).
where 1(·) denotes the indicator function, which takes the value 1 when its condition is satisfied and 0 otherwise.
In the potential surface prediction, a structure similar to the refinement module of a noise reduction network can be adopted; considering the computational complexity of the noise reduction network, the network uses only 2 graph convolution layers, and the number of hidden-layer neurons can be set to 32.
When samples are unlabeled, the unsupervised loss function replaces the labels through mathematical constraints, so it is particularly critical. The total loss function can be set as the sum of four parts: point registration loss (Point Matching Loss), negative entropy loss (Negative Entropy Loss), hybrid elimination loss (Hybrid Elimination Loss) and potential surface consistency loss (Latent Surface Uniform Loss), defined as shown in formula (11).
L_total = L_PM + L_NE + L_HE + L_LSU    (11)
The point registration loss is a standard cross-entropy loss supervising the similarity matrix convolution. It is calculated in every iteration and can be defined as shown in formula (12).
In formula (13), j is the index of the point in the source point cloud B_S closest to the ith point of the ground-truth-transformed target point cloud B_T. r is a hyper-parameter, the minimum radius controlling whether two points are close enough: if the distance between p_i and q_j is larger than r, no correspondence exists between them, so no constraint is applied to such points, a situation that is common in partially overlapping point clouds.
The final total point registration loss is the average over all iterations, defined as shown in equation (14).
The negative entropy loss is mainly used to eliminate difficulty points during registration. Since training is unsupervised, no sample labels are available, so negative entropy loss is employed to eliminate these registration difficulty points. The specific idea is as follows: if a point p_i ∈ B_S is a salient point, i.e. has a high saliency score, it has higher confidence and a lower-entropy correspondence probability distribution. The negative entropy of the probability distribution can therefore be used as a supervisory signal for the saliency score, and the negative entropy loss of the nth iteration is defined as shown in formula (15).
where s(i) is the saliency score of the ith point in the source point cloud B_S.
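The relationship between confidence and entropy that the loss exploits can be sketched numerically; the exact weighting below is an illustrative assumption, not the patent's formula (15):

```python
import numpy as np

def negative_entropy_loss(prob, saliency):
    """Sketch of the idea behind formula (15): the negative entropy of each
    source point's correspondence distribution supervises its saliency
    score (the weighting here is an assumption, not the patent's exact form)."""
    neg_entropy = (prob * np.log(prob + 1e-12)).sum(axis=1)   # = -H(row), always <= 0
    return float(-(saliency * neg_entropy).mean())

sharp = np.array([[0.98, 0.01, 0.01]])        # confident correspondence
flat = np.array([[1 / 3, 1 / 3, 1 / 3]])      # ambiguous correspondence
s = np.ones(1)
print(negative_entropy_loss(sharp, s) < negative_entropy_loss(flat, s))  # True
```

A salient point with a sharp correspondence distribution yields a smaller loss than one with a flat distribution, matching the stated intuition.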
Theoretically, this loss function could be applied at every iteration, with the loss values superposed and averaged; however, that would interfere with similarity matrix convolution training. By definition, the point registration loss trains the network's Euclidean features, while the negative entropy loss trains its shape features. In the registration process, shape features matter more than Euclidean features at the early stage, so the negative entropy loss is used only in the first iteration and is cut off from the gradient flow of the similarity matrix to avoid additional interference.
The hybrid elimination loss is similar in concept to difficulty point elimination, except that difficulty point elimination considers only a point's own information while hybrid elimination considers point-pair information, so its effect is more pronounced. Specifically, the ground-truth-transformed target point cloud B_T serves as the supervisory signal, and the hybrid elimination loss of the nth iteration is defined as shown in equation (16), with the supervision label given by equation (17).
This process in effect feeds back correct labels and filters out incorrect ones: over long training and many iterations, point pairs with a high probability of correct registration obtain higher validity scores.
The potential surface consistency loss constrains the loss of registration accuracy caused by noise and by the lack of one-to-one point correspondence. It consists of a noise reduction loss (Denoise Loss) and a noise consistency loss (Noise Consistency Loss); the loss of the nth iteration is described by formula (18).
L_LSU^(n) = L_DN^(n) + L_NC^(n)    (18)
where L_DN^(n) represents the noise reduction loss of the nth iteration and L_NC^(n) the noise consistency loss.
Because the potential surface of the point cloud is predicted with a noise reduction network, that network must itself be trained, so its noise reduction loss must be calculated. The corresponding points of the source point cloud B_S and the target point cloud B_T are affected by noise and acquisition, so their coordinates do not correspond one to one in space and are difficult to converge to a single point even after noise removal. Therefore, in designing the noise reduction loss function, the predicted R and t are used as transformation parameters: from the source point cloud B_S the predicted target point cloud B′_T is calculated, and from the target point cloud B_T the predicted source point cloud B′_S. The transformation process can be described by formulas (19) and (20).
B′_T = R·B_S + t    (19)
B′_S = R⁻¹(B_T − t)    (20)
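Formulas (19) and (20) can be sketched in NumPy; since R is a rotation, R⁻¹ = Rᵀ, which in row-vector form is a right-multiplication by R:

```python
import numpy as np

def predict_clouds(B_S, B_T, R, t):
    """Formulas (19)-(20): predict the target cloud from the source with
    (R, t) and the source cloud from the target with the inverse transform."""
    B_T_pred = B_S @ R.T + t           # formula (19)
    B_S_pred = (B_T - t) @ R           # row-vector form of R^{-1}(B_T - t)
    return B_T_pred, B_S_pred

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
B_S = np.array([[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]])
B_T = B_S @ R.T + t                    # noise-free target for illustration
B_T_pred, B_S_pred = predict_clouds(B_S, B_T, R, t)
```

With exact R and t and noise-free clouds, the two predictions reproduce B_T and B_S exactly; the losses below measure the departures from this ideal.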
Then B_S, B′_S, B_T and B′_T are input separately into the noise removal network to obtain the four noise vectors V_{B_S}, V_{B′_S}, V_{B_T} and V_{B′_T}. Meanwhile, since the noise network also needs training and the whole network is trained without supervision, the noise network loss is likewise trained in an unsupervised manner; the loss function can be described by formula (21).
where σ is a scaling factor; N is the number of points of the input point cloud, the input being one of the four clouds B_S, B′_S, B_T and B′_T; V_input represents the network-predicted noise vector; V̂_input represents the noise vector calculated from the neighborhood.
The calculation of V̂_input is shown in formula (23).
where KNN(p_i, B_input) finds the k nearest neighbors of point p_i in the point cloud B_input; the vectors to these nearest points together form the neighborhood noise vector of p_i.
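The neighborhood-based estimate of formula (23) can be sketched as follows; taking the mean offset to the k nearest neighbors is an assumed variant of the aggregation, not necessarily the patent's exact formula:

```python
import numpy as np

def neighborhood_noise_vectors(cloud, k):
    """Formula (23) sketch: estimate each point's noise vector from its k
    nearest neighbours (here the mean offset to them, an assumed variant)."""
    d2 = ((cloud[:, None, :] - cloud[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]      # k nearest, excluding the point itself
    return cloud[idx].mean(axis=1) - cloud        # (N, 3) offset vectors

# A point sitting between two symmetric neighbours gets a near-zero vector.
cloud = np.array([[-1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(neighborhood_noise_vectors(cloud, 2)[1])  # [0. 0. 0.]
```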
The denoised noise vector is obtained through the noise reduction network, as shown in formula (25).
V_input = f_denoise(input, θ)    (25)
In formulas (24) and (25), the input is as above, i.e. B_S, B′_S, B_T or B′_T, correspondingly producing the four noise vectors V_{B_S}, V_{B′_S}, V_{B_T} and V_{B′_T}.
Since B′_T and B′_S are obtained from B_S and B_T respectively by spatial transformation, and since B_S and B_T are not in one-to-one correspondence, V_{B_S} and V_{B′_S} (and likewise V_{B_T} and V_{B′_T}) are not exactly equal even when R and t are predicted accurately. Theoretically, however, with accurate R and t the difference between V_{B_S} and V_{B_T} should equal the difference between V_{B′_S} and V_{B′_T}.
Thus, the final noise consistency loss is described as the smooth L1 loss between two sets of noise vector differences: the difference between the predicted noise vectors V_{B_S} and V_{B_T} of the source point cloud B_S and the target point cloud B_T, and the difference between the predicted noise vectors V_{B′_S} and V_{B′_T} of the predicted source point cloud B′_S and the predicted target point cloud B′_T, as shown in formula (26).
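The consistency constraint of formula (26) can be sketched with the standard smooth L1 penalty; the reduction by mean is an illustrative assumption:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Standard smooth L1 (Huber-style) penalty, applied element-wise."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def noise_consistency_loss(V_s, V_t, V_s_pred, V_t_pred):
    """Formula (26) sketch: the difference of the source/target noise
    vectors should match the difference of the predicted clouds' noise
    vectors; the mismatch is penalised with smooth L1 (mean reduction
    assumed)."""
    return float(smooth_l1((V_s - V_t) - (V_s_pred - V_t_pred)).mean())

V = np.ones((4, 3))
print(noise_consistency_loss(V, 2 * V, 3 * V, 4 * V))  # 0.0: both differences equal -V
```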
The total loss function is obtained from the above, and the point cloud registration network is trained and optimized with it, improving the generalization and robustness of the point cloud registration network model.
Further, based on the above method, the embodiment of the present invention further provides a point cloud registration system based on deep learning without supervision, including: a model construction module and a point cloud registration module, wherein,
the model construction module is used for constructing and training a point cloud registration network, wherein the point cloud registration network comprises: a twin network for extracting features of the clean source point cloud and the clean target point cloud; a similarity matrix convolution for iteratively updating and correcting the clean source point cloud by tensor concatenation of the clean target point cloud features and the clean source point cloud features; a potential surface prediction network for predicting the point cloud noise of the clean target point cloud and of the updated and corrected clean source point cloud; and a point cloud registration output that obtains the spatial position of each point from the noise prediction result in a noise-reduction manner and places the source point cloud and the target point cloud in one-to-one correspondence according to those positions;
the point cloud registration module is used for collecting the target point cloud and the source point cloud of the object to be measured, inputting them into the trained point cloud registration network, and realizing point cloud registration of the target point cloud and the source point cloud through the network.
To verify the validity of this scheme, further explanation is given below in connection with experimental data:
First, the validity of the present method, in particular the present IDAM-based method, is verified by comparison with several existing methods such as FG, DCP and PRNet; second, since the method can remove noise during registration, it should improve the generalization performance of the model, which must be verified by experiment; finally, comparison with a fireworks-algorithm-based registration method analyzes the differences between the traditional and deep learning registration methods in accuracy and performance, to find the application scenarios suited to each.
The experiments use the modelnet40_ply_hdf5_2048 dataset; the experimental parameter settings are shown in Table 1.
TABLE 1 Experimental parameters of the potential-surface-consistency network
1. Validity analysis
To verify the effectiveness of the present method, it is compared experimentally with the method before improvement. Because the method is unsupervised, the training set results reflect the final accuracy of the model, while the test set results reflect its generalization performance. Training and test sets are distinguished where specific results are listed; unless otherwise noted, results are on the test set. The specific experimental results are shown in Table 2.
Table 2 comparison of experimental results for various registration methods
From the experimental results, the registration accuracy of IDAM and of the present method is far higher than that of the other methods. On the training set, the mean absolute error of the rotation matrix R of the present method is slightly lower than that of the IDAM method before improvement, with a similar improvement for the translation vector t, so the improved method gains slightly in training accuracy. On the test set, the generalization accuracy of the model improves greatly: the root mean square error drops by nearly 25%, the mean absolute error of the rotation matrix R drops by about 20%, and while the mean absolute error of the translation vector t does not decrease, its root mean square error does. The point cloud used in this scheme is a clean point cloud containing no noise, so the influence of noise is eliminated; the previous constraint was based on point-wise registration, whereas this scheme registers based on the potential surface, so the network can reduce the influence of noise during registration and the feature extraction network learns the features of the point cloud rather than of the noise, which yields better generalization on the test set.
To further verify the experimental results, DCP, PRNet, GNN+IDAM and GNN+the present method were visualized. The color at the upper right represents the source point cloud, the color at the lower left the target point cloud, and the color between them the point cloud transformed by the predicted R and t; the more the latter overlaps the target, the better the registration. As can be seen from fig. 6, the accuracy figures and the visualization mutually confirm each other, with GNN+the present method being the most accurate.
2. Noise immunity and generalization analysis
The potential surface loss is mainly used to resist noise and the lack of surface correspondence, so in theory adding it improves the noise resistance of the network. To investigate this effect, both networks were tested in the noise range of 0% to 2%.
From the training set results in fig. 7, within 0.25% Gaussian noise the potential surface network structure brings a certain improvement in the rotation matrix R and translation vector t, though some oscillation appears in the middle stage, presumably because noise strongly affects the potential surface prediction and thus lowers registration accuracy.
From the test set results in fig. 8, within 0.25% Gaussian noise the potential surface network structure greatly improves the rotation matrix R and translation vector t; as on the training set, some fluctuation appears as noise increases, but the errors remain below those of the IDAM method, indicating that the method improves the generalization performance of the model. In addition, excessive noise can cause deviations in the predicted surface, affecting the final registration accuracy.
3. Algorithm contrast analysis
Table 3 registration feature comparison based on conventional and deep learning methods
Because the data sets previously used differ, to compare the merits of the two kinds of method, the adaptive fireworks registration algorithm and the IDAM-based method of this scheme are compared on the same data set; the results are shown in Table 4 below.
Table 4 registration accuracy comparison based on conventional and deep learning methods
From the experimental results, both the GNN+IDAM method before improvement and the GNN+present method after improvement clearly exceed the improved adaptive fireworks registration algorithm; moreover, the deep learning methods have great potential for further gains and will lead the traditional point cloud registration methods further in the future. The experimental results are visualized and listed by iteration round, with the color at the upper right showing the source point cloud position, the color at the lower left the target point cloud position, and the color between them the result of each method (the present method and the adaptive fireworks algorithm with the improved ICP algorithm). Comparing fig. 9 (a) and (b), in (a) there is a slight deviation between the target point cloud and the predicted position, while in (b) they almost completely coincide, which also indicates that GNN+the present method has better registration accuracy than the adaptive fireworks algorithm+ICP.
In practical applications, performance is also an important reference index. Considering that the previous fireworks-based registration algorithm was implemented in C++, which differs from the Python commonly used for deep learning, the process was re-implemented in Python for a fair comparison.
Table 6 performance comparison
From the experimental results, the enhanced fireworks algorithm + Tree_ICP takes approximately twice as long as the basic fireworks algorithm + Tree_ICP, and the adaptive fireworks algorithm + Tree_ICP is slightly slower than the basic fireworks registration algorithm. Unlike traditional optimization algorithms, deep learning methods require a model training process that generally takes a long time, which makes the deep learning method extremely cost-ineffective when processing a single point cloud.
Through the above accuracy and performance analyses, the deep-learning-based point cloud registration method in this scheme is superior to traditional registration methods in both precision and performance. However, this does not mean that deep learning can completely replace traditional registration methods, which still retain certain advantages in some scenarios. A specific comparative analysis of the two classes of methods is shown in table 6 below.
Table 6 Comparative analysis of point cloud registration between the firework-series optimization methods and the IDAM-series deep learning methods
As can be seen from table 6, the deep-learning-based point cloud registration methods have great advantages in scenarios with large data volumes and high precision requirements, while the conventional methods still hold their ground in scenarios with small data volumes and low precision requirements.
The above experimental data further demonstrate that: under noise-free conditions, the potential-surface-consistency constraint loss of this scheme effectively improves the generalization and robustness of the model, reducing the root mean square error and mean absolute error of the rotation matrix R by about 25% and 20%, respectively; the combination of noise reduction and surface-consistency constraint effectively improves the noise resistance of the network model, and within 0.25% Gaussian noise the potential-surface network structure shows substantial improvement on both the rotation matrix R and the translation matrix t; when the noise is too large, the predicted surface deviates and the final registration result is affected; and the deep learning registration methods currently outperform the traditional firework-series registration methods in both precision and performance.
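The rotation-error metrics cited above (RMSE and MAE of the rotation matrix R) can be computed in several conventions; a minimal sketch, assuming the common choice of measuring errors on ZYX Euler angles in degrees (the decomposition convention is an assumption, not taken from the original):

```python
import numpy as np

def rotation_errors(R_pred, R_gt):
    """RMSE and MAE between two rotations, on ZYX Euler angles in degrees."""
    def euler_zyx(R):
        # Decompose a 3x3 rotation matrix into ZYX Euler angles (degrees).
        sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
        return np.degrees([np.arctan2(R[2, 1], R[2, 2]),
                           np.arctan2(-R[2, 0], sy),
                           np.arctan2(R[1, 0], R[0, 0])])
    diff = euler_zyx(R_pred) - euler_zyx(R_gt)
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    mae = float(np.mean(np.abs(diff)))
    return rmse, mae

# Example: a 10-degree rotation about the z axis versus the identity.
theta = np.radians(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
rmse, mae = rotation_errors(Rz, np.eye(3))
print(f"RMSE(R) = {rmse:.3f} deg, MAE(R) = {mae:.3f} deg")
```

A relative reduction such as the 25% quoted above is then simply `(rmse_baseline - rmse_improved) / rmse_baseline` evaluated on these per-pair errors averaged over the test set.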
The relative arrangement of the steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may refer to one another. For the system disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points may refer to the description of the method section.
The elements and method steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or a combination thereof, and the elements and steps of the examples have been generally described in terms of functionality in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those of ordinary skill in the art may implement the described functionality using different methods for each particular application, but such implementation is not considered to be beyond the scope of the present invention.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the above methods may be performed by a program that instructs associated hardware, and that the program may be stored on a computer readable storage medium, such as: read-only memory, magnetic or optical disk, etc. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits, and accordingly, each module/unit in the above embodiments may be implemented in hardware or may be implemented in a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently substituted, and such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.