CN111612841B - Target positioning method and device, mobile robot and readable storage medium - Google Patents

Target positioning method and device, mobile robot and readable storage medium

Info

Publication number
CN111612841B
CN111612841B (application CN202010574730.8A)
Authority
CN
China
Prior art keywords
point cloud
registered
template
point
target
Prior art date
Legal status
Active
Application number
CN202010574730.8A
Other languages
Chinese (zh)
Other versions
CN111612841A (en)
Inventor
文乃武
Current Assignee
Shanghai Mumu Jucong Robot Technology Co ltd
Original Assignee
Shanghai Mumu Jucong Robot Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Mumu Jucong Robot Technology Co., Ltd.
Priority: CN202010574730.8A
Publication of CN111612841A
Application granted
Publication of CN111612841B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T7/337 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06T2207/10024 — Image acquisition modality: color image
    • G06T2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target positioning method and device, a mobile robot, and a readable storage medium. The target positioning method comprises the following steps: acquiring a color map and a depth map containing the target to be positioned; identifying the position area of the target to be positioned in the color map; determining the point cloud to be registered in the depth map according to the position area of the target to be positioned in the color map; partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset; and registering the point cloud to be registered in the depth map with the template point cloud subset based on a normal-weighted iterative closest point algorithm to complete the positioning of the target. This solves the prior-art problem that suspended objects are difficult to detect, causing the mobile robot to collide with obstacles; it improves the accuracy of target positioning while also improving registration precision and convergence speed and saving computation time.

Description

Target positioning method and device, mobile robot and readable storage medium
Technical Field
The present invention relates to the field of obstacle positioning technologies, and in particular, to a target positioning method and apparatus, a mobile robot, and a readable storage medium.
Background
In recent years, with the rapid development of mobile robot technology, mobile robots are playing an increasingly large role in many fields of society. With the advent of the artificial intelligence (AI) era, robotics has become an important marker of high-technology development, as seen in home service robots, shopping-guide robots, sweeping robots, substation inspection robots, and the like. However, as people's requirements for intelligent robots grow, almost all mobile robots face the problem of how to accurately position and avoid obstacles.
Currently, a mobile robot mainly relies on lidar to avoid obstacles while moving, which first requires acquiring the distance to the obstacle and positioning it. However, a 2D lidar can only detect information in a plane, and because the lidar is generally mounted low on a mobile robot, suspended objects are difficult to detect, so the mobile robot may collide with such obstacles.
Disclosure of Invention
The invention aims to provide a target positioning method and device, a mobile robot and a readable storage medium, which effectively solve the technical problem that the mobile robot is inaccurate in positioning of an obstacle in the prior art.
The technical scheme provided by the invention is as follows:
a target positioning method, comprising:
acquiring a color map and a depth map containing a target to be positioned;
identifying a location area of the target to be located in a color map;
determining a point cloud to be registered in a depth map according to the position area of the target to be positioned in the color map;
partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset;
and registering the point cloud to be registered in the depth map with the template point cloud subset based on a normal weighted iterative closest point algorithm to finish the positioning of the target.
Further preferably, registering the point cloud to be registered in the depth map with the template point cloud subset based on the normal-weighted iterative closest point algorithm includes:
fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, finding in turn, for each point in the point cloud to be registered, the closest point in the template point cloud subset, and calculating the squared distance;
sorting the calculated squared distances and updating the overlap rate;
according to the updated overlap rate, performing a pruning operation that removes points in the point cloud to be registered whose distance is larger than a preset distance threshold;
updating the transformation matrix according to the points remaining after the pruning operation and calculating a mean square error based on a preset objective function;
and judging whether to stop the loop iteration according to the mean square error.
Further preferably, in updating the transformation matrix according to the points remaining after the pruning operation and calculating the mean square error based on a preset objective function, the preset objective function f is:

f(R, t) = Σ_{i=1}^{m_r} w_i ‖R p_i + t − q_{c(i)}‖²

wherein the template point cloud Q = {q_1, q_2, …, q_m}, m represents the number of points in the template point cloud, and the template point cloud subset Q_1 ⊆ Q; the point cloud to be registered P = {p_1, p_2, …, p_n}, n represents the number of points in the point cloud to be registered; w_i represents the weight of the i-th point p_i in the point cloud to be registered after the pruning operation, a decreasing function of v_i, the angle between the normal vector n_i of p_i and the normal vector n_{c(i)} of its corresponding point q_{c(i)} in the template point cloud subset (the exact weight formula is given in the original only as an image); m_r represents the number of points in the point cloud to be registered after the pruning operation; R represents the rotation, and t represents the translation.
Further preferably, in identifying the position area where the target to be positioned is located in the color map: a classifier created based on the SSD (Single Shot MultiBox Detector) object detection algorithm is used to identify the position area of the target to be positioned in the color map; and/or,
the partitioning of the template point cloud according to the point cloud to be registered to obtain a template point cloud subset comprises the following steps:
selecting key points of the point cloud to be registered and of the template point cloud;
respectively calculating PFH descriptors of the key points in the point cloud to be registered and in the template point cloud;
performing feature matching between the PFH descriptors of the point cloud to be registered and the PFH descriptors of the template point cloud;
selecting a preset number of matching point pairs with the highest matching degree;
and obtaining the template point cloud subset according to the selected matching points.
The invention also provides a target positioning device, which comprises:
the image acquisition module is used for acquiring a color image and a depth image containing a target to be positioned;
the target identification module is used for identifying the position area of the target to be positioned in the color chart;
the point cloud to be registered determining module is used for determining the point cloud to be registered in the depth map according to the position area of the target to be positioned in the color map;
the template blocking module is used for partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset;
and the point cloud registration module is used for registering the point cloud to be registered in the depth map and the template point cloud subset based on a normal weighted iterative closest point algorithm to finish the positioning of the target.
Further preferably, the point cloud registration module includes:
the first calculation unit is used for fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially finding the point closest to each point in the point cloud to be registered in the template point cloud subset, and calculating the square distance; the method comprises the steps of sequencing the calculated square distances and updating the overlapping rate; the method comprises the steps of carrying out a pruning operation on a pruning unit, and updating a transformation matrix according to points after the pruning operation of the pruning unit and calculating a mean square error based on a preset objective function;
the branch removing unit is used for carrying out branch removing operation on points with the distance larger than a preset distance threshold value in the point cloud to be registered according to the updated overlapping rate of the first calculating unit;
and the judging unit, used for judging whether to stop the loop iteration according to the mean square error calculated by the first calculation unit.
Further preferably, in the first calculation unit, the preset objective function f used for calculating the mean square error is:

f(R, t) = Σ_{i=1}^{m_r} w_i ‖R p_i + t − q_{c(i)}‖²

wherein the template point cloud Q = {q_1, q_2, …, q_m}, m represents the number of points in the template point cloud, and the template point cloud subset Q_1 ⊆ Q; the point cloud to be registered P = {p_1, p_2, …, p_n}, n represents the number of points in the point cloud to be registered; w_i represents the weight of the i-th point p_i in the point cloud to be registered after the pruning operation, a decreasing function of v_i, the angle between the normal vector n_i of p_i and the normal vector n_{c(i)} of its corresponding point q_{c(i)} in the template point cloud subset (the exact weight formula is given in the original only as an image); m_r represents the number of points in the point cloud to be registered after the pruning operation; R represents the rotation, and t represents the translation.
Further preferably, in the target identification module, a classifier created based on the SSD object detection algorithm is adopted to identify the position area of the target to be positioned in the color map; and/or,
the template blocking module includes:
the key point selecting unit is used for selecting key points of the point cloud to be registered and the template point cloud;
the second calculation unit is used for calculating PFH descriptors of key points in the point cloud to be aligned and the template point cloud respectively;
the feature matching unit is used for performing feature matching on the PFH descriptors of the point cloud to be aligned and the PFH descriptors of the template point cloud;
the matching point selecting unit is used for selecting a preset group of matching point pairs with highest matching degree in the characteristic matching unit and obtaining a template point cloud subset according to the selected matching points.
The invention also provides a mobile robot, which comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above target positioning method when executing the computer program.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described target positioning method.
In the target positioning method and device, the mobile robot and the readable storage medium provided by the invention, after a color map and a depth map containing the target to be positioned are acquired with an RGBD camera, the position area of the target to be positioned is identified from the color map, and the point cloud to be registered is then obtained from the depth map; next, the point cloud to be registered is registered with the template point cloud subset based on an improved normal-weighted iterative closest point algorithm to complete the positioning of the target. This solves the prior-art problem that a mobile robot collides with obstacles because suspended objects are difficult to detect, improves the accuracy of target positioning, improves registration precision and convergence speed, and saves computation time. In addition, during registration the overlap rate of the two point clouds is recalculated in each iteration, and the pruning operation improves the accuracy of solving the transformation matrix, thereby further improving the accuracy of target positioning.
Drawings
The above features, technical features, advantages and implementation thereof will be further described in the following detailed description of the preferred embodiments with reference to the accompanying drawings in a clearly understandable manner.
FIG. 1 is a flow chart of an embodiment of a method for locating a target according to the present invention;
FIG. 2 is a schematic diagram of an embodiment of a target positioning device according to the present invention;
fig. 3 is a schematic diagram of a mobile robot according to the present invention.
Reference numerals illustrate:
100 - target positioning device; 110 - image acquisition module; 120 - target identification module; 130 - point cloud to be registered determination module; 140 - template blocking module; 150 - point cloud registration module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, specific embodiments of the present invention will be described below with reference to the accompanying drawings. It is evident that the drawings in the following description are only some embodiments of the present invention, from which other drawings and other embodiments can be obtained by those skilled in the art without inventive effort.
In a first embodiment of the present invention, as shown in fig. 1, a target positioning method includes: s10, acquiring a color map and a depth map containing a target to be positioned; s20, identifying a position area of a target to be positioned in the color chart; s30, determining a point cloud to be registered in the depth map according to a position area of a target to be positioned in the color map; s40, partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset; and S50, registering the point cloud to be registered in the depth map and the template point cloud subset based on a normal weighted iterative closest point algorithm, and completing the positioning of the target.
In this embodiment, when a target needs to be positioned, an RGBD camera is used to photograph the direction in which the target lies, obtaining one frame of color image and one frame of depth image at the same time; thereafter, the position area of the target in the color map is identified. Because the color map and the depth map correspond pixel for pixel, once the position area of the target in the color map is obtained, the point cloud to be registered can be extracted correspondingly from the depth map and used for the subsequent point cloud registration, thereby positioning the target.
Before registration, three-dimensional modeling of the region where the target to be positioned lies is carried out in advance, and a model of each type of object in that region is reconstructed, so that in subsequent steps the position information of the target is determined from the position of the registered template point cloud in the three-dimensional model. The method of three-dimensional modeling is not particularly limited here; as an example, the open-source software ElasticFusion, which performs dense three-dimensional mapping based on RGBD data, can be selected to reconstruct the object model.
This embodiment is obtained by modifying the above embodiment. In this embodiment, step S50, registering the point cloud to be registered in the depth map with the template point cloud subset based on a normal-weighted iterative closest point (ICP) algorithm, includes: S51, fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, finding in turn, for each point in the point cloud to be registered, the closest point in the template point cloud subset, and calculating the squared distance; sorting the calculated squared distances and updating the overlap rate; S52, according to the updated overlap rate, performing a pruning operation that removes points in the point cloud to be registered whose distance is larger than a preset distance threshold; S53, updating the transformation matrix according to the points remaining after the pruning operation and calculating a mean square error based on a preset objective function; S54, judging whether to stop the loop iteration according to the mean square error.
In this embodiment, assume the template point cloud is Q = {q_1, q_2, …, q_m}, where m represents the number of points in the template point cloud; the point cloud to be registered is P = {p_1, p_2, …, p_n}, where n represents the number of points in the point cloud to be registered; the normal vectors of the template point cloud subset are N_q, the normal vectors of the point cloud to be registered are N_p, and the template point cloud subset Q_1 ⊆ Q. Then, during the registration process:

According to the point cloud to be registered P and the template point cloud subset Q_1 obtained by partitioning, an initial transformation matrix T_1 is obtained, and the loop iteration then begins. In the k-th iteration, the transformation matrix T_k is fixed, and for each point p_i of the point cloud to be registered, the closest point in the template point cloud subset Q_1 is found and its squared distance d_i² = min_{q ∈ Q_1} ‖q − T_k p_i‖² is calculated. The squared distances d_i² are then sorted, the overlap rate r is updated, the first r corresponding point pairs are selected, and the points whose pair distance is larger than the preset distance threshold are removed. The geometric transformation is then updated to obtain the transformation matrix T_{k+1}, and the mean square error ε_{k+1} is calculated; according to ε_{k+1} it is judged whether the stopping condition is met. If so, the iteration stops and the rotation R, the translation t, and the mean square error ε are output; otherwise the (k+1)-th iteration is performed, looping in this way until the stopping condition is met.
Specifically, the initial transformation matrix T_1 may be obtained by Principal Component Analysis (PCA) initialization, or in other ways, and is not particularly limited here. In updating the overlap rate r, when the distance between a pair of points is smaller than the preset distance threshold (set according to the actual situation, e.g. 1 cm or 3 cm), the pair is judged to overlap; otherwise it is judged not to overlap. The pruning operation in each iteration also uses this preset distance threshold: points whose pair distance is larger than the threshold are removed, which improves the accuracy of solving the transformation matrix.
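The per-iteration correspondence search, overlap-rate trimming and pruning described above can be sketched as follows. This is a minimal illustrative sketch using numpy and scipy, not the patent's implementation; the function name, the overlap fraction of 0.8 and the 3 cm threshold are assumptions for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_iteration(P, Q1, T, overlap=0.8, dist_thresh=0.03):
    """One iteration: fix the transformation T (4x4 homogeneous), find the
    closest point in the template subset Q1 for each transformed point of P,
    sort the squared distances, keep the top `overlap` fraction of pairs,
    and prune pairs farther apart than dist_thresh."""
    # Apply the fixed transformation T to the point cloud to be registered P (n x 3).
    P_h = np.hstack([P, np.ones((len(P), 1))])
    P_t = (T @ P_h.T).T[:, :3]
    # Nearest neighbour in the template subset for every transformed point.
    tree = cKDTree(Q1)
    d, idx = tree.query(P_t)              # d: distances, idx: indices into Q1
    order = np.argsort(d ** 2)            # sort the squared distances
    keep = order[: int(overlap * len(P))] # first r corresponding point pairs
    keep = keep[d[keep] <= dist_thresh]   # prune pairs beyond the threshold
    return P_t[keep], Q1[idx[keep]]
```

The surviving pairs are then fed to the transformation update (the objective function of the next section), and the loop repeats until the mean square error drops below the preset error threshold.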
In updating the transformation matrix according to the points remaining after the pruning operation and calculating the mean square error ε based on a preset objective function, the preset objective function f is as in formula (1):

f(R, t) = Σ_{i=1}^{m_r} w_i ‖R p_i + t − q_{c(i)}‖²  (1)

wherein w_i represents the weight of the i-th point p_i in the point cloud to be registered after the pruning operation, a decreasing function of v_i, the angle between the normal vector n_i of p_i and the normal vector n_{c(i)} of its corresponding point q_{c(i)} in the template point cloud subset (the exact weight formula is given in the original only as an image); m_r represents the number of points in the point cloud to be registered after the pruning operation; R represents the rotation and t the translation.

Accordingly, the mean square error ε is calculated as in formula (2) (reconstructed from the surrounding definitions; the original formula survives only as an image):

ε = f(R, t) / m_r  (2)

In this embodiment, the normal vectors are introduced into the preset objective function f as a constraint: during the computation, points whose normals form a large angle are adaptively given small weights, and points whose normals form a small angle are given large weights, which improves registration accuracy; at the same time, introducing v_i accelerates the convergence of the function, thereby saving computation time.
To judge whether to stop the iteration, the calculated mean square error ε is compared with a preset error threshold: when the mean square error obtained in an iteration is smaller than the preset error threshold, the stopping condition is reached; otherwise the iteration continues. The preset error threshold can be set according to the actual situation, for example to 1 cm or to 5 cm.
The above embodiment is modified to obtain the present embodiment, in which step S20, identifying the position area where the target to be positioned is located in the color map, is performed by adopting a classifier created based on the SSD object detection algorithm.
In this embodiment, before the target to be positioned is identified, a classifier created based on the SSD object detection algorithm is trained, using acquired color images annotated as a training set. During prediction, the SSD object detection algorithm draws rectangular boxes on the identified objects, generating a classification label and adjusted position coordinates for each default rectangular box, thereby determining the category of the target object and its extent in the color map. Because the color map and the depth map correspond pixel for pixel, once the coordinates of the target's rectangular box in the color map are obtained, the depth information at the same position in the depth map is converted into point cloud data to obtain the point cloud to be registered.
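Converting the depth pixels inside the detected rectangular box into a point cloud uses the standard pinhole back-projection x = (u − cx)·z/fx, y = (v − cy)·z/fy. A minimal sketch follows; the function signature, parameter names and the millimetre depth scale are illustrative assumptions, since the patent does not specify camera intrinsics or an API.

```python
import numpy as np

def roi_to_pointcloud(depth, box, fx, fy, cx, cy, scale=0.001):
    """Convert the depth pixels inside a bounding box (u0, v0, u1, v1) into a
    3-D point cloud with the pinhole model. fx, fy, cx, cy are the depth
    camera intrinsics; scale converts raw depth units to metres."""
    u0, v0, u1, v1 = box
    z = depth[v0:v1, u0:u1].astype(np.float64) * scale
    us, vs = np.meshgrid(np.arange(u0, u1), np.arange(v0, v1))
    valid = z > 0                         # drop missing depth readings
    x = (us[valid] - cx) * z[valid] / fx
    y = (vs[valid] - cy) * z[valid] / fy
    return np.column_stack([x, y, z[valid]])
```

The resulting n × 3 array is the point cloud to be registered used in the later partitioning and registration steps.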
After the point cloud to be registered is obtained, the template point cloud is immediately partitioned based on it to obtain the template point cloud subset: first, the point cloud to be registered is filtered (a pass-through filter, a statistical outlier filter, or the like may be used) to remove noise points around the target object; then, the normals and curvatures of the filtered point cloud to be registered are calculated, and its key points are extracted according to the curvature values; after that, the PFH descriptors of the key points are computed (a GPU-accelerated PFH feature implementation may be used). The reconstructed three-dimensional model point cloud is processed in the same way to obtain the PFH descriptors of the key points in the template point cloud; the PFH descriptors are then matched, and point cloud data similar to the point cloud to be registered is segmented from the three-dimensional model according to the matching result. A key point is selected according to a preset curvature threshold, i.e. a point is selected as a key point when its curvature value is larger than the curvature threshold.
The specific algorithm is as follows. Assume the template point cloud is Q = {q_1, q_2, …, q_m}, where m represents the number of points in the template point cloud; the point cloud to be registered is P = {p_1, p_2, …, p_n}, where n represents the number of points in the point cloud to be registered; and the template point cloud subset Q_1 ⊆ Q. The template point cloud is partitioned as follows: first, the key points P_key and Q_key of the point cloud to be registered P and the template point cloud Q are selected; then the PFH descriptors Q_pfh and P_pfh of the key points Q_key and P_key are calculated respectively; next, feature matching is performed between the PFH descriptors P_pfh of the point cloud to be registered and the PFH descriptors Q_pfh of the template point cloud; finally, the s groups of matching point pairs with the highest matching accuracy are selected. Since the PFH descriptors correspond one to one with the spatial coordinates of the points, the corresponding spatial coordinate points are selected from the template point cloud Q; then, taking the center of the selected spatial coordinate points as the center, a suitable k-neighborhood is selected to form the template point cloud subset Q_1. In the feature matching between P_pfh and Q_pfh, the Euclidean distance between the descriptors is calculated and sorted from low to high (a smaller distance means higher matching accuracy). The number of matching point pairs selected can be chosen according to the actual situation, e.g. 10 or 15 pairs; the value of k is determined from the number of points in the point cloud to be registered, choosing a value close to that number.
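The descriptor matching and k-neighborhood selection just described can be sketched with plain Euclidean nearest-neighbor queries over descriptor arrays. This is an illustrative sketch, not the patent's code; the function name and the defaults s=10, k=500 are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def template_subset(P_pfh, Q_pfh, Q, s=10, k=500):
    """Match descriptors of the point cloud to be registered (P_pfh) against
    template descriptors (Q_pfh) by Euclidean distance, keep the s best
    matches, and cut a k-nearest-neighbour block of the template point cloud
    Q around the centre of the matched template points."""
    # Distance from every P descriptor to its nearest Q descriptor.
    tree = cKDTree(Q_pfh)
    d, idx = tree.query(P_pfh)
    best = np.argsort(d)[:s]          # s pairs with the smallest distance
    matched_pts = Q[idx[best]]        # descriptors map 1:1 to coordinates
    centre = matched_pts.mean(axis=0)
    # The k-neighbourhood of the centre forms the template subset Q1.
    _, nn = cKDTree(Q).query(centre, k=min(k, len(Q)))
    return Q[nn]
```

In practice k would be chosen close to the number of points in the point cloud to be registered, as the text above prescribes.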
The present invention also provides a target positioning device 100. In one embodiment, as shown in fig. 2, it comprises: an image acquisition module 110, configured to acquire a color map and a depth map that contain the target to be positioned; a target recognition module 120, configured to identify the position area of the target to be positioned in the color map; a point cloud to be registered determining module 130, configured to determine the point cloud to be registered in the depth map according to the position area of the target to be positioned in the color map; a template blocking module 140, configured to partition the template point cloud according to the point cloud to be registered to obtain a template point cloud subset; and a point cloud registration module 150, configured to register the point cloud to be registered in the depth map with the template point cloud subset based on a normal-weighted iterative closest point algorithm, so as to complete the positioning of the target.
In this embodiment, when the target needs to be located, the image acquisition module 110 adopts an RGBD camera to take a picture of the direction in which the target is located, and simultaneously obtains a color image and a depth image; thereafter, the object recognition module 120 recognizes the location area targeted in the color map. Because the color map and the depth map are in one-to-one correspondence, after the position area of the target in the color map is obtained, the point cloud to be registered determining module 130 can correspondingly obtain the point cloud to be registered in the depth map, and the point cloud to be registered is used for carrying out subsequent point cloud registration operation, so that the target is positioned.
Before registration, three-dimensional modeling of the region where the target to be positioned lies is carried out in advance, and a model of each type of object in that region is reconstructed, so that in subsequent steps the position information of the target is determined from the position of the registered template point cloud in the three-dimensional model. The method of three-dimensional modeling is not particularly limited here; as an example, the open-source software ElasticFusion, which performs dense three-dimensional mapping based on RGBD data, can be selected to reconstruct the object model.
This embodiment is obtained by modifying the above embodiment. In this embodiment, the point cloud registration module 150 includes: the first calculation unit, used for fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, finding in turn, for each point in the point cloud to be registered, the closest point in the template point cloud subset, and calculating the squared distance; for sorting the calculated squared distances and updating the overlap rate; and for updating the transformation matrix according to the points remaining after the pruning operation of the pruning unit and calculating a mean square error based on a preset objective function; the pruning unit, used for performing a pruning operation that removes points in the point cloud to be registered whose distance is larger than a preset distance threshold, according to the overlap rate updated by the first calculation unit; and the judging unit, used for judging whether to stop the loop iteration according to the mean square error calculated by the first calculation unit.
In this embodiment, assume the template point cloud is Q = {q_1, q_2, ..., q_m}, where m is the number of points in the template point cloud; the point cloud to be registered is P = {p_1, p_2, ..., p_n}, where n is the number of points in the point cloud to be registered; the normal vectors of the template point cloud subset and of the point cloud to be registered are N_q and N_p respectively; and the template point cloud subset is Q_1 ⊆ Q.
Then during the registration process:
From the point cloud to be registered P and the template point cloud subset Q_1 obtained by partitioning, an initial transformation matrix T_1 is computed, and the loop iteration begins. In the k-th iteration, the transformation matrix T_k is fixed, and the first calculation unit finds, for each point p_i of the point cloud to be registered, its nearest point in the template point cloud subset Q_1 and computes the squared distance between them. The squared distances are then sorted and the overlap rate r is updated; the first r corresponding point pairs are kept, and the pruning unit removes the points whose point-pair distance exceeds the preset distance threshold. The first calculation unit then updates the geometric transformation to obtain the transformation matrix T_{k+1} and computes the mean square error ε_{k+1}. The judging unit checks whether ε_{k+1} satisfies the stopping condition: if so, iteration stops, and the rotation R, translation t, and mean square error ε are output; otherwise the (k+1)-th iteration proceeds, and the loop continues until the stopping condition is met.
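The iteration loop described above can be sketched in numpy. This is a minimal illustration, not the patent's implementation: the function names and the brute-force nearest-neighbour search are ours, and for brevity the transformation update uses the unweighted closed-form Kabsch/SVD solution rather than the patent's weighted objective of formula (1).

```python
import numpy as np

def nearest_neighbors(src, dst):
    """For each point of src (n x 3), index of its nearest point in dst (m x 3)
    and the squared distance, found by brute force."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    return idx, d2[np.arange(len(src)), idx]

def icp_iteration(P, Q1, R, t, trim_dist=0.03):
    """One pass of the trimmed loop above: transform P with the fixed (R, t),
    find closest points in the template subset Q1, prune pairs farther apart
    than trim_dist, then re-estimate the rigid transform in closed form
    (Kabsch/SVD). Assumes enough inlier pairs survive the pruning."""
    P_t = P @ R.T + t                      # apply the fixed transform T_k
    idx, d2 = nearest_neighbors(P_t, Q1)
    keep = d2 <= trim_dist ** 2            # pruning by the distance threshold
    src, dst = P[keep], Q1[idx[keep]]
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_new = Vt.T @ D @ U.T                 # proper rotation (det = +1)
    t_new = dc - R_new @ sc
    mse = ((src @ R_new.T + t_new - dst) ** 2).sum(axis=1).mean()
    return R_new, t_new, mse
```

When the clouds differ by a small rigid motion, a single such iteration already recovers the transform exactly, since the nearest-neighbour correspondences are correct.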
Specifically, the initial transformation matrix T_1 may be obtained by Principal Component Analysis (PCA) initialization, or in other ways; this is not particularly limited here. When updating the overlap rate r, a point pair is counted as overlapping if its distance is smaller than the preset distance threshold (which may be set according to the actual situation, e.g. 1 cm or 3 cm), and as non-overlapping otherwise. The pruning operation in each iteration also uses this preset distance threshold, removing the point pairs whose distance exceeds it, which improves the accuracy of the transformation matrix estimate.
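The PCA initialization mentioned above can be sketched as follows — a hedged illustration only, aligning the centroids and principal axes of the two clouds; the sign and ordering ambiguities of the eigenvectors are ignored here, so the result is a rough starting transform, not a final alignment.

```python
import numpy as np

def pca_initial_transform(P, Q):
    """Rough initial alignment by PCA: rotate P's principal axes onto Q's
    and translate P's centroid onto Q's. P and Q are n x 3 / m x 3 arrays."""
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    # eigenvectors of each 3x3 covariance matrix, sorted by eigenvalue
    _, Up = np.linalg.eigh(np.cov((P - pc).T))
    _, Uq = np.linalg.eigh(np.cov((Q - qc).T))
    R = Uq @ Up.T                  # map P's axes onto Q's axes
    if np.linalg.det(R) < 0:       # keep a proper rotation (det = +1)
        Uq[:, 0] = -Uq[:, 0]
        R = Uq @ Up.T
    t = qc - R @ pc
    return R, t
```

By construction the returned transform is a proper rotation plus a translation that maps the centroid of P exactly onto the centroid of Q.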
When updating the transformation matrix from the points remaining after the pruning operation and computing the mean square error ε from a preset objective function, the preset objective function f is given by formula (1), and the mean square error ε is then computed according to formula (2).
In this embodiment, the normal vectors are introduced into the preset objective function f as a constraint: during the computation, point pairs whose normals form a large included angle are adaptively given small weights, and those with a small included angle are given large weights, which improves registration accuracy. At the same time, the angle term v_i is introduced to accelerate the convergence of the function, saving computation time.
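The exact weighting appears only in formula (1), which is rendered as an image in the original, so the sketch below assumes a common choice, w_i = max(0, cos v_i). This reproduces the behaviour described above — large weight for small normal angles, small weight for large ones — but the specific form is our assumption, not the patent's.

```python
import numpy as np

def normal_weights(Np, Nq):
    """Per-pair weights from the angle v_i between corresponding normals.
    Assumes w_i = max(0, cos v_i): 1 for aligned normals, 0 for normals at
    90 degrees or more. Np, Nq are k x 3 arrays of corresponding normals."""
    Np = Np / np.linalg.norm(Np, axis=1, keepdims=True)
    Nq = Nq / np.linalg.norm(Nq, axis=1, keepdims=True)
    cos_v = (Np * Nq).sum(axis=1)      # cos of the included angle v_i
    return np.clip(cos_v, 0.0, None)   # negative cosines get zero weight
```

These weights would multiply the per-pair residuals inside the objective before the transformation update.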
To decide whether to stop iterating, the computed mean square error ε is compared with a preset error threshold: when the ε obtained in an iteration is smaller than the preset error threshold, the stopping condition is reached; otherwise iteration continues. The preset error threshold may be set according to the actual situation, for example to 1 cm, or to 5 cm.
This embodiment is obtained by modifying the embodiment above. Here, the target recognition module 120 uses a classifier built on the SSD object detection algorithm to identify the location area of the target to be located in the color image. The template blocking module 140 includes: a key point selection unit, which selects key points from the point cloud to be registered and from the template point cloud; a second calculation unit, which computes the PFH descriptors of the key points in the point cloud to be registered and in the template point cloud; a feature matching unit, which matches the PFH descriptors of the point cloud to be registered against those of the template point cloud; and a matching point selection unit, which selects a preset number of matching point pairs with the highest matching degree from the feature matching unit and derives the template point cloud subset from the selected matching points.
In this embodiment, before the target to be located is identified, the classifier built on the SSD object detection algorithm is trained on acquired color images annotated as a training set. At prediction time, the SSD algorithm draws rectangular boxes around the detected objects, generating a classification label and adjusted position coordinates for each default box, thereby determining the category of the target object and its extent in the color image. Because the color image and the depth image correspond pixel for pixel, once the coordinates of the target's rectangular box in the color image are known, the depth values at the same position in the depth image are converted into point cloud data, yielding the point cloud to be registered.
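The conversion from the depth pixels inside the detected rectangle to a point cloud can be sketched with the standard pinhole back-projection. The box format and the intrinsic parameter names (fx, fy, cx, cy) below are assumptions for illustration, not details given in the patent.

```python
import numpy as np

def depth_roi_to_points(depth, box, fx, fy, cx, cy):
    """Back-project the depth pixels inside the detector's rectangle into a
    point cloud. `box` is an assumed (u0, v0, u1, v1) pixel rectangle;
    fx, fy, cx, cy are the RGBD camera's pinhole intrinsics."""
    u0, v0, u1, v1 = box
    us, vs = np.meshgrid(np.arange(u0, u1), np.arange(v0, v1))
    z = depth[v0:v1, u0:u1].astype(float)
    valid = z > 0                       # skip pixels with missing depth
    x = (us - cx) * z / fx              # pinhole model: x = (u - cx) z / fx
    y = (vs - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```

Because color and depth images correspond pixel for pixel, the same rectangle coordinates reported by the detector on the color image index directly into the depth image here.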
After the point cloud to be registered is obtained, the template point cloud is immediately partitioned on its basis to obtain the template point cloud subset. First, the point cloud to be registered is filtered (a pass-through filter, statistical filter, etc. may be used) to remove noise points around the target object; then the normals and curvatures of the filtered point cloud are computed, and the key point selection unit extracts key points of the point cloud to be registered according to the curvature values; the second calculation unit then computes the PFH descriptors of the key points (a GPU-accelerated PFH feature implementation may be used). The reconstructed three-dimensional model point cloud is processed in the same way to obtain the PFH descriptors of the key points in the template point cloud. The feature matching unit then matches the PFH descriptors, and the matching point selection unit segments, according to the matching result, the point cloud data in the three-dimensional model that is similar to the point cloud to be registered. The specific algorithm is as follows:
Assume the template point cloud is Q = {q_1, q_2, ..., q_m}, where m is the number of points in the template point cloud; the point cloud to be registered is P = {p_1, p_2, ..., p_n}, where n is the number of points in the point cloud to be registered; and the template point cloud subset is Q_1 ⊆ Q.
The template point cloud is partitioned as follows. First, key points P_key and Q_key are selected from the point cloud to be registered P and the template point cloud Q. Then the PFH descriptors Q_pfh and P_pfh of the key points Q_key and P_key are computed. Next, the PFH descriptors P_pfh of the point cloud to be registered are feature-matched against the PFH descriptors Q_pfh of the template point cloud. Finally, the s pairs of matching points with the highest matching accuracy are selected; since each PFH descriptor corresponds one-to-one with a point's spatial coordinates, the corresponding coordinate points are selected in the template point cloud Q, and a suitable k-neighborhood around the centroid of these selected points is taken to form the template point cloud subset Q_1. In the feature matching between P_pfh and Q_pfh, the Euclidean distance between descriptors is computed and the pairs are ranked from low to high (a smaller distance means higher matching accuracy). The number of matching point pairs can be chosen according to the actual situation, e.g. 10 or 15 pairs; the k value is determined by the number of points in the point cloud to be registered, chosen so that the subset has roughly as many points as the point cloud to be registered.
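The descriptor-matching step described above — nearest Euclidean distance between PFH descriptors, then keeping the s best pairs — can be sketched as follows; the function name and the brute-force distance matrix are our illustration.

```python
import numpy as np

def match_descriptors(P_pfh, Q_pfh, s):
    """For each descriptor in P_pfh (from the cloud to be registered), find
    its nearest descriptor in Q_pfh (from the template) by Euclidean
    distance, then keep the s pairs with the smallest distances
    (smallest distance taken as highest matching accuracy)."""
    d = np.linalg.norm(P_pfh[:, None, :] - Q_pfh[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                  # best template match per point
    best_d = d[np.arange(len(P_pfh)), nearest]
    order = np.argsort(best_d)[:s]              # s most reliable matches
    return [(int(i), int(nearest[i])) for i in order]  # (P index, Q index)
```

The returned index pairs identify the spatial coordinate points in Q from which the template point cloud subset Q_1 is then carved out.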
It will be apparent to those skilled in the art that the above-described program modules are merely illustrative of the division of each program module for convenience and brevity of description, and that in practical application, the above-described functional allocation may be performed by different program modules, i.e. the internal structure of the apparatus is divided into different program units or modules, to perform all or part of the above-described functions. The program modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one processing unit, where the integrated units may be implemented in a form of hardware or in a form of a software program unit. In addition, the specific names of the program modules are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Fig. 3 is a schematic structural view of a mobile robot provided in an embodiment of the present invention, and as shown, the mobile robot 200 includes: a processor 220, a memory 210, and a computer program 211 stored in the memory 210 and executable on the processor 220, such as: object location analysis program. The steps of the respective embodiments of the target positioning method described above are implemented by the processor 220 when the computer program 211 is executed, or the functions of the respective modules of the respective embodiments of the target positioning apparatus described above are implemented by the processor 220 when the computer program 211 is executed.
The mobile robot 200 may be a notebook computer, palmtop computer, tablet computer, mobile phone, or the like. The mobile robot 200 may include, but is not limited to, a processor 220 and a memory 210. It will be appreciated by those skilled in the art that Fig. 3 is merely an example of the mobile robot 200 and does not limit it; it may include more or fewer components than shown, combine certain components, or use different components. For example, the mobile robot 200 may also include input and output devices, display devices, network access devices, buses, and the like.
The processor 220 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any other conventional processor.
The memory 210 may be an internal memory unit of the mobile robot 200, for example: a hard disk or memory of mobile robot 200. The memory 210 may also be an external storage device of the mobile robot 200, such as: a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) and the like provided on the mobile robot 200. Further, the memory 210 may also include both an internal memory unit and an external memory device of the mobile robot 200. The memory 210 is used to store a computer program 211 and other programs and data required for the mobile robot 200. The memory 210 may also be used to temporarily store data that has been output or is to be output.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and the parts of a certain embodiment that are not described or depicted in detail may be referred to in the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed mobile robot and method may be implemented in other manners. For example, the mobile robot embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through the computer program 211, which may be stored in a computer-readable storage medium; when executed by the processor 220, the computer program 211 may implement the steps of the method embodiments described above. The computer program 211 comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content of the computer-readable storage medium may be appropriately adjusted according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is merely illustrative of the preferred embodiments of this invention, and it will be appreciated by those skilled in the art that variations and modifications may be made without departing from the principles of the invention, and such variations and modifications are to be regarded as being within the scope of the invention.

Claims (8)

1. A method of locating a target, comprising:
acquiring a color map and a depth map containing a target to be positioned;
identifying a location area of the target to be located in a color map;
determining a point cloud to be registered in a depth map according to the position area of the target to be positioned in the color map;
partitioning the template point cloud according to the point cloud to be registered to obtain a template point cloud subset;
registering the point cloud to be registered in the depth map and the template point cloud subset based on a normal weighted iterative closest point algorithm to finish positioning the target;
the registering of the point cloud to be registered and the template point cloud subset in the depth map by the iterative closest point algorithm based on normal weighting comprises the following steps:
fixing a transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially finding the nearest points to each point in the point cloud to be registered in the template point cloud subset, and calculating the square distance;
sequencing the square distances obtained through calculation and updating the overlapping rate;
according to the updated overlapping rate, performing a pruning operation on points in the point cloud to be registered whose point-pair distance is greater than a preset distance threshold;
updating a transformation matrix according to the points remaining after the pruning operation and calculating a mean square error based on a preset objective function;
and judging whether to stop loop iteration according to the mean square error.
2. The target positioning method as claimed in claim 1, wherein, in updating the transformation matrix according to the points after the pruning operation and calculating the mean square error based on a preset objective function, the preset objective function f is as follows [formula rendered as an image in the original publication]:
wherein the template point cloud Q = {q_1, q_2, ..., q_m}, m representing the number of points of the template point cloud, and the template point cloud subset Q_1 ⊆ Q; the point cloud to be registered P = {p_1, p_2, ..., p_n}, n representing the number of points of the point cloud to be registered; w_i represents the weight of the i-th point p_i of the point cloud to be registered after the pruning operation; v_i represents the included angle between the normal vector n_i of the i-th point p_i of the point cloud to be registered after the pruning operation and the normal vector n_c(i) of the corresponding point q_c(i) in the template point cloud subset; m_r represents the number of points of the point cloud to be registered after the pruning operation; R represents a rotation angle, and t represents a translation amount.
3. The target positioning method as claimed in claim 1 or 2, wherein, in identifying the location area of the target to be located in the color map, a classifier created based on an SSD object detection algorithm is adopted to identify the location area of the target to be located in the color map; and/or,
the partitioning of the template point cloud according to the point cloud to be registered to obtain a template point cloud subset comprises:
selecting key points of the point cloud to be registered and the template point cloud;
respectively calculating PFH descriptors of the key points in the point cloud to be registered and the template point cloud;
performing feature matching on the PFH descriptors of the point cloud to be registered and the PFH descriptors of the template point cloud;
selecting a preset number of matching point pairs with the highest matching degree;
and obtaining a template point cloud subset according to the selected matching points.
4. A target positioning device, comprising:
the image acquisition module is used for acquiring a color image and a depth image containing a target to be positioned;
the target identification module is used for identifying the position area of the target to be positioned in the color chart;
the point cloud to be registered determining module is used for determining the point cloud to be registered in the depth map according to the position area of the target to be positioned in the color map;
the template blocking module is used for blocking the template point cloud according to the point cloud to be registered to obtain a template point cloud subset;
the point cloud registration module is used for registering the point cloud to be registered in the depth map and the template point cloud subset based on a normal weighted iterative closest point algorithm to finish positioning the target;
in the point cloud registration module, it includes:
a first calculation unit, configured to fix a transformation matrix between the template point cloud subset and the point cloud to be registered, sequentially find, for each point in the point cloud to be registered, the closest point in the template point cloud subset, and calculate the squared distances; and further configured to sort the calculated squared distances and update the overlapping rate, and to update the transformation matrix according to the points remaining after the pruning operation of the pruning unit and calculate a mean square error based on a preset objective function;
a pruning unit, configured to perform, according to the overlapping rate updated by the first calculation unit, a pruning operation on points in the point cloud to be registered whose point-pair distance is greater than a preset distance threshold;
and the judging unit is used for judging whether to stop loop iteration according to the mean square error calculated by the first calculating unit.
5. The target positioning device as claimed in claim 4, wherein, in the first calculation unit, the preset objective function f used for calculating the mean square error is as follows [formula rendered as an image in the original publication]:
wherein the template point cloud Q = {q_1, q_2, ..., q_m}, m representing the number of points of the template point cloud, and the template point cloud subset Q_1 ⊆ Q; the point cloud to be registered P = {p_1, p_2, ..., p_n}, n representing the number of points of the point cloud to be registered; w_i represents the weight of the i-th point p_i of the point cloud to be registered after the pruning operation; v_i represents the included angle between the normal vector n_i of the i-th point p_i of the point cloud to be registered after the pruning operation and the normal vector n_c(i) of the corresponding point q_c(i) in the template point cloud subset; m_r represents the number of points of the point cloud to be registered after the pruning operation; R represents a rotation angle, and t represents a translation amount.
6. The target positioning device as claimed in claim 4 or 5, wherein, in the target recognition module, a classifier created based on an SSD object detection algorithm is used to identify the location area of the target to be located in the color map; and/or,
the template blocking module comprises:
a key point selection unit, configured to select key points of the point cloud to be registered and the template point cloud;
a second calculation unit, configured to respectively calculate PFH descriptors of the key points in the point cloud to be registered and the template point cloud;
a feature matching unit, configured to perform feature matching on the PFH descriptors of the point cloud to be registered and the PFH descriptors of the template point cloud;
and a matching point selection unit, configured to select a preset number of matching point pairs with the highest matching degree in the feature matching unit and obtain a template point cloud subset according to the selected matching points.
7. A mobile robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when running the computer program, realizes the steps of the object localization method as claimed in any one of claims 1-3.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the object localization method according to any one of claims 1-3.
CN202010574730.8A 2020-06-22 2020-06-22 Target positioning method and device, mobile robot and readable storage medium Active CN111612841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010574730.8A CN111612841B (en) 2020-06-22 2020-06-22 Target positioning method and device, mobile robot and readable storage medium

Publications (2)

Publication Number Publication Date
CN111612841A CN111612841A (en) 2020-09-01
CN111612841B true CN111612841B (en) 2023-07-14

Family

ID=72202716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010574730.8A Active CN111612841B (en) 2020-06-22 2020-06-22 Target positioning method and device, mobile robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN111612841B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114593681A (en) * 2020-12-07 2022-06-07 北京格灵深瞳信息技术有限公司 Thickness measuring method, thickness measuring apparatus, electronic device, and storage medium
CN112686934A (en) * 2020-12-29 2021-04-20 广州广电研究院有限公司 Point cloud data registration method, device, equipment and medium
CN112862882A (en) * 2021-01-28 2021-05-28 北京格灵深瞳信息技术股份有限公司 Target distance measuring method, device, electronic apparatus and storage medium
CN113096165B (en) * 2021-04-16 2022-02-18 无锡物联网创新中心有限公司 Target object positioning method and device
CN113297989B (en) * 2021-05-28 2024-04-16 优必康(青岛)科技有限公司 Charging pile identification method, device, robot and computer readable storage medium
CN113838125A (en) * 2021-09-17 2021-12-24 中国第一汽车股份有限公司 Target position determining method and device, electronic equipment and storage medium
CN114387319B (en) * 2022-01-13 2023-11-14 北京百度网讯科技有限公司 Point cloud registration method, device, equipment and storage medium
CN115082547B (en) * 2022-07-27 2022-11-15 深圳市华汉伟业科技有限公司 Profile measuring method based on point cloud data and storage medium
CN115496898B (en) * 2022-11-16 2023-02-17 山东科技大学 Mobile robot target positioning method and system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN107886528A (en) * 2017-11-30 2018-04-06 南京理工大学 Distribution line working scene three-dimensional rebuilding method based on a cloud
CN108133458A (en) * 2018-01-17 2018-06-08 视缘(上海)智能科技有限公司 A kind of method for automatically split-jointing based on target object spatial point cloud feature
CN109308737A (en) * 2018-07-11 2019-02-05 重庆邮电大学 A kind of mobile robot V-SLAM method of three stage point cloud registration methods
CN109934855A (en) * 2018-12-28 2019-06-25 南京理工大学 A kind of livewire work scene power components three-dimensional rebuilding method based on cloud
CN110749895A (en) * 2019-12-23 2020-02-04 广州赛特智能科技有限公司 Laser radar point cloud data-based positioning method
CN110942476A (en) * 2019-10-17 2020-03-31 湖南大学 Improved three-dimensional point cloud registration method and system based on two-dimensional image guidance and readable storage medium


Also Published As

Publication number Publication date
CN111612841A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111612841B (en) Target positioning method and device, mobile robot and readable storage medium
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US10867189B2 (en) Systems and methods for lane-marker detection
US8442307B1 (en) Appearance augmented 3-D point clouds for trajectory and camera localization
CN111427032B (en) Room wall contour recognition method based on millimeter wave radar and terminal equipment
CN110689043A (en) Vehicle fine granularity identification method and device based on multiple attention mechanism
CN111915657A (en) Point cloud registration method and device, electronic equipment and storage medium
CN112198878B (en) Instant map construction method and device, robot and storage medium
CN115063454B (en) Multi-target tracking matching method, device, terminal and storage medium
CN114612665B (en) Pose estimation and dynamic vehicle detection method based on normal vector histogram features
CN114081536B (en) Nasopharyngeal swab sampling method, nasopharyngeal swab sampling device, electronic equipment and storage medium
CN112328715A (en) Visual positioning method, training method of related model, related device and equipment
CN110704652A (en) Vehicle image fine-grained retrieval method and device based on multiple attention mechanism
CN115546705B (en) Target identification method, terminal device and storage medium
CN113723425B (en) Aircraft model identification method, device, storage medium and equipment
CN111862208B (en) Vehicle positioning method, device and server based on screen optical communication
CN111104965A (en) Vehicle target identification method and device
CN113721240B (en) Target association method, device, electronic equipment and storage medium
CN116309643A (en) Face shielding score determining method, electronic equipment and medium
CN115131240A (en) Target identification method and system for three-dimensional point cloud data
CN112416128B (en) Gesture recognition method and terminal equipment
CN117011481A (en) Method and device for constructing three-dimensional map, electronic equipment and storage medium
CN113673288B (en) Idle parking space detection method and device, computer equipment and storage medium
CN111223139B (en) Target positioning method and terminal equipment
CN103514434B (en) Method and device for identifying image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant