CN111127547A - Positioning method, positioning device, robot and storage medium - Google Patents

Positioning method, positioning device, robot and storage medium

Info

Publication number
CN111127547A
CN111127547A (application CN201911304195.8A)
Authority
CN
China
Prior art keywords
line segment
posture
difference
projection
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911304195.8A
Other languages
Chinese (zh)
Inventor
黄灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd
Priority to CN201911304195.8A
Publication of CN111127547A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide a positioning method, a positioning apparatus, a robot and a storage medium. The method includes: acquiring an image including an object to be positioned, and establishing a three-dimensional model of the object to be positioned; calculating difference information indicating the difference between the positions of a projection line segment and the contour line segment corresponding to the projection line segment; and calculating the actual pose of the object to be positioned based on a candidate pose and the difference information. On the one hand, there is no need to manually determine in advance the poses of all objects that may need to be positioned, nor to paste on each such object a two-dimensional code representing its pose, which saves the cost of positioning the pose of the object to be positioned. On the other hand, this avoids the problem that the pose of an object cannot be determined because the two-dimensional code on the object lies outside the scanning range.

Description

Positioning method, positioning device, robot and storage medium
Technical Field
The application relates to the field of positioning, in particular to a positioning method, a positioning device, a robot and a storage medium.
Background
While a robot is working, it often needs to determine the pose of an object such as a charging pile, so that related operations, such as automatic charging, can be carried out when the robot approaches or contacts the object.
At present, the commonly adopted approach is to manually determine the poses of all objects that may need to be positioned, paste on each such object a two-dimensional code representing its pose, and have the robot scan the two-dimensional code to obtain, and thereby determine, the pose of the object.
However, this approach has two drawbacks. First, manually determining in advance the poses of all objects that may need to be positioned and pasting a two-dimensional code on each of them is costly. Second, because the two-dimensional code is pasted by hand, its position on the object is uncertain; when the code on an object falls outside the scanning range of the scanning device on the robot, the robot cannot scan it and therefore cannot determine the pose of the object.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides a positioning method, a positioning device, a robot and a storage medium.
According to a first aspect of embodiments of the present application, there is provided a positioning method, including:
acquiring an image including an object to be positioned, and establishing a three-dimensional model of the object to be positioned;
calculating difference information indicating a difference between a projection line segment related to a candidate pose and a contour line segment corresponding to the projection line segment, the projection line segment being obtained by projecting a three-dimensional model of the object to be positioned into the image based on the candidate pose, the contour line segment corresponding to the projection line segment being located on a contour of the object to be positioned;
calculating the actual pose of the object to be positioned based on the candidate pose and the difference information.
In some embodiments, the calculating of difference information indicating the difference between the projection line segment related to the candidate pose and the contour line segment corresponding to the projection line segment comprises:
determining key points on the projection line segment and determining an associated point corresponding to each key point, wherein the associated point corresponding to a key point is located on the contour line segment corresponding to the projection line segment;
generating a distance vector, wherein each component of the distance vector is the distance between one key point and its corresponding associated point;
determining the distance vector as the difference information.
In some embodiments, the determining keypoints on the projected line segment comprises:
sampling the projection line segment to obtain a plurality of sampling points;
and determining each sampling point as a key point.
In some embodiments, the determining the associated point corresponding to each of the key points includes:
for each key point, finding a point that lies in the normal direction of the key point and on the contour line segment corresponding to the projection line segment, and taking the found point as the associated point corresponding to the key point.
In some embodiments, the method further comprises:
iteratively executing a pose update operation until a preset stop condition is satisfied, the pose update operation comprising:
projecting, based on the projection pose adopted in this execution of the pose update operation, the three-dimensional model of the object to be positioned into the image to obtain a projection line segment related to the projection pose;
calculating difference information indicating the difference between the projection line segment related to the projection pose and the contour line segment corresponding to the projection line segment;
calculating a pose difference amount between the projection pose and the actual pose of the object to be positioned based on the difference information and preset association information, wherein the preset association information indicates an association relationship between difference information and pose difference amounts;
and taking the projection pose adopted in this execution of the pose update operation as the candidate pose, or taking the latest projection pose as the projection pose to be adopted in the next execution of the pose update operation, wherein the latest projection pose is the sum of the projection pose adopted in this execution and the calculated pose difference amount.
In some embodiments, prior to acquiring the image including the object to be located, the method further comprises:
determining the actual pose of a target object of the same type as the object to be positioned, and establishing a three-dimensional model of the target object;
for each of a plurality of difference poses that differ from the actual pose of the target object, projecting the three-dimensional model of the target object into an image including the target object based on the difference pose to obtain a projection line segment related to the difference pose, and calculating difference information corresponding to the difference pose, the difference information indicating the difference between the projection line segment related to the difference pose and the contour line segment corresponding to that projection line segment;
and obtaining the preset association information based on the difference information corresponding to each difference pose and the pose difference amount corresponding to each difference pose, wherein the pose difference amount corresponding to a difference pose is the difference amount between the difference pose and the actual pose of the target object.
In some embodiments, the preset stop condition is one of the following: the number of executions of the pose update operation reaches a count threshold; the most recently calculated difference information is smaller than a threshold.
In some embodiments, the object to be positioned is a charging pile.
According to a second aspect of embodiments of the present application, there is provided a positioning apparatus, including:
an acquisition unit configured to acquire an image including an object to be positioned and to establish a three-dimensional model of the object to be positioned;
a calculation unit configured to calculate difference information indicating a difference between a projection line segment related to a candidate pose and a contour line segment corresponding to a projection line segment, the projection line segment being obtained by projecting a three-dimensional model of the object to be positioned into the image based on the candidate pose, the contour line segment corresponding to the projection line segment being located on a contour of the object to be positioned;
a positioning unit configured to calculate an actual pose of the object to be positioned based on the candidate pose and the difference information.
In some embodiments, the computing unit is further configured to:
determining key points on the projection line segment and determining associated points corresponding to each key point, wherein the associated points corresponding to the key points are located on the contour line segment corresponding to the projection line segment;
generating a distance vector, wherein each component of the distance vector is the distance between one key point and its corresponding associated point;
determining the distance vector as the difference information.
In some embodiments, the computing unit is further configured to:
sampling the projection line segment to obtain a plurality of sampling points; and determining each sampling point as a key point.
In some embodiments, the computing unit is further configured to:
for each key point, finding a point that lies in the normal direction of the key point and on the contour line segment corresponding to the projection line segment, and taking the found point as the associated point corresponding to the key point.
In some embodiments, the positioning device further comprises:
an iteration unit configured to:
iteratively execute a pose update operation until a preset stop condition is satisfied, the pose update operation comprising:
projecting, based on the projection pose adopted in this execution of the pose update operation, the three-dimensional model of the object to be positioned into the image to obtain a projection line segment related to the projection pose;
calculating difference information indicating the difference between the projection line segment related to the projection pose and the contour line segment corresponding to the projection line segment;
calculating a pose difference amount between the projection pose and the actual pose of the object to be positioned based on the difference information and preset association information, wherein the preset association information indicates an association relationship between difference information and pose difference amounts;
and taking the projection pose adopted in this execution of the pose update operation as the candidate pose, or taking the latest projection pose as the projection pose to be adopted in the next execution of the pose update operation, wherein the latest projection pose is the sum of the projection pose adopted in this execution and the calculated pose difference amount.
In some embodiments, the positioning device further comprises:
a determination unit configured to:
before the image including the object to be positioned is acquired, determine the actual pose of a target object of the same type as the object to be positioned, and establish a three-dimensional model of the target object;
for each of a plurality of difference poses that differ from the actual pose of the target object, project the three-dimensional model of the target object into an image including the target object based on the difference pose to obtain a projection line segment related to the difference pose, and calculate difference information corresponding to the difference pose, the difference information indicating the difference between the projection line segment related to the difference pose and the contour line segment corresponding to that projection line segment;
and obtain the preset association information based on the difference information corresponding to each difference pose and the pose difference amount corresponding to each difference pose, wherein the pose difference amount corresponding to a difference pose is the pose difference amount between the difference pose and the actual pose of the target object.
In some embodiments, the preset stop condition is one of the following: the number of executions of the pose update operation reaches a count threshold; the most recently calculated difference information is smaller than a threshold.
In some embodiments, the object to be positioned is a charging pile.
According to the positioning method and apparatus provided by the embodiments of the present application, an image including an object to be positioned is acquired and a three-dimensional model of the object to be positioned is established; difference information indicating the difference between a projection line segment and the contour line segment corresponding to the projection line segment is calculated, and the actual pose of the object to be positioned is calculated based on the candidate pose and the difference information. On the one hand, there is no need to manually determine in advance the poses of all objects that may need to be positioned, nor to paste on each such object a two-dimensional code representing its pose, which saves the cost of positioning the pose of the object to be positioned. On the other hand, this avoids the problem that the pose of an object cannot be determined because the two-dimensional code on the object lies outside the scanning range.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart illustrating a positioning method provided in an embodiment of the present application;
fig. 2 is a block diagram illustrating a positioning apparatus provided in an embodiment of the present application;
fig. 3 shows a block diagram of a robot according to an embodiment of the present disclosure.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1, a flowchart of a positioning method provided in an embodiment of the present application is shown, where the method includes:
step 101, obtaining an image including an object to be positioned, and establishing a three-dimensional model of the object to be positioned.
In the present application, before determining the actual pose of the object to be positioned, an image including the object to be positioned may be acquired, and a three-dimensional model of the object to be positioned may be established. The three-dimensional model of the object to be positioned may describe the contour of the object to be positioned.
For example, if the object to be positioned is a charging pile, then before the actual pose of the charging pile is determined, a camera on the robot captures an image including the charging pile, thereby obtaining the image including the object to be positioned. A three-dimensional model of the charging pile can then be established, and this model can describe the contour of the charging pile.
At step 102, difference information indicating the difference between a projection line segment related to a candidate pose and the contour line segment corresponding to the projection line segment is calculated. In the present application, the projection line segment related to the candidate pose is obtained by projecting the three-dimensional model of the object to be positioned into the acquired image based on the candidate pose. The candidate pose is a pose, calculated while determining the actual pose of the object to be positioned, that is likely to be the actual pose.
In an embodiment of the present application, the pose of the object to be positioned may be estimated from a plurality of acquired images including the object using SLAM (simultaneous localization and mapping) or VO (visual odometry), and the estimated pose is used as the candidate pose of the object to be positioned.
In the present application, after the candidate pose of the object to be positioned has been calculated, the three-dimensional model of the object to be positioned may be projected into the acquired image based on the candidate pose. The line segments obtained by projecting with the candidate pose may be referred to as the projection line segments related to the candidate pose.
For example, the three-dimensional model of the object to be positioned may be projected into the acquired image based on the candidate pose using OpenGL (Open Graphics Library).
In the present application, after the three-dimensional model of the object to be positioned has been projected into the acquired image, a projection contour is obtained in that image, and the projection line segment related to the candidate pose lies on this projection contour.
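As an illustration, the projection step can be sketched with a simple pinhole camera model in place of the OpenGL rendering mentioned above. This is a minimal sketch under stated assumptions, not the implementation of the application: the edge-list model representation, the intrinsic matrix K, and the rotation/translation pose parameterization are all assumptions made for the example.

```python
import numpy as np

def project_model_edges(edges_3d, R, t, K):
    """Project the 3D endpoints of model edges into the image plane.

    edges_3d: (N, 2, 3) array; each row holds the two 3D endpoints of one
              model edge in the object frame (an assumed model format).
    R, t:     rotation (3x3) and translation (3,) of the candidate pose.
    K:        (3, 3) camera intrinsic matrix.
    Returns an (N, 2, 2) array of 2D endpoints, i.e. the projection line
    segments related to the candidate pose.
    """
    pts = edges_3d.reshape(-1, 3)        # flatten all endpoints
    cam = pts @ R.T + t                  # object frame -> camera frame
    uvw = cam @ K.T                      # apply camera intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]        # perspective division
    return uv.reshape(-1, 2, 2)
```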
In the present application, each projection line segment related to the candidate pose corresponds to a contour line segment. The contour line segment corresponding to a projection line segment is a line segment on the contour of the object to be positioned in the acquired image. The contour of the object to be positioned can be determined with an edge detection algorithm, for example the Canny algorithm: the edges detected in the acquired image are used to determine the contour of the object to be positioned.
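For example, with OpenCV the Canny step might look like the following sketch; the threshold values are illustrative assumptions and would need tuning to the camera and scene.

```python
import cv2

def extract_contour_edges(image_bgr, low=50, high=150):
    """Detect edges in the acquired image with the Canny algorithm.

    Returns a binary edge map whose non-zero pixels lie on edges,
    including the contour of the object to be positioned.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, low, high)
```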
In the present application, a projection line segment related to a candidate pose and a corresponding contour line segment represent the same edge of the object to be positioned.
For example, if the object to be positioned is a charging pile, then after projection the acquired image contains a projection line segment, related to the candidate pose, that represents one edge of the charging pile's outline, while the contour of the charging pile in the image contains a contour line segment representing the same edge. That contour line segment is the one corresponding to the projection line segment representing that edge.
In the present application, to calculate the actual pose of the object to be positioned, difference information indicating the difference between the projection line segment related to the candidate pose and the contour line segment corresponding to the projection line segment may be calculated. The difference may specifically be the difference between the positions of the two line segments.
The difference between a projection line segment and its corresponding contour line segment is caused by the difference between the candidate pose and the actual pose of the object to be positioned. If the three-dimensional model were projected into the acquired image using the actual pose, each projection line segment would coincide, or substantially coincide, with its corresponding contour line segment. Because the candidate pose is obtained by estimating the pose of the object to be positioned, and therefore differs from the actual pose, projecting with the candidate pose produces a difference between each projection line segment and its corresponding contour line segment. The difference information is therefore associated with the pose difference amount.
The candidate pose is used above only as an example to illustrate the difference that arises between a projection line segment and its corresponding contour line segment whenever the pose used for projection is not the actual pose. The difference information is not limited to that obtained by projecting with the candidate pose, nor is the pose difference amount limited to the difference between the candidate pose and the actual pose: projecting with any pose other than the actual pose produces a difference between the resulting projection line segments and their corresponding contour line segments, and hence corresponding difference information, and any such pose has a corresponding pose difference amount relative to the actual pose.
In some embodiments, calculating the difference information indicating the difference between the projection line segment related to the candidate pose and the contour line segment corresponding to the projection line segment includes: determining key points on the projection line segment and determining an associated point corresponding to each key point, where the associated points lie on the contour line segment corresponding to the projection line segment; generating a distance vector, where each component of the vector is the distance between one key point and its corresponding associated point; and using the distance vector as the difference information.
In some embodiments, determining keypoints on the projected line segments associated with the candidate poses comprises: sampling the projection line segment to obtain a plurality of sampling points; and taking each sampling point as a key point.
In the present application, both endpoints of the projection line segment related to the candidate pose may be taken as key points, and the remaining key points are obtained by sampling the segment. Sampling may start from one of the two endpoints and follow a preset rule, for example taking each new sampling point at a preset distance from the most recently sampled point.
In some embodiments, determining the associated point corresponding to each key point includes: for each key point, finding a point that lies in the normal direction of the key point and on the contour line segment corresponding to the projection line segment related to the candidate pose, and taking the found point as the associated point corresponding to the key point.
In this application, when determining the associated point corresponding to each keypoint, for each keypoint, a point on the contour line segment corresponding to the projection line segment related to the candidate pose may be searched in the normal direction of the keypoint, and the searched point is used as the associated point corresponding to the keypoint. In other words, for each key point, the associated point corresponding to the key point is an intersection point of the normal of the key point and the contour line segment corresponding to the projection line segment related to the candidate pose.
In this application, the difference information indicating the difference between the projection line segment related to the candidate pose and the contour line segment corresponding to the projection line segment may be a distance vector. After the associated point corresponding to each key point has been determined, a distance vector may be generated as this difference information, each component of which is the distance between one key point and its corresponding associated point. The distance between each key point and its corresponding associated point is calculated, yielding a set of distances, and each distance becomes one component of the vector.
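The following sketch illustrates this sampling-and-search procedure for one projection line segment, under illustrative assumptions: a fixed sampling step, a bounded search along the segment normal in a binary edge map (such as the Canny output above), and signed pixel distances as the vector components.

```python
import numpy as np

def distance_vector(seg, edge_map, step=10, max_search=30):
    """Sample key points on one projection line segment and, for each key
    point, search along the segment normal for the nearest contour pixel
    (the associated point). Returns the signed distances as the components
    of the distance vector; key points with no match within max_search
    pixels are skipped. Assumes a non-degenerate segment (length > 0).
    """
    p0, p1 = np.asarray(seg, float)               # 2D endpoints of the segment
    length = np.linalg.norm(p1 - p0)
    direction = (p1 - p0) / length
    normal = np.array([-direction[1], direction[0]])   # unit normal
    n_samples = max(int(length // step) + 1, 2)   # include both endpoints
    distances = []
    for s in np.linspace(0.0, length, n_samples):
        kp = p0 + s * direction                   # key point on the segment
        for d in range(max_search):               # walk outward along +/- normal
            for sign in (+1.0, -1.0):
                q = np.round(kp + sign * d * normal).astype(int)
                if (0 <= q[1] < edge_map.shape[0] and
                        0 <= q[0] < edge_map.shape[1] and
                        edge_map[q[1], q[0]] > 0):
                    distances.append(sign * d)    # signed normal distance
                    break
            else:
                continue                          # no hit at radius d: widen
            break                                 # associated point found
    return np.array(distances)
```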
In some embodiments, before the image including the object to be positioned is acquired, the method further includes: determining the actual pose of a target object of the same type as the object to be positioned, and establishing a three-dimensional model of the target object; for each of a plurality of difference poses that differ from the actual pose, projecting the three-dimensional model of the target object into an image including the target object based on the difference pose to obtain a projection line segment related to the difference pose, and calculating difference information corresponding to the difference pose, which indicates the difference between the projection line segment related to the difference pose and the contour line segment corresponding to that projection line segment; and obtaining the preset association information based on the difference information corresponding to each difference pose and the pose difference amount corresponding to each difference pose, where the pose difference amount corresponding to a difference pose is the difference amount between the difference pose and the actual pose of the target object.
In the present application, in order to obtain the preset association information, the actual pose of a target object of the same type as the object to be positioned may be determined in advance. For example, if the object to be positioned is a charging pile, the target object of the same type is another charging pile used for obtaining the preset association information.
The actual pose of the target object can be determined accurately in advance, and the three-dimensional model of the target object can then be projected into an image including the target object using each of a plurality of difference poses that differ from that actual pose.
For each of these difference poses, the three-dimensional model of the target object is projected into the image including the target object based on the difference pose to obtain a projection line segment related to the difference pose, and the difference information corresponding to the difference pose is then calculated, which indicates the difference between the projection line segment related to the difference pose and the contour line segment corresponding to that projection line segment.
In the present application, the projection line segment related to a difference pose corresponds to a contour line segment; their relationship is analogous to the relationship between the projection line segment related to the candidate pose and its corresponding contour line segment.
Each of the difference poses thus corresponds to one piece of difference information and one pose difference amount, where the pose difference amount corresponding to a difference pose is the difference amount between the difference pose and the actual pose of the target object.
The association relationship between difference information and pose difference amounts can then be determined from the difference information and the pose difference amount corresponding to each difference pose, yielding the preset association information.
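The application does not fix a concrete form for the preset association information. One plausible concrete choice, sketched here purely as an assumption, is a linear map A with d ≈ A·Δr, fitted by least squares from the (pose difference amount, distance vector) pairs collected from the difference poses.

```python
import numpy as np

def fit_association(distance_vectors, pose_differences):
    """Fit preset association information as a linear map d ≈ A @ delta_r.

    distance_vectors: (M, n) array, one distance vector per difference pose.
    pose_differences: (M, p) array, the pose difference amount between each
                      difference pose and the known actual pose of the target.
    Returns A with shape (n, p), a least-squares estimate of the relation.
    """
    D = np.asarray(distance_vectors, float)   # M x n
    P = np.asarray(pose_differences, float)   # M x p
    # Solve P @ X = D for X = A.T in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(P, D, rcond=None)
    return A_T.T
```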
In some embodiments, the method further includes iteratively executing a pose update operation until a preset stop condition is satisfied. The pose update operation includes: projecting, based on the projection pose adopted in this execution, the three-dimensional model of the object to be positioned into the image including the object to be positioned to obtain a projection line segment related to the projection pose; calculating difference information indicating the difference between the projection line segment related to the projection pose and the contour line segment corresponding to the projection line segment; calculating, based on the difference information and the preset association information, the pose difference amount between the projection pose and the actual pose of the object to be positioned, where the preset association information indicates the association relationship between difference information and pose difference amounts; and taking the projection pose adopted in this execution as the candidate pose, or taking the latest projection pose as the projection pose for the next execution, where the latest projection pose is the sum of the projection pose adopted in this execution and the calculated pose difference amount.
In the present application, the pose update operation may be executed iteratively until the preset stop condition is satisfied.
For example, the preset stop condition may be that the number of executions of the pose update operation reaches a count threshold; once it does, execution stops. Whether the preset stop condition is satisfied is checked each time a pose update operation completes.
Before the first pose update operation is executed, the pose of the object to be positioned can be estimated, and the estimated pose is used as the projection pose adopted in the first execution. Each execution of the pose update operation produces a new projection pose, and the projection pose adopted in the last execution may be taken as the candidate pose.
The pose of the object to be positioned can be estimated with SLAM (simultaneous localization and mapping) or VO (visual odometry), based on a plurality of acquired images including the object to be positioned.
In the first execution of the pose update operation, the three-dimensional model of the object to be positioned is projected into the acquired image based on the projection pose adopted in that execution, i.e. the estimated pose. After the first execution, the latest projection pose is calculated as the sum of the projection pose adopted in the first execution and the pose difference amount calculated during it. That latest projection pose is used as the projection pose for the second execution, during which the model is projected again and a new latest projection pose is calculated as the sum of the projection pose adopted in the second execution and the pose difference amount calculated during it; that pose is used in the third execution, and so on. Suppose the preset stop condition is that the pose update operation has been executed 3 times. Then, after the third execution, the check of the stop condition determines that it is satisfied, and the projection pose adopted in the third execution is taken as the candidate pose.
In the present application, the projection line segment associated with the projection pose corresponds to a contour line segment. The relationship between the projection line segment associated with the projection pose and the corresponding contour line segment may refer to the relationship between the projection line segment associated with the candidate pose and the corresponding contour line segment.
In the present application, during one execution of the pose update operation, the three-dimensional model of the object to be positioned is first projected into the acquired image based on the projection pose adopted in this execution, yielding a projection line segment related to the projection pose. Key points on this projection line segment and the associated point corresponding to each key point are then determined, where the associated points lie on the contour line segment corresponding to the projection line segment. For example, the two endpoints of the projection line segment related to the projection pose may be taken as key points, and the two endpoints of the corresponding contour line segment as their associated points.
The manner described above for calculating the difference information for the candidate pose applies equally to the projection pose: difference information indicating the difference between the projection line segment related to the projection pose and the contour line segment corresponding to the projection line segment may be calculated in the same way, and in the present application it may likewise be a distance vector.
In the present application, during one execution of the pose update operation, after the distance vector has been calculated, the pose difference amount between the projection pose adopted in this execution and the actual pose of the object to be positioned may be calculated based on the calculated difference information and the preset association information.
In the present application, the preset association information indicating the association relationship between the pose difference amount and the difference information, e.g. the distance vector, may be a preset function: its independent variable is a term representing the distance vector, and its value is a term representing the pose difference amount.
In the present application, when calculating the pose difference amount between the projection pose adopted in this execution and the actual pose of the object to be positioned, this function may be differentiated with respect to its independent variable, i.e. the term representing the difference information such as the distance vector, and the derivative evaluated at the difference information d calculated in this execution, giving the derivative L_K. The derivative L_K indicates how strongly the pose difference between the projection pose adopted in this execution and the actual pose of the object to be positioned influences the difference between the projection line segment and its corresponding contour line segment. The product of the inverse of L_K with the calculated distance vector d, taken with a negative sign, i.e. -L_K^(-1)·d, may then be used as the pose difference amount between the projection pose adopted in this execution and the actual pose of the object to be positioned.
The projection pose r_k adopted in this execution and this pose difference amount are then added to obtain the latest projection pose: r_(k+1) = r_k - L_K^(-1)·d.
When the preset stop condition is satisfied, the projection pose r_k adopted in this execution is taken as the candidate pose; when it is not satisfied, the latest projection pose r_(k+1) is used as the projection pose for the next execution of the pose update operation.
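A minimal sketch of the whole iteration might look as follows. The names are the hypothetical helpers from the sketches above, with `observe` standing for the project-and-measure step; the Moore-Penrose pseudo-inverse is used in place of the inverse L_K^(-1) written in the text, since the distance vector and the pose generally have different dimensions, and the sketch assumes a fixed number of matched key points so that d always has the same length.

```python
import numpy as np

def refine_pose(r0, A, observe, max_iters=3, tol=1e-3):
    """Iteratively execute the pose update operation.

    r0:      initial projection pose, e.g. the SLAM/VO estimate.
    A:       preset association information with d ≈ A @ delta_r.
    observe: callable taking a pose r and returning the distance vector d
             measured between projection and contour line segments.
    Stops when the iteration count reaches max_iters or ||d|| < tol,
    the two preset stop conditions named in the text.
    """
    r = np.asarray(r0, float)
    for _ in range(max_iters):
        d = observe(r)                     # difference information
        if np.linalg.norm(d) < tol:        # two-norm stop condition
            break
        # Pose difference amount: -pinv(A) @ d plays the role of
        # -L_K^(-1) @ d, since A is generally non-square.
        r = r - np.linalg.pinv(A) @ d      # latest projection pose
    return r                               # candidate pose
```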
In some embodiments, the preset stop condition may be one of the following: the number of executions of the pose update operation reaches a count threshold; the most recently calculated difference information, for example the distance vector, is smaller than a threshold.
In the present application, the preset stop condition may be that the number of executions of the pose update operation reaches a count threshold. When this count is reached after a pose update operation completes, the preset stop condition is satisfied; no further pose update operation is executed, and the projection pose adopted in that execution is taken as the candidate pose.
In the present application, the preset stop condition may instead be that the most recently calculated difference information, for example the distance vector, is smaller than a threshold. The most recently calculated difference information is the difference information calculated during the current execution of the pose update operation. When the difference information calculated during one execution is smaller than the threshold, the preset stop condition is satisfied; after that execution completes, no further pose update operation is executed, and the projection pose adopted in that execution is taken as the candidate pose.
In the present application, the preset stop condition may also be that the two-norm of the most recently calculated difference information, for example the distance vector, is smaller than a threshold. The two-norm is also known as the Euclidean norm: the squares of the absolute values of the elements of the difference information, e.g. the distance vector, are summed, and the square root of that sum is the two-norm.
When, during one execution of the pose update operation, the two-norm of the difference information calculated in that execution is smaller than the threshold, the preset stop condition is satisfied; after that execution completes, no further pose update operation is executed, and the projection pose adopted in that execution is taken as the candidate pose.
Step 103, calculating the actual pose of the object to be positioned based on the candidate pose and the difference information.
In the present application, when calculating the actual pose of the object to be positioned, the pose difference amount corresponding to the difference information indicating the difference between the positions of the projection line segment related to the candidate pose and its corresponding contour line segment may be calculated from that difference information and the predetermined association relationship between difference information and pose difference amounts. The candidate pose of the object to be positioned and the calculated pose difference amount are then added to obtain the actual pose of the object to be positioned.
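In the notation used above (an illustrative summary, with d the distance vector measured at the candidate pose and L_K the derivative obtained from the preset association information), step 103 amounts to:

```latex
\Delta r = -L_K^{-1} d, \qquad
r_{\text{actual}} = r_{\text{candidate}} + \Delta r
```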
Referring to fig. 2, a block diagram of a positioning apparatus provided in an embodiment of the present application is shown. The positioning device may be mounted in the robot. The positioning device includes: the system comprises an acquisition unit 201, a calculation unit 202 and a positioning unit 203.
The acquisition unit 201 is configured to acquire an image comprising an object to be positioned and to build a three-dimensional model of the object to be positioned;
the calculation unit 202 is configured to calculate difference information indicating a difference between a projection line segment related to a candidate pose and a contour line segment corresponding to a projection line segment, the projection line segment being obtained by projecting a three-dimensional model of the object to be positioned into the image based on the candidate pose, the contour line segment corresponding to the projection line segment being located on a contour of the object to be positioned;
the positioning unit 203 is configured to calculate an actual pose of the object to be positioned based on the candidate poses and the difference information.
In some embodiments, the computing unit 202 is further configured to:
determining key points on the projection line segment and determining associated points corresponding to each key point, wherein the associated points corresponding to the key points are located on the contour line segment corresponding to the projection line segment;
generating a distance vector, wherein each component of the distance vector is the distance between one key point and its corresponding associated point;
determining the distance vector as the difference information.
In some embodiments, the computing unit 202 is further configured to:
sampling the projection line segment to obtain a plurality of sampling points; and determining each sampling point as a key point.
The calculation unit 202 is further configured to:
for each key point, finding a point that lies in the normal direction of the key point and on the contour line segment corresponding to the projection line segment, and taking the found point as the associated point corresponding to the key point.
In some embodiments, the positioning device further comprises:
an iteration unit configured to:
iteratively execute a pose update operation until a preset stop condition is satisfied, the pose update operation comprising:
projecting, based on the projection pose adopted in this execution of the pose update operation, the three-dimensional model of the object to be positioned into the image to obtain a projection line segment related to the projection pose;
calculating difference information indicating the difference between the projection line segment related to the projection pose and the contour line segment corresponding to the projection line segment;
calculating a pose difference amount between the projection pose and the actual pose of the object to be positioned based on the difference information and preset association information, wherein the preset association information indicates an association relationship between difference information and pose difference amounts;
and taking the projection pose adopted in this execution of the pose update operation as the candidate pose, or taking the latest projection pose as the projection pose to be adopted in the next execution of the pose update operation, wherein the latest projection pose is the sum of the projection pose adopted in this execution and the calculated pose difference amount.
In some embodiments, the positioning device further comprises:
a determination unit configured to: before the image including the object to be positioned is acquired, determine the actual pose of a target object of the same type as the object to be positioned, and establish a three-dimensional model of the target object;
for each of a plurality of difference poses that differ from the actual pose of the target object, project the three-dimensional model of the target object into an image including the target object based on the difference pose to obtain a projection line segment related to the difference pose, and calculate difference information corresponding to the difference pose, which indicates the difference between the projection line segment related to the difference pose and the contour line segment corresponding to that projection line segment;
and obtain the preset association information based on the difference information corresponding to each difference pose and the pose difference amount corresponding to each difference pose, wherein the pose difference amount corresponding to a difference pose is the difference amount between the difference pose and the actual pose of the target object.
In some embodiments, the preset stop condition is one of the following: the number of executions of the pose update operation reaches a count threshold; the most recently calculated difference information is smaller than a threshold.
In some embodiments, the object to be positioned is a charging pile.
Fig. 3 is a block diagram of a robot according to the present embodiment. The robot 300 includes a processing component 322 that further includes one or more processors, and memory resources, represented by memory 332, for storing instructions, such as applications, that are executable by the processing component 322. The application programs stored in memory 332 may include one or more modules that each correspond to a set of instructions. Further, the processing component 322 is configured to execute instructions to perform the positioning method described above.
The robot 300 may also include a power component 326 configured to perform power management of the robot 300, a wired or wireless network interface 350 configured to connect the robot 300 to a network, and an input/output (I/O) interface 358. The robot 300 may operate based on an operating system stored in the memory 332, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as a memory comprising instructions, executable by a robot to perform the above method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. A method of positioning, the method comprising:
acquiring an image including an object to be positioned, and establishing a three-dimensional model of the object to be positioned;
calculating difference information indicating a difference between a projection line segment related to a candidate pose and a contour line segment corresponding to the projection line segment, the projection line segment being obtained by projecting a three-dimensional model of the object to be positioned into the image based on the candidate pose, the contour line segment corresponding to the projection line segment being located on a contour of the object to be positioned;
and calculating an actual pose of the object to be positioned based on the candidate pose and the difference information.
2. The method of claim 1, wherein the calculating difference information indicating a difference between a projection line segment related to a candidate pose and a contour line segment corresponding to the projection line segment comprises:
determining key points on the projection line segment, and determining an associated point corresponding to each key point, wherein the associated point corresponding to a key point is located on the contour line segment corresponding to the projection line segment;
generating a distance vector, wherein each component of the distance vector is the distance between a key point and its corresponding associated point;
and determining the distance vector as the difference information.
3. The method of claim 2, wherein the determining key points on the projection line segment comprises:
sampling the projection line segment to obtain a plurality of sampling points;
and determining each sampling point as a key point.
4. The method of claim 3, wherein the determining an associated point corresponding to each key point comprises:
for each key point, finding a point that lies in the normal direction of the key point and is located on the contour line segment corresponding to the projection line segment, and taking the found point as the associated point corresponding to the key point.
5. The method according to any one of claims 1 to 4, wherein the method further comprises:
iteratively executing a pose update operation until a preset stop condition is satisfied, the pose update operation comprising:
projecting the three-dimensional model of the object to be positioned into the image based on the projection pose adopted for the current pose update operation, to obtain a projection line segment related to the projection pose;
calculating difference information indicating a difference between the projection line segment related to the projection pose and a contour line segment corresponding to the projection line segment;
calculating a pose difference amount between the projection pose and the actual pose of the object to be positioned based on the difference information and preset association information, wherein the preset association information indicates an association relationship between difference information and pose difference amounts;
and either taking the projection pose adopted for the current pose update operation as the candidate pose, or taking an updated projection pose as the projection pose for the next pose update operation, wherein the updated projection pose is the sum of the projection pose adopted for the current pose update operation and the calculated pose difference amount.
6. The method of claim 5, wherein prior to the acquiring an image including an object to be positioned, the method further comprises:
determining an actual pose of a target object of the same type as the object to be positioned, and establishing a three-dimensional model of the target object;
for each of a plurality of differing poses that differ from the actual pose of the target object, projecting the three-dimensional model of the target object into an image including the target object based on that differing pose, to obtain a projection line segment associated with the differing pose, and calculating difference information corresponding to the differing pose, wherein the difference information corresponding to the differing pose indicates the difference between the projection line segment associated with the differing pose and the contour line segment corresponding to that projection line segment;
and obtaining the preset association information based on the difference information corresponding to each differing pose and the pose difference amount corresponding to each differing pose, wherein the pose difference amount corresponding to a differing pose is the amount by which that differing pose differs from the actual pose of the target object.
7. The method according to claim 5, wherein the preset stop condition is one of the following: the number of executions of the pose update operation reaches a count threshold, or the most recently calculated difference information is smaller than a difference threshold.
8. The method of claim 1, wherein the object to be positioned is a charging pile.
9. A positioning device, the device comprising:
an acquisition unit configured to acquire an image including an object to be positioned and to establish a three-dimensional model of the object to be positioned;
a calculation unit configured to calculate difference information indicating a difference between a projection line segment related to a candidate pose and a contour line segment corresponding to the projection line segment, the projection line segment being obtained by projecting the three-dimensional model of the object to be positioned into the image based on the candidate pose, and the contour line segment corresponding to the projection line segment being located on a contour of the object to be positioned;
a positioning unit configured to calculate an actual pose of the object to be positioned based on the candidate pose and the difference information.
10. A robot, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 8.
11. A storage medium in which instructions, when executed by a processor of a robot, enable the robot to perform the method of any of claims 1 to 8.
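For concreteness, a minimal sketch of the difference-information computation recited in claims 2 to 4 follows: sample key points along each projection line segment, search along the segment normal for the nearest contour pixel as the associated point, and stack the distances into the distance vector. The binary `contour_mask` input and the bounded pixel search are illustrative assumptions, not part of the claims.

```python
import numpy as np

def difference_information(projected_segments, contour_mask,
                           samples_per_segment=8, search_radius=15):
    """Sample key points on each projection line segment, find each key
    point's associated point on the contour along the segment normal, and
    stack the key-point-to-associated-point distances into one vector.
    """
    distances = []
    for p0, p1 in projected_segments:           # segment endpoints as (x, y)
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        direction = (p1 - p0) / np.linalg.norm(p1 - p0)
        normal = np.array([-direction[1], direction[0]])  # 2-D segment normal
        for t in np.linspace(0.0, 1.0, samples_per_segment):
            key = p0 + t * (p1 - p0)            # key point (sampling point)
            best = None
            for s in range(-search_radius, search_radius + 1):
                x, y = np.round(key + s * normal).astype(int)
                if (0 <= y < contour_mask.shape[0]
                        and 0 <= x < contour_mask.shape[1]
                        and contour_mask[y, x]
                        and (best is None or abs(s) < abs(best))):
                    best = s                    # nearest contour pixel on the normal
            # fall back to the search radius when no associated point is found
            distances.append(float(best) if best is not None else float(search_radius))
    return np.asarray(distances)                # the distance vector
```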
CN201911304195.8A 2019-12-17 2019-12-17 Positioning method, positioning device, robot and storage medium Pending CN111127547A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911304195.8A CN111127547A (en) 2019-12-17 2019-12-17 Positioning method, positioning device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911304195.8A CN111127547A (en) 2019-12-17 2019-12-17 Positioning method, positioning device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN111127547A true CN111127547A (en) 2020-05-08

Family

ID=70499423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911304195.8A Pending CN111127547A (en) 2019-12-17 2019-12-17 Positioning method, positioning device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN111127547A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100245355A1 (en) * 2009-03-27 2010-09-30 Ju Yong Chang Method for Estimating 3D Pose of Specular Objects
US20150317821A1 (en) * 2014-04-30 2015-11-05 Seiko Epson Corporation Geodesic Distance Based Primitive Segmentation and Fitting for 3D Modeling of Non-Rigid Objects from 2D Images
CN105844276A (en) * 2015-01-15 2016-08-10 北京三星通信技术研究有限公司 Face posture correction method and face posture correction device
US20170287154A1 (en) * 2016-03-29 2017-10-05 Fujitsu Limited Image processing apparatus and image processing method
US20190206078A1 (en) * 2018-01-03 2019-07-04 Baidu Online Network Technology (Beijing) Co., Ltd . Method and device for determining pose of camera
CN110111388A (en) * 2019-05-10 2019-08-09 北京航空航天大学 Three-dimension object pose parameter estimation method and visual apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ma Jin; Zhang Guofeng; Dai Shuling; Zeng Rui: "Kinematic calibration of a Stewart platform based on relative visual pose" *

Similar Documents

Publication Publication Date Title
US10810734B2 (en) Computer aided rebar measurement and inspection system
US10650528B2 (en) Systems and methods for edge points based monocular visual SLAM
US10996062B2 (en) Information processing device, data management device, data management system, method, and program
Zubizarreta et al. A framework for augmented reality guidance in industry
US10417781B1 (en) Automated data capture
CN108256479B (en) Face tracking method and device
US9443297B2 (en) System and method for selective determination of point clouds
JP4985516B2 (en) Information processing apparatus, information processing method, and computer program
CN111325796A (en) Method and apparatus for determining pose of vision device
US8755630B2 (en) Object pose recognition apparatus and object pose recognition method using the same
JP5671281B2 (en) Position / orientation measuring apparatus, control method and program for position / orientation measuring apparatus
US20170177746A1 (en) Model generating device, position and orientation calculating device, and handling robot device
US10354402B2 (en) Image processing apparatus and image processing method
US10861173B2 (en) Hole-based 3D point data alignment
Belter et al. Improving accuracy of feature-based RGB-D SLAM by modeling spatial uncertainty of point features
CN109255801B (en) Method, device and equipment for tracking edges of three-dimensional object in video and storage medium
CN113052907A (en) Positioning method of mobile robot in dynamic environment
JP6936974B2 (en) Position / orientation estimation device, position / orientation estimation method and program
CN113822996B (en) Pose estimation method and device for robot, electronic device and storage medium
JP5976089B2 (en) Position / orientation measuring apparatus, position / orientation measuring method, and program
US11145048B2 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium for storing program
CN112233161B (en) Hand image depth determination method and device, electronic equipment and storage medium
CN111127547A (en) Positioning method, positioning device, robot and storage medium
JP2014102805A (en) Information processing device, information processing method and program
US20220051436A1 (en) Learning template representation libraries

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination