CN114750147B - Method and apparatus for determining the spatial pose of a robot, and robot

Method and apparatus for determining the spatial pose of a robot, and robot

Info

Publication number: CN114750147B (application CN202210236826.2A)
Authority: CN (China)
Prior art keywords: pixel point, image, point, depth, pose
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210236826.2A
Other languages: Chinese (zh)
Other versions: CN114750147A (en)
Inventor: 洪泽
Current assignee: Shenzhen Zbeetle Intelligent Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shenzhen Zbeetle Intelligent Co Ltd
Priority date: 2022-03-10 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2022-03-10
Application filed by Shenzhen Zbeetle Intelligent Co Ltd
Priority claimed from CN202210236826.2A
Publication of CN114750147A: 2022-07-15
Application granted; publication of CN114750147B: 2023-11-24
Current legal status: Active


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1669: Programme controls characterised by special application, e.g. multi-arm co-operation, assembly, grasping

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a method and an apparatus for determining the spatial pose of a robot, and to a robot. The method comprises the following steps: acquiring two consecutive frames of pose images captured by a photographing device of a target robot, each pose image comprising a color image and a depth image; extracting feature points from the two color images respectively, where the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from the central pixel, the central pixel being the pixel corresponding to the image center of the color image; matching the feature points extracted from the two color images to determine matching points; and, according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot. The method can improve the accuracy of determining the spatial pose of the robot.

Description

Method and apparatus for determining the spatial pose of a robot, and robot
Technical Field
The present application relates to the field of robot positioning technologies, and in particular, to a method and an apparatus for determining a spatial pose of a robot, and a robot.
Background
With the rapid development of robotics, robots are now commonly equipped with an RGBD camera to enable accurate positioning and control. The RGBD camera captures pose images, each comprising a color image and a depth image. In the conventional approach, feature points are extracted uniformly from two consecutive color images and matched to determine the matching points; pose calculation is then performed with the corresponding points in the depth images to obtain the spatial pose of the robot, which in turn supports obstacle-avoidance control, SLAM (Simultaneous Localization and Mapping) positioning and mapping, navigation, three-dimensional reconstruction, and similar operations. However, because of the imaging principle of the RGBD camera, extracting feature points uniformly from the color image and computing the pose from the corresponding depth values yields a spatial pose that is not accurate enough, introducing errors into obstacle-avoidance control, SLAM positioning and mapping, navigation, three-dimensional reconstruction, and other operations that use that pose.
Therefore, improving the accuracy with which the spatial pose of a robot is determined is a technical problem that those skilled in the art currently need to solve.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a spatial pose determination method and apparatus, a computer device, a robot, a computer-readable storage medium, and a computer program product capable of improving the accuracy with which the spatial pose of a robot is determined.
In a first aspect, the application provides a method for determining the spatial pose of a robot. The method comprises the following steps:
acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
extracting feature points from the two color images respectively, wherein the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from the central pixel; the central pixel is the pixel corresponding to the image center of the color image;
matching the feature points extracted from the two color images to determine matching points; and
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
In one embodiment, the region near the central pixel is the region formed by taking the central pixel as the center and a preset number of pixels as the radius, or the region consisting of pixels whose distance from the central pixel is less than or equal to a preset number of pixels, or the region consisting of pixels in a preset neighborhood of the central pixel;
the region far from the central pixel is the part of the color image other than the region near the central pixel.
In one embodiment, the region near the central pixel is the region formed by taking the central pixel as the center and a preset number of pixels as the radius; the region far from the central pixel is the part of the color image other than the region near the central pixel.
In one embodiment, the region near the central pixel is the region consisting of pixels whose distance from the central pixel is less than or equal to a preset number of pixels; the region far from the central pixel is the part of the color image other than the region near the central pixel.
In one embodiment, the region near the central pixel is the region consisting of pixels in a preset neighborhood of the central pixel; the region far from the central pixel is the part of the color image other than the region near the central pixel.
In one embodiment, the number of feature points selected in the region near the central pixel is a preset multiple of the number selected in the region far from the central pixel, the preset multiple being greater than 1.
In one embodiment, extracting feature points from the region near the central pixel and the region far from the central pixel of the two color images includes:
extracting feature points from the region near the central pixel and the region far from the central pixel according to the distance between the region in which each pixel of the color image lies and the central pixel, where the number of feature points extracted in a region is inversely proportional to the distance between that region and the central pixel.
In one embodiment, before the pose calculation is performed with the depth values at the matching-point positions in the depth image according to the correspondence between the color image and the depth image to obtain the spatial pose of the target robot, the method further includes:
deleting the feature points whose corresponding depth values in the depth image lie outside a preset distance range.
In a second aspect, the application further provides an apparatus for determining the spatial pose of a robot. The apparatus comprises:
an acquisition module for acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
an extraction module for extracting feature points from the two color images respectively, wherein the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from the central pixel; the central pixel is the pixel corresponding to the image center of the color image;
a matching module for matching the feature points extracted from the two color images to determine matching points; and
a calculation module for performing pose calculation, according to the correspondence between the color image and the depth image, with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
In a third aspect, the present application also provides a computer device comprising a memory storing a computer program and a processor that, when executing the computer program, implements the following steps:
acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
extracting feature points from the two color images respectively, wherein the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from the central pixel; the central pixel is the pixel corresponding to the image center of the color image;
matching the feature points extracted from the two color images to determine matching points; and
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
In a fourth aspect, the present application further provides a robot on which an RGBD camera is mounted, the robot further comprising a computer device as described above.
In a fifth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the following steps:
acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
extracting feature points from the two color images respectively, wherein the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from the central pixel; the central pixel is the pixel corresponding to the image center of the color image;
matching the feature points extracted from the two color images to determine matching points; and
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
In a sixth aspect, the application also provides a computer program product comprising a computer program which, when executed by a processor, implements the following steps:
acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
extracting feature points from the two color images respectively, wherein the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from the central pixel; the central pixel is the pixel corresponding to the image center of the color image;
matching the feature points extracted from the two color images to determine matching points; and
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
In the above spatial pose determination method and apparatus, computer device, robot, storage medium, and computer program product, because the depth values in the central region of the depth image are the most accurate while the depth values at the image edges carry larger errors, a corresponding number of feature points is selected separately for the region of the color image near the central pixel and for the region far from it, with more feature points taken near the central pixel than far from it; after the matching points are determined, pose calculation is performed, according to the correspondence between the color image and the depth image, with the depth values of the matching points in the depth image to obtain the spatial pose of the target robot. Because a larger share of the matched points falls where the depth values are reliable, the accuracy of the determined spatial pose is improved.
Drawings
FIG. 1 is a flow diagram of a method for determining a spatial pose of a robot in one embodiment;
FIG. 2 is a schematic view of the area division of a color image according to one embodiment;
FIG. 3 is a block diagram of a spatial pose determination device of a robot in one embodiment;
fig. 4 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The method for determining the spatial pose of a robot may be applied directly on the robot, on a server, or on a system comprising a terminal and a server that interact to implement it. The robot in the embodiments of the application is any robot that moves in space and whose spatial pose needs to be determined; this embodiment does not limit the specific type or model of robot. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
As shown in fig. 1, a method for determining the spatial pose of a robot is provided; the method is described here as applied on a robot. The method comprises the following steps:
Step 102: acquire two consecutive frames of pose images captured by the photographing device of the target robot; each pose image includes a color image and a depth image.
Specifically, the target robot is the robot whose spatial pose is to be determined in this embodiment. The photographing device is a device mounted on the target robot for capturing images of the surrounding environment, shooting continuously to obtain consecutive frames. In a preferred embodiment, the photographing device is an RGBD camera, i.e., a camera based on structured-light technology that typically has two imagers: an RGB camera that captures color images and an IR camera that captures infrared images, i.e., depth images. It will be appreciated that determining the spatial pose requires at least two consecutive frames of pose images, i.e., two consecutive color images and the two depth images corresponding to them.
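For concreteness only, the Python sketch below shows how two consecutive aligned color/depth frame pairs might be grabbed; the patent names no specific camera or SDK, so the use of pyrealsense2 (Intel RealSense) and the stream parameters here are assumptions, not part of the claimed method.

```python
# Hypothetical acquisition sketch: two consecutive aligned color/depth frames.
import numpy as np
import pyrealsense2 as rs  # assumption: an Intel RealSense RGBD camera

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # register depth pixels onto the color image

def grab_pose_image():
    """Return one 'pose image': a color frame and its aligned depth frame."""
    frames = align.process(pipeline.wait_for_frames())
    color = np.asanyarray(frames.get_color_frame().get_data())
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 depth units (typically mm)
    return color, depth

color1, depth1 = grab_pose_image()  # frame k
color2, depth2 = grab_pose_image()  # frame k+1
```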
Step 104: extract feature points from the two color images respectively, where the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from the central pixel; the central pixel is the pixel corresponding to the image center of the color image.
Specifically, feature points are points with distinctive characteristics in the color image; they are extracted so that they can be matched, thereby determining the relative spatial pose between two consecutive frames.
Feature points can be extracted from the color image with feature extraction algorithms such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test, a corner detector), or GLOH (Gradient Location and Orientation Histogram), or with improved variants such as PCA-SIFT, ICA-SIFT, P-ASURF, R-ASURF, or Radon-SIFT; this embodiment does not limit the algorithm used for feature extraction.
When the feature extraction operation is performed on the color image, feature points are extracted and a corresponding number of them is selected according to the distance between each feature point and the central pixel of the color image. The central pixel is the pixel located at the center of the color image. In this embodiment, the number of feature points selected in the region near the central pixel is greater than the number selected in the region far from it; the specific numbers are not limited here and may be set according to actual requirements.
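One plausible realization of this center-weighted extraction is sketched below with OpenCV's ORB detector; the detector choice, the circular region radius, and the per-region feature budgets are illustrative assumptions, since the embodiment leaves both the algorithm and the counts open.

```python
import cv2
import numpy as np

def extract_center_weighted(gray, n_near=300, n_far=100, radius=160):
    """Detect more features near the image center than far from it.

    The 'near' region is a circle of `radius` pixels around the central
    pixel; the 'far' region is the rest of the image.
    """
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    near = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2

    near_mask = near.astype(np.uint8) * 255   # detector mask for the near region
    far_mask = (~near).astype(np.uint8) * 255  # detector mask for the far region

    kp_near, des_near = cv2.ORB_create(nfeatures=n_near).detectAndCompute(gray, near_mask)
    kp_far, des_far = cv2.ORB_create(nfeatures=n_far).detectAndCompute(gray, far_mask)

    keypoints = list(kp_near) + list(kp_far)
    descriptors = np.vstack([d for d in (des_near, des_far) if d is not None])
    return keypoints, descriptors
```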
Step 106: match the feature points extracted from the two color images and determine the matching points.
Specifically, after feature points have been extracted from the two color images, the extracted feature points are matched: differences between descriptors are used to judge which features correspond to the same physical point, and the successfully matched feature points are taken as the matching points.
In practice, the matching points can be determined from the distances between feature descriptors of the two color images; the distance can be computed as a Euclidean distance, a cosine distance, and so on, and this embodiment does not limit the calculation method.
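Continuing the ORB-based sketch (binary descriptors call for Hamming distance; Euclidean or cosine distance would suit float descriptors such as SIFT or SURF), a brute-force matcher with cross-checking is one way the matching points might be obtained; the distance threshold is an assumed value.

```python
def match_features(kp1, des1, kp2, des2, max_distance=50):
    """Match descriptors between two frames and return matched pixel positions."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    good = [m for m in matches if m.distance < max_distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])  # (u, v) in frame 1
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])  # (u, v) in frame 2
    return pts1, pts2
```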
Step 108: according to the correspondence between the color image and the depth image, perform pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
Specifically, each color image has a depth image corresponding to it, i.e., the color images and depth images are in one-to-one correspondence. Therefore, from the positions of the matching points determined in the color image and the correspondence between the color image and the depth image, the depth values of the matching points are read from the depth image, and pose calculation is performed with those depth values to obtain the spatial pose of the target robot. Concretely, an ICP (Iterative Closest Point) registration operation can be performed on the points given by the depth values of the matching points to determine the spatial pose of the target robot.
More specifically, call the two consecutive frames the first and second color images and the first and second depth images. After matching points A1 and B1 are determined from the first and second color images, the depth values of A1 and B1 are read directly at the corresponding pixel positions of the first and second depth images, and pose calculation is then performed with these depth values to obtain the spatial pose of the target robot. For the calculation, point cloud P1 is computed from the first depth image and point cloud P2 from the second, and an ICP registration of P1 and P2 yields the corresponding spatial pose. In general, the color image and the depth image are strictly aligned in time, which improves the accuracy of the computed spatial pose.
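A minimal sketch of this step, assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy are assumed parameters) and using Open3D's point-to-point ICP; the patent specifies ICP registration but no particular library.

```python
import numpy as np
import open3d as o3d

def backproject(pts_uv, depth_m, fx, fy, cx, cy):
    """Back-project matched pixels (u, v) with metric depth into 3D camera points."""
    u, v = pts_uv[:, 0], pts_uv[:, 1]
    z = depth_m[v.astype(int), u.astype(int)]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])

def icp_pose(p1_xyz, p2_xyz, max_corr_dist=0.05):
    """Register the two matched point clouds; returns a 4x4 rigid transform."""
    pc1, pc2 = o3d.geometry.PointCloud(), o3d.geometry.PointCloud()
    pc1.points = o3d.utility.Vector3dVector(p1_xyz)
    pc2.points = o3d.utility.Vector3dVector(p2_xyz)
    result = o3d.pipelines.registration.registration_icp(
        pc1, pc2, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # inter-frame pose of frame 1 expressed in frame 2
```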
In the above method for determining the spatial pose of a robot, because the depth values in the central region of the depth image are the most accurate while the depth values at the image edges carry larger errors, a corresponding number of feature points is selected separately for the region of the color image near the central pixel and for the region far from it, with more feature points taken near the central pixel than far from it; after the matching points are determined, pose calculation is performed, according to the correspondence between the color image and the depth image, with the depth values of the matching points in the depth image to obtain the spatial pose of the target robot. Because a larger share of the matched points falls where the depth values are reliable, the accuracy of the determined spatial pose is improved.
Building on the above embodiment, this embodiment further describes and refines the technical solution. Specifically, in this embodiment, the region near the central pixel is the region formed by taking the central pixel as the center and a preset number of pixels as the radius, or the region consisting of pixels whose distance from the central pixel is less than or equal to a preset number of pixels, or the region consisting of pixels in a preset neighborhood of the central pixel;
the region far from the central pixel is the part of the color image other than the region near the central pixel.
Specifically, in this embodiment the color image is divided into two regions: the region near the central pixel and the region far from it. The region near the central pixel may be determined as the circular region whose center is the central pixel and whose radius is a preset number of pixels; every pixel inside that circle is considered near the central pixel.
Alternatively, in other embodiments, the region near the central pixel may be the region consisting of pixels whose distance from the central pixel is less than or equal to a preset number of pixels: a preset number of pixels is first fixed as a threshold, the distance between each pixel and the central pixel is computed, and pixels whose distance does not exceed the threshold are taken as near the central pixel; the region they form is the region near the central pixel.
Alternatively, in other embodiments, the region near the central pixel may be the region consisting of pixels in a preset neighborhood of the central pixel, where the preset neighborhood is the region of pixels surrounding the central pixel: every pixel in the preset neighborhood is considered near the central pixel, and the neighborhood as a whole forms the region near the central pixel.
Correspondingly, once the region near the central pixel has been determined, the rest of the color image is taken as the region far from the central pixel.
Specifically, after the region near the central pixel and the region far from it have been determined, feature points are extracted in both regions, and the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from it.
Thus, the method of this embodiment determines the region near the central pixel and the region far from it in a way that is convenient and easy to implement.
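The three alternative region definitions can be expressed as boolean masks, as in the illustrative sketch below; the choice of the max-norm for the pixel-distance variant is an assumption, since the embodiment does not fix a distance metric.

```python
import numpy as np

def near_center_mask(h, w, mode="radius", r=160, k=3):
    """Boolean mask of the 'near the central pixel' region under the three
    alternative definitions above; everything False is the 'far' region."""
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    if mode == "radius":        # circle of r pixels centered on the central pixel
        return ((yy - cy) ** 2 + (xx - cx) ** 2) <= r ** 2
    if mode == "distance":      # pixel distance <= r (max-norm assumed here)
        return np.maximum(np.abs(yy - cy), np.abs(xx - cx)) <= r
    if mode == "neighborhood":  # (2k+1) x (2k+1) neighborhood of the central pixel
        return (np.abs(yy - cy) <= k) & (np.abs(xx - cx) <= k)
    raise ValueError(f"unknown mode: {mode}")

far_mask = ~near_center_mask(480, 640)  # region far from the central pixel
```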
As a preferred embodiment, the number of feature points selected in the region near the central pixel is a preset multiple of the number selected in the region far from it, the preset multiple being greater than 1.
Specifically, in this embodiment, when feature points are extracted from the region near the central pixel and the region far from it, the numbers of extracted feature points differ by a preset multiple. The specific value of the multiple is not limited here and is set according to actual operating requirements, but it is normally greater than 1. For example, if N feature points are selected in the region far from the central pixel, then 2N, 3N, 4N, or a similar multiple may be selected in the region near it.
Thus, by making the number of feature points selected near the central pixel a preset multiple of the number selected far from it, this embodiment determines the numbers of feature points to extract in the two regions conveniently and quickly.
Building on the above embodiment, this embodiment further describes and refines the technical solution. Specifically, in this embodiment, extracting feature points from the region near the central pixel and the region far from it in the two color images includes:
extracting feature points from the region near the central pixel and the region far from it according to the distance between the region in which each pixel of the color image lies and the central pixel, where the number of feature points extracted in a region is inversely proportional to the distance between that region and the central pixel.
In this embodiment, the region in which a pixel lies is the positional region that the pixel occupies in the color image; it may be the region formed by the pixels adjacent to that pixel, or regions may be formed by computing the distance from each pixel to the central pixel and grouping pixels at the same distance into the same region. Correspondingly, when feature points are extracted in these regions, the distance between each region and the central pixel determines how many feature points are extracted there.
Specifically, the number of feature points extracted in a region is inversely proportional to the distance between that region and the central pixel: the smaller the distance, the more feature points are extracted; the larger the distance, the fewer.
Fig. 2 is a schematic view of the region division of a color image according to this embodiment. Suppose the color image comprises three regions, region (1), region (2), and region (3), each containing a number of pixels, with the distances from region (1), region (2), and region (3) to the central pixel increasing in that order; that is, the distance from region (1) to the central pixel is smaller than that from region (2), which is smaller than that from region (3). Correspondingly, the number of feature points extracted in region (1) is greater than that in region (2), which is greater than that in region (3).
Likewise, the numbers of feature points extracted in regions (1), (2), and (3) may be related by a preset multiple, for example with the count in region (1) a preset multiple of the count in region (2), and the count in region (2) a preset multiple of the count in region (3); or the counts may be determined from the actual distances of regions (1), (2), and (3) to the central pixel together with a preset linear relation between distance and feature count.
Thus, with the method of this embodiment, the number of extracted feature points grows as the distance to the central pixel shrinks: the closer a region is to the central pixel, the more feature points are extracted there, which raises the share of accurate depth values and improves the precision of the computed spatial pose.
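As a small worked example of this inverse-proportional allocation (the total budget of 600 features and the relative region distances are assumed values):

```python
def allocate_feature_counts(region_distances, total=600):
    """Split a feature budget so each region's share is inversely
    proportional to its distance from the central pixel."""
    weights = [1.0 / d for d in region_distances]
    norm = sum(weights)
    return [round(total * w / norm) for w in weights]

# Regions (1), (2), (3) at relative distances 1 : 2 : 4 from the center
print(allocate_feature_counts([1.0, 2.0, 4.0]))  # -> [343, 171, 86]
```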
Building on the above embodiment, this embodiment further describes and refines the technical solution. Specifically, in this embodiment, before the pose calculation is performed with the depth values at the matching-point positions in the depth image according to the correspondence between the color image and the depth image to obtain the spatial pose of the target robot, the method further includes:
deleting the feature points whose corresponding depth values in the depth image lie outside a preset distance range.
It can be understood that the photographing device of the target robot has its own optimal working distance: depth values measured within that distance are the most accurate, and depth values measured outside it are less accurate. For example, the optimal working distance of some structured-light RGBD cameras is 1 m to 2 m; beyond that distance, the error of the depth values grows sharply and their accuracy drops markedly.
In this embodiment, a preset distance range is set in advance according to the optimal working distance of the photographing device; each depth value in the depth image is then compared against this range, depth values outside the range are identified, and the feature points corresponding to those depth values are deleted.
More specifically, if the optimal working distance is a single value, the preset distance range may be that value or a range around it that includes a fault-tolerance margin. If the optimal working distance is itself a range, the preset distance range may be that range or the range widened by a fault-tolerance margin. For example, if the optimal working distance is 1 m to 2 m and the fault-tolerance margin is 0.2 m, the corresponding preset distance range may be (1 ± 0.2) m to (2 ± 0.2) m, i.e., roughly 0.8 m to 2.2 m. The margin itself is not limited here and is set according to actual requirements.
By further deleting the feature points whose depth values in the depth image fall outside the preset distance range, this embodiment improves the accuracy of the selected depth values and hence the accuracy with which the spatial pose of the target robot is determined.
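A minimal sketch of this filtering step, assuming depth has already been converted to meters and using the roughly 0.8 m to 2.2 m example range derived above:

```python
import numpy as np

def filter_by_depth_range(pts_uv, depth_m, d_min=0.8, d_max=2.2):
    """Keep only matched pixels whose depth lies inside the preset range
    (and is non-zero, since a zero reading usually means no measurement)."""
    u = pts_uv[:, 0].astype(int)
    v = pts_uv[:, 1].astype(int)
    z = depth_m[v, u]
    keep = (z > 0) & (z >= d_min) & (z <= d_max)
    return pts_uv[keep]
```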
To help those skilled in the art better understand the technical solution of the present application, it is described in detail below with reference to a practical application scenario. In this embodiment of the application, a method for determining the spatial pose of a robot comprises the following specific steps:
acquiring two consecutive frames of pose images captured by the photographing device of the target robot; each pose image comprises a color image and a depth image;
deleting the feature points whose corresponding depth values in the depth image lie outside a preset distance range;
extracting feature points from the region near the central pixel and the region far from it according to the distance between the region in which each pixel of the color image lies and the central pixel, where the number of feature points extracted in a region is inversely proportional to the distance between that region and the central pixel, and the central pixel is the pixel corresponding to the image center of the color image;
matching the feature points extracted from the two color images to determine the matching points; and
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
In the above method for determining the spatial pose of a robot, because the depth values in the central region of the depth image are the most accurate while the depth values at the image edges carry larger errors, a corresponding number of feature points is selected separately for the region of the color image near the central pixel and for the region far from it, with more feature points taken near the central pixel than far from it; after the matching points are determined, pose calculation is performed, according to the correspondence between the color image and the depth image, with the depth values of the matching points in the depth image to obtain the spatial pose of the target robot.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in those flowcharts may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times, and which need not be executed sequentially but may be performed in turn or alternately with at least part of the other steps or stages.
Based on the same inventive concept, an embodiment of the application further provides an apparatus for determining the spatial pose of a robot, used to implement the method described above. Since the apparatus solves the problem in a manner similar to that described for the method, for the specific limitations of the one or more apparatus embodiments below, reference may be made to the limitations of the method above; they are not repeated here.
In one embodiment, as shown in fig. 3, an apparatus for determining the spatial pose of a robot is provided, comprising an acquisition module 302, an extraction module 304, a matching module 306, and a calculation module 308, wherein:
the acquisition module 302 is configured to acquire two consecutive frames of pose images captured by the photographing device of the target robot; each pose image comprises a color image and a depth image;
the extraction module 304 is configured to extract feature points from the two color images respectively, where the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from it; the central pixel is the pixel corresponding to the image center of the color image;
the matching module 306 is configured to match the feature points extracted from the two color images and determine the matching points; and
the calculation module 308 is configured to perform pose calculation, according to the correspondence between the color image and the depth image, with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
The apparatus for determining the spatial pose of a robot has the same beneficial effects as the method described above.
In one embodiment, the region near the central pixel is the region formed by taking the central pixel as the center and a preset number of pixels as the radius, or the region consisting of pixels whose distance from the central pixel is less than or equal to a preset number of pixels, or the region consisting of pixels in a preset neighborhood of the central pixel;
the region far from the central pixel is the part of the color image other than the region near the central pixel.
In one embodiment, the extraction module 304 includes:
an extraction submodule configured to extract feature points from the region near the central pixel and the region far from it according to the distance between the region in which each pixel of the color image lies and the central pixel, where the number of feature points extracted in a region is inversely proportional to the distance between that region and the central pixel.
In one embodiment, the apparatus for determining the spatial pose of a robot further includes:
a deletion module configured to delete the feature points whose corresponding depth values in the depth image lie outside the preset distance range.
Each module of the above apparatus may be implemented wholly or partly in software, in hardware, or in a combination of the two. The modules may be embedded, in hardware form, in a processor of the computer device or be independent of it, or be stored, in software form, in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure is as shown in fig. 4. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor provides computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface communicates with external terminals over a wired or wireless connection; the wireless connection may be implemented with WiFi, a mobile cellular network, NFC (near-field communication), or another technology. The computer program, when executed by the processor, implements a method for determining the spatial pose of a robot. The display screen may be a liquid-crystal or electronic-ink display, and the input device may be a touch layer over the display, a key, trackball, or touchpad on the housing, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 4 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the following steps:
acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
extracting feature points from the two color images respectively, where the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from it; the central pixel is the pixel corresponding to the image center of the color image;
matching the feature points extracted from the two color images to determine matching points; and
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
The computer device provided in this embodiment of the application has the same beneficial effects as the method for determining the spatial pose of a robot described above.
In one embodiment, a robot is provided, on which an RGBD camera is mounted, the robot further comprising a computer device as described above.
The robot provided in this embodiment of the application has the same beneficial effects as the method for determining the spatial pose of a robot described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the following steps:
acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
extracting feature points from the two color images respectively, where the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from it; the central pixel is the pixel corresponding to the image center of the color image;
matching the feature points extracted from the two color images to determine matching points; and
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
The computer-readable storage medium provided in this embodiment of the application has the same beneficial effects as the method for determining the spatial pose of a robot described above.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the following steps:
acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
extracting feature points from the two color images respectively, where the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from it; the central pixel is the pixel corresponding to the image center of the color image;
matching the feature points extracted from the two color images to determine matching points; and
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot.
The computer program product provided in this embodiment of the application has the same beneficial effects as the method for determining the spatial pose of a robot described above.
The user information (including but not limited to user device information and personal information) and the data (including but not limited to data used for analysis, stored data, and displayed data) referred to in the present application are all information and data authorized by the user or fully authorized by all parties concerned.
Those skilled in the art will appreciate that all or part of the flows of the above method embodiments may be completed by a computer program instructing the relevant hardware; the program may be stored on a non-transitory computer-readable storage medium and, when executed, may include the flows of the above method embodiments. Any reference to memory, database, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), an external cache, or the like. By way of illustration and not limitation, RAM is available in many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, without limitation, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described; nevertheless, any combination of these technical features that involves no contradiction should be considered within the scope of this description.
The above examples express only a few embodiments of the application; their description is relatively specific and detailed but should not be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the application, and these all fall within its protection scope. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. A method for determining the spatial pose of a robot, the method comprising:
acquiring two consecutive frames of pose images captured by a photographing device of a target robot; each pose image comprises a color image and a depth image;
extracting feature points from the two color images respectively, wherein the number of feature points selected in the region near the central pixel of the color image is greater than the number selected in the region far from the central pixel; the central pixel is the pixel corresponding to the image center of the color image, and the region near the central pixel is the region formed by taking the central pixel as the center and a preset number of pixels as the radius, or the region consisting of pixels whose distance from the central pixel is less than or equal to a preset number of pixels, or the region consisting of pixels in a preset neighborhood of the central pixel; the region far from the central pixel is the part of the color image other than the region near the central pixel, and the number of feature points selected in the region near the central pixel is a preset multiple, greater than 1, of the number selected in the region far from it;
or extracting feature points from the region near the central pixel and the region far from it according to the distance between the region in which each pixel of the color image lies and the central pixel, wherein the number of feature points extracted in a region is inversely proportional to the distance between that region and the central pixel;
after the feature points have been extracted from the two color images, matching the extracted feature points, judging through descriptor differences which feature points correspond to the same physical point, and determining the successfully matched feature points to obtain the matching points;
according to the correspondence between the color image and the depth image, performing pose calculation with the depth values at the matching-point positions in the depth image to obtain the spatial pose of the target robot;
wherein performing the pose calculation with the depth values at the matching-point positions in the depth image according to the correspondence between the color image and the depth image to obtain the spatial pose of the target robot comprises:
according to the correspondence between the color image and the depth image, computing a first point cloud and a second point cloud from the first-frame depth image and the second-frame depth image respectively, and then performing an iterative-closest-point registration operation on the first and second point clouds to obtain the spatial pose of the target robot.
2. The method according to claim 1, wherein the photographing device is an RGBD camera.
3. The method according to claim 1, wherein performing the pose calculation with the depth values at the matching-point positions in the depth image according to the correspondence between the color image and the depth image to obtain the spatial pose of the target robot comprises:
determining the depth value of each matching point in the depth image from the position of the matching point determined in the color image and the correspondence between the color image and the depth image; and
performing an iterative-closest-point registration operation based on the depth values of the matching points to obtain the spatial pose of the target robot.
4. The method of claim 1, wherein extracting feature points from the two color images respectively comprises:
extracting feature points from the two color images respectively by means of a feature extraction algorithm.
5. The method according to claim 1, wherein before the pose calculation is performed with the depth values at the matching-point positions in the depth image according to the correspondence between the color image and the depth image, the method further comprises:
deleting the feature points whose corresponding depth values in the depth image lie outside a preset distance range.
6. A spatial pose determination device of a robot, the device comprising:
the acquisition module is configured to acquire two consecutive frames of pose images captured by a shooting device of the target robot, the pose images comprising color images and depth images;
the extraction module is configured to extract feature points from the two frames of color images respectively, wherein the number of feature points selected in a region close to the central pixel point of the color image is greater than the number of feature points selected in a region far from the central pixel point; the central pixel point is the pixel point corresponding to the image center of the color image; the region close to the central pixel point is a region centered on the central pixel point with a radius of a preset number of pixels, or a region formed by pixel points whose distance from the central pixel point is less than or equal to a preset number of pixels, or a region formed by the pixel points of a preset neighborhood of the central pixel point; the region far from the central pixel point is the region of the color image other than the region close to the central pixel point; the number of feature points selected in the region close to the central pixel point is a preset multiple, greater than 1, of the number of feature points selected in the region far from the central pixel point; or, feature points are extracted from the region close to the central pixel point and the region far from the central pixel point according to the distance between the region in which each pixel point of the color image is located and the central pixel point, the number of feature points extracted from a region being inversely proportional to the distance between that region and the central pixel point; a sketch of this allocation appears after this claim;
the matching module is configured to match the feature points extracted from the two frames of color images, judge whether two feature points correspond to the same scene point by the difference between their descriptors, and take the successfully matched feature points as matching points;
the computing module is configured to perform pose calculation using the depth values corresponding to the matching point positions in the depth image according to the correspondence between the color image and the depth image, to obtain the spatial pose of the target robot;
the computing module is specifically configured to: compute a first point cloud and a second point cloud from the first-frame depth image and the second-frame depth image respectively according to the correspondence between the color image and the depth image, and then perform an iterative closest point (ICP) registration operation on the first point cloud and the second point cloud to obtain the spatial pose of the target robot.
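The extraction and matching modules of claim 6 can be sketched together, again assuming ORB features: a circular region around the central pixel point receives a preset multiple of the features allocated to the rest of the image, and two feature points are judged to be the same scene point when their descriptors differ by less than a threshold (Hamming distance for binary ORB descriptors). The radius, multiple, and threshold below are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

def extract_center_weighted(gray, radius=200, multiple=3, total=1200):
    """Extract more features near the image center than in the periphery.

    The circular region of `radius` pixels around the central pixel point
    receives `multiple` times as many features as the outer region, echoing
    the 'preset multiple greater than 1' in claim 6. All values are
    illustrative assumptions.
    """
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    inner = ((xx - cx) ** 2 + (yy - cy) ** 2) <= radius ** 2
    center_mask = (inner * 255).astype(np.uint8)
    outer_mask = 255 - center_mask

    n_center = total * multiple // (multiple + 1)
    kp_c, des_c = cv2.ORB_create(nfeatures=n_center).detectAndCompute(gray, center_mask)
    kp_o, des_o = cv2.ORB_create(nfeatures=total - n_center).detectAndCompute(gray, outer_mask)
    keypoints = list(kp_c) + list(kp_o)
    descs = [d for d in (des_c, des_o) if d is not None]
    descriptors = np.vstack(descs) if descs else None
    return keypoints, descriptors

def match_descriptors(des1, des2, max_hamming=50):
    """Accept a match when the descriptor difference is below a threshold."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return [m for m in matcher.match(des1, des2) if m.distance < max_hamming]
```

Concentrating features near the image center is plausibly motivated by lower lens distortion there, although the claim itself only fixes the allocation rule.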
7. The apparatus of claim 6, wherein the shooting device is an RGBD camera.
8. The apparatus of claim 6, wherein the computing module is specifically configured to: determine the depth value corresponding to the matching point in the depth image according to the position of the matching point determined in the color image and the correspondence between the color image and the depth image; and perform an iterative closest point (ICP) registration operation based on the depth value corresponding to the matching point to obtain the spatial pose of the target robot.
9. The apparatus of claim 6, further comprising:
a deleting module configured to delete the feature points whose corresponding depth values in the depth image indicate an image depth distance greater than the preset distance range.
10. A robot having an RGBD camera mounted thereon, wherein the robot further comprises a computer device comprising a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the steps of the method according to any one of claims 1 to 5.
CN202210236826.2A 2022-03-10 2022-03-10 Space pose determining method and device of robot and robot Active CN114750147B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210236826.2A CN114750147B (en) 2022-03-10 2022-03-10 Space pose determining method and device of robot and robot

Publications (2)

Publication Number Publication Date
CN114750147A CN114750147A (en) 2022-07-15
CN114750147B (en) 2023-11-24

Family

ID=82325441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210236826.2A Active CN114750147B (en) 2022-03-10 2022-03-10 Space pose determining method and device of robot and robot

Country Status (1)

Country Link
CN (1) CN114750147B (en)

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444799A (en) * 1992-03-26 1995-08-22 Sanyo Electric Co., Ltd. Image processing apparatus and method of strain correction in such an image processing apparatus
WO2008099915A1 (en) * 2007-02-16 2008-08-21 Mitsubishi Electric Corporation Road/feature measuring device, feature identifying device, road/feature measuring method, road/feature measuring program, measuring device, measuring method, measuring program, measured position data, measuring terminal, measuring server device, drawing device, drawing method, drawing program, and drawing data
JP2011215052A (en) * 2010-03-31 2011-10-27 Aisin Aw Co Ltd Own-vehicle position detection system using scenic image recognition
JP2017053795A (en) * 2015-09-11 2017-03-16 株式会社リコー Information processing apparatus, position attitude measurement method, and position attitude measurement program
CN106920252A (en) * 2016-06-24 2017-07-04 阿里巴巴集团控股有限公司 A kind of image processing method, device and electronic equipment
CN108830191A (en) * 2018-05-30 2018-11-16 上海电力学院 Based on the mobile robot SLAM method for improving EMM and ORB algorithm
CN109859104A (en) * 2019-01-19 2019-06-07 创新奇智(重庆)科技有限公司 A kind of video generates method, computer-readable medium and the converting system of picture
CN110222688A (en) * 2019-06-10 2019-09-10 重庆邮电大学 A kind of instrument localization method based on multi-level correlation filtering
CN110340887A (en) * 2019-06-12 2019-10-18 西安交通大学 A method of the oiling robot vision guide based on image
CN110349213A (en) * 2019-06-28 2019-10-18 Oppo广东移动通信有限公司 Method, apparatus, medium and electronic equipment are determined based on the pose of depth information
CN110853100A (en) * 2019-10-24 2020-02-28 东南大学 Structured scene vision SLAM method based on improved point-line characteristics
CN111220148A (en) * 2020-01-21 2020-06-02 珊口(深圳)智能科技有限公司 Mobile robot positioning method, system and device and mobile robot
CN111340766A (en) * 2020-02-21 2020-06-26 北京市商汤科技开发有限公司 Target object detection method, device, equipment and storage medium
AU2020101932A4 (en) * 2020-07-16 2020-10-01 Xi'an University Of Science And Technology Binocular vision–based method and system for pose measurement of cantilever tunneling equipment
JP2020180914A (en) * 2019-04-26 2020-11-05 オムロン株式会社 Device, method, and program for detecting position attitude of object
CN111951335A (en) * 2020-08-13 2020-11-17 珠海格力电器股份有限公司 Method, device, processor and image acquisition system for determining camera calibration parameters
CN112070770A (en) * 2020-07-16 2020-12-11 国网安徽省电力有限公司检修分公司 High-precision three-dimensional map and two-dimensional grid map synchronous construction method
CN112164117A (en) * 2020-09-30 2021-01-01 武汉科技大学 V-SLAM pose estimation method based on Kinect camera
CN112288796A (en) * 2020-12-18 2021-01-29 南京佗道医疗科技有限公司 Method for extracting center of perspective image mark point
CN112416000A (en) * 2020-11-02 2021-02-26 北京信息科技大学 Unmanned formula car environment sensing and navigation method and steering control method
CN112752028A (en) * 2021-01-06 2021-05-04 南方科技大学 Pose determination method, device and equipment of mobile platform and storage medium
CN112894815A (en) * 2021-01-25 2021-06-04 西安工业大学 Method for detecting optimal position and posture for article grabbing by visual servo mechanical arm
CN113592946A (en) * 2021-07-27 2021-11-02 深圳甲壳虫智能有限公司 Pose positioning method and device, intelligent robot and storage medium
CN113902944A (en) * 2021-09-30 2022-01-07 青岛信芯微电子科技股份有限公司 Model training and scene recognition method, device, equipment and medium
CN113902932A (en) * 2021-10-22 2022-01-07 Oppo广东移动通信有限公司 Feature extraction method, visual positioning method and device, medium and electronic equipment
CN113947576A (en) * 2021-10-15 2022-01-18 北京极智嘉科技股份有限公司 Container positioning method and device, container access equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010010514A1 (en) * 1999-09-07 2001-08-02 Yukinobu Ishino Position detector and attitude detector
JP2014123230A (en) * 2012-12-20 2014-07-03 Sony Corp Image processor, image processing method, and program
US20160238394A1 (en) * 2013-10-01 2016-08-18 Hitachi, Ltd.. Device for Estimating Position of Moving Body and Method for Estimating Position of Moving Body
US10593060B2 (en) * 2017-04-14 2020-03-17 TwoAntz, Inc. Visual positioning and navigation device and method thereof
CN108965687B (en) * 2017-05-22 2021-01-29 阿里巴巴集团控股有限公司 Shooting direction identification method, server, monitoring method, monitoring system and camera equipment
US10902634B2 (en) * 2018-12-04 2021-01-26 Here Global B.V. Method and apparatus for providing feature triangulation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Analysis and Implementation of Mobile Robot Localization Based on an RGB-D Camera; Peng Weizhi; Yuan Fengwei; Zhou Zhiwei; Intelligent Computer and Applications (Issue 03); full text *
Research on a Multi-Target Autonomous Positioning System for an Airborne Electro-Optical Imaging Platform; Zhou Qianfei; Liu Jinghong; Xiong Wenzhuo; Song Yueming; Acta Optica Sinica (Issue 01); full text *

Also Published As

Publication number Publication date
CN114750147A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
US10789717B2 (en) Apparatus and method of learning pose of moving object
CN107230225B (en) Method and apparatus for three-dimensional reconstruction
KR101470112B1 (en) Daisy descriptor generation from precomputed scale - space
US10726580B2 (en) Method and device for calibration
Jiang et al. Multiscale locality and rank preservation for robust feature matching of remote sensing images
US10839599B2 (en) Method and device for three-dimensional modeling
CN112990228B (en) Image feature matching method, related device, equipment and storage medium
JP2011508323A (en) Permanent visual scene and object recognition
CN114022558B (en) Image positioning method, image positioning device, computer equipment and storage medium
CN114063098A (en) Multi-target tracking method, device, computer equipment and storage medium
CN115661371B (en) Three-dimensional object modeling method and device, computer equipment and storage medium
CN116931583B (en) Method, device, equipment and storage medium for determining and avoiding moving object
Zhang et al. Data association between event streams and intensity frames under diverse baselines
CN113298871A (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
CN114750147B (en) Space pose determining method and device of robot and robot
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN114882115B (en) Vehicle pose prediction method and device, electronic equipment and storage medium
CN116091998A (en) Image processing method, device, computer equipment and storage medium
CN116468753A (en) Target tracking method, apparatus, device, storage medium, and program product
CN115564639A (en) Background blurring method and device, computer equipment and storage medium
CN115063473A (en) Object height detection method and device, computer equipment and storage medium
de Lima et al. Toward a smart camera for fast high-level structure extraction
CN116481516B (en) Robot, map creation method, and storage medium
CN116758517B (en) Three-dimensional target detection method and device based on multi-view image and computer equipment
CN117576645B (en) Parking space detection method and device based on BEV visual angle and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant