CN113554703A - Robot positioning method, device, system and computer readable storage medium - Google Patents

Robot positioning method, device, system and computer readable storage medium

Info

Publication number
CN113554703A
CN113554703A (application CN202010327611.2A; granted publication CN113554703B)
Authority
CN
China
Prior art keywords
robot
image
feature points
monocular camera
moments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010327611.2A
Other languages
Chinese (zh)
Other versions
CN113554703B (en)
Inventor
曹正江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN202010327611.2A priority Critical patent/CN113554703B/en
Publication of CN113554703A publication Critical patent/CN113554703A/en
Application granted granted Critical
Publication of CN113554703B publication Critical patent/CN113554703B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot positioning method, device, system, and computer-readable storage medium, and relates to the technical field of robots. The robot positioning method comprises the following steps: extracting the same feature points from images taken downward at two moments by a monocular camera arranged at the bottom of the robot; calculating the depth of the same feature points in each frame of image according to the height of the monocular camera above the driving surface, the focal length of the monocular camera, and the positions of the extracted feature points in each frame of image; determining, according to these depths, the positions of the robot relative to the real objects corresponding to the feature points at the two moments; and positioning the robot according to those relative positions. The robot positioning scheme can therefore be realized by deploying a single monocular camera, reducing deployment cost and computational burden.

Description

Robot positioning method, device, system and computer readable storage medium
Technical Field
The present invention relates to the field of robot technology, and in particular, to a robot positioning method, apparatus, system, and computer-readable storage medium.
Background
Visual odometry estimates camera motion from feature correspondences between image frames and provides six-degree-of-freedom (3-DOF position and 3-DOF attitude) positioning information for a mobile robot. In the related art, a typical visual odometry system requires two cameras, or is implemented with a sensor scheme that fuses a single camera with an Inertial Measurement Unit (IMU).
Disclosure of Invention
The inventor has found that the related technologies all require multiple sensors, so the hardware cost is high; moreover, processing data from multiple sensors increases the computational burden.
Embodiments of the invention aim to solve the following technical problem: how to reduce the cost of robot positioning while improving computational efficiency.
According to a first aspect of some embodiments of the present invention there is provided a robot positioning method comprising: extracting the same characteristic points from images respectively shot downwards at two moments by a monocular camera arranged at the bottom of the robot; calculating the depth of the same characteristic point in each frame of image according to the height of the monocular camera from the driving surface, the focal length of the monocular camera and the position of the extracted characteristic point in each frame of image; respectively determining the positions of the robot relative to the real objects corresponding to the feature points at two moments according to the depth of the same feature points in each frame of image; and positioning the robot according to the positions of the robot relative to the real objects corresponding to the feature points at two moments.
In some embodiments, calculating the depth of the same feature point in each frame of image according to the height of the monocular camera from the driving surface, the focal length of the monocular camera, and the position of the extracted feature point in each frame of image comprises: mapping an optical center of the monocular camera, feature points in an image shot by the monocular camera and real objects corresponding to the feature points to the same coordinate system; in a coordinate system, determining information of an included angle between a connecting line of the characteristic point and the optical center in the image and a connecting line of the central point and the optical center in the image according to the distance between the characteristic point and the central point of the image in the image and the focal length of the monocular camera; and determining the distance between the real object corresponding to the characteristic point and the optical center as the depth of the characteristic point according to the included angle and the height of the monocular camera from the driving surface.
In some embodiments, the two times include a time of known location and a time of unknown location; according to the positions of the robot relative to the real objects corresponding to the feature points at two moments, the positioning of the robot comprises the following steps: determining the position change of the robot at the moment of the unknown position relative to the moment of the known position according to the positions of the robot relative to the real object corresponding to the feature points at the moment of the known position and the moment of the unknown position; and determining the position of the robot at the moment of unknown position according to the position change and the position of the robot at the moment of known position.
In some embodiments, the robot positioning method further comprises: determining the postures of the robot relative to the real object corresponding to the feature points at two moments according to the depths of the same feature points in each frame of image; and determining the posture of the robot according to the postures of the robot relative to the real object corresponding to the feature points at two moments.
In some embodiments, determining the pose of the robot with respect to the real object corresponding to the feature point at two time instants according to the depth of the same feature point in each frame of image comprises: determining the coordinates of the real object corresponding to the feature points in the world coordinate system at each moment according to the depth of the same feature points in each frame of image; and determining the posture of the robot in the corresponding world coordinate system at each moment by utilizing an N-point perspective PnP algorithm according to the coordinates of the real object corresponding to the feature points in each world coordinate system and the coordinates of the feature points in the corresponding image.
In some embodiments, determining the pose of the robot from the poses of the robot relative to the real object corresponding to the feature points at the two moments comprises: determining a coordinate transformation relation between the world coordinate systems at two moments based on the coordinates of the real object corresponding to the feature points in the world coordinate systems at the two moments; determining the posture change of the robot between two moments according to the coordinate transformation relation and the posture of the robot in the corresponding world coordinate system at each moment; and determining the posture of the robot according to the posture change.
In some embodiments, the number of feature points extracted from each frame is 3.
In some embodiments, the two times comprise a first time and a second time; extracting the same feature points from images respectively taken at two moments by a monocular camera disposed at the bottom of the robot includes: extracting feature points from an image shot at a first moment; tracking the same feature point of the extracted feature points in the image shot at the second moment according to the optical flow information of the image; and extracting the characteristic points tracked in the image shot at the second moment.
In some embodiments, the feature points are FAST (Features from Accelerated Segment Test) corner points; and/or the optical flow information is determined based on a sparse optical flow algorithm.
In some embodiments, the robot is an indoor mobile robot, or an automated guided vehicle.
According to a second aspect of some embodiments of the present invention there is provided a robot positioning device comprising: the characteristic point extraction module is configured to extract the same characteristic points from images respectively shot downwards at two moments by a monocular camera arranged at the bottom of the robot; the depth calculation module is configured to calculate the depth of the same characteristic point in each frame of image according to the height of the monocular camera from the driving surface, the focal length of the monocular camera and the position of the extracted characteristic point in each frame of image; the relative position determining module is configured to respectively determine the positions of the robot relative to the real objects corresponding to the feature points at two moments according to the depths of the same feature points in each frame of image; and the positioning module is configured to position the robot according to the positions of the robot relative to the real object corresponding to the characteristic points at two moments.
According to a third aspect of some embodiments of the present invention there is provided a robot positioning device comprising: a memory; and a processor coupled to the memory, the processor configured to perform any of the aforementioned robot positioning methods based on instructions stored in the memory.
According to a fourth aspect of some embodiments of the present invention there is provided a robot positioning system comprising: any of the aforementioned robot positioning devices; and the monocular camera is arranged at the bottom of the robot and shoots downwards.
According to a fifth aspect of some embodiments of the present invention there is provided a non-transitory computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements any of the aforementioned robot positioning methods.
Some embodiments of the above invention have the following advantages or benefits: by arranging the camera at the bottom of the robot, the position of the camera can be calculated by utilizing the characteristic that the height of the camera from the plane where the shot object is located is relatively fixed, so that the robot can be accurately positioned. Therefore, the robot positioning scheme can be realized by deploying one monocular camera, and the deployment cost and the calculation burden are reduced.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 shows a flow diagram of a robot positioning method according to some embodiments of the invention.
Fig. 2 schematically shows a deployment diagram of a monocular camera.
Fig. 3 exemplarily shows a positional relationship between the optical center, the feature point in the image, and the real object corresponding to the feature point.
FIG. 4 illustrates a flow diagram of a robot positioning and pose determination method according to some embodiments of the invention.
FIG. 5 illustrates a schematic diagram of a robot positioning device according to some embodiments of the present invention.
FIG. 6 illustrates a schematic diagram of a robot positioning system according to some embodiments of the invention.
Fig. 7 shows a schematic structural diagram of a robot positioning device according to further embodiments of the present invention.
FIG. 8 illustrates a schematic diagram of a robot positioning device according to further embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Fig. 1 shows a flow diagram of a robot positioning method according to some embodiments of the invention. As shown in fig. 1, the robot positioning method of this embodiment includes steps S102 to S108.
In step S102, the same feature points are extracted from the images respectively taken downward at two times by the monocular camera disposed at the bottom of the robot.
Fig. 2 schematically shows a deployment diagram of a monocular camera. In fig. 2, a monocular camera 21 is deployed at the bottom of the chassis of the robot 22. While the robot travels, because the height of the chassis above the driving surface is fixed, the height h of the monocular camera 21 and its optical center O_c above the driving surface is also fixed, so calculations can be made using this known height. Moreover, because the camera is located at the bottom of the robot, the lighting is more stable than when the camera is arranged at other positions; the brightness of different image frames is therefore relatively close, and the same feature points can be accurately extracted from different images.
In some embodiments, the lens of the monocular camera is aimed at the driving surface, i.e., the surface that carries the robot as it travels, such as the ground, an operating platform, or a shelf. As shown in FIG. 2, the monocular camera 21 shoots towards the ground, and the photographed scene includes a certain point P_m on the ground. In this way, the light in the photographed field of view is more constant, and the vertical distance of the photographed object from the robot is also more constant.
In some embodiments, the robot is an indoor mobile robot, or an Automated Guided Vehicle (AGV). Such robots generally move in a plane, and therefore, the distance between the monocular camera disposed at the bottom of such robots and the photographed object is relatively fixed, and more accurate calculation can be achieved.
When extracting the feature points, tracking may be performed using optical flow information so as to obtain the same feature points in different images. In some embodiments, the two shooting moments are a first moment and a second moment: feature points are extracted from the image shot at the first moment; the extracted feature points are tracked into the image shot at the second moment according to the optical flow information of the images; and the tracked feature points are extracted from the image shot at the second moment. In this way, the same feature points in different images can be accurately identified using the optical flow information.
The number of feature points extracted from each image may be one or more as needed, but the feature points extracted from the different images must be the same. For example, if feature points P1 and P2 are extracted from the image captured at the first moment, feature points P1 and P2 are also extracted from the image captured at the second moment.
In some embodiments, the feature points are FAST (Features from Accelerated Segment Test) corner points.
In some embodiments, the optical flow information is determined based on a sparse Lucas-Kanade (LK) optical flow algorithm.
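By way of illustration of the feature extraction and tracking described above, the following sketch uses OpenCV (one possible implementation, not mandated by the invention) to detect FAST corners in the image shot at the first moment and track them into the image shot at the second moment with sparse LK optical flow. The function and parameter names are illustrative assumptions.

```python
import cv2
import numpy as np

def track_feature_points(img_t1_gray, img_t2_gray, max_corners=50):
    """Detect FAST corners at the first moment and track them into the image
    taken at the second moment with sparse Lucas-Kanade optical flow."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    keypoints = fast.detect(img_t1_gray, None)
    if not keypoints:
        return np.empty((0, 2)), np.empty((0, 2))
    # Keep the strongest corners and arrange them as an Nx1x2 float32 array,
    # the layout expected by calcOpticalFlowPyrLK.
    keypoints = sorted(keypoints, key=lambda k: k.response, reverse=True)[:max_corners]
    pts_t1 = np.float32([k.pt for k in keypoints]).reshape(-1, 1, 2)
    pts_t2, status, _err = cv2.calcOpticalFlowPyrLK(img_t1_gray, img_t2_gray, pts_t1, None)
    ok = status.ravel() == 1
    # Return only the feature points successfully tracked in both frames.
    return pts_t1[ok].reshape(-1, 2), pts_t2[ok].reshape(-1, 2)
```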
In step S104, the depth of the same feature point in each frame image is calculated from the height of the monocular camera from the driving surface, the focal length of the monocular camera, and the position of the extracted feature point in each frame image. The depth of a feature point refers to the distance between the optical center of the monocular camera and the real object corresponding to the feature point.
The feature points in the image can be regarded as projections, onto a two-dimensional plane, of the real objects corresponding to them, where the distance between this two-dimensional plane and the optical center of the monocular camera is the focal length.
In some embodiments, for each feature point in each image, mapping the optical center of the monocular camera, the feature point in the image shot by the monocular camera, and the real object corresponding to the feature point into the same coordinate system; in a coordinate system, determining information of an included angle between a connecting line of the characteristic point and the optical center in the image and a connecting line of the central point and the optical center in the image according to the distance between the characteristic point and the central point of the image in the image and the focal length of the monocular camera; and determining the distance between the real object corresponding to the characteristic point and the optical center as the depth of the characteristic point according to the information of the included angle and the height of the monocular camera from the driving surface.
FIG. 3 exemplarily shows the positional relationship between the optical center O_c, the feature point P in the image, and the real object P_m corresponding to the feature point P, where P_m is located on the driving surface. Let the line connecting O_c and the image center point O lie on the z axis, and let the intersection of the z axis with the driving surface be O_m. The distance O_cO is the known focal length, O_cO_m is the height of the optical center of the monocular camera above the driving surface, and PO can be measured in the image. Since the angle between PO_c and OO_c is the same angle as that between P_mO_c and O_mO_c, the distance P_mO_c, i.e. the depth of the feature point, can be calculated from this known information.
In some embodiments, let the coordinates of P be (u_m, v_m), the coordinates of O be (u_0, v_0), the angle between PO_c and OO_c be θ, the focal length be f, and the height of the optical center of the monocular camera above the driving surface be h. The depth of the feature point can then be calculated using equations (1) and (2).
θ = arctan( sqrt((u_m − u_0)^2 + (v_m − v_0)^2) / f )    (1)

P_mO_c = h / cos θ    (2)
Because the monocular camera is located at the bottom of the robot and its height above the driving surface is fixed, the depth of a feature point can be determined accurately using these formulas.
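The calculation in equations (1) and (2) can be written as a short function; the names below are illustrative, and the focal length f is assumed to be expressed in the same (pixel) units as the image coordinates.

```python
import math

def feature_point_depth(u_m, v_m, u_0, v_0, f, h):
    """Depth of a feature point per equations (1) and (2): theta is the angle
    between the ray through the feature point and the optical axis, and the
    depth is the distance from the optical center O_c to the ground point P_m,
    given the camera height h above the driving surface."""
    pixel_dist = math.hypot(u_m - u_0, v_m - v_0)   # distance PO in the image
    theta = math.atan2(pixel_dist, f)               # equation (1)
    return h / math.cos(theta)                      # equation (2)
```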
In step S106, the positions of the robot with respect to the real object corresponding to the feature points at two moments are respectively determined according to the depths of the same feature points in each frame of image.
In step S108, the robot is positioned according to the positions of the robot with respect to the real object corresponding to the feature points at two times.
In some embodiments, the two times include a time of known location and a time of unknown location. Determining the position change of the robot at the moment of the unknown position relative to the moment of the known position according to the positions of the robot relative to the real object corresponding to the feature points at the moment of the known position and the moment of the unknown position; and determining the position of the robot at the moment of unknown position according to the position change and the position of the robot at the moment of known position.
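As a minimal sketch of this positioning step, assuming the relative positions at the two moments are expressed in a common, consistently oriented frame, the displacement of the robot follows from the change in the relative observation of the same ground point; all names are illustrative.

```python
import numpy as np

def position_at_unknown_moment(pos_known, rel_pos_known, rel_pos_unknown):
    """pos_known:       robot position at the moment with known position.
    rel_pos_known:   position of the ground point relative to the robot at that moment.
    rel_pos_unknown: position of the same ground point relative to the robot
                     at the moment with unknown position.
    The change in the relative observation gives the robot's displacement."""
    displacement = np.asarray(rel_pos_known) - np.asarray(rel_pos_unknown)
    return np.asarray(pos_known) + displacement
```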
By arranging the camera at the bottom of the robot, the position of the camera can be calculated by utilizing the characteristic that the height of the camera from the plane where the shot object is located is relatively fixed, so that the robot can be accurately positioned. Therefore, the robot positioning scheme can be realized by deploying one monocular camera, and the deployment cost and the calculation burden are reduced.
In some embodiments, the attitude of the robot can also be determined from the images shot by the monocular camera. An embodiment of a robot positioning and pose determination method is described below with reference to fig. 4.
FIG. 4 illustrates a flow diagram of a robot positioning and pose determination method according to some embodiments of the invention. As shown in fig. 4, the robot positioning and attitude determination method of the embodiment includes steps S402 to S408.
In step S402, the same feature points are extracted from the images respectively taken downward at two times by the monocular camera disposed at the bottom of the robot.
In step S404, the depth of the same feature point in each frame image is calculated from the height of the monocular camera from the driving surface, the focal length of the monocular camera, and the position of the extracted feature point in each frame image.
In step S406, the position and the posture of the robot with respect to the real object corresponding to the feature point at two moments are respectively determined according to the depth of the same feature point in each frame of image.
In some embodiments, according to the depth of the same feature point in each frame of image, determining the coordinate of the real object corresponding to the feature point in the world coordinate system at each moment; and determining the posture of the robot in the corresponding world coordinate system at each moment by utilizing an N-point perspective PnP algorithm according to the coordinates of the real object corresponding to the feature points in each world coordinate system and the coordinates of the feature points in the corresponding image.
In some embodiments, the number of feature points extracted from each frame is 3, so that the P3P algorithm can be used for solving. The P3P algorithm yields only a limited number of solutions and can therefore be solved quickly and accurately. Fig. 5 exemplarily shows a schematic diagram of solving with the P3P algorithm. The P3P algorithm requires 3 pairs of 3D-2D matched data, and its output is the pose of the camera. The 3D points are the feature points tracked on the ground, denoted P_A, P_B, P_C; their projections in the image are p_a, p_b, p_c. From this matching relationship the pose of the monocular camera can be obtained.
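A minimal sketch of this 3D-2D pose solution, using OpenCV's solveP3P as one possible solver (the invention does not prescribe a particular implementation); the camera intrinsics, names, and the assumption of an undistorted image are illustrative.

```python
import cv2
import numpy as np

def camera_pose_candidates_p3p(points_3d, points_2d, f, u0, v0):
    """points_3d: 3x3 array with the ground points P_A, P_B, P_C (world frame),
    whose coordinates follow from the feature-point depths; points_2d: 3x2 array
    with their image projections p_a, p_b, p_c in pixels. Returns the candidate
    camera poses (rotation matrix, translation vector); P3P yields at most four."""
    camera_matrix = np.array([[f, 0, u0],
                              [0, f, v0],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)  # assume an undistorted (or rectified) image
    n_solutions, rvecs, tvecs = cv2.solveP3P(
        np.asarray(points_3d, dtype=np.float64).reshape(-1, 1, 3),
        np.asarray(points_2d, dtype=np.float64).reshape(-1, 1, 2),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_P3P)
    poses = []
    for i in range(n_solutions):
        rotation, _ = cv2.Rodrigues(rvecs[i])  # rotation vector -> rotation matrix
        poses.append((rotation, tvecs[i]))
    return poses
```

In practice the ambiguity among the candidate solutions is typically resolved with an additional correspondence or a consistency check against the previous pose; the patent text only relies on the fact that the number of solutions is limited.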
In step S408, the position and the posture of the robot are determined from the positions and postures of the robot with respect to the real object corresponding to the feature points at the two moments.
In some embodiments, a coordinate transformation relationship between the world coordinate systems of the two moments is determined based on the coordinates of the real object corresponding to the feature point in the world coordinate systems of the two moments; determining the posture change of the robot between two moments according to the coordinate transformation relation and the posture of the robot in the corresponding world coordinate system at each moment; and determining the posture of the robot according to the posture change.
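As a sketch of this attitude computation under common rotation-matrix conventions (illustrative names; a pose here is a rotation taking world-frame coordinates into the robot/camera frame), the attitude change between the two moments can be composed from the per-moment poses and the rotation relating the two world frames:

```python
import numpy as np

def attitude_at_second_moment(R_r1_w1, R_r2_w2, R_w2_w1, attitude_r1):
    """R_r1_w1: rotation taking world-frame-1 coordinates to robot coordinates
               at the first moment (the pose obtained at that moment).
    R_r2_w2: the same for the second moment and world frame 2.
    R_w2_w1: coordinate transformation from world frame 1 to world frame 2,
             determined from the ground points observed at both moments.
    attitude_r1: robot attitude at the first moment, as a rotation matrix from
                 a fixed reference frame to the robot frame.
    Returns the robot attitude at the second moment in the same reference frame."""
    # Attitude change of the robot between the two moments, composed as
    # robot frame 1 -> world frame 1 -> world frame 2 -> robot frame 2.
    delta = R_r2_w2 @ R_w2_w1 @ R_r1_w1.T
    return delta @ attitude_r1
```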
By the method of the embodiment, the robot can be positioned and the posture of the robot can be determined, so that the state of the robot can be determined more accurately.
An embodiment of the robot positioning device of the present invention is described below with reference to fig. 5.
FIG. 5 illustrates a schematic diagram of a robot positioning device according to some embodiments of the present invention. As shown in fig. 5, the robot positioning device 500 of this embodiment includes: a feature point extraction module 5100 configured to extract the same feature points from images respectively photographed downward at two times by monocular cameras disposed at the bottom of the robot; a depth calculating module 5200 configured to calculate the depth of the same feature point in each frame of image according to the height of the monocular camera from the driving surface, the focal length of the monocular camera, and the position of the extracted feature point in each frame of image; the relative position determining module 5300 is configured to determine, according to the depths of the same feature points in each frame of image, the positions of the robot relative to the real object corresponding to the feature points at two moments respectively; a positioning module 5400 configured to position the robot according to the positions of the robot relative to the real object corresponding to the feature points at two moments.
In some embodiments, the depth calculation module 5200 is further configured to map the optical center of the monocular camera, the feature points in the image captured by the monocular camera, and the real objects corresponding to the feature points into the same coordinate system; in a coordinate system, determining information of an included angle between a connecting line of the characteristic point and the optical center in the image and a connecting line of the central point and the optical center in the image according to the distance between the characteristic point and the central point of the image in the image and the focal length of the monocular camera; and determining the distance between the real object corresponding to the characteristic point and the optical center as the depth of the characteristic point according to the included angle and the height of the monocular camera from the driving surface.
In some embodiments, the two times include a time of known location and a time of unknown location; the positioning module 5400 is further configured to determine a change in position of the robot at a time of the unknown position relative to a time of the known position from the positions of the robot relative to the real object corresponding to the feature points at a time of the known position and a time of the unknown position; and determining the position of the robot at the moment of unknown position according to the position change and the position of the robot at the moment of known position.
In some embodiments, the robotic positioning device 500 further comprises: a pose determination module 550 configured to determine poses of the robot with respect to the real object corresponding to the feature points at two moments according to depths of the same feature points in each frame of image; and determining the posture of the robot according to the postures of the robot relative to the real object corresponding to the feature points at two moments.
In some embodiments, the pose determination module 550 is further configured to determine coordinates of the real object corresponding to the feature point in the world coordinate system at each time according to the depth of the same feature point in each frame of image; and determining the posture of the robot in the corresponding world coordinate system at each moment by utilizing an N-point perspective PnP algorithm according to the coordinates of the real object corresponding to the feature points in each world coordinate system and the coordinates of the feature points in the corresponding image.
In some embodiments, the pose determination module 550 is further configured to determine a coordinate transformation relationship between the two time-instants world coordinate systems based on the coordinates of the real object corresponding to the feature point in the two time-instants world coordinate systems; determining the posture change of the robot between two moments according to the coordinate transformation relation and the posture of the robot in the corresponding world coordinate system at each moment; and determining the posture of the robot according to the posture change.
In some embodiments, the number of feature points extracted from each frame is 3.
In some embodiments, the two times comprise a first time and a second time; the feature point extraction module 5100 is further configured to extract feature points from an image captured at a first time; tracking the same feature point of the extracted feature points in the image shot at the second moment according to the optical flow information of the image; and extracting the characteristic points tracked in the image shot at the second moment.
In some embodiments, the feature points are FAST (Features from Accelerated Segment Test) corner points; and/or the optical flow information is determined based on a sparse optical flow algorithm.
In some embodiments, the robot is an indoor mobile robot, or an automated guided vehicle.
An embodiment of a robot positioning system of some embodiments of the invention is described below with reference to fig. 6.
FIG. 6 illustrates a schematic diagram of a robot positioning system according to some embodiments of the invention. As shown in fig. 6, the robot positioning system 60 of this embodiment includes a robot positioning device 610 and a monocular camera 620. For the specific implementation of the robot positioning device 610, reference may be made to the other embodiments, which are not repeated here. The monocular camera 620 is arranged at the bottom of the robot and shoots downwards.
Fig. 7 shows a schematic structural diagram of a robot positioning device according to further embodiments of the present invention. As shown in fig. 7, the robot positioning device 70 of this embodiment includes: a memory 710 and a processor 720 coupled to the memory 710, the processor 720 being configured to perform the robot positioning method of any of the previous embodiments based on instructions stored in the memory 710.
Memory 710 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
FIG. 8 illustrates a schematic diagram of a robot positioning device according to further embodiments of the present invention. As shown in fig. 8, the robot positioning device 80 of this embodiment includes: the memory 810 and the processor 820 may further include an input/output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850 and the memory 810 and the processor 820 may be connected, for example, by a bus 860. The input/output interface 830 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as an SD card and a usb disk.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements any of the aforementioned robot positioning methods.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (14)

1. A robot positioning method, comprising:
extracting the same characteristic points from images respectively shot downwards at two moments by a monocular camera arranged at the bottom of the robot;
calculating the depth of the same characteristic point in each frame of image according to the height of the monocular camera from a driving surface, the focal length of the monocular camera and the position of the extracted characteristic point in each frame of image;
respectively determining the positions of the robot relative to the real object corresponding to the feature points at the two moments according to the depths of the same feature points in each frame of image;
and positioning the robot according to the positions of the robot relative to the real objects corresponding to the feature points at the two moments.
2. The robot positioning method according to claim 1, wherein the calculating the depth of the same feature point in each frame image according to the height of the monocular camera from a driving surface, the focal length of the monocular camera, and the position of the extracted feature point in each frame image comprises:
mapping the optical center of the monocular camera, the feature points in the image shot by the monocular camera and the real objects corresponding to the feature points to the same coordinate system;
in the coordinate system, determining information of an included angle between a connecting line of the feature point in the image and the optical center and an included angle between the connecting line of the center point of the image and the optical center according to the distance between the feature point in the image and the center point of the image and the focal length of the monocular camera;
and determining the distance between the real object corresponding to the characteristic point and the optical center as the depth of the characteristic point according to the included angle and the height of the monocular camera from the driving surface.
3. The robot positioning method according to claim 1, wherein the two times include a time at which a position is known and a time at which a position is unknown;
the positioning the robot according to the positions of the robot relative to the real object corresponding to the feature points at the two moments comprises:
determining the position change of the robot at the moment of the unknown position relative to the moment of the known position according to the positions of the robot relative to the real object corresponding to the feature points at the moment of the known position and the moment of the unknown position;
and determining the position of the robot at the moment of the unknown position according to the position change and the position of the robot at the moment of the known position.
4. The robot positioning method of claim 1, further comprising:
determining the postures of the robot relative to the real object corresponding to the feature points at the two moments according to the depths of the same feature points in each frame of image;
and determining the posture of the robot according to the postures of the robot relative to the real object corresponding to the feature points at the two moments.
5. The robot positioning method according to claim 4, wherein the determining the pose of the robot with respect to the real object corresponding to the feature point at the two time instants according to the depth of the same feature point in each frame image comprises:
determining the coordinates of the real object corresponding to the feature points in the world coordinate system at each moment according to the depth of the same feature points in each frame of image;
and determining the posture of the robot in the corresponding world coordinate system at each moment by utilizing an N-point perspective PnP algorithm according to the coordinates of the real object corresponding to the feature points in each world coordinate system and the coordinates of the feature points in the corresponding image.
6. The robot positioning method according to claim 5, wherein the determining the pose of the robot from the poses of the robot with respect to the real object corresponding to the feature points at the two moments comprises:
determining a coordinate transformation relation between the world coordinate systems at the two moments based on the coordinates of the real object corresponding to the feature points in the world coordinate systems at the two moments;
determining the posture change of the robot between two moments according to the coordinate transformation relation and the posture of the robot in the corresponding world coordinate system at each moment;
and determining the posture of the robot according to the posture change.
7. The robot positioning method according to claim 5, wherein the number of feature points extracted from each frame is 3.
8. The robot positioning method according to any one of claims 1 to 7, wherein the two times include a first time and a second time;
the extracting the same feature points from the images respectively shot at two moments by the monocular camera arranged at the bottom of the robot comprises the following steps:
extracting feature points from the image shot at the first moment;
tracking the same feature point of the extracted feature points in the image shot at the second moment according to the optical flow information of the image;
and extracting the characteristic points tracked in the image shot at the second moment.
9. The robot positioning method of claim 8, wherein:
the feature points are FAST (Features from Accelerated Segment Test) corner points; and/or,
the optical flow information is determined based on a sparse optical flow algorithm.
10. The robot positioning method according to any one of claims 1 to 7, wherein the robot is an indoor mobile robot or an automatic guided vehicle.
11. A robotic positioning device, comprising:
the characteristic point extraction module is configured to extract the same characteristic points from images respectively shot downwards at two moments by a monocular camera arranged at the bottom of the robot;
the depth calculation module is configured to calculate the depth of the same characteristic point in each frame of image according to the height of the monocular camera from a driving surface, the focal length of the monocular camera and the position of the extracted characteristic point in each frame of image;
a relative position determining module configured to determine the positions of the robot relative to the real object corresponding to the feature points at the two moments according to the depths of the same feature points in each frame of image;
and the positioning module is configured to position the robot according to the positions of the robot relative to the real object corresponding to the characteristic points at the two moments.
12. A robotic positioning device, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the robot positioning method of any of claims 1-10 based on instructions stored in the memory.
13. A robot positioning system comprising:
the robotic positioning device of claim 11 or 12; and
and the monocular camera is arranged at the bottom of the robot and shoots downwards.
14. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the robot positioning method of any one of claims 1-10.
CN202010327611.2A 2020-04-23 2020-04-23 Robot positioning method, apparatus, system and computer readable storage medium Active CN113554703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010327611.2A CN113554703B (en) 2020-04-23 2020-04-23 Robot positioning method, apparatus, system and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113554703A true CN113554703A (en) 2021-10-26
CN113554703B CN113554703B (en) 2024-03-01

Family

ID=78129421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010327611.2A Active CN113554703B (en) 2020-04-23 2020-04-23 Robot positioning method, apparatus, system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113554703B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160093052A1 (en) * 2014-09-26 2016-03-31 Neusoft Corporation Method and apparatus for detecting obstacle based on monocular camera
CN108335327A (en) * 2017-01-19 2018-07-27 富士通株式会社 Video camera Attitude estimation method and video camera attitude estimating device
US20190204084A1 (en) * 2017-09-29 2019-07-04 Goertek Inc. Binocular vision localization method, device and system
CN110631554A (en) * 2018-06-22 2019-12-31 北京京东尚科信息技术有限公司 Robot posture determining method and device, robot and readable storage medium
US20200005487A1 (en) * 2018-06-28 2020-01-02 Ubtech Robotics Corp Ltd Positioning method and robot using the same
CN110858403A (en) * 2018-08-22 2020-03-03 杭州萤石软件有限公司 Method for determining scale factor in monocular vision reconstruction and mobile robot
CN109816696A (en) * 2019-02-01 2019-05-28 西安全志科技有限公司 A kind of robot localization and build drawing method, computer installation and computer readable storage medium

Also Published As

Publication number Publication date
CN113554703B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
US20210190497A1 (en) Simultaneous location and mapping (slam) using dual event cameras
CN110411441B (en) System and method for multi-modal mapping and localization
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN110807350B (en) System and method for scan-matching oriented visual SLAM
EP3407294B1 (en) Information processing method, device, and terminal
US11003939B2 (en) Information processing apparatus, information processing method, and storage medium
CN112734852B (en) Robot mapping method and device and computing equipment
WO2017041731A1 (en) Markerless multi-user multi-object augmented reality on mobile devices
CN110176032B (en) Three-dimensional reconstruction method and device
CN110926330B (en) Image processing apparatus, image processing method, and program
US20210183100A1 (en) Data processing method and apparatus
US20200143603A1 (en) Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
CN113052907B (en) Positioning method of mobile robot in dynamic environment
WO2018142533A1 (en) Position/orientation estimating device and position/orientation estimating method
US11189053B2 (en) Information processing apparatus, method of controlling information processing apparatus, and non-transitory computer-readable storage medium
JP2012220271A (en) Attitude recognition apparatus, attitude recognition method, program and recording medium
CN113554703B (en) Robot positioning method, apparatus, system and computer readable storage medium
JP2011174891A (en) Device and method for measuring position and attitude, and program
CN113011212B (en) Image recognition method and device and vehicle
KR102619083B1 (en) Method and system for localization of artificial landmark
US20230033339A1 (en) Image processing system
Toriya et al. A mobile camera localization method using aerial-view images
CN116704022A (en) Pose estimation method, device and medium of VIO system based on structural line segment
CN114332430A (en) Collision detection method, apparatus, device, and medium for preventing overdetection
CN113345006A (en) Closed loop detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant