CN115307641A - Robot positioning method, device, robot and storage medium - Google Patents


Info

Publication number
CN115307641A
Authority
CN
China
Prior art keywords: robot, pose, map frame, determining, positioning
Prior art date
Legal status
Pending
Application number
CN202210907105.XA
Other languages
Chinese (zh)
Inventor
吴金龙
Current Assignee
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202210907105.XA priority Critical patent/CN115307641A/en
Publication of CN115307641A publication Critical patent/CN115307641A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to a robot positioning method and apparatus, a robot, and a storage medium. The method comprises the following steps: acquiring a predicted pose of the robot in a target environment at the current moment; determining, according to the predicted pose, a map frame search range within which the robot searches for a reference map frame at the current moment; acquiring the reference map frame within the map frame search range based on a map database corresponding to the target environment; and determining a positioning pose of the robot in the target environment at the current moment according to the reference map frame and a current environment image acquired by the robot in the target environment at the current moment. Because the reference map frame is selected within the map frame search range, the global search amount is reduced, the influence of repeated textures and lighting changes on the global retrieval result is reduced, and the map frame matching precision is improved. Accordingly, positioning the robot with a reference map frame retrieved within a limited range and the current environment image makes the determined positioning pose more accurate and improves the positioning precision of the robot.

Description

Robot positioning method, device, robot and storage medium
Technical Field
The present disclosure relates to the field of robot technologies, and in particular, to a robot positioning method and apparatus, a robot, and a storage medium.
Background
With the rapid development of robot technology, robots are widely used in industrial production and daily-life services, such as cleaning robots in office buildings, shopping guide robots in shopping malls, and food delivery robots in restaurants. When a robot executes different tasks in different scenes, its position needs to be determined from surrounding environment information so that a safe moving path can be planned to execute the tasks.
In the related art, the robot visual positioning process is as follows: a current image frame of the environment where the robot is located is acquired through a camera, a target map frame matching the current image frame is retrieved from a global map, and the current pose of the robot is determined according to the current image frame and the target map frame.
However, because visual features are susceptible to lighting changes and repeated textures, the accuracy of retrieving the target map frame from the global map is low, and the accuracy of robot visual positioning is therefore low.
Disclosure of Invention
In view of the above, it is necessary to provide a robot positioning method, a robot positioning apparatus, a robot, and a storage medium capable of improving the accuracy of robot vision positioning.
In a first aspect, the present application provides a robot positioning method, comprising:
acquiring a predicted pose of the robot in a target environment at the current moment;
determining a map frame search range when the robot searches a reference map frame at the current moment according to the predicted pose;
acquiring a reference map frame in a map frame search range based on a map database corresponding to a target environment;
and determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
In one embodiment, acquiring the predicted pose of the robot in the target environment at the current moment comprises:
acquiring the historical pose of the robot in the target environment at the last positioning moment and the track data of the robot from the last positioning moment to the current moment;
and determining the predicted pose of the robot in the target environment at the current moment according to the historical pose and the track data.
In one embodiment, determining a map frame search range in which the robot searches for the reference map frame at the current time according to the predicted pose includes:
acquiring a target search radius corresponding to the reference map frame searched by the robot at the current moment;
and determining a map frame search range according to the predicted pose and the target search radius.
In one embodiment, obtaining a target search radius corresponding to a current time when the robot searches for the reference map frame includes:
determining a first search radius increment according to the interval duration between the last positioning time and the current time;
determining a target search radius corresponding to the current moment of the robot searching the reference map frame according to the historical search radius corresponding to the last positioning moment and the first search radius increment;
or, alternatively,
determining a second search radius increment according to the interval duration between the first positioning time and the current time;
determining a target search radius corresponding to the reference map frame searched by the robot at the current moment according to the initial search radius corresponding to the first positioning moment and the second search radius increment;
wherein, the initial search radius corresponding to the first positioning time is a preset minimum search radius.
In one embodiment, determining a map frame search range according to the predicted pose and the target search radius comprises:
and taking the predicted pose as the circle center and the target search radius as the map frame search radius to obtain a map frame search range.
In one embodiment, determining a positioning pose of the robot in the target environment at the current moment according to the reference map frame and a current environment image acquired by the robot in the target environment at the current moment comprises:
acquiring a global feature descriptor of a current environment image;
determining a matching map frame from the reference map frame according to the global feature descriptor of the current environment image;
and determining the positioning pose of the robot in the target environment at the current moment according to the current environment image and the matched map frame.
In one embodiment, determining a positioning pose of the robot in the target environment at the current moment according to the current environment image and the matching map frame comprises:
determining candidate poses of the robot in the target environment at the current moment according to the current environment image and the matching map frame;
checking the candidate poses according to pose offsets between the predicted poses and the candidate poses to obtain a checking result;
and determining a positioning pose of the robot in the target environment at the current moment according to a pose determination strategy corresponding to the verification result.
In one embodiment, determining candidate poses of the robot in the target environment at the current moment according to the current environment image and the matching map frame comprises:
performing two-dimensional feature point matching on the current environment image and the matching map frame to obtain a plurality of two-dimensional matching feature point pairs;
rejecting the wrong matching relation in a plurality of two-dimensional matching feature point pairs to obtain a plurality of initial matching feature point pairs;
carrying out three-dimensional characteristic point matching through the matching relation among the plurality of initial matching characteristic point pairs to obtain a plurality of three-dimensional matching characteristic point pairs;
rejecting the wrong matching relation in a plurality of three-dimensional matching feature point pairs to obtain a plurality of standard matching feature point pairs;
and calculating the pose of the robot in the target environment at the current moment through the matching relation among the plurality of standard matching feature point pairs to obtain a candidate pose.
In one embodiment, verifying the candidate pose according to the pose offset between the predicted pose and the candidate pose to obtain a verification result includes:
if the pose offset is larger than the preset offset, determining that the verification result is that the candidate pose is incorrect;
and if the pose offset is smaller than the preset offset value, determining that the verification result is that the candidate pose is correct.
In one embodiment, determining a positioning pose of the robot in the target environment at the current moment according to a pose determination strategy corresponding to the verification result includes:
if the verification result is that the candidate pose is incorrect, taking the predicted pose as the positioning pose of the robot in the target environment at the current moment;
and if the verification result is that the candidate pose is correct, performing pose fusion on the predicted pose and the candidate pose, and determining a pose fusion result as the positioning pose of the robot in the target environment at the current moment.
In a second aspect, the present application also provides a robot positioning device, comprising:
the predicted pose acquisition module is used for acquiring the predicted pose of the robot in the target environment at the current moment;
the searching range determining module is used for determining a map frame searching range when the robot searches the reference map frame at the current moment according to the predicted pose;
the map frame acquisition module is used for acquiring a reference map frame in a map frame search range based on a map database corresponding to the target environment;
and the pose determining module is used for determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
In a third aspect, the present application further provides a robot, where the robot includes a memory and a processor, the memory stores a computer program, and the processor calls the computer program to implement the steps of any one of the method embodiments in the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any of the method embodiments of the first aspect described above.
In a fifth aspect, the present application further provides a computer program product. Computer program product comprising a computer program which, when executed by a processor, performs the steps of any of the method embodiments of the first aspect described above.
According to the robot positioning method, the robot positioning device, the robot and the storage medium, the predicted pose of the robot in the target environment at the current moment is obtained; determining a map frame search range when the robot searches a reference map frame at the current moment according to the predicted pose; acquiring a reference map frame in a map frame search range based on a map database corresponding to a target environment; and determining the pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment. That is, the matching map frame of the current environment image is determined not by traversing the map database, but by the predicted pose of the robot at the current moment, the map frame search range is determined, and then the reference map frame of the current environment image is determined from the map database based on the map frame search range. Therefore, the number of map frames traversed by global search can be reduced through the map frame search range, the influence of repeated texture and lamplight change on the global search result can be reduced, and the map frame matching precision is improved. Furthermore, the robot is positioned through the reference map frame and the current environment image retrieved in a certain range, the determined positioning pose of the robot at the current moment is more accurate, and the positioning precision of the robot in the target environment is improved.
Drawings
FIG. 1 is a schematic flow chart diagram of a method for positioning a robot in one embodiment;
FIG. 2 is a schematic diagram of a process for acquiring a predicted pose of a robot according to one embodiment;
FIG. 3 is a schematic diagram illustrating a process for determining a search range of a map frame of a robot in accordance with an embodiment;
FIG. 4 is a schematic flow chart of determining a positioning pose of a robot in one embodiment;
FIG. 5 is a schematic diagram of a pose verification process in one embodiment;
FIG. 6 is a flow diagram illustrating the determination of candidate poses in one embodiment;
FIG. 7 is a schematic flow chart of a robot positioning method according to another embodiment;
FIG. 8 is a block diagram of the construction of a robotic positioning device in one embodiment;
fig. 9 is an internal structural view of the robot in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
With the continuous development of science and technology, mobile robots are gradually appearing in various aspects of life, such as shopping guide robots in shopping malls, meal delivery robots in restaurants, transfer robots in factories and the like. In order to ensure that the robot smoothly executes corresponding operation tasks in different scenes, the robot must be capable of realizing autonomous positioning and navigation. In other words, the robot needs to determine the position of the robot in the target environment through the target environment where the robot is located, so as to plan a navigation path according to an instruction preset or issued by a user in real time, and execute a corresponding operation task.
The robot visual positioning process is as follows: the robot calculates a global descriptor of the current environment image acquired by the camera. The map database is then traversed according to the global descriptor of the current environment image to determine the map frame that best matches the current environment image, yielding the matching map frame. Further, according to the local descriptors of the feature points in the current environment image and the local descriptors of the map points in the matching map frame, the matching relationship between the image feature points and the map points is calculated, and the current pose of the robot is located according to this matching relationship.
However, the visual features acquired by the robot in the visual positioning process are susceptible to lighting changes and repeated textures, so that the robot in the visual positioning process may have the following problems:
(1) When the map database is searched globally to obtain the matching map frame of the current environment image, if some map frames depict places that are very similar to the scene in the robot's current environment, the global retrieval may return a map frame of such a similar scene, so the accuracy of the globally retrieved matching map frame is low.
(2) When the lighting or wall texture in the target environment where the robot is located differs from that during mapping, the global search may mistakenly match a map frame whose lighting or texture conditions are more similar rather than the truly matching map frame, so a map frame of a similar scene is returned and the accuracy of the globally retrieved matching map frame is low.
(3) When map frame matching or feature point matching is performed, wrong matching relations exist in matching results, and the accuracy of pose calculation results is affected by the wrong matching relations.
(4) When the positioning result of the robot at the current moment is wrong, the whole positioning system of the robot is also seriously influenced.
That is, the visual positioning is affected by the light change and the repeated texture, and the positioning accuracy of the robot is low in the case of errors in the matching map frame of the global search and/or errors in the matching of the local feature points.
In view of the above, the present application provides a robot positioning method and apparatus, a robot, and a storage medium, which improve the robustness of the robot positioning system in environments with lighting changes, repeated textures, and the like by obtaining the predicted pose of the robot in the target environment at the current moment and using the predicted pose to optimize three aspects: global retrieval, pose calculation, and verification of the pose calculation result.
In one embodiment, the robot positioning method provided by the present application may be applied to a robot positioning apparatus, which may be implemented as part or all of a processor integrated in a computer device by software, hardware, or a combination of software and hardware. The computer equipment can be a robot capable of moving autonomously or other intelligent terminals.
Next, the technical solutions of the embodiments of the present application, and how to solve the above technical problems will be specifically described in detail through embodiments and with reference to the accompanying drawings. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. It is to be understood that the embodiments described are only some of the embodiments of the present application and not all of them.
In one embodiment, as shown in fig. 1, a robot positioning method is provided, which is exemplified by applying the method to a robot, and includes the following steps:
step 110: and acquiring the predicted pose of the robot in the target environment at the current moment.
The predicted pose is the pose of the robot in the target environment at the estimated current moment, and although the predicted pose is not necessarily equal to the actual pose, the deviation between the predicted pose and the actual pose is not large.
It should be understood that the target environment is a robot working environment, such as an office building, a restaurant, a mall, a factory, and the like, which is not limited in this embodiment, that is, the target environment may be any scene that requires the robot to perform an autonomous positioning navigation to perform a working task.
In one possible implementation manner, the implementation process of step 110 may be: and determining the predicted pose of the robot in the target environment at the current moment according to the positioning data acquired by at least one non-visual sensor.
If one non-visual sensor is used, determining the predicted pose of the robot in the target environment at the current moment according to the positioning data acquired by the non-visual sensor; if the number of the non-visual sensors is multiple, data fusion can be carried out on positioning data acquired by the multiple non-visual sensors, and therefore the predicted pose of the robot in the target environment at the current moment is determined.
As one example, the non-visual sensor may be a locator, a laser sensor, an infrared sensor, or the like.
That is, in the process of robot vision positioning, the position and the posture of the robot at the current moment in the target environment can be estimated through positioning data acquired by other non-vision sensors, and the obtained predicted posture can provide a reference range for determining the positioning posture of the robot at the current moment.
Step 120: and determining a map frame search range when the robot searches the reference map frame at the current moment according to the predicted pose.
The reference map frame may be one frame or multiple frames. The similarity between the environment information of the reference map frame and the environment information of the target environment in which the robot is currently located is highest.
It should be noted that, in order to reduce the global search amount and avoid retrieving a wrong map frame under the same lighting or repeated textures, in the embodiment of the present application, a map frame search range of the robot is determined according to the predicted pose, and then, a reference map frame is searched in the map frame search range. That is, the reference map frame is determined only from within the map frame search range, and may be a partial or entire map frame within the map frame search range.
In this step, the map frame search range may be obtained by expanding outward from the predicted pose by a preset local search distance in all directions within the target environment.
The map frame searching range is a searching range in a three-dimensional space and is used for limiting environment information in a space area in a target environment where the robot is located.
In addition, if the map frame search range is large, robustness against repeated texture is reduced. If the search range of the map frame is small, after a long time is separated from the last visual positioning, a large error occurs between the predicted pose and the actual pose, and a correct and effective reference map frame can never be retrieved in the search range of the map frame.
Therefore, the local search distance may be determined based on a deep-learning neural network model, by other reference frame search algorithms, or based on human experience and trial and error, which is not limited in this embodiment.
It should be noted that, in the case that the map frame search range is appropriate, not only the global search amount can be reduced, but also the matching efficiency between the reference map frame and the current environment image can be improved to a certain extent.
Step 130: and acquiring a reference map frame in the map frame search range based on the map database corresponding to the target environment.
It should be understood that the map database of the target environment is constructed when the robot traverses the target environment for the first time, and includes a plurality of continuous map frames generated by a visual positioning method that requires no prior map, such as Simultaneous Localization and Mapping (SLAM) or Structure from Motion (SfM). The positioning pose of the robot in the target environment is determined by a visual sensor and/or other auxiliary sensors carried on the robot; the positioning result does not have global consistency, and a global map can be formed only after loop-closure verification.
Further, after the map database of the target environment is constructed, when positioning (also referred to as visual repositioning) is performed based on the current environment image acquired by the visual sensor, a reference map frame corresponding to the current environment image may be acquired first, and then the positioning pose of the robot with respect to the current environment image is calculated according to the change conditions of the environment information in the reference map frame and the current environment image.
In this step, based on the map frame search range, at least one map frame in the map frame search range is acquired from the map database corresponding to the target environment, i.e. a reference map frame is obtained.
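As an illustrative sketch only (the patent does not specify the database structure), reference map frames can be selected by filtering the map database by the Euclidean distance between each map frame's stored position and the predicted pose; the names map_database, position, and select_reference_frames are assumptions made for this example.

```python
import numpy as np

def select_reference_frames(map_database, predicted_position, search_radius):
    """Return the map frames whose stored positions fall inside the search range.

    map_database: iterable of dicts with a "position" key holding (x, y, z);
    predicted_position: (x, y, z) taken from the predicted pose;
    search_radius: target search radius for the current time.
    These names are illustrative, not the patent's data structures.
    """
    center = np.asarray(predicted_position, dtype=float)
    reference_frames = []
    for frame in map_database:
        # Keep the frame if it lies within the map frame search range.
        if np.linalg.norm(np.asarray(frame["position"]) - center) <= search_radius:
            reference_frames.append(frame)
    return reference_frames
```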
Optionally, after the reference map frame is obtained, the reference map frame may be further processed, such as denoising, data enhancement, and the like, which is not limited in this embodiment.
Step 140: and determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
The current environment image is generated by the robot through the environment data acquired by the vision sensor in real time/according to the preset acquisition frequency. The vision sensor can collect the environment data of the environment where the robot is located and generate an environment image. Wherein, the visual sensor can be a camera, a monocular camera, a binocular camera, etc.
In a possible implementation manner, when the visual sensor is a top-view sensor, its data acquisition direction points toward the top of the environment where the robot is located, which may include the ceiling of a building, ceiling lamps, air-conditioning vents, ornaments, and the like. Further, the environmental data collected by the top-view sensor may be embodied as at least one of contour data, depth data, and texture data.
Further, the reference map frame may be one map frame or may be a plurality of map frames. If the reference map frame is a map frame, directly performing feature point matching on the reference map frame and the current environment image, and calculating the positioning pose of the robot in the target environment at the current moment according to the feature point matching result; if the reference map frame is a plurality of map frames, determining a matching map frame with the highest similarity to the current environment image from the plurality of map frames, then performing feature point matching on the matching map frame and the current environment image, and further calculating the positioning pose of the robot in the target environment at the current moment according to the feature point matching result.
Optionally, after the positioning pose of the robot is determined, correctness verification can be performed on the positioning pose through predicting the pose.
In the robot positioning method, computer equipment acquires the predicted pose of the robot in the target environment at the current moment; determining a map frame search range when the robot searches a reference map frame at the current moment according to the predicted pose; acquiring a reference map frame in a map frame search range based on a map database corresponding to a target environment; and determining the pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment. That is, the matching map frame of the current environment image is determined not by traversing the map database, but by the predicted pose of the robot at the current moment, the map frame search range is determined, and then the reference map frame of the current environment image is determined from the map database based on the map frame search range. Therefore, the number of map frames traversed by global search can be reduced through the map frame search range, the influence of repeated texture and lamplight change on the global search result can be reduced, and the map frame matching precision is improved. Furthermore, the robot is positioned through the reference map frame and the current environment image which are retrieved in a certain range, the positioning pose of the robot at the current moment is determined more accurately, and the positioning precision of the robot in the target environment is improved.
In one embodiment, as shown in fig. 2, the implementation process of obtaining the predicted pose of the robot in the target environment at the current moment in step 110 may include the following steps:
step 210: and acquiring the historical pose of the robot in the target environment at the last positioning moment and the track data of the robot from the last positioning moment to the current moment.
The historical pose may be the predicted pose at the last positioning time, the positioning pose determined by performing steps 110 to 140 at the last positioning time, or a pose obtained by fusing the predicted pose and the positioning pose at the last positioning time, which is not limited in this embodiment.
In one possible implementation, the trajectory data of the robot between the last positioning time and the current time can be obtained from the robot positioner. The robot positioner can continuously acquire the position data of the robot and generate the track data of the robot in the moving process of the robot.
As one example, the robotic positioner may include at least one of a wheeled encoder, a base station position fix, inertial navigation, and satellite navigation.
Specifically, when the robot positioner is a wheel type encoder, the driving mileage of the robot can be recorded through the wheel type encoder, and the track data of the robot is generated according to the driving mileage; when the robot locator is a 5G communication chip, the robot continuously receives the signaling sent by the 5G base station in the moving process, and generates the track data of the robot according to the robot position information carried in the signaling received within a period of time.
Step 220: and determining the predicted pose of the robot in the target environment at the current moment according to the historical pose and the track data.
The historical pose is the position and the posture of the robot in the target environment at the last positioning moment, the predicted pose is the estimated position and the posture of the robot in the target environment at the current moment, and the track data comprises the position and the orientation of the robot at each positioning moment.
As one example, a position in the target environment of the robot may be represented by three-dimensional spatial coordinates (X, Y, Z); the pose may be represented by the rotation angles (θ 1, θ 2, θ 3) of the robot in the target environment. The heading angle theta 1 refers to the rotation angle of the robot on the X-Y plane, the pitch angle theta 2 refers to the rotation angle of the robot on the X-Z plane, and the roll angle theta 3 refers to the rotation angle of the robot on the Y-Z plane.
Based on the historical pose and the track data of the robot, the predicted pose of the robot in the target environment after the robot moves according to the track data by taking the historical pose as a starting point can be estimated.
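A minimal dead-reckoning sketch of this prediction step, simplified to a planar (X-Y, heading) pose rather than the full 6-DoF pose described above; the function name and the assumption that the trajectory data has already been summed into a single body-frame displacement are both illustrative.

```python
import math

def predict_pose(historical_pose, odometry_delta):
    """Compose the pose at the last positioning time with the relative motion
    accumulated from the trajectory data (dead reckoning).

    historical_pose: (x, y, theta) in the map frame at the last positioning time.
    odometry_delta: (dx, dy, dtheta) expressed in the robot frame at that time.
    """
    x, y, theta = historical_pose
    dx, dy, dtheta = odometry_delta
    # Rotate the body-frame displacement into the map frame, then translate.
    px = x + dx * math.cos(theta) - dy * math.sin(theta)
    py = y + dx * math.sin(theta) + dy * math.cos(theta)
    ptheta = theta + dtheta
    return (px, py, ptheta)
```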
In the embodiment, the position and the posture of the robot in the target environment at the current moment are estimated according to the historical pose of the robot in the target environment at the last positioning moment and the track data of the robot in the positioning interval, so that a reference range is provided for determining the positioning pose of the robot. Meanwhile, the predicted pose is also used for defining a map frame search range in the application so as to realize accurate search of a reference map frame in a small range.
In one embodiment, as shown in fig. 3, the implementation process of determining the map frame search range when the robot searches the reference map frame at the current time according to the predicted pose in step 120 may include the following steps:
step 310: and acquiring a corresponding target search radius when the robot searches the reference map frame at the current moment.
It should be noted that, when the robot positioner is a wheel encoder, the mileage counting process has accumulated drift, and the counting error is accumulated over time. Therefore, the present application provides an adaptive search radius calculation method to determine the target search radius at the current time, rather than setting a fixed search radius for each positioning time.
In one possible implementation manner, the implementation process of step 310 may be: determining a first search radius increment according to the interval duration between the last positioning time and the current time; and determining a target search radius corresponding to the current moment of the robot searching the reference map frame according to the historical search radius corresponding to the last positioning moment and the first search radius increment.
Taking the wheel type encoder as an example, the mileage calculation error of the wheel type encoder in unit time can be measured in advance, and then the search radius in unit time can be determined according to the mileage calculation error. Based on the method, in practical application, the search radius increment of the robot at the current moment in visual positioning can be determined according to the positioning interval duration and the search radius in unit time.
It should be noted that, for the first positioning time, that is, the first positioning time, the initial search radius corresponding to the first positioning time may be a preset minimum search radius. And when the second positioning time is reached, determining the search radius increment corresponding to the second positioning time based on the interval duration of the second positioning time and the first positioning time, and then determining the target search radius at the second positioning time based on the search radius increment corresponding to the second positioning time and the initial search radius. And so on.
The minimum search radius value may be determined based on human experience, or may be a minimum search radius value in unit time determined based on a mileage calculation error of the wheel encoder, which is not limited in this embodiment.
Specifically, when the target search radius of the robot at the current time is determined, the search radius increment at the current time and the historical search radius corresponding to the last positioning time may be summed to obtain the target search radius at the current time.
In another possible implementation manner, the implementation procedure of step 310 may be: determining a second search radius increment according to the interval duration between the first positioning time and the current time; and determining a corresponding target search radius when the robot searches the reference map frame at the current moment according to the initial search radius corresponding to the first positioning moment and the second search radius increment.
Wherein, the initial target search radius corresponding to the first positioning time is a preset minimum search radius.
Specifically, when the target search radius of the robot at the current time is determined, the search radius increment at the current time and the initial search radius corresponding to the first positioning time may be summed to obtain the target search radius at the current time.
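A small sketch of the adaptive search-radius idea described above; the growth rate, the default minimum radius, and the optional upper cap are assumptions for illustration rather than values taken from the patent.

```python
def target_search_radius(last_radius, interval_duration, radius_growth_rate,
                         min_radius=0.5, max_radius=None):
    """Grow the search radius with the time elapsed since the last positioning,
    mirroring the accumulation of odometry drift.

    radius_growth_rate approximates the per-second mileage error of the wheel
    encoder; min_radius stands in for the preset minimum search radius.
    """
    increment = radius_growth_rate * interval_duration   # first search radius increment
    radius = max(last_radius, min_radius) + increment
    if max_radius is not None:
        radius = min(radius, max_radius)                  # optional cap (assumption)
    return radius
```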
Optionally, when the search radius increment is a linear function of the interval duration, the two possible implementations above, namely determining the increment from the interval since the last positioning time and determining it from the interval since the first positioning time, yield the same target search radius.
Step 320: and determining a map frame search range according to the predicted pose and the target search radius.
In one possible implementation manner, the implementation procedure of step 320 may be: and taking the predicted pose as the circle center and the target search radius as the map frame search radius to obtain a map frame search range.
That is, the closed spatial area formed by expanding outward by the target search radius, with the position point of the predicted pose as the center, is the map frame search range.
In this embodiment, since the search radius increment corresponding to each positioning time is determined according to the positioning interval duration, the search radius increment at each positioning time is different, and the target search radius of the robot at each positioning time is also different, so that the determined map frame search range is necessarily different. Therefore, the optimal search range for searching the reference map frame at the current moment can be effectively defined by determining the self-adaptive search radius at the current moment, the accuracy of the search range of the map frame is improved, and the accurate and effective reference map frame can be conveniently searched subsequently.
In one embodiment, as shown in fig. 4, the implementation process of determining the positioning pose of the robot in the target environment at the current time according to the reference map frame and the current environment image acquired by the robot in the target environment at the current time in step 140 above may include the following steps:
step 410: and acquiring a global feature descriptor of the current environment image.
The global feature descriptor is used for describing a global feature of the current environment image, and the global feature descriptor can be generated according to at least one local feature descriptor in the current environment image.
It should be understood that the local feature descriptor is a descriptor for describing local features in an image, and is an algorithm for generating local features of a pixel point in the image according to distribution features of the pixel points around the pixel point.
In one possible implementation manner, the implementation procedure of step 410 may be: the method comprises the steps of firstly obtaining at least one local feature descriptor of a current environment image, then reducing the dimension of each local feature descriptor by adopting a principal component analysis method to obtain the local feature descriptor after the dimension reduction, and then converting the local feature descriptor after the dimension reduction into a global feature descriptor for expressing the visual feature of the current environment image.
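A sketch of this step under stated assumptions: the patent says the PCA-reduced local descriptors are converted into a global descriptor but does not name the aggregation method, so mean pooling with L2 normalization is used here purely as a placeholder.

```python
import numpy as np
from sklearn.decomposition import PCA

def global_descriptor(local_descriptors, n_components=64):
    """Reduce the local feature descriptors of the current environment image
    with PCA, then aggregate them into one global feature descriptor.

    local_descriptors: (N, D) array, one row per local feature descriptor.
    The aggregation (mean pooling + normalization) is an assumption.
    """
    descriptors = np.asarray(local_descriptors, dtype=float)
    n_components = min(n_components, descriptors.shape[0], descriptors.shape[1])
    reduced = PCA(n_components=n_components).fit_transform(descriptors)
    pooled = reduced.mean(axis=0)
    norm = np.linalg.norm(pooled)
    return pooled / norm if norm > 0 else pooled
```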
Step 420: and determining a matching map frame from the reference map frames according to the global feature descriptor of the current environment image.
It should be noted that, in the process of constructing the map database, the global feature descriptor corresponding to each map frame may be calculated, and the global feature descriptor may be stored as tag information of the map frame. In addition, referring to step 410, after the reference map frame is acquired, the global feature descriptor of the reference map frame may also be calculated in real time. The present embodiment does not limit this.
In step 420, if the reference map frame is a map frame, the reference map frame is directly determined as a matching map frame; if the reference map frame is a plurality of map frames, determining a reference map frame with the highest global feature descriptor similarity as a matching map frame according to the global feature descriptor of the current environment image and the global feature descriptors of the reference map frames.
Optionally, if the similarity of the global feature descriptor is lower than a preset similarity threshold, the search range of the map frame may be expanded, and the corresponding reference map frame may be searched for the current environment image again.
As one example, expanding the map frame search range may be done by increasing the original map frame search range by the initial search radius to form a new map frame search range. In addition, the original map frame search range may also be expanded by another preset radius value, which is not limited herein.
Step 430: and determining the positioning pose of the robot in the target environment at the current moment according to the current environment image and the matched map frame.
In one possible implementation manner, the implementation process of step 430 may be: performing feature point matching on the current environment image and the matching map frame to obtain a plurality of matching feature point pairs; determining, based on the matching feature point pairs, the offset information between the feature points in the current environment image and the corresponding feature points in the matching map frame; and then calculating the positioning pose of the robot in the target environment at the current moment according to the offset information of the feature points in the current environment image.
The offset information of the feature points includes, but is not limited to, the translation amount and rotation amount of the feature points.
As an example, when a plurality of pairs of matching feature points are obtained, two feature points with the highest similarity of the local feature descriptors may be determined as a matching feature point pair based on the local feature descriptors of the feature points in the current environment image and the local feature descriptors of the feature points in the matching map frame. By analogy, a plurality of matched feature point pairs are obtained by traversing and comparing feature points in the current environment image and the matched map frame.
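A sketch of this descriptor-similarity pairing; a brute-force matcher with cross checking is one common way to realize "highest similarity of the local feature descriptors", but the patent does not name a specific matcher, and the L2 norm assumes float descriptors.

```python
import cv2
import numpy as np

def match_feature_points(descriptors_image, descriptors_map):
    """Pair feature points of the current environment image and the matching
    map frame by local-descriptor similarity (brute force, cross-checked).
    Returns (image feature index, map feature index) pairs.
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(np.asarray(descriptors_image, dtype=np.float32),
                            np.asarray(descriptors_map, dtype=np.float32))
    return [(m.queryIdx, m.trainIdx) for m in matches]
```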
Further, based on the matching feature point pairs in the current environment image and the matching map frame, the positioning pose of the robot in the target environment at the current moment can be calculated using a PnP (Perspective-n-Point) algorithm.
The PnP algorithm estimates the camera pose given the three-dimensional coordinates of n points (relative to a specified coordinate system, such as the robot coordinate system) and their two-dimensional projection positions. Because the camera is mounted on the robot and the camera extrinsic parameters are known, the pose of the robot can be further determined from the camera pose.
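A hedged sketch of the PnP step using OpenCV's RANSAC-based solver; converting the camera pose to the robot pose through the known camera extrinsics is omitted, and the function name and default distortion handling are assumptions.

```python
import cv2
import numpy as np

def camera_pose_from_matches(map_points_3d, image_points_2d, camera_matrix,
                             dist_coeffs=None):
    """Estimate the camera pose from matched 3D map points and their 2D
    projections in the current environment image (Perspective-n-Point)."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(map_points_3d, dtype=np.float32),
        np.asarray(image_points_2d, dtype=np.float32),
        np.asarray(camera_matrix, dtype=np.float32),
        np.asarray(dist_coeffs, dtype=np.float32))
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)       # world-to-camera rotation
    camera_position = -rotation.T @ tvec    # camera position in the map frame
    return rotation, camera_position, inliers
```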
In the embodiment, the map frame matching of the current environment image is determined based on the global feature descriptor instead of the local feature descriptor, so that the map frame matching efficiency can be improved. After the matching map frame is determined, the positioning pose of the robot in the target environment at the current moment can be calculated according to the current environment image and the plurality of matching feature point pairs in the matching map frame. Therefore, under the condition that the matched map frame is most similar to the current environment image, the calculated positioning pose is more accurate.
In the embodiment of the method corresponding to fig. 4, when the positioning pose of the robot in the target environment at the current moment is determined, the predicted pose can be used as a reference to check whether the positioning pose is reliable, so that the problem of low positioning accuracy caused by large deviation between the calculated positioning pose and the actual pose of the robot is solved.
In one embodiment, as shown in fig. 5, when the positioning pose of the robot in the target environment at the current moment is determined in the above step 430 according to the current environment image and the matching map frame, the implementation process may include the following steps:
step 510: and determining the candidate pose of the robot in the target environment at the current moment according to the current environment image and the matching map frame.
In one possible implementation manner, the implementation process of step 510 may be: performing feature point matching on the current environment image and the matching map frame to obtain a plurality of matching feature point pairs; determining, based on the matching feature point pairs, the offset information between the feature points in the current environment image and the corresponding feature points in the matching map frame; and then calculating the candidate pose of the robot in the target environment at the current moment according to the offset information of the feature points in the current environment image.
It should be noted that, the implementation manner of step 510 is similar to that of step 430 described above, and is different in that, instead of directly determining the candidate pose calculated based on the current environment image and the matching map frame as the positioning pose of the robot in the target environment at the current time, the present embodiment needs to perform further verification.
Step 520: and checking the candidate pose according to the pose offset between the predicted pose and the candidate pose to obtain a checking result.
The pose offset includes a position offset and an attitude offset between the predicted pose and the candidate pose. Therefore, when calculating the pose offset, the position offset between the candidate pose and the predicted pose and the attitude offset between them need to be calculated respectively.
In one possible implementation manner, the implementation procedure of step 520 may be: if the pose offset is larger than the preset offset, determining that the verification result is that the candidate pose is incorrect; and if the pose offset is smaller than the preset offset value, determining that the verification result is that the candidate pose is correct.
As an example, a robot moving indoors generally works on a horizontal ground, and therefore, the correctness of the candidate pose can be determined only by judging the offset of the Z coordinate value. And when the difference value of the Z coordinate values corresponding to the positions in the candidate poses of the visual positioning and the positions in the predicted poses is larger than a preset Z coordinate offset value, judging that the candidate poses are incorrect.
As another example, the rotation angle of the robot in the X-Y plane is within a preset heading angle range (e.g., 30 ° -120 ° rotation angle) when the robot is not drifting and rolling over. Therefore, the correctness of the candidate pose can be determined by judging whether the heading angle is in the preset heading angle range. And when the course angle (namely the rotation angle of the robot on the X-Y plane) in the candidate pose of the visual positioning is not in the preset course angle range, namely the course angle is less than 30 degrees or the course angle is more than 120 degrees, judging that the candidate pose is incorrect.
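A small sketch combining the two examples above into one check; the 0.2 m Z-offset threshold, the 30-120 degree heading range, and the dict-style pose representation are illustrative assumptions.

```python
def verify_candidate_pose(candidate, predicted, max_z_offset=0.2,
                          heading_range=(30.0, 120.0)):
    """Reject the candidate pose when its Z offset from the predicted pose
    exceeds a preset value, or when its heading angle falls outside the preset
    heading angle range; otherwise accept it.
    """
    if abs(candidate["z"] - predicted["z"]) > max_z_offset:
        return False                      # verification result: candidate pose incorrect
    if not (heading_range[0] <= candidate["heading"] <= heading_range[1]):
        return False                      # verification result: candidate pose incorrect
    return True                           # verification result: candidate pose correct
```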
Step 530: and determining the positioning pose of the robot in the target environment at the current moment according to a pose determination strategy corresponding to the verification result.
If the verification result indicates that the candidate pose is incorrect, the pose determination strategy is to determine the positioning pose of the robot in the target environment at the current moment according to the predicted pose; if the verification result indicates that the candidate pose is correct, the positioning pose is determined according to the predicted pose and the candidate pose, or according to the candidate pose alone.
Then in one possible implementation, the implementation of step 530 may be: if the verification result is that the candidate pose is incorrect, taking the predicted pose as the pose of the robot in the target environment at the current moment; and if the verification result is that the candidate pose is correct, performing pose fusion on the predicted pose and the candidate pose, and determining a pose fusion result as the positioning pose of the robot in the target environment at the current moment.
Furthermore, when pose fusion is performed, an error-state Kalman filter may be used, with the predicted pose determined from the trajectory data as the measurement and the candidate pose calculated by visual positioning as the observation; the two are fused to output a smoother robot positioning pose.
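A greatly simplified stand-in for that fusion step: a per-component weighted update with the predicted pose as the prior and the candidate pose as the observation. A real error-state Kalman filter maintains full covariance and treats rotations properly; this sketch, with assumed variance inputs, only conveys the weighting idea.

```python
import numpy as np

def fuse_poses(predicted, candidate, prior_var, obs_var):
    """Fuse the predicted pose (prior) and the visually computed candidate pose
    (observation) with a Kalman-style gain, component by component.

    predicted, candidate: arrays of pose components (e.g. x, y, z, heading);
    prior_var, obs_var: per-component variances (assumed values).
    """
    predicted = np.asarray(predicted, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    gain = prior_var / (prior_var + obs_var)          # Kalman-style gain
    fused = predicted + gain * (candidate - predicted)
    fused_var = (1.0 - gain) * prior_var              # reduced uncertainty after fusion
    return fused, fused_var
```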
In this embodiment, the correctness of the candidate pose calculated by visual positioning is checked against the predicted pose, and the positioning pose of the robot in the target environment at the current moment is determined according to the pose determination strategy corresponding to the verification result. Through this pose correctness check, the influence of incorrect visual positioning poses on the positioning system can be effectively avoided, and the reliability of the visual positioning result is improved. Furthermore, the jitter introduced by visual pose calculation is mitigated by error-state Kalman filtering, and a smoother robot positioning pose is output. Therefore, even if visual positioning is temporarily lost, the positioning pose determined by fusing the predicted pose and the candidate pose can remain stable for a certain period of time.
When the candidate pose of the robot is calculated based on the current environment image and the matching map frame as in the method embodiment corresponding to fig. 5, any incorrect matching relationship among the matched feature points seriously affects the calculation accuracy of the candidate pose. Therefore, in the present application, when the feature points in the current environment image and the matching map frame are matched, the accuracy of the candidate pose is improved through two elimination operations.
In one embodiment, as shown in fig. 6, the implementation process of determining the candidate pose of the robot in the target environment at the current moment in step 510 according to the current environment image and the matching map frame may include the following steps:
step 610: and carrying out two-dimensional feature point matching on the current environment image and the matching map frame to obtain a plurality of two-dimensional matching feature point pairs.
The matching map frame comprises two-dimensional feature points and three-dimensional feature points, and the two-dimensional feature points and the three-dimensional feature points need to be matched in sequence when the feature points are matched.
In a possible implementation manner, the two-dimensional feature point matching may be performed through a local feature descriptor of each feature point in the image, and then a plurality of two-dimensional feature point pairs matching with each other are determined from the two-dimensional feature point of the current environment image and the two-dimensional feature point of the matching map frame.
It should be understood that the two-dimensional feature points in a two-dimensional matching feature point pair have the same gray value and may correspond to the same environmental information in the actual environment. For example, if pixel point A in the current environment image and pixel point B in the matching map frame form a two-dimensional matching feature point pair, pixel point A and pixel point B may describe the same location point of the same object in the actual environment.
Step 620: and eliminating the error matching relation in the plurality of two-dimensional matching feature point pairs to obtain a plurality of initial matching feature point pairs.
In one possible implementation, the erroneous matching relationship may be eliminated from the plurality of pairs of two-dimensional matching feature points by a Random Sample Consensus (RANSAC) algorithm.
After consistency judgment is carried out through the RANSAC algorithm, the two-dimensional matching feature point pairs in the maximum consistency data set are initial matching feature point pairs, the two-dimensional matching feature point pairs outside the maximum consistency data set are error matching feature point pairs, and elimination processing is carried out on the two-dimensional matching feature point pairs.
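A sketch of this first elimination step; fitting a fundamental matrix as the RANSAC model is an assumption, since the patent only states that RANSAC removes erroneous matching relations from the two-dimensional matching feature point pairs.

```python
import cv2
import numpy as np

def reject_outlier_matches(points_current, points_map, threshold=3.0):
    """Keep only the two-dimensional matching feature point pairs in the
    RANSAC consensus set (here, consistent with a fundamental matrix).

    points_current, points_map: (N, 2) pixel coordinates of matched pairs.
    """
    pts1 = np.asarray(points_current, dtype=np.float32)
    pts2 = np.asarray(points_map, dtype=np.float32)
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, threshold, 0.99)
    if mask is None:
        return pts1, pts2                  # too few points to run RANSAC
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]    # initial matching feature point pairs
```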
Step 630: and carrying out three-dimensional characteristic point matching through the matching relation among the plurality of initial matching characteristic point pairs to obtain a plurality of three-dimensional matching characteristic point pairs.
It should be noted that, for the two-dimensional matching feature point pairs, once distance information is introduced (i.e., a Z value describing the depth of each pixel point in the image), some initial matching feature point pairs may have mismatched depth information. Therefore, three-dimensional feature point matching needs to be performed on the initial matching feature point pairs to adjust the original matching relationship.
Specifically, three-dimensional feature point matching is performed by a PnP algorithm based on the matching relationships among the plurality of initial matching feature point pairs to obtain a plurality of three-dimensional matching feature point pairs.
Step 640: and eliminating the wrong matching relation in the plurality of three-dimensional matching feature point pairs to obtain a plurality of standard matching feature point pairs.
Similarly, referring to the step 620, when the mismatch relationship in the three-dimensional matching feature point pair is eliminated, the mismatch relationship may also be implemented by using a RANSAC algorithm, which is not described herein again.
Step 650: and calculating the pose of the robot in the target environment at the current moment through the matching relation among the plurality of standard matching feature point pairs to obtain a candidate pose.
In this embodiment, applying the RANSAC algorithm twice reduces randomness in feature point matching, and eliminating erroneous matching relationships improves the accuracy of the standard matching feature point pairs, so that the positioning pose calculation result is more reliable with less influence from mismatches.
Combining the foregoing method embodiments, the present application further provides another robot positioning method, as shown in fig. 7. Taking application to a robot as an example again, the method includes the following steps:
step 701: acquiring the historical pose of the robot in the target environment at the last positioning moment and the track data of the robot from the last positioning moment to the current moment;
step 702: according to the historical pose and the track data, determining the predicted pose of the robot in the target environment at the current moment;
step 703: determining a search radius increment according to the interval duration between the last positioning time and the current time;
step 704: determining a target search radius corresponding to the current time when the robot searches the reference map frame according to the historical search radius and the search radius increment corresponding to the last positioning time;
it should be noted that the above steps 703 and 704 are only one way to determine the target search radius.
In another implementation manner, the target search radius corresponding to when the robot searches the reference map frame at the current time may instead be determined as follows: a second search radius increment is determined according to the interval duration between the first positioning time and the current time, and the target search radius is then determined according to the initial search radius corresponding to the first positioning time and the second search radius increment, where the initial search radius corresponding to the first positioning time is a preset minimum search radius. A hedged sketch of this radius growth, together with the pose verification of steps 714 to 718, is given after the step list below.
Step 705: determining a map frame search range according to the predicted pose and the target search radius;
step 706: acquiring a reference map frame in a map frame search range based on a map database corresponding to a target environment;
step 707: acquiring a global feature descriptor of a current environment image;
step 708: determining a matching map frame from the reference map frame according to the global feature descriptor of the current environment image;
step 709: performing two-dimensional feature point matching on the current environment image and the matching map frame to obtain a plurality of two-dimensional matching feature point pairs;
step 710: rejecting the wrong matching relation in a plurality of two-dimensional matching feature point pairs to obtain a plurality of initial matching feature point pairs;
step 711: carrying out three-dimensional feature point matching through the matching relation among the plurality of initial matching feature point pairs to obtain a plurality of three-dimensional matching feature point pairs;
step 712: rejecting the wrong matching relation in a plurality of three-dimensional matching feature point pairs to obtain a plurality of standard matching feature point pairs;
step 713: calculating the pose of the robot in the target environment at the current moment through the matching relation among the plurality of standard matching feature point pairs to obtain a candidate pose;
step 714: verifying the candidate pose according to the pose offset between the predicted pose and the candidate pose to obtain a verification result;
step 715: if the pose offset is larger than a preset offset value, determining that the verification result is that the candidate pose is incorrect;
step 716: if the verification result is that the candidate pose is incorrect, taking the predicted pose as the positioning pose of the robot in the target environment at the current moment;
step 717: if the pose offset is smaller than a preset offset value, determining that the verification result is that the candidate pose is correct;
step 718: and if the verification result is that the candidate pose is correct, performing pose fusion on the predicted pose and the candidate pose, and determining a pose fusion result as the positioning pose of the robot in the target environment at the current moment.
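For illustration only, the sketch referred to above follows. It shows one way the adaptive search radius (steps 703 to 705) and the pose verification and fusion logic (steps 714 to 718) could be realized; the growth rate, the offset threshold, the fusion weighting and all names are assumptions, as the embodiment does not prescribe concrete formulas for these quantities.

```python
# A hedged sketch of the adaptive search radius (steps 703 to 705) and of the
# pose verification and fusion logic (steps 714 to 718). The growth rate, the
# offset threshold, the weighting and all names are assumptions; the
# embodiment does not prescribe concrete formulas for these quantities.
import math

def target_search_radius(prev_radius, seconds_since_last_fix,
                         growth_per_sec=0.1, min_radius=1.0, max_radius=20.0):
    # The radius grows with the time elapsed since the last positioning moment,
    # so the map frame search range widens as the prediction becomes less certain.
    radius = prev_radius + growth_per_sec * seconds_since_last_fix
    return max(min_radius, min(radius, max_radius))

def resolve_positioning_pose(predicted_pose, candidate_pose,
                             max_offset=0.5, weight=0.5):
    # Poses are (x, y, theta) triples in the map coordinate system.
    offset = math.hypot(candidate_pose[0] - predicted_pose[0],
                        candidate_pose[1] - predicted_pose[1])
    if offset > max_offset:
        # Verification failed: fall back to the predicted pose (steps 715, 716).
        return predicted_pose
    # Verification passed: fuse predicted and candidate poses (steps 717, 718).
    # Angle wrapping of theta is ignored here for brevity.
    return tuple(p + weight * (c - p)
                 for p, c in zip(predicted_pose, candidate_pose))
```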
It should be noted that the implementation principle and technical effect of each step in the robot positioning method provided in this embodiment are similar to those of the foregoing method embodiments, and specific limitations and explanations may refer to the foregoing method embodiments, which are not described herein again.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited to that order, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides a robot positioning device for implementing the robot positioning method described above. Since the principle by which the device solves the problem is similar to that of the method described above, for the specific limitations of one or more embodiments of the robot positioning device provided below, reference may be made to the limitations of the robot positioning method above, and details are not repeated here.
In one embodiment, as shown in fig. 8, there is provided a robot positioning device 800 comprising: a predicted pose acquisition module 810, a search range determination module 820, a map frame acquisition module 830, and a pose determination module 840, wherein:
the predicted pose acquisition module 810 is used for acquiring the predicted pose of the robot in the target environment at the current moment;
a search range determining module 820, configured to determine, according to the predicted pose, a map frame search range when the robot searches for the reference map frame at the current time;
a map frame acquiring module 830, configured to acquire a reference map frame in a map frame search range based on a map database corresponding to a target environment;
and the pose determining module 840 is used for determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
In one embodiment, the predicted pose acquisition module 810 includes:
the historical data acquisition unit is used for acquiring the historical pose of the robot in the target environment at the last positioning moment and the track data of the robot from the last positioning moment to the current moment;
and the predicted pose determining unit is used for determining the predicted pose of the robot in the target environment at the current moment according to the historical pose and the track data.
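As a hedged illustration of this unit, the sketch below composes the historical pose with an odometry increment accumulated from the trajectory data; the planar (x, y, theta) pose model and the increment representation are assumptions, not requirements of this embodiment.

```python
# A hedged sketch of the predicted pose determining unit: the historical pose
# is composed with an odometry increment accumulated from the trajectory data.
# Poses are modelled as planar (x, y, theta) triples; the trajectory
# representation used here is an assumption.
import math

def predict_current_pose(historical_pose, odom_increment):
    x, y, theta = historical_pose
    dx, dy, dtheta = odom_increment      # increment expressed in the robot body frame
    # Rotate the body-frame increment into the map frame and accumulate it.
    px = x + dx * math.cos(theta) - dy * math.sin(theta)
    py = y + dx * math.sin(theta) + dy * math.cos(theta)
    ptheta = math.atan2(math.sin(theta + dtheta), math.cos(theta + dtheta))
    return (px, py, ptheta)
```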
In one embodiment, the search range determining module 820 includes:
the searching radius acquiring unit is used for acquiring a target searching radius corresponding to the current moment of the robot searching the reference map frame;
and the search range determining unit is used for determining the search range of the map frame according to the predicted pose and the target search radius.
In one embodiment, the search radius acquiring unit includes:
the increment determining subunit is used for determining a first search radius increment according to the interval duration between the last positioning time and the current time;
the radius determining subunit is used for determining a target search radius corresponding to the reference map frame searched by the robot at the current moment according to the historical search radius corresponding to the last positioning moment and the first search radius increment;
alternatively,
the increment determining subunit is further configured to determine a second search radius increment according to an interval duration between the first positioning time and the current time;
the radius determining subunit is further configured to determine, according to the initial search radius and the second search radius increment corresponding to the first positioning time, a target search radius corresponding to when the reference map frame is searched by the robot at the current time;
wherein, the initial search radius corresponding to the first positioning time is a preset minimum search radius.
In one embodiment, the search range determining unit is specifically configured to:
and obtaining a map frame search range by taking the predicted pose as a circle center and the target search radius as a map frame search radius.
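For illustration only, the following hedged sketch filters a map database down to the map frames whose recorded capture positions lie inside this circular search range; the per-frame dictionary layout of the map database is a hypothetical assumption.

```python
# A hedged sketch of the circular map frame search range: only map frames
# whose recorded capture positions fall inside the circle centred on the
# predicted pose are returned as reference map frames. The per-frame
# dictionary layout of the map database is a hypothetical assumption.
import math

def frames_in_search_range(map_database, predicted_xy, target_radius):
    cx, cy = predicted_xy
    reference_frames = []
    for frame in map_database:            # frame: {"position": (x, y), ...}
        fx, fy = frame["position"]
        if math.hypot(fx - cx, fy - cy) <= target_radius:
            reference_frames.append(frame)
    return reference_frames
```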
In one embodiment, the pose determination module 840 includes:
the characteristic descriptor acquisition unit is used for acquiring a global characteristic descriptor of the current environment image;
the matching map frame determining unit is used for determining a matching map frame from the reference map frame according to the global feature descriptor of the current environment image;
and the positioning unit is used for determining the positioning pose of the robot in the target environment at the current moment according to the current environment image and the matched map frame.
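As a hedged illustration of how the matching map frame might be selected from the reference map frames, the sketch below compares global feature descriptors by cosine similarity; the descriptors are assumed to be fixed-length numpy vectors, and how they are computed (e.g. a learned whole-image embedding) is outside this sketch.

```python
# A hedged sketch of choosing the matching map frame by global feature
# descriptor similarity. Descriptors are assumed to be fixed-length numpy
# vectors; how they are computed is not prescribed here.
import numpy as np

def select_matching_map_frame(current_descriptor, reference_frames):
    best_frame, best_score = None, -1.0
    q = current_descriptor / (np.linalg.norm(current_descriptor) + 1e-12)
    for frame in reference_frames:        # frame: {"descriptor": np.ndarray, ...}
        d = frame["descriptor"]
        score = float(np.dot(q, d / (np.linalg.norm(d) + 1e-12)))  # cosine similarity
        if score > best_score:
            best_frame, best_score = frame, score
    return best_frame
```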
In one embodiment, the positioning unit includes:
the first positioning subunit is used for determining the candidate pose of the robot in the target environment at the current moment according to the current environment image and the matching map frame;
the verification subunit is used for verifying the candidate pose according to the pose offset between the predicted pose and the candidate pose to obtain a verification result;
and the second positioning subunit is used for determining the pose of the robot in the target environment at the current moment according to the pose determination strategy corresponding to the verification result.
In one embodiment, the first positioning subunit is specifically configured to:
performing two-dimensional feature point matching on the current environment image and the matching map frame to obtain a plurality of two-dimensional matching feature point pairs;
rejecting the wrong matching relation in a plurality of two-dimensional matching feature point pairs to obtain a plurality of initial matching feature point pairs;
carrying out three-dimensional feature point matching through matching relations among the plurality of initial matching feature point pairs to obtain a plurality of three-dimensional matching feature point pairs;
rejecting the error matching relation in a plurality of three-dimensional matching feature point pairs to obtain a plurality of standard matching feature point pairs;
and calculating the pose of the robot in the target environment at the current moment through the matching relation among the plurality of standard matching feature point pairs to obtain a candidate pose.
In one embodiment, the verification subunit is specifically configured to:
if the pose offset is larger than a preset offset value, determining that the verification result is that the candidate pose is incorrect;
and if the pose offset is smaller than the preset offset value, determining that the verification result is that the candidate pose is correct.
In one embodiment, the second positioning subunit is specifically configured to:
if the verification result is that the candidate pose is incorrect, taking the predicted pose as the positioning pose of the robot in the target environment at the current moment;
and if the verification result is that the candidate pose is correct, performing pose fusion on the predicted pose and the candidate pose, and determining a pose fusion result as the positioning pose of the robot in the target environment at the current moment.
The various modules in the robot positioning device described above may be implemented in whole or in part by software, hardware, or a combination thereof. Each of the above modules may be embedded in or independent of a processor in the computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a robot is provided, and its internal structure may be as shown in fig. 9. As a computer device, the robot comprises a processor, a memory, a communication interface, a display screen and an input device which are connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal, and the wireless communication can be realized through Wi-Fi, an operator network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements a robot positioning method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than those shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, which may be the robot described above or another smart mobile device. The computer device comprises a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
acquiring a predicted pose of the robot in a target environment at the current moment;
determining a map frame search range when the robot searches a reference map frame at the current moment according to the predicted pose;
acquiring a reference map frame in a map frame search range based on a map database corresponding to a target environment;
and determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a predicted pose of the robot in a target environment at the current moment;
determining a map frame searching range when the robot searches a reference map frame at the current moment according to the predicted pose;
acquiring a reference map frame in a map frame search range based on a map database corresponding to a target environment;
and determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a predicted pose of the robot in a target environment at the current moment;
determining a map frame searching range when the robot searches a reference map frame at the current moment according to the predicted pose;
acquiring a reference map frame in a map frame search range based on a map database corresponding to a target environment;
and determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
The foregoing embodiments provide a computer program product, which has similar implementation principles and technical effects to those of the foregoing method embodiments, and will not be described herein again.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to fall within the scope of this specification.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these are all within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (13)

1. A method of robot positioning, the method comprising:
acquiring a predicted pose of the robot in a target environment at the current moment;
determining a map frame search range of the robot when searching for a reference map frame at the current moment according to the predicted pose;
acquiring a reference map frame in the map frame search range based on a map database corresponding to the target environment;
and determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
2. The method of claim 1, wherein the obtaining the predicted pose of the robot in the target environment at the current time comprises:
acquiring the historical pose of the robot in the target environment at the last positioning time and the track data of the robot from the last positioning time to the current time;
and determining the predicted pose of the robot in the target environment at the current moment according to the historical pose and the track data.
3. The method according to claim 1 or 2, wherein the determining a map frame search range in which the robot searches for a reference map frame at the current time according to the predicted pose comprises:
acquiring a target search radius corresponding to the reference map frame searched by the robot at the current moment;
and determining the map frame search range according to the predicted pose and the target search radius.
4. The method according to claim 3, wherein the obtaining of the corresponding target search radius when the robot searches the reference map frame at the current moment comprises:
determining a first search radius increment according to the interval duration between the last positioning time and the current time;
determining a target search radius corresponding to the current time when the robot searches the reference map frame according to the historical search radius corresponding to the last positioning time and the first search radius increment;
alternatively,
determining a second search radius increment according to the interval duration between the first positioning time and the current time;
determining a target search radius corresponding to the reference map frame searched by the robot at the current moment according to the initial search radius corresponding to the first positioning moment and the second search radius increment;
and the initial search radius corresponding to the first positioning moment is a preset minimum search radius.
5. The method of claim 3, wherein determining the map frame search range from the predicted pose and the target search radius comprises:
and taking the predicted pose as a circle center and the target search radius as a map frame search radius to obtain the map frame search range.
6. The method according to claim 1 or 2, wherein the determining of the positioning pose of the robot at the current moment in the target environment from the reference map frame and the current environment image acquired by the robot at the current moment in the target environment comprises:
acquiring a global feature descriptor of the current environment image;
determining a matching map frame from the reference map frame according to the global feature descriptor of the current environment image;
and determining the positioning pose of the robot in the target environment at the current moment according to the current environment image and the matching map frame.
7. The method according to claim 6, wherein the determining a positioning pose of the robot in the target environment at the current moment according to the current environment image and the matching map frame comprises:
determining a candidate pose of the robot in the target environment at the current moment according to the current environment image and the matching map frame;
verifying the candidate pose according to the pose offset between the predicted pose and the candidate pose to obtain a verification result;
and determining the positioning pose of the robot in the target environment at the current moment according to a pose determination strategy corresponding to the verification result.
8. The method of claim 7, wherein determining the candidate pose of the robot in the target environment at the current moment in time from the current environment image and the matching map frame comprises:
performing two-dimensional feature point matching on the current environment image and the matching map frame to obtain a plurality of two-dimensional matching feature point pairs;
rejecting the error matching relation in the plurality of two-dimensional matching feature point pairs to obtain a plurality of initial matching feature point pairs;
carrying out three-dimensional feature point matching through matching relations among the plurality of initial matching feature point pairs to obtain a plurality of three-dimensional matching feature point pairs;
rejecting the error matching relation in the three-dimensional matching feature point pairs to obtain a plurality of standard matching feature point pairs;
and calculating the pose of the robot in the target environment at the current moment according to the matching relation among the plurality of standard matching feature point pairs to obtain the candidate pose.
9. The method according to claim 7, wherein the verifying the candidate pose according to the pose offset between the predicted pose and the candidate pose to obtain a verification result comprises:
if the pose offset is larger than a preset offset value, determining that the verification result is that the candidate pose is incorrect;
and if the pose offset is smaller than the preset offset value, determining that the verification result is that the candidate pose is correct.
10. The method according to claim 9, wherein the determining a positioning pose of the robot in the target environment at the current moment according to the pose determination strategy corresponding to the verification result comprises:
if the verification result is that the candidate pose is incorrect, taking the predicted pose as a positioning pose of the robot in the target environment at the current moment;
and if the verification result is that the candidate pose is correct, performing pose fusion on the predicted pose and the candidate pose, and determining a pose fusion result as a positioning pose of the robot in the target environment at the current moment.
11. A robot positioning apparatus, characterized in that the apparatus comprises:
the predicted pose acquisition module is used for acquiring the predicted pose of the robot in the target environment at the current moment;
the searching range determining module is used for determining a map frame searching range when the robot searches a reference map frame at the current moment according to the predicted pose;
the map frame acquisition module is used for acquiring a reference map frame in the map frame search range based on a map database corresponding to the target environment;
and the pose determining module is used for determining the positioning pose of the robot in the target environment at the current moment according to the reference map frame and the current environment image acquired by the robot in the target environment at the current moment.
12. A robot, characterized in that the robot comprises a memory storing a computer program and a processor implementing the steps of the method of any of claims 1-10 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 10.