CN110727265A - Robot repositioning method and device and storage device

Info

Publication number
CN110727265A
Authority
CN
China
Prior art keywords: frame image, search range, key frame, current frame, robot
Prior art date: 2018-06-28
Legal status: Granted
Application number
CN201810689594.XA
Other languages
Chinese (zh)
Other versions
CN110727265B (en)
Inventor
熊友军
蒋晨晨
刘志超
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date: 2018-06-28
Filing date: 2018-06-28
Publication date: 2020-01-24
Application filed by Ubtech Robotics Corp
Priority to CN201810689594.XA
Publication of CN110727265A
Application granted
Publication of CN110727265B
Legal status: Active

Classifications

    • G05D 1/0253 — Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means: a video camera with image processing extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D 1/0221 — Control of position or course in two dimensions, specially adapted to land vehicles, with means for defining a desired trajectory involving a learning process
    • G05D 1/0223 — Control of position or course in two dimensions, specially adapted to land vehicles, with means for defining a desired trajectory involving speed control of the vehicle
    • G05D 1/0285 — Control of position or course in two dimensions, specially adapted to land vehicles, using signals provided by a source external to the vehicle, transmitted via a public communication network, e.g. GSM network
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/30244 — Subject of image: camera pose


Abstract

The application discloses a robot repositioning method, a robot repositioning device, and a storage device. The method comprises the following steps: judging whether the current frame image matches a key frame image stored in a first search range; if not, judging whether the current frame image contains distance information of the robot, wherein the distance information is provided by an ultra-wideband base station installed on the robot; if so, obtaining a second search range of the key frame images by combining the distance information, the second search range being smaller than the first; judging whether a matching key frame image exists for the current frame image in the second search range; and if the matching is successful, obtaining the current pose of the camera through the matched key frame image. By this method, the search range of the key frame images is narrowed, the search range of the subsequent bag-of-words model is reduced, the search time is shortened, and the subsequent matching efficiency is improved.

Description

Robot repositioning method and device and storage device
Technical Field
The present application relates to the field of robot vision positioning technologies, and in particular, to a method, an apparatus, and a storage apparatus for robot relocation.
Background
Positioning and navigation based on machine vision is a key technology in the robotics field. However, existing visual schemes are still insufficiently robust, and problems such as tracking loss and relocation failure often occur. Illumination also has a great influence on visual positioning: positioning can fail if the robot passes through the same place under different illumination. If positioning fails, subsequent functions of the robot, such as navigation, cannot proceed. Distance sensing accumulates error over time; if the absolute distance of the robot from a certain position can be obtained, the accumulated error of the robot can be corrected in time. In view of the above problems, it is imperative to find a method that improves robot relocation robustness without adding external facilities to the robot.
Typical existing methods include: (1) Visual-inertial positioning based on multiple Ultra Wide Band (UWB) base stations. This method adds an Inertial Measurement Unit (IMU) sensor on top of vision, and three to four UWB base stations accurately position the robot. While it increases positioning robustness, it also increases the difficulty of deploying the robot: multiple base stations must be installed around the robot, and the base stations require clock synchronization. (2) Visual positioning based on WiFi or Bluetooth. This method can use Received Signal Strength Indication (RSSI) and Time of Arrival (TOA) to obtain the distance from the robot to a router or base station; although the cost is low, the power consumption and interference are large, and the method is not suitable for 3D positioning.
Disclosure of Invention
The application provides a robot repositioning method, device and storage device, which can narrow the search range of key frame images and reduce the search range of the subsequent bag-of-words model, so that the bag-of-words model does not need to search the whole first search range, thereby shortening the search time and improving the subsequent matching efficiency.
In order to solve the technical problem, the application adopts a technical scheme: a robot repositioning method is provided, the method comprising: judging whether the current frame image matches a key frame image stored in a first search range; if not, judging whether the current frame image contains distance information of the robot, wherein the distance information is provided by an ultra-wideband base station installed on the robot; if so, obtaining a second search range of the key frame images by combining the distance information, wherein the second search range is smaller than the first search range; judging whether a matching key frame image exists for the current frame image in the second search range; and if the matching is successful, obtaining the current pose of the camera through the matched key frame image.
In order to solve the above technical problem, another technical solution adopted by the present application is: providing a robot repositioning device, wherein the device comprises a processor and a memory, the processor being connected with the memory; the processor is used for judging whether the current frame image matches a key frame image stored in the memory in the first search range; if not, judging whether the current frame image contains distance information of the robot, wherein the distance information is provided by an ultra-wideband base station installed on the robot; if so, obtaining a second search range of the key frame images by combining the distance information, wherein the second search range is smaller than the first search range; judging whether a matching key frame image exists for the current frame image in the second search range; and if the matching is successful, obtaining the current pose of the camera through the matched key frame image.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a storage device storing a program file capable of implementing any one of the above-described methods.
The beneficial effects of this application are: a robot repositioning method, device and storage device are provided in which the search range of the key frame images is narrowed by combining the distance information from the robot to the UWB base station; the search range of the subsequent bag-of-words model is thereby reduced, the bag-of-words model does not need to search the whole first search range, the search time is shortened, and the subsequent matching efficiency is improved.
Drawings
FIG. 1 is a schematic flow chart diagram of a first embodiment of a robot repositioning method of the present application;
FIG. 2 is a schematic flow chart diagram illustrating an embodiment of step S6 in FIG. 1;
FIG. 3 is a schematic flow chart diagram illustrating an embodiment of step S64 in FIG. 2;
FIG. 4 is a schematic diagram of an embodiment of a robotic repositioning apparatus of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a storage device according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to the drawings, fig. 1 is a schematic flow chart of a first embodiment of a robot repositioning method according to the present application.
In the technical solution adopted by this application, a single Ultra Wide Band (UWB) base station is used to assist in enhancing the robustness of machine-vision positioning. In an embodiment of this application, the UWB base station can be integrated into the charging pile of the robot, which avoids installing multiple UWB base station facilities and makes the movement of the robot more convenient. Of course, in other embodiments, the UWB base station may also be integrated into other components associated with the robot, which is not further limited herein. The UWB base station provides absolute distance information from the robot to the base station, so the search range during visual relocation can be narrowed, increasing relocation speed and reliability.
In an SLAM (Simultaneous Localization and Mapping) system, robot relocation refers to estimating camera pose information from a single frame image, without any prior pose information, when the camera of the robot loses tracking or when the SLAM system is started again in a known environment. On the one hand, most SLAM systems assume that the camera motion between frames is small, and estimate the pose of the current frame from the camera pose of the previous frame. When the camera moves rapidly, since pose estimation is a non-convex problem, general pose estimation methods such as ICP or minimization of reprojection error fall into local optima, and tracking is lost. On the other hand, when the user has obtained a map of a scene and turned off the system, if the SLAM system is turned on again, a new map cannot be fused with the previously constructed map of the scene, because the pose of the first frame at startup relative to the known map cannot be determined. Repositioning estimates the camera pose of the first frame when no preceding frame pose is available and only a single frame image and the map exist; that is, repositioning continuously matches the current frame image with the stored key frame images until the most similar key frame is found, from which the pose of the current camera is estimated, thereby ensuring the continuity of the map.
The robot repositioning method in the application specifically comprises the following steps:
and S1, extracting feature point descriptors in all the saved key frame images.
Robot relocation in this application adopts a method based on a visual bag-of-words model. The bag-of-words (BoW) model was initially applied in fields such as text retrieval, for text classification and the like. The visual bag-of-words model takes visual features in images as words and uses differences in the frequency of visual words to classify images or detect similarity; it is a technique that converts a single image into a sparse vector using a visual dictionary, and it scales to large image sets.
In a specific implementation, the feature descriptors in all key frame images saved before the current time are extracted and clustered. Key frames are generally determined using a local-map key frame selection method. The local map is the set of key frame images that share common feature points with the current frame image; the key frame image sharing the largest number of common feature points with the current frame image is called the reference key frame image. The local-map key frame selection method may include the following conditions (a minimal check of these conditions is sketched after the list):
1. The current frame image is at least 20 image frames after the last repositioning, ensuring a good repositioning result.
2. The current frame image is at least 20 image frames after the previous key frame, avoiding a large amount of redundant information between adjacent key frame images.
3. The current frame image and the previous key frame share at least 50 feature points, guaranteeing enough matching information between key frame images for good pose tracking.
4. The number of feature points the current frame image shares with the reference key frame image is less than 90% of the reference key frame image's total feature count, so that two adjacent key frame images have enough image change.
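For illustration (as referenced above), the following Python sketch checks these four conditions for one frame; the argument names and bookkeeping are assumptions for the example, not identifiers from the patent.

```python
# Minimal sketch of the four key-frame selection conditions listed above.
# All names are illustrative; the thresholds (20, 20, 50, 0.9) come from the text.

def is_new_keyframe(frame_id, last_reloc_id, last_kf_id,
                    shared_with_last_kf, shared_with_ref_kf, ref_kf_total):
    """Return True if the current frame qualifies as a new key frame."""
    if frame_id - last_reloc_id < 20:              # 1. >= 20 frames since relocation
        return False
    if frame_id - last_kf_id < 20:                 # 2. >= 20 frames since last key frame
        return False
    if shared_with_last_kf < 50:                   # 3. >= 50 shared feature points
        return False
    if shared_with_ref_kf >= 0.9 * ref_kf_total:   # 4. enough image change
        return False
    return True
```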
And S2, discretizing the set of all feature point descriptors to obtain a visual dictionary.
All extracted feature descriptors of the stored key frame images are clustered to form discrete words. The feature point descriptor set is clustered into K classes using the k-means++ algorithm, and the center of each class finally yields a word of the visual dictionary. For efficient search, the visual dictionary is stored in a hierarchical tree structure: the nodes of each layer are clustered again by k-means to obtain the nodes of the next layer, until the set maximum depth is reached, so that the words of the dictionary are stored in the leaf nodes of the tree. Each word also stores a weight. The weight is a ratio whose numerator is the frequency of the word among all feature points and whose denominator reflects how many images of the training set the word appears in; it represents the distinguishing capability of the word, and the larger the weight, the stronger that capability.
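The following is a minimal Python sketch of such a hierarchical visual dictionary, built by recursive k-means++ clustering (scikit-learn's default initialization). It is a simplification under stated assumptions: real systems such as DBoW2 cluster binary ORB descriptors with k-medians over Hamming distance and attach TF-IDF weights to the leaf words, whereas float-valued descriptors are assumed here.

```python
# Simplified sketch of building a vocabulary tree by recursive k-means++
# clustering. Branching factor k and depth are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

class Node:
    def __init__(self, center):
        self.center = center
        self.children = []          # empty for leaves, i.e. the "words"

def build_vocab_tree(descriptors, k=10, depth=3):
    """Recursively cluster descriptors into a K-ary tree of the given depth."""
    root = Node(descriptors.mean(axis=0))
    _split(root, descriptors, k, depth)
    return root

def _split(node, descriptors, k, depth):
    if depth == 0 or len(descriptors) < k:
        return                       # leaf node: its center acts as a visual word
    km = KMeans(n_clusters=k, init="k-means++", n_init=3).fit(descriptors)
    for i, center in enumerate(km.cluster_centers_):
        child = Node(center)
        node.children.append(child)
        _split(child, descriptors[km.labels_ == i], k, depth - 1)
```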
S3, it is determined whether the current frame image matches a key frame image already stored in the first search range.
In this embodiment, after the current frame image obtained by the camera at time t is determined to be a key frame, it needs to be determined whether the current frame image matches any of the key frame images stored before time t, where the first search range may be the whole map. If a matching key frame image exists, the process proceeds to step S7 to obtain the current pose of the camera from the matched key frame image, after which the next frame image is processed. If not, the process proceeds to step S4.
And S4, judging whether the current frame image contains the distance information of the robot, wherein the distance information is provided by an ultra-wideband base station installed on the robot.
In this embodiment, the current frame image obtained by the robot at repositioning time t needs to be matched against all key frame images stored before time t to find a picture similar to the current frame. If this repositioning fails, it is further judged whether the current frame image contains distance information of the robot, where the distance information is provided by a single UWB base station integrated into the robot's charging pile and is the absolute distance from the robot to the UWB base station. In addition, in this embodiment the scanning times of the UWB base station and the camera are not synchronized, so not every frame image includes the distance information.
Therefore, when it is determined that the current frame image does not include the distance information, the process proceeds to step S8 to reposition the next frame image; if the current frame image includes the absolute distance information, the process proceeds to step S5.
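A hypothetical sketch of how a frame could be tagged with distance information given the unsynchronized sensors: a UWB reading is attached to a frame only if its timestamp falls within a small window around the frame's. The 50 ms window and the data layout are assumptions, not values from the patent.

```python
# Hypothetical timestamp association between camera frames and UWB readings.

def attach_distance(frame_ts, uwb_readings, max_dt=0.05):
    """uwb_readings: iterable of (timestamp, distance) tuples.
    Returns the distance nearest in time to frame_ts, or None if no
    reading lies within max_dt seconds of the frame."""
    best = None
    for ts, dist in uwb_readings:
        dt = abs(ts - frame_ts)
        if dt <= max_dt and (best is None or dt < best[0]):
            best = (dt, dist)
    return best[1] if best else None
```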
And S5, obtaining a second search range of the key frame image by combining the distance information, wherein the second search range is smaller than the first search range.
In step S5, after it is determined that the current frame image includes the absolute distance from the robot to the UWB base station, this absolute distance may be denoted H. After the relocation failure, the search range of the key frame images is narrowed by combining the absolute distance H: within the first search range (the whole map), only key frame images whose distance to the UWB base station is approximately equal to H are searched, which yields the second search range. The second search range is therefore necessarily smaller than the first search range.
Furthermore, using the absolute distance of the robot from the UWB base station to narrow the search range of the key frame images also reduces the search range of the subsequent bag-of-words model: the bag-of-words model does not need to search the whole first search range (the whole map), which shortens the search time and improves the subsequent matching efficiency.
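A minimal sketch of deriving the second search range, assuming each stored key frame has recorded its own distance to the UWB base station at creation time; the tolerance used to decide "approximately equal to H" is an assumption.

```python
# Keep only key frames whose recorded base-station distance is close to H.
# The 0.5 m tolerance is an illustrative assumption, not a patent value.

def second_search_range(keyframes, H, tol=0.5):
    """keyframes: objects with a .uwb_distance attribute (meters or None)."""
    return [kf for kf in keyframes
            if kf.uwb_distance is not None and abs(kf.uwb_distance - H) <= tol]
```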
And S6, judging whether the current frame image has a matched key frame in the second searching range.
After the search range of the key frames is narrowed in step S5 by combining the absolute distance between the robot and the UWB base station, it must be further determined whether a key frame image matching the current frame image exists in the second search range. Referring to fig. 2, this comprises the following sub-steps:
and S61, converting the current frame image into a bag-of-word vector through a visual dictionary.
The current frame image is converted into a bag-of-words vector through the visual dictionary formed in steps S1-S2, and this vector is matched against the bag-of-words vectors of all key frame images in the second search range. A BOW database is maintained while the whole SLAM system runs; the database holds the bag-of-words vectors of all key frame images up to time t.
And S62, respectively calculating the similarity of the bag-of-word vector of the current frame image and the bag-of-word vector of the key frame image in the second search range.
In this embodiment, the similarity between the bag-of-words vector of the current frame image and that of each key frame image in the second search range is calculated, and the minimum similarity value between the current frame image and the key frame images in the second search range is recorded as the first similarity threshold.
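The patent does not fix a particular similarity metric; the sketch below uses the L1 score common in bag-of-words relocation systems such as DBoW2, as a representative stand-in rather than the claimed formula.

```python
# Representative bag-of-words similarity (the L1 score used by DBoW2).
import numpy as np

def bow_similarity(v1, v2):
    """Score in [0, 1]; 1 means identical visual-word distributions."""
    v1 = v1 / np.abs(v1).sum()       # L1-normalize both vectors
    v2 = v2 / np.abs(v2).sum()
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()
```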
And S63, screening out the key frame images with the similarity meeting the first similarity threshold as candidate key frame images.
In step S63, the screening value for the similarity between the current frame image and the key frame images in the second search range is progressively reduced. That is, the screening threshold is first set high, for example 0.9, and it is determined whether any key frame image's similarity to the current frame image satisfies this condition; if not, the condition is relaxed, for example to 0.8, and the determination is repeated. The threshold is lowered in this way, but never below the first similarity threshold, i.e. the minimum similarity value between the current frame image and the key frame images in the second search range; in this application the first similarity threshold may be set to 0.3. In other words, during screening, the similarity of a candidate cannot fall below this threshold; if the similarities of all key frame images in the second search range are smaller than the first similarity threshold, the process directly proceeds to step S8 to reposition the next frame image.
Otherwise, the key frame images whose similarity is not less than the current screening value are selected as candidate key frame images.
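A sketch of the descending-threshold screening just described, using the 0.9 starting value and 0.3 first similarity threshold from the text; the 0.1 step size is an assumption.

```python
# Relax the similarity requirement until candidates appear, but never
# drop below the first similarity threshold.

def screen_candidates(similarities, first_threshold=0.3, start=0.9, step=0.1):
    """similarities: dict of key-frame id -> similarity to the current frame.
    Returns candidate ids, or [] if screening fails (proceed to step S8)."""
    t = start
    while t >= first_threshold:
        candidates = [kf for kf, s in similarities.items() if s >= t]
        if candidates:
            return candidates
        t = round(t - step, 10)      # avoid floating-point drift
    return []
```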
And S64, matching the candidate frame image with the current frame image to obtain a key frame image matched with the current frame image.
After candidate key frame images whose similarity to the current frame image satisfies the first similarity threshold are obtained, the candidate key frame images must be matched with the current frame image. As shown in fig. 3, this comprises the following sub-steps:
s641 extracts feature descriptors in the current frame image and all candidate frame images, respectively.
When feature matching is performed between two frame images, the distance between the feature descriptors of corresponding feature points must be calculated, and whether the matching succeeds is judged from this distance. In a specific embodiment, the feature descriptors in all candidate key frame images need to be extracted separately.
And S642, respectively judging whether the distances between the feature descriptors in the current frame image and the feature descriptors in all the candidate frame images meet a preset threshold value.
In the present application, the criterion on the distance between feature descriptors may be determined in several ways:
1. The single threshold method: a threshold is given, and when the distance between the feature descriptors of two feature points is smaller than the threshold, the match is judged successful.
2. The nearest neighbor method: a smaller threshold is given, the candidate with the minimum descriptor distance to the feature point is found, and when that distance is smaller than the threshold, the match is judged successful.
3. The nearest neighbor ratio method: the two candidate points with the nearest and second-nearest descriptor distances to the feature point are selected; when the two candidates are clearly separated, i.e. the ratio of the nearest distance to the second-nearest distance is smaller than a certain threshold, the nearest candidate is taken as the matching point.
Of course, in other embodiments other matching manners may be adopted, which is not further limited herein (a sketch of the nearest neighbor ratio method follows below). If it is determined that the distances between the feature descriptors in the current frame image and those in all candidate frame images do not satisfy the preset threshold, the process proceeds to step S8 and the next frame image is repositioned.
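As referenced above, a sketch of the nearest neighbor ratio method (criterion 3) with OpenCV's brute-force matcher on ORB-style binary descriptors; the 0.75 ratio is Lowe's customary choice and an assumption here, not a value from the patent.

```python
# Ratio-test matching between the current frame's descriptors and one
# candidate key frame's descriptors.
import cv2

def match_descriptors(desc_current, desc_candidate, ratio=0.75):
    """Return matches whose nearest distance is clearly below the
    second-nearest, i.e. unambiguous correspondences."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)          # Hamming distance suits ORB
    pairs = bf.knnMatch(desc_current, desc_candidate, k=2)
    return [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
```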
And S643, acquiring a key frame image matched with the current frame image.
And S7, obtaining the current pose of the camera through the matched key frame images.
After the key frame image matching the current frame image is obtained, the pose of the camera can be further obtained through a PnP (Perspective-n-Point) algorithm, completing the repositioning.
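A minimal sketch of this pose recovery step using OpenCV's RANSAC-based PnP solver; the camera intrinsics K and the 2D-3D correspondences are assumed to come from the matched key frame and its associated map points.

```python
# Recover the camera pose from 2D-3D correspondences with RANSAC PnP.
import numpy as np
import cv2

def relocalize_pose(pts_3d, pts_2d, K):
    """pts_3d: Nx3 map points, pts_2d: Nx2 pixels; returns (R, t) or None."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts_3d, dtype=np.float64),
        np.asarray(pts_2d, dtype=np.float64),
        K, distCoeffs=None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
    return R, tvec
```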
It is to be noted that steps S1-S2 and S8 are not essential to the implementation of the present application, and may be modified or omitted by those skilled in the art according to the actual use situation.
In the above embodiment, by combining the distance information from the robot to the UWB base station, the search range of the key frame image is narrowed, the search range of the subsequent bag-of-words model can be reduced, and the bag-of-words model does not need to search the first search range (the whole map), thereby reducing the search time and improving the subsequent matching efficiency.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of a robot repositioning device according to the present application. As shown in fig. 4, the apparatus includes a processor 11 and a memory 12, and the processor 11 is connected to the memory 12.
The processor 11 is configured to judge whether the current frame image matches a key frame image stored in the memory 12 within the first search range; if not, to judge whether the current frame image contains distance information of the robot, wherein the distance information is provided by an ultra-wideband base station installed on the robot; if so, to obtain, by combining the distance information, a second search range of the key frame images that is smaller than the first search range; to judge whether a matching key frame image exists for the current frame image in the second search range; and, if the matching is successful, to obtain the current pose of the camera through the matched key frame image.
In the technical solution adopted by this application, a single Ultra Wide Band (UWB) base station is used to assist in enhancing the robustness of machine-vision positioning. In an embodiment of this application, the UWB base station can be integrated into the charging pile of the robot, avoiding the installation of multiple UWB base station facilities and making the movement of the robot more convenient. Of course, in other embodiments the UWB base station may also be integrated into other components associated with the robot, which is not further limited herein. The UWB base station provides absolute distance information from the robot to the base station, so the search range during visual relocation can be narrowed, increasing relocation speed and reliability.
The processor 11 may also be referred to as a CPU (Central Processing Unit). The processor 11 may be an integrated circuit chip having signal processing capabilities. The processor 11 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The processor in the apparatus may respectively execute the corresponding steps in the method embodiments, and thus details are not repeated here, and please refer to the description of the corresponding steps above.
In the above embodiment, by combining the distance information from the robot to the UWB base station, the search range of the key frame image is narrowed, the search range of the subsequent bag-of-words model can be reduced, and the bag-of-words model does not need to search the first search range (the whole map), thereby reducing the search time and improving the subsequent matching efficiency.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a storage device according to the present application. The storage device of the present application stores a program file 21 capable of implementing all of the methods described above. The program file 21 may be stored in the storage device in the form of a software product and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage device includes media capable of storing program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or terminal devices such as a computer, a server, a mobile phone, or a tablet.
In summary, the present application provides a robot relocation method, apparatus, and storage apparatus that combine the distance information from the robot to the UWB base station to narrow the search range of the key frame images and reduce the search range of the subsequent bag-of-words model, so that the bag-of-words model does not need to search the whole first search range (the whole map), thereby shortening the search time and improving the subsequent matching efficiency.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method of robot repositioning, the method comprising:
judging whether the current frame image is matched with a key frame image stored in a first search range;
if not, judging whether the current frame image contains distance information of the robot or not, wherein the distance information is provided by an ultra-wideband base station installed on the robot;
if so, combining the distance information to obtain a second search range of the key frame image, wherein the second search range is smaller than the first search range;
judging whether the current frame image has a matched key frame image in the second search range;
and if the matching is successful, obtaining the current pose of the camera through the matched key frame image.
2. The method of claim 1, wherein the distance information is an absolute distance of the robot from the ultra-wideband base station.
3. The method of claim 1, wherein before the judging whether the current frame image matches a key frame image stored in the first search range, the method comprises:
extracting feature point descriptors in all the saved key frame images;
and discretizing the set of all feature point descriptors to obtain a visual dictionary.
4. The method of claim 3, wherein the determining whether the current frame image has a matching key frame image in the second search range comprises:
converting the current frame image into a bag-of-words vector through the visual dictionary;
respectively calculating the similarity of the bag-of-word vector of the current frame image and the bag-of-word vector of the key frame image in the second search range;
screening out the key frame images with the similarity not less than a first similarity threshold value as candidate key frame images;
and matching the candidate frame image with the current frame image to obtain a key frame image matched with the current frame image.
5. The method of claim 4, wherein the first similarity threshold is a minimum similarity value between the current frame image and the key frame image in the second search range.
6. The method of claim 4, wherein the matching the candidate frame image and the current frame image to obtain a key frame image matched with the current frame image comprises:
respectively extracting feature descriptors in the current frame image and all the candidate frame images;
respectively judging whether the distances between the feature descriptors in the current frame image and the feature descriptors in all the candidate frame images meet a preset threshold value;
and if so, acquiring a key frame image matched with the current frame image.
7. The method according to claim 1, wherein if the current frame image is judged to match the stored key frame image, the current pose of the camera is obtained through the matched key frame image.
8. A robot repositioning device, characterized in that the device comprises a processor and a memory, the processor being connected with the memory; the processor is used for judging whether the current frame image matches a key frame image stored in the memory in the first search range; if not, judging whether the current frame image contains distance information of the robot, wherein the distance information is provided by an ultra-wideband base station installed on the robot; if so, combining the distance information to obtain a second search range of the key frame images, wherein the second search range is smaller than the first search range; judging whether the current frame image has a matched key frame image in the second search range; and if the matching is successful, obtaining the current pose of the camera through the matched key frame image.
9. The apparatus of claim 8, wherein the distance information is an absolute distance of the robot from the ultra-wideband base station.
10. A storage device in which a program file capable of implementing the method according to any one of claims 1 to 7 is stored.
CN201810689594.XA 2018-06-28 2018-06-28 Robot repositioning method and device and storage device Active CN110727265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810689594.XA CN110727265B (en) 2018-06-28 2018-06-28 Robot repositioning method and device and storage device


Publications (2)

Publication Number Publication Date
CN110727265A 2020-01-24
CN110727265B (en) 2022-09-23

Family

ID=69216724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810689594.XA Active CN110727265B (en) 2018-06-28 2018-06-28 Robot repositioning method and device and storage device

Country Status (1)

Country Link
CN (1) CN110727265B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130216098A1 (en) * 2010-09-17 2013-08-22 Tokyo Institute Of Technology Map generation apparatus, map generation method, moving method for moving body, and robot apparatus
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 The method for relocating of a kind of Indoor Robot and device
CN106885574A (en) * 2017-02-15 2017-06-23 北京大学深圳研究生院 A kind of monocular vision robot synchronous superposition method based on weight tracking strategy
CN107292949A (en) * 2017-05-25 2017-10-24 深圳先进技术研究院 Three-dimensional rebuilding method, device and the terminal device of scene
CN107179080A (en) * 2017-06-07 2017-09-19 纳恩博(北京)科技有限公司 The localization method and device of electronic equipment, electronic equipment, electronic positioning system
CN107369183A (en) * 2017-07-17 2017-11-21 广东工业大学 Towards the MAR Tracing Registration method and system based on figure optimization SLAM

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李同 (LI Tong) et al., "Research on SLAM loop-closure detection based on the ORB bag-of-words model", 《信息通信》 (Information & Communication) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340887A (en) * 2020-02-26 2020-06-26 Oppo广东移动通信有限公司 Visual positioning method and device, electronic equipment and storage medium
CN111340887B (en) * 2020-02-26 2023-12-29 Oppo广东移动通信有限公司 Visual positioning method, visual positioning device, electronic equipment and storage medium
CN112710299A (en) * 2020-12-04 2021-04-27 深圳市优必选科技股份有限公司 Repositioning method, repositioning device, terminal equipment and storage medium
CN112710299B (en) * 2020-12-04 2024-05-17 深圳市优必选科技股份有限公司 Repositioning method, repositioning device, terminal equipment and storage medium
CN112697151A (en) * 2020-12-24 2021-04-23 北京百度网讯科技有限公司 Method, apparatus and storage medium for determining initial point of mobile robot
CN112697151B (en) * 2020-12-24 2023-02-21 北京百度网讯科技有限公司 Method, apparatus, and storage medium for determining initial point of mobile robot
CN112699266A (en) * 2020-12-30 2021-04-23 视辰信息科技(上海)有限公司 Visual map positioning method and system based on key frame correlation
CN113048978A (en) * 2021-02-01 2021-06-29 苏州澜途科技有限公司 Mobile robot repositioning method and mobile robot
CN113048978B (en) * 2021-02-01 2023-10-20 苏州澜途科技有限公司 Mobile robot repositioning method and mobile robot
CN113297259A (en) * 2021-05-31 2021-08-24 深圳市优必选科技股份有限公司 Robot and environment map construction method and device thereof
WO2022252482A1 (en) * 2021-05-31 2022-12-08 深圳市优必选科技股份有限公司 Robot, and environment map construction method and apparatus therefor

Also Published As

Publication number Publication date
CN110727265B (en) 2022-09-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant