WO2022121640A1 - Robot relocation method and apparatus, robot, and readable storage medium - Google Patents

Robot relocation method and apparatus, robot, and readable storage medium

Info

Publication number
WO2022121640A1
WO2022121640A1 · PCT/CN2021/131147
Authority
WO
WIPO (PCT)
Prior art keywords
estimated pose
pose
robot
preset
relocation
Prior art date
Application number
PCT/CN2021/131147
Other languages
English (en)
French (fr)
Inventor
郭睿
刘志超
何婉君
Original Assignee
深圳市优必选科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市优必选科技股份有限公司
Publication of WO2022121640A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20 Instruments for performing navigational calculations
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Definitions

  • The present application relates to the field of artificial intelligence, and in particular to a robot relocation method and apparatus, a robot, and a readable storage medium.
  • In large navigation scenarios such as supermarkets, airports, offices and campuses, highly complex environments with many dynamic objects, repetitive scenes, and open or long narrow areas are common. Single-sensor solutions, such as monocular or multi-camera vision alone or single-line or multi-line lasers alone, and simple multi-sensor fusion solutions, such as monocular/multi-camera vision plus a single-line laser or monocular vision plus a multi-line laser, are limited by their own field of view or range and by their feature description and large-volume data processing capabilities, and therefore often struggle to provide a relocation service that is both highly accurate and highly robust.
  • the present application proposes a robot relocation method, device, robot and readable storage medium.
  • An embodiment of the present application proposes a method for relocating a robot, the method comprising:
  • Performing the multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras includes:
  • if the maximum inlier match count is smaller than a preset matching-number threshold, the loop-closure retrieval information corresponding to the image frame is set to empty;
  • if the loop-closure retrieval information corresponding to all image frames is not entirely empty, the laser keyframe pose corresponding to the first loop-closure index is taken as the first estimated pose.
  • The first estimated pose is optimized using each piece of loop-closure retrieval information and the z-th image frame acquired by each synchronized camera.
  • the robot relocation method described in the embodiment of the present application further includes:
  • The yaw angle of the first estimated pose is compensated using the difference between the odometry reading corresponding to the current image frame and the odometry reading corresponding to the image frame used to determine the first estimated pose.
  • the multi-line laser relocation according to the first estimated pose includes:
  • Taking the position coordinates corresponding to the first estimated pose as the center and a preset first distance value as the radius, each neighboring laser keyframe is acquired using the multi-line laser;
  • taking the position corresponding to the best adjacent pose as the center and a preset second distance value as the radius, each adjacent laser keyframe corresponding to the second distance value is acquired using the multi-line laser;
  • the second distance value is smaller than the first distance value;
  • if the minimum mean square error is greater than or equal to a preset mean-square-error threshold, the multi-line laser relocation fails;
  • if the minimum mean square error is smaller than the threshold, the multi-line laser relocation succeeds, and the compensated best adjacent pose is taken as the second estimated pose.
  • Performing the preset second number of compensations of the yaw angle corresponding to the best adjacent pose with each adjacent laser keyframe includes:
  • at the p-th compensation, the yaw angle corresponding to the best adjacent pose is compensated using (p-1) times a preset angle constant, where 1 ≤ p ≤ P and P is the preset second number of compensations;
  • the estimated pose corresponding to the p-th compensation is corrected using the relative change pose, until the preset second number of compensations is completed.
  • determining the relocation mode of the robot according to whether there is a preset initial pose of the robot in the two-dimensional grid map includes:
  • if there is a preset initial robot pose, the relocation mode is a local relocation mode;
  • if there is no preset initial robot pose, the relocation mode is a global relocation mode.
  • Selecting the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode includes:
  • in the local relocation mode, when the distance between the position of the first estimated pose and the position of the second estimated pose is smaller than a preset distance threshold and the absolute value of the difference between the navigation angle of the first estimated pose and the navigation angle of the second estimated pose is smaller than a preset angle-difference threshold, the second estimated pose is selected as the estimated pose to be corrected;
  • if only the second estimated pose is successfully acquired, the second estimated pose is selected as the estimated pose to be corrected;
  • in the global relocation mode, when the absolute value of the difference between the navigation angle of the first estimated pose and the navigation angle of the second estimated pose is smaller than
  • the preset angle-difference threshold, the second estimated pose is selected as the estimated pose to be corrected;
  • if only the first estimated pose is successfully acquired, the first estimated pose is selected as the estimated pose to be corrected.
  • Yet another embodiment of the present application provides a robot relocation apparatus, the apparatus comprising:
  • a relocation mode determination module, configured to determine the relocation mode of the robot according to whether there is a preset initial robot pose in the two-dimensional grid map;
  • a first estimated pose determination module, configured to perform, during in-place rotation of the robot, multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, to determine the first estimated pose of the robot;
  • a second estimated pose determination module, configured to perform multi-line laser relocation according to the first estimated pose, to determine the second estimated pose of the robot;
  • an estimated-pose-to-be-corrected selection module, configured to select the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode;
  • an estimated pose correction module, configured to iteratively correct the selected estimated pose using a relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected.
  • the embodiments of the present application relate to a robot, including a memory and a processor, where the memory is used to store a computer program, and the computer program executes the robot relocation method described in the embodiments of the present application when the computer program runs on the processor.
  • the embodiments of the present application relate to a readable storage medium, which stores a computer program, and the computer program executes the robot relocation method described in the embodiments of the present application when the computer program runs on a processor.
  • The robot relocation method disclosed in the present application includes: determining a relocation mode of the robot according to whether there is a preset initial robot pose in the two-dimensional grid map; during in-place rotation of the robot, performing multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, to determine a first estimated pose of the robot; performing multi-line laser relocation according to the first estimated pose, to determine a second estimated pose of the robot; selecting an estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode; and iteratively correcting the selected estimated pose using a relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than a preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than a preset angle covariance threshold corresponding to the estimated pose to be corrected.
  • The technical solution of the present application combines multi-camera vision and a multi-line laser to relocate the robot, making the relocation of the robot more accurate and more robust.
  • FIG. 1 is a schematic flowchart of a robot relocation method proposed by an embodiment of the present application;
  • FIG. 2 is a schematic layout of a multi-camera rig and a multi-line laser on the rigid-body structure of a robot proposed by an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of a robot performing multi-camera visual relocation using a multi-camera rig according to an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of another robot performing multi-camera visual relocation using a multi-camera rig according to an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of yet another robot performing multi-camera visual relocation using a multi-camera rig according to an embodiment of the present application;
  • FIG. 6 is a schematic flowchart of a robot performing multi-line laser relocation using a multi-line laser according to an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of a robot yaw-angle compensation method proposed by an embodiment of the present application;
  • FIG. 8 is a schematic flowchart of a method for a robot to select the estimated pose to be corrected according to an embodiment of the present application.
  • 1 - robot relocation apparatus; 10 - relocation mode determination module; 20 - first estimated pose determination module; 30 - second estimated pose determination module; 40 - estimated-pose-to-be-corrected selection module; 50 - estimated pose correction module.
  • The robot relocation method disclosed in the present application takes a four-camera rig and a single multi-line lidar as an example. As shown in FIG. 2, the rectangular body in the figure represents the rigid-body structure on which the four-camera rig and the multi-line lidar are mounted; one camera is installed on each of the front, rear, left and right sides.
  • The four cameras may be arranged on the horizontal plane through the center of the rigid body, and the cameras are time-synchronized.
  • A multi-line lidar is installed on top of the rectangular body, at the center position directly above the rigid body, and can scan the field of view horizontally through 360 degrees.
  • The rigid-body structure in the figure schematically represents the robot.
  • The robot may be a humanoid robot, a cylindrical floor-sweeping robot, or a robot of any other structure.
  • The shape of the robot is not limited here.
  • The present application includes two modes: global relocation and local relocation.
  • The distinction between global relocation and local relocation depends on whether there is a user-specified position input.
  • The preconditions of the present application mainly include the following three points:
  • The simultaneous localization and mapping (SLAM) process of the environment map has been successfully completed by fusing the multi-line laser with multi-camera vision; the laser keyframe point-cloud features and the corresponding pose information have been successfully obtained; and
  • the two-dimensional grid navigation map of the application scenario has been successfully generated by compressing the 3D point cloud.
  • In the SLAM process, the robot starts from an unknown location in an unknown environment, localizes its own position and attitude by repeatedly observing map features (such as wall corners, pillars, etc.) during its motion, and then incrementally builds a map based on its own position, thereby achieving simultaneous localization and mapping.
  • The visual keyframes are bound one-to-one to the laser keyframes; the two-dimensional feature information of the multi-camera visual keyframes has been successfully obtained; and a multi-camera sparse visual feature map of the application scenario has been successfully generated by mapping and optimization, combining the bound laser keyframe poses and the relative pose relationship between the cameras and the laser.
  • Each visual image frame must be synchronously bound to one frame of chassis odometry information, which is used for moving-machine
  • pose estimation when the rotation stops; each visual image frame refers to a set of image data composed of the images of all cameras at the same moment.
  • The visual features may be any artificial corner features such as Harris, ORB, SIFT and SURF.
  • The multi-camera visual relocation algorithm and the multi-line laser relocation algorithm are invoked to determine the estimated pose, and the relocation correction algorithm then iteratively corrects the estimated pose until the iteratively corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected.
  • FIG. 1 shows a robot relocation method including the following steps:
  • S10: Determine the relocation mode of the robot according to whether there is a preset initial robot pose in the two-dimensional grid map.
  • For the preset initial robot pose, the user may manually enter a coordinate position in the two-dimensional grid map together with a yaw angle of the robot as the initial pose, or may directly select a grid cell in the two-dimensional grid map and set the yaw angle of the robot, thereby determining the initial pose.
  • The robot pose includes the coordinate position and the yaw angle.
  • If there is a preset initial robot pose in the two-dimensional grid map, the relocation mode is the local relocation mode; if there is no preset initial robot pose in the two-dimensional grid map, the relocation mode is the global relocation mode.
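As a minimal sketch of this decision, assuming the preset initial pose is simply absent (None) when the user gave no input; the names are illustrative:

```python
from typing import Optional, Tuple

Pose2D = Tuple[float, float, float]  # (x, y, yaw) on the 2D grid map

def determine_relocation_mode(initial_pose: Optional[Pose2D]) -> str:
    """Local mode when a user-preset initial pose exists, else global."""
    return "local" if initial_pose is not None else "global"
```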
  • The preset first number of synchronized cameras form a multi-camera rig; the preferred preset first number may be 4.
  • Cameras for acquiring environment images can be installed at the front, rear, left and right of the robot, the four cameras acquiring environment images synchronously. It can be understood that the environment images acquired synchronously by the four cameras can cover the environment around the robot with, in theory, no visual blind spot, which helps the robot perform multi-camera visual relocation from the image frames acquired by the four cameras.
  • The first estimated pose of the robot is thereby determined.
  • S30: Perform multi-line laser relocation according to the first estimated pose to determine the second estimated pose of the robot.
  • The multi-line lidar is arranged at the center position directly above the robot and can scan the field of view in all directions with the robot as the center, capturing all obstacles around the robot.
  • Multi-line laser relocation may be performed according to the first estimated pose determined by the multi-camera visual relocation, to determine the second estimated pose of the robot.
  • S40: Select the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode.
  • It should be understood that both the multi-camera visual relocation and the multi-line laser relocation may fail; in the different relocation modes, the estimated pose to be corrected is selected from the first estimated pose and the second estimated pose according to whether each of them succeeded.
  • When the relocation mode is the local relocation mode:
  • if both the first estimated pose and the second estimated pose are successfully acquired, the second estimated pose is selected as the estimated pose to be corrected when the distance between the position of the first estimated pose and the position of the second estimated pose is smaller than a preset distance threshold and the absolute value of the difference between the navigation angle of the first estimated pose and the navigation angle of the second estimated pose is smaller than a preset angle-difference threshold; if only the second estimated pose is successfully acquired, the second estimated pose is selected as the estimated pose to be corrected;
  • When the relocation mode is the global relocation mode:
  • if both the first estimated pose and the second estimated pose are successfully acquired, the second estimated pose is selected as the estimated pose to be corrected when the absolute value of the difference between the navigation angles of the two estimated poses is smaller than the preset angle-difference threshold; if only the first estimated pose is successfully acquired, the first estimated pose is selected as the estimated pose to be corrected.
  • S50: Iteratively correct the selected estimated pose using the relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected.
  • The relocation correction algorithm may be adaptive Monte Carlo Localization (AMCL).
  • The AMCL correction algorithm may perform multiple consecutive rounds of particle resampling and optimization to make the position covariance and the angle covariance converge, until the iteratively corrected position covariance converges below the preset position covariance threshold corresponding to the estimated pose to be corrected
  • and the iteratively corrected angle covariance converges below the preset angle covariance threshold corresponding to the estimated pose to be corrected.
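A hedged sketch of this termination condition follows. Here amcl_update() stands in for one round of AMCL particle resampling and optimization and is not a real library call, and the convergence test (comparing successive covariances) is an assumption about what "converges" means operationally. Note that the thresholds depend on which estimated pose was selected; the text later presets 1/1.08, 0.25/0.11 and 0.1/0.031 for the different cases.

```python
def iterative_correction(pose, pos_cov_thresh, ang_cov_thresh, max_iters=200):
    """Run hypothetical AMCL updates until both covariances have converged
    and dropped below their preset thresholds."""
    prev_pos, prev_ang = float("inf"), float("inf")
    for _ in range(max_iters):
        pose, pos_cov, ang_cov = amcl_update(pose)  # hypothetical helper
        pos_ok = abs(pos_cov - prev_pos) < 1e-6 and pos_cov < pos_cov_thresh
        ang_ok = abs(ang_cov - prev_ang) < 1e-6 and ang_cov < ang_cov_thresh
        if pos_ok and ang_ok:
            return pose
        prev_pos, prev_ang = pos_cov, ang_cov
    return pose  # best effort if convergence was not reached
```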
  • The method disclosed in this embodiment thus determines the relocation mode from whether a preset initial robot pose exists in the two-dimensional grid map, obtains the first and second estimated poses by multi-camera visual relocation and multi-line laser relocation, selects the estimated pose to be corrected according to the relocation mode, and iteratively corrects it with the relocation correction algorithm until both the position covariance and the angle covariance converge below the preset thresholds corresponding to the estimated pose to be corrected.
  • This embodiment shows that the multi-camera visual relocation method of the robot includes the following steps:
  • Taking a four-camera rig as an example, each camera in the rig is a synchronized camera, all cameras running in synchrony; j indexes the cameras.
  • i numbers the image frames captured over one full rotation of the robot.
  • Using a feature extraction algorithm, the corresponding image features and image descriptors can be extracted from each image frame.
  • S22: Perform loop-closure retrieval in the corresponding loop-closure database using the image features and image descriptors, to determine the inlier match count between each image frame and each loop-closure candidate frame in the loop-closure database.
  • The loop-closure database pre-stores loop-closure candidate frames, each with corresponding pose information.
  • From the inlier match counts, the loop-closure candidate frame most similar to each image frame is determined, and the pose of the robot can then be estimated from the pose information of that most similar candidate frame.
  • S23: Judge whether the maximum inlier match count is smaller than a preset matching-number threshold.
  • The matching-number threshold may be 15, or a larger value may be selected. It should be understood that if the threshold is too small, the localization error of the robot may be large, while if it is too large,
  • the success rate of the multi-camera visual localization process may be low.
  • If the maximum inlier match count is smaller than the threshold, step S24 is executed; if it is greater than or equal to the threshold, step S25 is executed.
  • S25: Determine the first loop-closure index of the loop-closure frame corresponding to the maximum inlier match count, and determine the loop-closure retrieval information corresponding to each image frame from the loop-closure retrieval information corresponding to the first loop-closure index.
  • The loop-closure retrieval information corresponding to an image frame generally includes the loop-closure index of the loop-closure frame corresponding to the maximum inlier match count, the two-dimensional features of that loop-closure frame and their descriptors, the inlier matching relationship between the image frame and that loop-closure frame, the inlier match count, and the pose bound to that loop-closure frame.
  • If not all retrieval information is empty, step S28 is executed.
  • The first loop-closure index is the loop-closure index of the loop-closure frame corresponding to the maximum inlier match count, and the laser keyframe pose corresponding to the first loop-closure index may be taken as the first estimated pose.
  • The first position covariance threshold and the first angle covariance threshold corresponding to the first estimated pose may be preset to 1 and 1.08, respectively.
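A minimal sketch of this retrieval-and-threshold step, assuming a hypothetical match_inliers() that geometrically verifies descriptor matches and returns an inlier count; the database record layout is likewise illustrative.

```python
MATCH_THRESHOLD = 15  # preset matching-number threshold given in the text

def retrieve_loop_closure(descriptors, loop_db):
    """Return the retrieval info for one image frame, or None (empty)."""
    best_idx, best_inliers = None, 0
    for idx, candidate in enumerate(loop_db):
        inliers = match_inliers(descriptors, candidate.descriptors)  # hypothetical
        if inliers > best_inliers:
            best_idx, best_inliers = idx, inliers
    if best_inliers < MATCH_THRESHOLD:
        return None  # retrieval info set to empty (step S24)
    return {
        "loop_index": best_idx,          # first loop-closure index (step S25)
        "inlier_count": best_inliers,
        "pose": loop_db[best_idx].pose,  # laser keyframe pose bound to the frame
    }
```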
  • The multi-camera visual relocation method of the robot further includes the following steps:
  • S281: Determine the z-th image frame, captured by the t-th synchronized camera, that yields the maximum inlier match count, together with its corresponding loop-closure retrieval information.
  • S283: Judge whether the second loop-closure index corresponding to the maximum inlier match count of each synchronized camera other than the t-th synchronized camera equals the first loop-closure index.
  • If the second loop-closure index equals the first loop-closure index, step S284 is executed; if, for a synchronized camera other than the t-th synchronized camera,
  • the second loop-closure index corresponding to the maximum inlier match count is not equal to the first loop-closure index, steps S285 to S286 are executed.
  • S284: Retain the loop-closure retrieval information corresponding to the z-th image frame of each synchronized camera.
  • S286: Optimize the first estimated pose using each piece of loop-closure retrieval information and the z-th image frames captured by the synchronized cameras.
  • Each piece of loop-closure retrieval information and the z-th image frames captured by the synchronized cameras are used to set up an overdetermined system of equations, which is solved for the optimal pose matrix or spatial point coordinates, thereby optimizing the first estimated pose.
  • The multi-camera visual relocation method of the robot further includes the following steps:
  • S213: Compensate the yaw angle of the first estimated pose using the difference between the odometry reading corresponding to the current image frame and the odometry reading corresponding to the image frame used to determine the first estimated pose.
  • When the rotation of the robot stops and the multi-camera visual localization has succeeded, the last image frame captured by the multi-camera rig and the odometry reading corresponding to that frame are obtained synchronously.
  • M0 denotes the total number of image frames captured by each synchronized camera.
  • This embodiment shows that the multi-line laser relocation method of the robot includes the following steps:
  • The preset first distance value may be 5 m. Using a kd-tree fast retrieval algorithm, with the position coordinates corresponding to the first estimated pose as the center and the preset first distance value as the radius, each neighboring laser keyframe is acquired using the multi-line laser.
  • S33: Determine the pose corresponding to the neighboring laser keyframe closest to the first estimated pose as the best adjacent pose.
  • Taking the position corresponding to the best adjacent pose as the center and the preset second distance value as the radius, each adjacent laser keyframe corresponding to the second distance value is acquired using the multi-line laser, the second distance value being smaller than the first distance value.
  • The preset second distance value is smaller than the first distance value and may be 3 m.
  • With the position coordinates corresponding to the best adjacent pose as the center and the preset second distance value as the radius, each adjacent laser keyframe is acquired using the multi-line laser. It can be understood that the adjacent laser keyframes obtained with the second distance value as the radius differ from the neighboring laser keyframes obtained with the first distance value as the radius.
  • Each adjacent laser keyframe is used to perform the preset second number of compensations of the yaw angle corresponding to the best adjacent pose, and the ICP matching algorithm is used to determine the mean square error corresponding to each compensation result.
  • S36: Determine the minimum mean square error among the ICP matching mean square errors corresponding to the adjacent laser keyframes.
  • If the minimum mean square error is greater than or equal to the preset mean-square-error threshold, the multi-line laser relocation fails; if it is smaller than the threshold, the relocation succeeds and step S39 is executed.
  • The second position covariance threshold and the second angle covariance threshold corresponding to the second estimated pose may be preset to 0.1 and 0.031, respectively.
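A condensed sketch of this pipeline under stated assumptions: radius_search(), compensate_and_icp() and the keyframe fields are placeholders, and the 5 m / 3 m radii and 0.1 MSE threshold are the example values from the text.

```python
import math

R1, R2 = 5.0, 3.0      # example first/second distance values (metres)
MSE_THRESHOLD = 0.1    # example mean-square-error threshold

def multiline_laser_relocate(first_pose, keyframes):
    """Return the second estimated pose, or None if relocation fails."""
    near = radius_search(keyframes, first_pose.xy, R1)     # hypothetical
    if not near:
        return None                                        # no neighbours found
    best = min(near, key=lambda kf: math.dist(kf.pose.xy, first_pose.xy))
    adjacent = radius_search(keyframes, best.pose.xy, R2)  # second, tighter pass
    # Each adjacent keyframe drives P yaw compensations; keep the ICP result
    # with the smallest mean square error.
    results = [compensate_and_icp(best.pose, kf) for kf in adjacent]  # hypothetical
    if not results:
        return None
    pose, mse = min(results, key=lambda r: r[1])
    return pose if mse < MSE_THRESHOLD else None
```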
  • This embodiment shows that the yaw-angle compensation in the multi-line laser relocation method of the robot includes the following steps:
  • S351: At the p-th compensation, compensate the yaw angle corresponding to the best adjacent pose using (p-1) times a preset angle constant, where 1 ≤ p ≤ P and P is the preset second number of compensations.
  • The angle constant is derived from a subdivision number N; at the preferred value N = 8 the yaw angle is compensated fairly well, while if N is too large the complexity of the algorithm increases excessively and the computation takes a long time.
  • S352: Map the laser point-cloud information corresponding to the estimated pose of the p-th compensation into the map coordinate frame, to determine the point-cloud image corresponding to that estimated pose.
  • Each pose has corresponding laser point-cloud information;
  • mapping the laser point-cloud information corresponding to the estimated pose of the p-th compensation into the map coordinate frame
  • yields the point-cloud image corresponding to that estimated pose.
  • The pre-built map is constructed from multiple laser keyframes.
  • Once the estimated pose of the p-th compensation is determined, the laser keyframes around that pose at map-building time can be extracted.
  • The local laser point-cloud information, in the map coordinate frame, of the preset number of laser keyframes before and after that pose is used to determine the local matching submap corresponding to the estimated pose of the p-th compensation.
  • For example, the preset number may be 15 laser keyframes before and 15 after the estimated pose of the p-th compensation; together with the laser keyframe of that estimated pose itself, these 31 laser keyframes determine the local matching submap corresponding to the estimated pose of the p-th compensation.
  • S354: Perform ICP matching between the point-cloud image corresponding to the estimated pose of the p-th compensation and the corresponding local matching submap, and compute the mean square error and the relative change pose of the ICP matching of the p-th compensation.
  • S355: Judge whether the mean square error of the ICP matching of the p-th compensation is greater than or equal to a preset mean-square-error threshold.
  • The mean-square-error threshold may take the value 0.1. If the mean square error of the ICP matching of the p-th compensation is greater than or equal to the preset threshold, the p-th compensation is invalid and may be abandoned; if it is smaller than the preset threshold, steps S356 to S358 are executed.
  • With the relative change pose denoted ΔT
  • and the estimated pose of the p-th compensation denoted T', the correction result is Tp = T'·ΔT or Tp = ΔT·T'.
  • S358: Judge whether p is greater than the preset second number.
  • If p is greater than the preset second number, the yaw-angle compensation is complete; if p is less than or equal to the preset second number, steps S351 to S358 are repeated until p is greater than the preset second number and the yaw-angle compensation is complete.
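The loop below sketches steps S351 to S358 end to end. It assumes P = N = 8 rounds (the text presets N = 8 but leaves the value of P open), and project_to_map(), submap_around() and icp() are illustrative placeholders; icp() is assumed to return the matching mean square error together with the relative change pose ΔT.

```python
import math

N = 8                      # preferred subdivision from the text
DTHETA = 2 * math.pi / N   # preset angle constant
MSE_THRESHOLD = 0.1        # example threshold from the text

def yaw_compensation_rounds(best_pose, scan, keyframes, P=N):
    pose = best_pose
    for p in range(1, P + 1):                      # S351: the p-th compensation
        trial = best_pose.with_yaw(best_pose.yaw + (p - 1) * DTHETA)
        cloud = project_to_map(scan, trial)        # S352 (hypothetical)
        submap = submap_around(keyframes, trial)   # S353 (hypothetical)
        mse, delta_T = icp(cloud, submap)          # S354 (hypothetical)
        if mse < MSE_THRESHOLD:                    # S355: otherwise round invalid
            pose = trial.compose(delta_T)          # S356: Tp = T' * dT
    return pose
```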
  • A robot relocation apparatus 1 includes: a relocation mode determination module 10, a first estimated pose determination module 20, a second estimated pose determination module 30, an estimated-pose-to-be-corrected selection module 40 and an estimated pose correction module 50.
  • The relocation mode determination module 10 is configured to determine the relocation mode of the robot according to whether there is a preset initial robot pose in the two-dimensional grid map; the first estimated pose determination module 20 is configured to perform, during in-place rotation of the robot,
  • multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, to determine the first estimated pose of the robot;
  • the second estimated pose determination module 30 is configured to perform multi-line laser relocation according to the first estimated pose, to determine the second estimated pose of the robot;
  • the estimated-pose-to-be-corrected selection module 40 is configured to select, according to the relocation mode,
  • the estimated pose to be corrected from the first estimated pose and the second estimated pose; the estimated pose correction module 50 is configured to iteratively correct the selected estimated pose using a relocation correction algorithm until the iteratively
  • corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected.
  • The first estimated pose determination module 20 includes:
  • an image feature and descriptor extraction unit, configured to extract the corresponding image features and image descriptors from each image frame, where a frame is the i-th image captured by the j-th synchronized camera;
  • a first inlier match count determination unit, configured to perform loop-closure retrieval in the corresponding loop-closure database using the image features and image descriptors, to determine the inlier match count between each image frame and each loop-closure candidate frame in the loop-closure database;
  • a first loop-closure retrieval information setting unit, configured to set the loop-closure retrieval information corresponding to an image frame to empty if the maximum inlier match count is smaller than the preset matching-number threshold,
  • and configured to determine, if the maximum inlier match count is greater than or equal to the matching-number threshold, the first loop-closure index of the loop-closure frame corresponding to the maximum inlier match count and
  • to determine the loop-closure retrieval information corresponding to the image frame from the loop-closure retrieval information corresponding to the first loop-closure index;
  • a visual relocation failure judging unit, configured to decide that the multi-camera visual relocation fails when the loop-closure retrieval information corresponding to all image frames is empty;
  • a visual relocation success judging unit, configured to take the laser keyframe pose corresponding to the first loop-closure index as the first estimated pose when the loop-closure retrieval information corresponding to all image frames is not entirely empty.
  • The first estimated pose determination module 20 further includes:
  • a target image frame determination unit, configured to determine the z-th image frame, captured by the t-th synchronized camera, that yields the maximum inlier match count, together with its corresponding loop-closure retrieval information;
  • a second inlier match count determination unit, configured to determine the inlier match counts between the z-th image frames captured by the synchronized cameras other than the t-th synchronized camera and each loop-closure candidate frame in the loop-closure database;
  • a loop-closure retrieval information updating unit, configured to update, if the second loop-closure index corresponding to the maximum inlier match count of a synchronized camera other than the t-th synchronized camera is not equal to the first loop-closure index,
  • the corresponding loop-closure retrieval information with the retrieval result corresponding to that second loop-closure index;
  • a first estimated pose optimization unit, configured to optimize the first estimated pose using each piece of loop-closure retrieval information and the z-th image frames captured by the synchronized cameras.
  • The first estimated pose determination module 20 further includes:
  • an odometry acquisition unit, configured to acquire the odometry reading corresponding to each image frame;
  • a current odometry determination unit, configured to determine the odometry reading corresponding to the current image frame when the robot finishes rotating;
  • a first yaw-angle compensation unit, configured to compensate the yaw angle of the first estimated pose using the difference between the odometry reading corresponding to the current image frame and the odometry reading corresponding to the image frame used to determine the first estimated pose.
  • The second estimated pose determination module 30 includes:
  • a neighboring laser keyframe acquisition unit, configured to acquire each neighboring laser keyframe using the multi-line laser, with the position coordinates corresponding to the first estimated pose as the center and the preset first distance value as the radius;
  • an adjacent keyframe distance calculation unit, configured to separately calculate the distance between the first estimated pose and the pose corresponding to each neighboring laser keyframe;
  • a best adjacent pose determination unit, configured to determine the pose corresponding to the neighboring laser keyframe closest to the first estimated pose as the best adjacent pose;
  • an adjacent laser keyframe acquisition unit, configured to acquire, using the multi-line laser, each adjacent laser keyframe corresponding to the preset second distance value, with the position corresponding to the best adjacent pose as the center and the second distance value as the radius, the second distance value being smaller than the first distance value;
  • a second yaw-angle compensation unit, configured to perform the preset second number of compensations of the yaw angle corresponding to the best adjacent pose using each adjacent laser keyframe, and to determine the mean square error of the ICP matching corresponding to each compensation result;
  • a minimum mean square error determination unit, configured to determine the minimum mean square error among the ICP matching mean square errors corresponding to the adjacent laser keyframes;
  • a multi-line laser relocation failure judging unit, configured to decide that the multi-line laser relocation fails if the minimum mean square error is greater than or equal to the preset mean-square-error threshold;
  • a multi-line laser relocation success judging unit, configured to decide, if the minimum mean square error is smaller than the preset mean-square-error threshold, that the multi-line laser relocation succeeds and to take the compensated best adjacent pose as the second estimated pose.
  • The second yaw-angle compensation unit includes:
  • an estimated pose initial compensation subunit, configured to compensate the yaw angle corresponding to the best adjacent pose using (p-1) times the preset angle constant at the p-th compensation, where 1 ≤ p ≤ P and P is the preset second number of compensations;
  • a point-cloud image determination subunit, configured to map the laser point-cloud information corresponding to the estimated pose of the p-th compensation into the map coordinate frame, to determine the point-cloud image corresponding to the estimated pose of the p-th compensation;
  • a local matching submap determination subunit, configured to extract the local laser point-cloud information, in the map coordinate frame, of the preset number of laser keyframes before and after the estimated pose of the p-th compensation at the time the map was built, so as to
  • determine, from each piece of local laser point-cloud information, the local matching submap corresponding to the estimated pose of the p-th compensation;
  • a mean square error and relative change pose determination subunit, configured to perform ICP matching between the point-cloud image corresponding to the estimated pose of the p-th compensation and the corresponding local matching submap, and to compute the mean square error and the relative change pose of the ICP matching of the p-th compensation;
  • a compensation invalidity judging subunit, configured to decide that the p-th compensation is invalid if the mean square error of the ICP matching of the p-th compensation is greater than or equal to the preset mean-square-error threshold;
  • a compensation completion determination subunit, configured to correct the estimated pose of the p-th compensation using the relative change pose if the mean square error of the ICP matching of the p-th compensation is smaller than the preset mean-square-error threshold, until the preset second number of compensations is completed.
  • The relocation mode determination module 10 includes:
  • a local relocation determination unit, configured to set the relocation mode to the local relocation mode if there is a preset initial robot pose;
  • a global relocation determination unit, configured to set the relocation mode to the global relocation mode if there is no preset initial robot pose.
  • The estimated-pose-to-be-corrected selection module 40 is configured as follows:
  • when the relocation mode is the local relocation mode: if both the first estimated pose and the second estimated pose are successfully acquired, the second estimated pose is selected as the estimated pose to be corrected when the distance between the position of the first estimated pose and the position of the second estimated pose is smaller than the preset distance threshold and the absolute value of the difference between the navigation angle of the first estimated pose and the navigation angle of the second estimated pose is smaller than the preset angle-difference threshold; if only the second estimated pose is successfully acquired, the second estimated pose is selected as the estimated pose to be corrected;
  • when the relocation mode is the global relocation mode: if both the first estimated pose and the second estimated pose are successfully acquired, the second estimated pose is selected as the estimated pose to be corrected when the absolute value of the difference between the navigation angles of the two estimated poses is smaller than the preset angle-difference threshold; if only the first estimated pose is successfully acquired, the first estimated pose is selected as the estimated pose to be corrected.
  • The robot relocation apparatus 1 disclosed in this embodiment uses the relocation mode determination module 10, the first estimated pose determination module 20, the second estimated pose determination module 30, the estimated-pose-to-be-corrected selection module 40 and the estimated pose
  • correction module 50 in conjunction to execute the robot relocation method described in the above embodiments.
  • The implementations and beneficial effects involved in the above embodiments also apply in this embodiment and are not repeated here.
  • The embodiments of the present application relate to a robot including a memory and a processor, the memory storing a computer program that, when run on the processor, executes the robot relocation method described in the embodiments of the present application.
  • The embodiments of the present application relate to a readable storage medium storing a computer program that, when run on a processor, executes the robot relocation method described in the embodiments of the present application.
  • Each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • The functional modules or units in the embodiments of the present application may be integrated together to form an independent part, each module may exist independently, or two or more modules may be integrated to form an independent part.
  • If the functions are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a smartphone, a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)

Abstract

A robot relocation method and apparatus, a robot, and a readable storage medium. The method includes: performing multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, to determine a first estimated pose of the robot; performing multi-line laser relocation according to the first estimated pose, to determine a second estimated pose of the robot; selecting an estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode; and iteratively correcting the selected estimated pose using a relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than a preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than a preset angle covariance threshold corresponding to the estimated pose to be corrected. Combining multi-camera vision with a multi-line laser makes the relocation of the robot more accurate and more robust. The relocation apparatus corresponds to the relocation method; the robot executes the relocation method; and the readable storage medium stores the relocation method.

Description

Robot relocation method and apparatus, robot, and readable storage medium

Cross-reference to related applications

This application claims priority to Chinese patent application No. 2020114403272, filed with the Chinese Patent Office on December 7, 2020 and entitled "Robot relocation method and apparatus, robot, and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical field

This application relates to the field of artificial intelligence, and in particular to a robot relocation method and apparatus, a robot, and a readable storage medium.

Background

In large navigation scenarios such as supermarkets, airports, offices and campuses, highly complex environments with many dynamic objects, repetitive scenes, and open or long narrow areas are common. Single-sensor solutions that rely only on monocular or multi-camera vision, or on single-line or multi-line lasers, as well as simple multi-sensor fusion solutions such as monocular/multi-camera vision plus a single-line laser or monocular vision plus a multi-line laser, are limited by their field of view or range and by their feature description and large-volume data processing capabilities, and therefore often struggle to provide a relocation service that is both highly accurate and highly robust.
Summary

In view of the above problems, this application proposes a robot relocation method and apparatus, a robot, and a readable storage medium.

An embodiment of this application proposes a robot relocation method, the method including:

determining a relocation mode of the robot according to whether a preset initial robot pose exists in a two-dimensional grid map;

during in-place rotation of the robot, performing multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, to determine a first estimated pose of the robot;

performing multi-line laser relocation according to the first estimated pose, to determine a second estimated pose of the robot;

selecting an estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode;

iteratively correcting the selected estimated pose using a relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than a preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than a preset angle covariance threshold corresponding to the estimated pose to be corrected.
In the robot relocation method of the embodiments of this application, performing multi-camera visual relocation using the image frames acquired by the preset first number of synchronized cameras includes:

extracting, for each image frame I_i^j, the corresponding image features and image descriptors, where I_i^j denotes the i-th image frame captured by the j-th synchronized camera;

performing loop-closure retrieval in the corresponding loop-closure database using the image features and image descriptors, to determine the inlier match count between the image frame I_i^j and each loop-closure candidate frame in the loop-closure database;

if the maximum inlier match count is smaller than a preset matching-number threshold, setting the loop-closure retrieval information corresponding to the image frame I_i^j to empty;

if the maximum inlier match count is greater than or equal to the matching-number threshold, determining a first loop-closure index of the loop-closure frame corresponding to the maximum inlier match count, and determining the loop-closure retrieval information corresponding to the image frame I_i^j according to the loop-closure retrieval information corresponding to the first loop-closure index;

when the loop-closure retrieval information corresponding to all image frames is empty, the multi-camera visual relocation fails;

when the loop-closure retrieval information corresponding to all image frames is not entirely empty, taking the laser keyframe pose corresponding to the first loop-closure index as the first estimated pose.
The robot relocation method of the embodiments of this application further includes:

determining the z-th image frame, captured by the t-th synchronized camera, that yields the maximum inlier match count, together with its corresponding loop-closure retrieval information;

determining the inlier match counts between the z-th image frames captured by the synchronized cameras other than the t-th synchronized camera and each loop-closure candidate frame in the loop-closure database;

if the second loop-closure index corresponding to the maximum inlier match count of a synchronized camera other than the t-th synchronized camera is not equal to the first loop-closure index, updating the corresponding loop-closure retrieval information with the retrieval result corresponding to that second loop-closure index;

optimizing the first estimated pose using each piece of loop-closure retrieval information and the z-th image frames captured by the synchronized cameras.
Further, the robot relocation method of the embodiments of this application further includes:

acquiring the odometry reading corresponding to each image frame;

when the robot finishes rotating, determining the odometry reading corresponding to the current image frame;

compensating the yaw angle of the first estimated pose using the difference between the odometry reading corresponding to the current image frame and the odometry reading corresponding to the image frame used to determine the first estimated pose.
In the robot relocation method of the embodiments of this application, performing multi-line laser relocation according to the first estimated pose includes:

taking the position coordinates corresponding to the first estimated pose as the center and a preset first distance value as the radius, acquiring each neighboring laser keyframe using the multi-line laser;

separately calculating the distance between the first estimated pose and the pose corresponding to each neighboring laser keyframe;

determining the pose corresponding to the neighboring laser keyframe closest to the first estimated pose as the best adjacent pose;

taking the position corresponding to the best adjacent pose as the center and a preset second distance value as the radius, acquiring each adjacent laser keyframe corresponding to the second distance value using the multi-line laser, the second distance value being smaller than the first distance value;

performing a preset second number of compensations of the yaw angle corresponding to the best adjacent pose using each adjacent laser keyframe, and determining the mean square error of the ICP (Iterative Closest Point) matching corresponding to each compensation result;

determining the minimum mean square error among the ICP matching mean square errors corresponding to the adjacent laser keyframes;

if the minimum mean square error is greater than or equal to a preset mean-square-error threshold, the multi-line laser relocation fails;

if the minimum mean square error is smaller than the preset mean-square-error threshold, the multi-line laser relocation succeeds, and the compensated best adjacent pose is taken as the second estimated pose.
In the robot relocation method of the embodiments of this application, performing the preset second number of yaw-angle compensations of the best adjacent pose with each adjacent laser keyframe includes:

at the p-th compensation, compensating the yaw angle corresponding to the best adjacent pose using (p-1) times a preset angle constant, where 1 ≤ p ≤ P and P is the preset second number of compensations;

mapping the laser point-cloud information corresponding to the estimated pose of the p-th compensation into the map coordinate frame, to determine the point-cloud image corresponding to the estimated pose of the p-th compensation;

extracting the local laser point-cloud information, in the map coordinate frame, of the preset number of laser keyframes before and after the estimated pose of the p-th compensation at the time the map was built, so as to determine, from these local point clouds, the local matching submap corresponding to the estimated pose of the p-th compensation;

performing ICP matching between the point-cloud image corresponding to the estimated pose of the p-th compensation and the corresponding local matching submap, and computing the mean square error and the relative change pose of the ICP matching of the p-th compensation;

if the mean square error of the ICP matching of the p-th compensation is greater than or equal to the preset mean-square-error threshold, the p-th compensation is invalid;

if the mean square error of the ICP matching of the p-th compensation is smaller than the preset mean-square-error threshold, correcting the estimated pose of the p-th compensation using the relative change pose, until the preset second number of compensations is completed.
In the robot relocation method of the embodiments of this application, determining the relocation mode of the robot according to whether a preset initial robot pose exists in the two-dimensional grid map includes:

if a preset initial robot pose exists, the relocation mode is a local relocation mode;

if no preset initial robot pose exists, the relocation mode is a global relocation mode.

Further, selecting the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode includes:

when the relocation mode is the local relocation mode:

if both the first estimated pose and the second estimated pose are successfully acquired, selecting the second estimated pose as the estimated pose to be corrected when the distance between the position of the first estimated pose and the position of the second estimated pose is smaller than a preset distance threshold and the absolute value of the difference between the navigation angle of the first estimated pose and the navigation angle of the second estimated pose is smaller than a preset angle-difference threshold;

if only the second estimated pose is successfully acquired, selecting the second estimated pose as the estimated pose to be corrected;

when the relocation mode is the global relocation mode:

if both the first estimated pose and the second estimated pose are successfully acquired, selecting the second estimated pose as the estimated pose to be corrected when the absolute value of the difference between the navigation angle of the first estimated pose and the navigation angle of the second estimated pose is smaller than the preset angle-difference threshold;

if only the first estimated pose is successfully acquired, selecting the first estimated pose as the estimated pose to be corrected.
Yet another embodiment of this application provides a robot relocation apparatus, the apparatus including:

a relocation mode determination module, configured to determine the relocation mode of the robot according to whether a preset initial robot pose exists in the two-dimensional grid map;

a first estimated pose determination module, configured to perform, during in-place rotation of the robot, multi-camera visual relocation using the image frames acquired by the preset first number of synchronized cameras, to determine the first estimated pose of the robot;

a second estimated pose determination module, configured to perform multi-line laser relocation according to the first estimated pose, to determine the second estimated pose of the robot;

an estimated-pose-to-be-corrected selection module, configured to select the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode;

an estimated pose correction module, configured to iteratively correct the selected estimated pose using the relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected.
An embodiment of this application relates to a robot including a memory and a processor, the memory storing a computer program that, when run on the processor, executes the robot relocation method described in the embodiments of this application.

An embodiment of this application relates to a readable storage medium storing a computer program that, when run on a processor, executes the robot relocation method described in the embodiments of this application.

The robot relocation method disclosed in this application includes: determining a relocation mode of the robot according to whether a preset initial robot pose exists in a two-dimensional grid map; during in-place rotation of the robot, performing multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, to determine a first estimated pose of the robot; performing multi-line laser relocation according to the first estimated pose, to determine a second estimated pose of the robot; selecting an estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode; and iteratively correcting the selected estimated pose using a relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than a preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than a preset angle covariance threshold corresponding to the estimated pose to be corrected. By combining multi-camera vision with a multi-line laser, the technical solution of this application achieves robot relocation with higher accuracy and stronger robustness.
Brief description of the drawings

To explain the technical solutions of this application more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only certain embodiments of this application and should not be regarded as limiting its scope. In the drawings, similar components are given similar reference numerals.

Fig. 1 is a schematic flowchart of a robot relocation method proposed by an embodiment of this application;

Fig. 2 is a schematic layout of a multi-camera rig and a multi-line laser on the rigid-body structure of a robot, as proposed by an embodiment of this application;

Fig. 3 is a schematic flowchart of a robot performing multi-camera visual relocation with a multi-camera rig, as proposed by an embodiment of this application;

Fig. 4 is a schematic flowchart of another robot performing multi-camera visual relocation with a multi-camera rig, as proposed by an embodiment of this application;

Fig. 5 is a schematic flowchart of yet another robot performing multi-camera visual relocation with a multi-camera rig, as proposed by an embodiment of this application;

Fig. 6 is a schematic flowchart of a robot performing multi-line laser relocation with a multi-line laser, as proposed by an embodiment of this application;

Fig. 7 is a schematic flowchart of a robot yaw-angle compensation method proposed by an embodiment of this application;

Fig. 8 is a schematic flowchart of a method for a robot to select the estimated pose to be corrected, as proposed by an embodiment of this application.

Description of the main reference numerals:

1 - robot relocation apparatus; 10 - relocation mode determination module; 20 - first estimated pose determination module; 30 - second estimated pose determination module; 40 - estimated-pose-to-be-corrected selection module; 50 - estimated pose correction module.
Detailed description of the embodiments

The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application.

The components of the embodiments of this application, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of this application. All other embodiments obtained by those skilled in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.

Hereinafter, the terms "comprising", "having" and their cognates, as used in the various embodiments of this application, are intended only to denote particular features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be understood as excluding the existence or possible addition of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.

Furthermore, the terms "first", "second", "third", etc. are used only to distinguish between descriptions and shall not be understood as indicating or implying relative importance.

Unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by a person of ordinary skill in the art to which the various embodiments of this application belong. Such terms (such as those defined in commonly used dictionaries) shall be interpreted as having the same meaning as their contextual meaning in the relevant technical field, and shall not be interpreted in an idealized or overly formal sense unless clearly defined as such in the various embodiments of this application.
The robot relocation method disclosed in this application takes a four-camera rig and a single multi-line lidar as an example. As shown in Fig. 2, the rectangular body in the figure represents the rigid-body structure on which the four-camera rig and the multi-line lidar are mounted. One camera is mounted on each of the four sides of the rectangular body, i.e. front, rear, left and right; the four cameras may be arranged on the horizontal plane through the center of the rigid body, and the cameras are time-synchronized. A multi-line lidar is mounted on top of the rectangular body, at the center position directly above the rigid body, and can scan the field of view horizontally through 360 degrees. It can be understood that the rigid-body structure in the figure schematically represents the robot; the robot may be a humanoid robot, a cylindrical floor-sweeping robot, or a robot of any other structure, and its shape is not limited here.

This application includes two modes: global relocation and local relocation; the distinction between them depends on whether a user-specified position input exists. The preconditions of this application mainly include the following three points:

1. The simultaneous localization and mapping (SLAM) process of the environment map has been successfully completed by fusing the multi-line laser with multi-camera vision; the laser keyframe point-cloud features and the corresponding pose information have been successfully obtained; and a two-dimensional grid navigation map of the application scenario has been successfully generated by compressing the 3D point cloud. It should be understood that in the SLAM process, a robot starts from an unknown location in an unknown environment, localizes its own position and attitude by repeatedly observing map features (such as wall corners, pillars, etc.) during its motion, and then incrementally builds a map based on its own position, thereby achieving simultaneous localization and mapping.

2. The visual keyframes are bound one-to-one to the laser keyframes; the two-dimensional feature information of the multi-camera visual keyframes has been successfully obtained; and a multi-camera sparse visual feature map of the application scenario has been successfully generated by mapping and optimization, combining the poses of the bound laser keyframes and the relative pose relationship between the cameras and the laser.

3. During relocation, the robot to be localized must rotate in place at least one full turn, to raise the success rate of visual relocation; and each visual image frame must be synchronously bound to one frame of chassis odometry information, which is used for the accurate estimation of the mobile machine's pose when the rotation stops. Here each visual image frame refers to a set of image data composed of the images of all cameras at the same moment.

Before robot relocation is started, it must be ensured that the 3D laser keyframe poses, point-cloud information, two-dimensional grid navigation map information, multi-camera visual keyframe two-dimensional features, and the corresponding sparse point-cloud information described in the preconditions have all been successfully loaded for the application scenario. The visual features here may be any artificial corner features such as Harris, ORB, SIFT or SURF. After localization is started, it is first checked whether the user has entered a specified pose in the grid navigation map; if so, the current system state is set to the local relocation mode, otherwise it is set to the global relocation mode. The mobile machine then spontaneously rotates one full turn in place and returns to its starting pose. During the rotation, the multi-camera visual relocation algorithm and the multi-line laser relocation algorithm are invoked to determine estimated poses, after which the relocation correction algorithm iteratively corrects the estimated pose until the iteratively corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected.
Embodiment 1

Referring to Fig. 1, this embodiment shows a robot relocation method including the following steps:

S10: Determine the relocation mode of the robot according to whether a preset initial robot pose exists in the two-dimensional grid map.

For the preset initial robot pose, the user may manually enter a coordinate position in the two-dimensional grid map together with a yaw angle of the robot as the initial pose, or may directly select a grid cell in the two-dimensional grid map and set the yaw angle of the robot, thereby determining the initial pose. It can be understood that a robot pose includes a coordinate position and a yaw angle.

Further, if the two-dimensional grid map contains a preset initial robot pose, the relocation mode is the local relocation mode; if not, the relocation mode is the global relocation mode.

S20: During in-place rotation of the robot, perform multi-camera visual relocation using the image frames acquired by the preset first number of synchronized cameras, to determine the first estimated pose of the robot.

The preset first number of synchronized cameras form a multi-camera rig; the preferred preset first number may be 4, with cameras for capturing environment images mounted on the front, rear, left and right of the robot, the four cameras capturing environment images synchronously. It can be understood that the environment images captured synchronously by the four cameras can cover the surroundings of the robot with, in theory, no visual blind spot, which helps the robot perform multi-camera visual relocation from the image frames captured by the four cameras so as to determine its first estimated pose.
S30: Perform multi-line laser relocation according to the first estimated pose, to determine the second estimated pose of the robot.

The multi-line lidar is arranged at the center position directly above the robot and can scan the field of view in all directions with the robot as the center, so that all obstacles around the robot can be captured. Multi-line laser relocation may be performed according to the first estimated pose determined by the multi-camera visual relocation, to determine the second estimated pose of the robot.

S40: Select the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode.

It should be understood that both the multi-camera visual relocation and the multi-line laser relocation may fail; in the different relocation modes, the estimated pose to be corrected may be selected from the first estimated pose and the second estimated pose according to whether each of them succeeded, as illustrated by the sketch after the following two cases.

Exemplarily, when the relocation mode is the local relocation mode:

if both the first estimated pose and the second estimated pose are successfully acquired, the second estimated pose is selected as the estimated pose to be corrected when the distance between the position of the first estimated pose and the position of the second estimated pose is smaller than the preset distance threshold and the absolute value of the difference between the navigation angle of the first estimated pose and the navigation angle of the second estimated pose is smaller than the preset angle-difference threshold; if only the second estimated pose is successfully acquired, the second estimated pose is selected as the estimated pose to be corrected.

Exemplarily, when the relocation mode is the global relocation mode:

if both the first estimated pose and the second estimated pose are successfully acquired, the second estimated pose is selected as the estimated pose to be corrected when the absolute value of the difference between the navigation angle of the first estimated pose and the navigation angle of the second estimated pose is smaller than the preset angle-difference threshold; if only the first estimated pose is successfully acquired, the first estimated pose is selected as the estimated pose to be corrected.
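The following Python sketch condenses the two decision tables above. It is illustrative only: the thresholds are the preset values the text refers to without fixing them numerically, and returning None for the unspecified failure cases is an assumption, not something the text states.

```python
import math

def select_pose_to_correct(mode, p1, p2, dist_thresh, angle_thresh):
    """p1/p2 are (x, y, yaw) tuples, or None when the corresponding
    relocation failed; mode is "local" or "global"."""
    if mode == "local":
        if p1 is not None and p2 is not None:
            close = math.hypot(p1[0] - p2[0], p1[1] - p2[1]) < dist_thresh
            aligned = abs(p1[2] - p2[2]) < angle_thresh
            return p2 if (close and aligned) else None  # assumption: undecided otherwise
        return p2  # only the second estimated pose was acquired (or None)
    else:  # global relocation mode
        if p1 is not None and p2 is not None:
            return p2 if abs(p1[2] - p2[2]) < angle_thresh else None
        return p1  # only the first estimated pose was acquired (or None)
```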
S50: Iteratively correct the selected estimated pose using the relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected.

The relocation correction algorithm may be adaptive Monte Carlo Localization (AMCL). The AMCL correction algorithm may perform multiple consecutive rounds of particle resampling and optimization to make the position covariance and the angle covariance converge, until the iteratively corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected.

The robot relocation method disclosed in this embodiment includes: determining a relocation mode of the robot according to whether a preset initial robot pose exists in the two-dimensional grid map; during in-place rotation of the robot, performing multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, to determine a first estimated pose of the robot; performing multi-line laser relocation according to the first estimated pose, to determine a second estimated pose of the robot; selecting an estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode; and iteratively correcting the selected estimated pose using a relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than the preset position covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle covariance threshold corresponding to the estimated pose to be corrected. The technical solution of this embodiment combines multi-camera vision and a multi-line laser to relocate the robot, making the relocation more accurate and more robust.
Embodiment 2

Referring to Fig. 3, this embodiment shows that the multi-camera visual relocation method of the robot includes the following steps:

S21: Extract, for each image frame I_i^j, the corresponding image features and image descriptors, where I_i^j denotes the i-th image frame captured by the j-th synchronized camera.

Taking a four-camera rig as an example, j ranges over 1 to 4 (1 ≤ j ≤ 4) and indexes the cameras; in this embodiment each camera of the rig is a synchronized camera, all cameras running in synchrony. The index i numbers the image frames captured over one full rotation of the robot.

Using a feature extraction algorithm, for example Harris, SIFT, SURF, FAST, BRIEF or ORB, the corresponding image features and image descriptors can be extracted from each image frame I_i^j.

S22: Perform loop-closure retrieval in the corresponding loop-closure database using the image features and image descriptors, to determine the inlier match count between the image frame I_i^j and each loop-closure candidate frame in the loop-closure database.

The image features and image descriptors are used for loop-closure retrieval in the corresponding loop-closure database; it can be understood that the loop-closure database pre-stores a number of loop-closure candidate frames captured by the multi-camera rig, each candidate frame having corresponding pose information.

By determining the inlier match count between the image frame I_i^j and each loop-closure candidate frame in the database, the candidate frame most similar to the image frame I_i^j can be determined, and the pose of the robot can then be estimated from the pose information of that most similar candidate frame.
S23: Judge whether the maximum inlier match count is smaller than the preset matching-number threshold.

Preferably, the matching-number threshold may be 15, or a larger value may be chosen. It should be understood that if the threshold is too small, the localization error of the robot may be large, whereas if it is too large, the success rate of the multi-camera visual localization process may be low.

If the maximum inlier match count is smaller than the preset matching-number threshold, step S24 is executed; if it is greater than or equal to the threshold, step S25 is executed.

S24: Set the loop-closure retrieval information corresponding to the image frame I_i^j to empty.

S25: Determine the first loop-closure index of the loop-closure frame corresponding to the maximum inlier match count, and determine the loop-closure retrieval information corresponding to the image frame I_i^j from the loop-closure retrieval information corresponding to the first loop-closure index.

The loop-closure retrieval information corresponding to the image frame I_i^j generally includes the loop-closure index of the loop-closure frame corresponding to the maximum inlier match count, the two-dimensional features of that loop-closure frame, their descriptors, the inlier matching relationship between the image frame and that loop-closure frame, the inlier match count, and the pose bound to the loop-closure frame corresponding to the maximum inlier match count.

S26: After the loop-closure retrieval information corresponding to all image frames has been obtained, judge whether all of it is empty.

If the loop-closure retrieval information corresponding to all image frames is empty, the multi-camera visual relocation fails; if not all of it is empty, step S28 is executed.

S27: The multi-camera visual relocation fails.

S28: Take the laser keyframe pose corresponding to the first loop-closure index as the first estimated pose.

The first loop-closure index is the loop-closure index of the loop-closure frame corresponding to the maximum inlier match count, and the laser keyframe pose corresponding to it may be taken as the first estimated pose. At this point, the first position covariance threshold and the first angle covariance threshold corresponding to the first estimated pose may be preset to 1 and 1.08, respectively.
实施例3
进一步的,本实施例,参见图4,示出了机器人的多目视觉重定位方法还包括以下步骤:
S281:确定用于获得最大内敛点匹配数目的第t个同步相机采集的第z个图像帧,以及第
Figure PCTCN2021131147-appb-000029
个图像帧对应的回环检索信息。
S282: Determine the number of inlier matches between the z-th image frame captured by each synchronized camera other than the t-th synchronized camera and each loop-closure candidate frame in the loop-closure database.
S283: Determine whether the second loop-closure index corresponding to the maximum inlier-match count of each synchronized camera other than the t-th synchronized camera equals the first loop-closure index.
If the second loop-closure index corresponding to the maximum inlier-match count of each synchronized camera other than the t-th synchronized camera equals the first loop-closure index, step S284 is executed; if it does not equal the first loop-closure index, steps S285 to S286 are executed.
S284: Retain the loop-closure retrieval information of the z-th image frame of each synchronized camera.
S285: Update the corresponding loop-closure retrieval information with the retrieval result corresponding to the corresponding second loop-closure index.
S286: Optimize the first estimated pose using each piece of loop-closure retrieval information and the z-th image frame captured by each synchronized camera.
This embodiment uses a bundle adjustment (BA) optimization algorithm: an overdetermined system of equations is built from each piece of loop-closure retrieval information and the z-th image frame of each synchronized camera, and solving it yields the optimal pose matrix or spatial point coordinates, thereby optimizing the first estimated pose. The first position-covariance threshold and the first angle-covariance threshold corresponding to the optimized first estimated pose may be preset to 0.25 and 0.11, respectively.
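The following simplified, pose-only sketch illustrates the idea of such an optimization, assuming a hypothetical project callable that models the camera rig; it is not the patented implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def pose_only_refinement(initial_pose, points_3d, observations, project):
    """Refine a pose by minimizing reprojection error.

    initial_pose : (6,) pose parameter vector (the first estimated pose)
    points_3d    : (M, 3) landmark coordinates taken from the loop frames
    observations : (M, 2) matched pixel coordinates across the cameras
    project      : callable(pose, points_3d) -> (M, 2) predicted pixels
    """
    def residuals(pose):
        return (project(pose, points_3d) - observations).ravel()

    result = least_squares(residuals, initial_pose, method="lm")
    return result.x  # optimized pose parameters
```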
Embodiment 4
Further, referring to FIG. 5, this embodiment shows that the multi-camera visual relocation method of the robot further includes the following steps:
S211: Obtain the odometry reading corresponding to each image frame.
S212: When the robot finishes rotating, determine the odometry reading corresponding to the current image frame.
S213: Compensate the yaw angle of the first estimated pose using the difference between the odometry reading corresponding to the current image frame and the odometry reading corresponding to the image frame used to determine the first estimated pose.
Exemplarily, when the robot stops rotating and the multi-camera visual localization has succeeded, the last image frame captured by the multi-camera rig and the odometry reading corresponding to that frame are obtained synchronously, denoted I_{M0}^j and od_{M0} respectively, where M0 denotes the total number of image frames captured by each synchronized camera. The angular change of the robot pose from the moment the multi-camera visual localization succeeded to the moment the rotation stopped is computed and denoted Δod, with Δod = od_{M0} − od_z, where od_{M0} denotes the yaw angle at the moment the robot stops rotating and od_z denotes the yaw angle at the moment the multi-camera visual localization succeeded. Further, Δod is used to compensate the yaw angle of the first estimated pose.
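A minimal sketch of this compensation, with hypothetical variable names and the result wrapped back into (−π, π]:

```python
import math

def compensate_yaw(first_pose_yaw, od_stop, od_success):
    """Apply the odometry yaw delta accumulated after visual relocation.

    od_stop    : odometry yaw when the robot stops rotating (od_M0)
    od_success : odometry yaw when the visual relocation succeeded (od_z)
    """
    delta_od = od_stop - od_success       # the angle change Δod
    yaw = first_pose_yaw + delta_od
    return math.atan2(math.sin(yaw), math.cos(yaw))  # wrap the result
```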
Embodiment 5
Referring to FIG. 6, this embodiment shows that the multi-line lidar relocation method of the robot includes the following steps:
S31: Using the position coordinates corresponding to the first estimated pose as the center and a preset first distance value as the radius, obtain the neighboring laser keyframes with the multi-line lidar.
The preset first distance value may be 5 m. A kd-tree fast retrieval algorithm can be used to obtain, with the multi-line lidar, the neighboring laser keyframes within the preset first distance of the position coordinates of the first estimated pose.
It should be understood that if the number of neighboring laser keyframes is 0, the multi-line lidar relocation fails.
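As an illustrative sketch, the radius search could be done with a kd-tree as follows; SciPy's cKDTree stands in here for whichever kd-tree implementation is actually used:

```python
import numpy as np
from scipy.spatial import cKDTree

def neighboring_keyframes(keyframe_positions, center_xy, radius=5.0):
    """Radius search over laser keyframe positions (first distance = 5 m)."""
    tree = cKDTree(np.asarray(keyframe_positions))  # built once per map
    indices = tree.query_ball_point(center_xy, r=radius)
    if not indices:
        return None  # zero neighbors: multi-line lidar relocation fails
    return indices   # indices of the laser keyframes within the radius
```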
S32: Compute the distances between the first estimated pose and the poses corresponding to the neighboring laser keyframes.
S33: Take the pose of the neighboring laser keyframe closest to the first estimated pose as the best neighboring pose.
S34: Using the position corresponding to the best neighboring pose as the center and a preset second distance value as the radius, obtain with the multi-line lidar the neighboring laser keyframes corresponding to the second distance value, the second distance value being smaller than the first distance value.
The preset second distance value is smaller than the first distance value and may be 3 m. The kd-tree fast retrieval algorithm is again used, with the position coordinates of the best neighboring pose as the center and the preset second distance value as the radius, to obtain the corresponding neighboring laser keyframes. It can be understood that the keyframes obtained within the second distance are distinct from those obtained within the first distance.
S35: Use each neighboring laser keyframe to compensate the yaw angle corresponding to the best neighboring pose a preset second number of times, and determine the mean squared error (MSE) of the ICP matching corresponding to each compensation result.
All neighboring laser keyframes are traversed; each keyframe is used to compensate the yaw angle of the best neighboring pose the preset second number of times, and the MSE of the ICP matching of each compensation result is determined with the ICP matching algorithm.
S36: Determine the minimum MSE among the ICP-matching MSEs corresponding to the neighboring laser keyframes.
S37: Determine whether the minimum MSE is greater than or equal to a preset MSE threshold.
If the minimum MSE is greater than or equal to the preset MSE threshold, the multi-line lidar relocation fails; if it is smaller than the threshold, the multi-line lidar relocation succeeds and step S39 is executed.
S38: The multi-line lidar relocation fails.
S39: Use the compensated best neighboring pose as the second estimated pose.
Exemplarily, the second position-covariance threshold and the second angle-covariance threshold corresponding to the second estimated pose may be preset to 0.1 and 0.031, respectively.
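Putting steps S35 to S39 together, a schematic selection of the minimum-MSE compensation might read as follows, with compensate_and_icp a hypothetical stand-in for the per-keyframe yaw compensation and ICP matching:

```python
def lidar_relocation(best_pose, neighbor_keyframes, compensate_and_icp,
                     mse_threshold=0.1):
    """Return the second estimated pose, or None if relocation fails.

    compensate_and_icp(best_pose, kf) is assumed to run the preset number of
    yaw compensations against keyframe kf and return (mse, compensated_pose).
    """
    results = [compensate_and_icp(best_pose, kf) for kf in neighbor_keyframes]
    mse, pose = min(results, key=lambda r: r[0])  # minimum MSE over keyframes
    if mse >= mse_threshold:
        return None  # multi-line lidar relocation failed
    return pose      # second estimated pose
```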
Embodiment 6
Referring to FIG. 7, this embodiment shows that the yaw-angle compensation in the multi-line lidar relocation method of the robot includes the following steps:
S351: In the p-th compensation, compensate the yaw angle corresponding to the best neighboring pose by (p−1) times a preset angle constant, with 1 ≤ p ≤ P, where P is the preset second number of compensations.
The preset angle constant may be denoted Δθ, with Δθ = 2·π/N, where π = 3.1415926 and 0 < N < 36. Preferably N = 8, with which the robot's yaw angle is compensated reasonably well; if N is too large, the complexity of the algorithm increases excessively and the computation takes longer.
Exemplarily, the yaw angle corresponding to the p-th compensation is yaw_θp = yaw_θ + (p−1)·Δθ, where yaw_θ is the yaw angle corresponding to the best neighboring pose. It can be understood that when p = 1, i.e., in the first compensation, yaw_θ1 = yaw_θ.
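An illustrative view of the candidate yaw angles generated by this rule:

```python
import math

N = 8                            # preferred value from this embodiment
DELTA_THETA = 2 * math.pi / N    # the preset angle constant Δθ

def candidate_yaws(yaw_theta, P):
    """Yaw angles tried over P compensations: yaw_θ + (p-1)·Δθ, p = 1..P."""
    return [yaw_theta + (p - 1) * DELTA_THETA for p in range(1, P + 1)]
```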
S352: Map the laser point cloud corresponding to the estimated pose of the p-th compensation into the map coordinate frame, so as to determine the point cloud image corresponding to that estimated pose.
It can be understood that the environment map database stores multiple poses in advance, each with a corresponding laser point cloud; mapping the laser point cloud of the p-th compensation's estimated pose into the map frame yields the point cloud image of that estimated pose.
S353: Extract the local laser point clouds, in the map coordinate frame, of a preset number of laser keyframes before and after the estimated pose of the p-th compensation from when the map was built, so as to determine, from the local laser point clouds, the local matching submap corresponding to that estimated pose.
It can be understood that the pre-built map is constructed from multiple laser keyframes. Once the estimated pose of the p-th compensation is determined, the local laser point clouds, in the map frame, of a preset number of keyframes before and after that pose from the original map construction can be extracted, and the local matching submap of that estimated pose is determined from them.
Exemplarily, the preset number of keyframes before and after may be 15 each: the 15 keyframes before, the 15 keyframes after, and the keyframe of the estimated pose itself, i.e., 31 laser keyframes in total, are used to determine the local matching submap of the p-th compensation's estimated pose.
S354: Perform ICP matching between the point cloud image corresponding to the estimated pose of the p-th compensation and the local matching submap corresponding to that estimated pose, and compute the MSE and the relative change pose of the p-th compensation's ICP matching.
S355: Determine whether the MSE of the p-th compensation's ICP matching is greater than or equal to the preset MSE threshold.
The MSE threshold may be 0.1. If the MSE of the p-th compensation's ICP matching is greater than or equal to the threshold, the p-th compensation is invalid and can be discarded; if it is smaller than the threshold, steps S356 to S358 are executed.
S356: Correct the estimated pose of the p-th compensation using the relative change pose.
Exemplarily, with the relative change pose denoted ΔT and the estimated pose of the p-th compensation denoted T′, the corrected result is Tp = T′·ΔT or Tp = ΔT·T′.
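A trivial sketch of this correction on 4×4 homogeneous transforms; which multiplication order applies depends on the convention of the ICP implementation in use:

```python
import numpy as np

def correct_pose(T_prime, delta_T, left_multiply=False):
    """Apply the relative change pose ΔT to the compensated estimate T'."""
    T_prime, delta_T = np.asarray(T_prime), np.asarray(delta_T)
    return delta_T @ T_prime if left_multiply else T_prime @ delta_T
```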
S357: p = p + 1.
S358: Determine whether p is greater than the preset second number.
If p is greater than the preset second number, the yaw-angle compensation is complete. If p is less than or equal to it, steps S351 to S358 are repeated until p exceeds the preset second number and the yaw-angle compensation is complete.
Embodiment 7
Referring to FIG. 8, this embodiment shows a robot relocation device 1 including: a relocation mode determination module 10, a first estimated pose determination module 20, a second estimated pose determination module 30, a to-be-corrected estimated pose selection module 40 and an estimated pose correction module 50.
The relocation mode determination module 10 is configured to determine the relocation mode of the robot according to whether a preset initial robot pose exists in the two-dimensional grid map; the first estimated pose determination module 20 is configured to, while the robot rotates in place, perform multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, so as to determine the first estimated pose of the robot; the second estimated pose determination module 30 is configured to perform multi-line lidar relocation according to the first estimated pose, so as to determine the second estimated pose of the robot; the to-be-corrected estimated pose selection module 40 is configured to select the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode; and the estimated pose correction module 50 is configured to iteratively correct the selected estimated pose with a relocation correction algorithm until the iteratively corrected position covariance converges below the preset position-covariance threshold corresponding to the estimated pose to be corrected and the iteratively corrected angle covariance converges below the corresponding preset angle-covariance threshold.
Further, the first estimated pose determination module 20 includes:
an image feature and descriptor extraction unit, configured to extract, for each image frame I_i^j, the corresponding image features F_i^j and image descriptors D_i^j, where I_i^j denotes the i-th image frame captured by the j-th synchronized camera;
a first inlier-match count determination unit, configured to perform loop-closure retrieval in the corresponding loop-closure database using the image features F_i^j and the image descriptors D_i^j, so as to determine the number of inlier matches between the image frame I_i^j and each loop-closure candidate frame in the loop-closure database;
a first loop-closure retrieval information setting unit, configured to set the loop-closure retrieval information corresponding to the image frame I_i^j to empty if the maximum inlier-match count is smaller than the preset match-count threshold;
a second loop-closure retrieval information setting unit, configured to, if the maximum inlier-match count is greater than or equal to the match-count threshold, determine the first loop-closure index of the loop-closure frame corresponding to the maximum inlier-match count, and determine the loop-closure retrieval information of the image frame I_i^j from the retrieval information corresponding to the first loop-closure index;
a visual relocation failure determination unit, configured to determine that the multi-camera visual relocation fails when the loop-closure retrieval information of all image frames is empty; and
a visual relocation success determination unit, configured to use the laser keyframe pose corresponding to the first loop-closure index as the first estimated pose when the loop-closure retrieval information of the image frames is not all empty.
Further, the first estimated pose determination module 20 further includes:
a target image frame determination unit, configured to determine the z-th image frame I_z^t captured by the t-th synchronized camera from which the maximum inlier-match count was obtained, together with the loop-closure retrieval information of the image frame I_z^t;
a second inlier-match count determination unit, configured to determine the number of inlier matches between the z-th image frame captured by each synchronized camera other than the t-th synchronized camera and each loop-closure candidate frame in the loop-closure database;
a loop-closure retrieval information update unit, configured to, if the second loop-closure index corresponding to the maximum inlier-match count of each synchronized camera other than the t-th synchronized camera does not equal the first loop-closure index, update the corresponding loop-closure retrieval information with the retrieval result corresponding to the second loop-closure index; and
a first estimated pose optimization unit, configured to optimize the first estimated pose using each piece of loop-closure retrieval information and the z-th image frame captured by each synchronized camera.
The first estimated pose determination module 20 further includes:
an odometry acquisition unit, configured to obtain the odometry reading corresponding to each image frame;
a current odometry determination unit, configured to determine the odometry reading corresponding to the current image frame when the robot finishes rotating; and
a first yaw compensation unit, configured to compensate the yaw angle of the first estimated pose using the difference between the odometry reading corresponding to the current image frame and the odometry reading corresponding to the image frame used to determine the first estimated pose.
Further, the second estimated pose determination module 30 includes:
a neighboring laser keyframe acquisition unit, configured to obtain the neighboring laser keyframes with the multi-line lidar, using the position coordinates corresponding to the first estimated pose as the center and the preset first distance value as the radius;
a neighboring keyframe distance computation unit, configured to compute the distances between the first estimated pose and the poses corresponding to the neighboring laser keyframes;
a best neighboring pose determination unit, configured to take the pose of the neighboring laser keyframe closest to the first estimated pose as the best neighboring pose;
a second-radius neighboring laser keyframe acquisition unit, configured to obtain, with the multi-line lidar, the neighboring laser keyframes corresponding to the preset second distance value, using the position corresponding to the best neighboring pose as the center and the second distance value as the radius, the second distance value being smaller than the first distance value;
a second yaw compensation unit, configured to use each neighboring laser keyframe to compensate the yaw angle corresponding to the best neighboring pose the preset second number of times, and to determine the MSE of the ICP matching corresponding to each compensation result;
a minimum MSE determination unit, configured to determine the minimum MSE among the ICP-matching MSEs corresponding to the neighboring laser keyframes;
a multi-line lidar relocation failure determination unit, configured to determine that the multi-line lidar relocation fails if the minimum MSE is greater than or equal to the preset MSE threshold; and
a multi-line lidar relocation success determination unit, configured to determine that the multi-line lidar relocation succeeds if the minimum MSE is smaller than the preset MSE threshold, and to use the compensated best neighboring pose as the second estimated pose.
Further, the second yaw compensation unit includes:
an initial estimated pose compensation subunit, configured to compensate, in the p-th compensation, the yaw angle corresponding to the best neighboring pose by (p−1) times the preset angle constant, with 1 ≤ p ≤ P, where P is the preset second number of compensations;
a point cloud image determination subunit, configured to map the laser point cloud corresponding to the estimated pose of the p-th compensation into the map coordinate frame, so as to determine the point cloud image corresponding to that estimated pose;
a local matching submap determination subunit, configured to extract the local laser point clouds, in the map coordinate frame, of the preset number of laser keyframes before and after the estimated pose of the p-th compensation from when the map was built, so as to determine, from the local laser point clouds, the local matching submap corresponding to that estimated pose;
an MSE and relative change pose determination subunit, configured to perform ICP matching between the point cloud image corresponding to the estimated pose of the p-th compensation and the local matching submap corresponding to that estimated pose, and to compute the MSE and the relative change pose of the p-th compensation's ICP matching;
an invalid compensation determination subunit, configured to determine that the p-th compensation is invalid if the MSE of its ICP matching is greater than or equal to the preset MSE threshold; and
a compensation completion determination subunit, configured to correct the estimated pose of the p-th compensation with the relative change pose if the MSE of its ICP matching is smaller than the preset MSE threshold, until the preset second number of compensations is complete.
Further, the relocation mode determination module 10 includes:
a local relocation determination unit, configured to set the relocation mode to the local relocation mode if a preset initial robot pose exists; and
a global relocation determination unit, configured to set the relocation mode to the global relocation mode if no preset initial robot pose exists.
Further, the to-be-corrected estimated pose selection module 40 is configured such that:
when the relocation mode is the local relocation mode: if both the first estimated pose and the second estimated pose are obtained successfully, the second estimated pose is selected as the estimated pose to be corrected when the distance between the positions of the first and second estimated poses is smaller than the preset distance threshold and the absolute value of the difference between their heading angles is smaller than the preset angle-difference threshold; if only the second estimated pose is obtained successfully, the second estimated pose is selected as the estimated pose to be corrected;
when the relocation mode is the global relocation mode: if both the first estimated pose and the second estimated pose are obtained successfully, the second estimated pose is selected as the estimated pose to be corrected when the absolute value of the difference between their heading angles is smaller than the preset angle-difference threshold; if only the first estimated pose is obtained successfully, the first estimated pose is selected as the estimated pose to be corrected.
Through the coordinated use of the relocation mode determination module 10, the first estimated pose determination module 20, the second estimated pose determination module 30, the to-be-corrected estimated pose selection module 40 and the estimated pose correction module 50, the robot relocation device 1 disclosed in this embodiment performs the robot relocation method described in the above embodiments; the implementations and beneficial effects involved in the above embodiments likewise apply in this embodiment and are not repeated here.
It can be understood that an embodiment of the present application relates to a robot including a memory and a processor, the memory being configured to store a computer program which, when run on the processor, performs the robot relocation method described in the embodiments of the present application.
It can be understood that an embodiment of the present application relates to a readable storage medium storing a computer program which, when run on a processor, performs the robot relocation method described in the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and structural diagrams in the drawings show the possible architectures, functions and operations of devices, methods and computer program products according to multiple embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the structural diagrams and/or flowcharts, and combinations of blocks therein, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules or units in the embodiments of the present application may be integrated to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a smartphone, a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Claims (10)

  1. A robot relocation method, comprising:
    determining a relocation mode of the robot according to whether a preset initial robot pose exists in a two-dimensional grid map;
    while the robot rotates in place, performing multi-camera visual relocation using image frames acquired by a preset first number of synchronized cameras, so as to determine a first estimated pose of the robot;
    performing multi-line lidar relocation according to the first estimated pose, so as to determine a second estimated pose of the robot;
    selecting an estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode; and
    iteratively correcting the selected estimated pose to be corrected with a relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than a preset position-covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than a preset angle-covariance threshold corresponding to the estimated pose to be corrected.
  2. The robot relocation method according to claim 1, wherein the performing multi-camera visual relocation using the image frames acquired by the preset first number of synchronized cameras comprises:
    extracting, for each image frame I_i^j, the corresponding image features F_i^j and image descriptors D_i^j, where I_i^j denotes the i-th image frame captured by the j-th synchronized camera;
    performing loop-closure retrieval in the corresponding loop-closure database using the image features F_i^j and the image descriptors D_i^j, so as to determine the number of inlier matches between the image frame I_i^j and each loop-closure candidate frame in the loop-closure database;
    if the maximum number of inlier matches is smaller than a preset match-count threshold, setting the loop-closure retrieval information corresponding to the image frame I_i^j to empty;
    if the maximum number of inlier matches is greater than or equal to the match-count threshold, determining a first loop-closure index of the loop-closure frame corresponding to the maximum inlier-match count, and determining the loop-closure retrieval information of the image frame I_i^j from the retrieval information corresponding to the first loop-closure index;
    when the loop-closure retrieval information of all image frames is empty, determining that the multi-camera visual relocation fails; and
    when the loop-closure retrieval information of the image frames is not all empty, using the laser keyframe pose corresponding to the first loop-closure index as the first estimated pose.
  3. The robot relocation method according to claim 2, further comprising:
    determining the z-th image frame I_z^t captured by the t-th synchronized camera from which the maximum inlier-match count was obtained, together with the loop-closure retrieval information of the image frame I_z^t;
    determining the number of inlier matches between the z-th image frame captured by each synchronized camera other than the t-th synchronized camera and each loop-closure candidate frame in the loop-closure database;
    if the second loop-closure index corresponding to the maximum inlier-match count of each synchronized camera other than the t-th synchronized camera does not equal the first loop-closure index, updating the corresponding loop-closure retrieval information with the retrieval result corresponding to the second loop-closure index; and
    optimizing the first estimated pose using each piece of loop-closure retrieval information and the z-th image frame captured by each synchronized camera.
  4. The robot relocation method according to claim 2 or 3, further comprising:
    obtaining the odometry reading corresponding to each image frame;
    when the robot finishes rotating, determining the odometry reading corresponding to the current image frame; and
    compensating the yaw angle of the first estimated pose using the difference between the odometry reading corresponding to the current image frame and the odometry reading corresponding to the image frame used to determine the first estimated pose.
  5. The robot relocation method according to claim 1, wherein the performing multi-line lidar relocation according to the first estimated pose comprises:
    obtaining neighboring laser keyframes with the multi-line lidar, using the position coordinates corresponding to the first estimated pose as the center and a preset first distance value as the radius;
    computing the distances between the first estimated pose and the poses corresponding to the neighboring laser keyframes;
    taking the pose of the neighboring laser keyframe closest to the first estimated pose as the best neighboring pose;
    obtaining, with the multi-line lidar, the neighboring laser keyframes corresponding to a preset second distance value, using the position corresponding to the best neighboring pose as the center and the second distance value as the radius, the second distance value being smaller than the first distance value;
    compensating the yaw angle corresponding to the best neighboring pose a preset second number of times with each neighboring laser keyframe, and determining the mean squared error (MSE) of the ICP matching corresponding to each compensation result;
    determining the minimum MSE among the ICP-matching MSEs corresponding to the neighboring laser keyframes;
    if the minimum MSE is greater than or equal to a preset MSE threshold, determining that the multi-line lidar relocation fails; and
    if the minimum MSE is smaller than the preset MSE threshold, determining that the multi-line lidar relocation succeeds, and using the compensated best neighboring pose as the second estimated pose.
  6. The robot relocation method according to claim 5, wherein the compensating of the yaw angle corresponding to the best neighboring pose a preset second number of times with each neighboring laser keyframe comprises:
    in the p-th compensation, compensating the yaw angle corresponding to the best neighboring pose by (p−1) times a preset angle constant, with 1 ≤ p ≤ P, where P is the preset second number of compensations;
    mapping the laser point cloud corresponding to the estimated pose of the p-th compensation into the map coordinate frame, so as to determine the point cloud image corresponding to the estimated pose of the p-th compensation;
    extracting the local laser point clouds, in the map coordinate frame, of a preset number of laser keyframes before and after the estimated pose of the p-th compensation from when the map was built, so as to determine, from the local laser point clouds, the local matching submap corresponding to the estimated pose of the p-th compensation;
    performing ICP matching between the point cloud image corresponding to the estimated pose of the p-th compensation and the local matching submap corresponding to that estimated pose, and computing the MSE and the relative change pose of the p-th compensation's ICP matching;
    if the MSE of the p-th compensation's ICP matching is greater than or equal to the preset MSE threshold, determining that the p-th compensation is invalid; and
    if the MSE of the p-th compensation's ICP matching is smaller than the preset MSE threshold, correcting the estimated pose of the p-th compensation with the relative change pose, until the preset second number of compensations is complete.
  7. The robot relocation method according to claim 1, wherein
    the determining of the relocation mode of the robot according to whether a preset initial robot pose exists in the two-dimensional grid map comprises:
    if a preset initial robot pose exists, the relocation mode is a local relocation mode;
    if no preset initial robot pose exists, the relocation mode is a global relocation mode;
    and the selecting of the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode comprises:
    when the relocation mode is the local relocation mode:
    if both the first estimated pose and the second estimated pose are obtained successfully, selecting the second estimated pose as the estimated pose to be corrected when the distance between the position of the first estimated pose and the position of the second estimated pose is smaller than a preset distance threshold and the absolute value of the difference between the heading angle of the first estimated pose and the heading angle of the second estimated pose is smaller than a preset angle-difference threshold;
    if only the second estimated pose is obtained successfully, selecting the second estimated pose as the estimated pose to be corrected;
    when the relocation mode is the global relocation mode:
    if both the first estimated pose and the second estimated pose are obtained successfully, selecting the second estimated pose as the estimated pose to be corrected when the absolute value of the difference between the heading angle of the first estimated pose and the heading angle of the second estimated pose is smaller than the preset angle-difference threshold;
    if only the first estimated pose is obtained successfully, selecting the first estimated pose as the estimated pose to be corrected.
  8. A robot relocation device, comprising:
    a relocation mode determination module, configured to determine the relocation mode of the robot according to whether a preset initial robot pose exists in a two-dimensional grid map;
    a first estimated pose determination module, configured to, while the robot rotates in place, perform multi-camera visual relocation using the image frames acquired by a preset first number of synchronized cameras, so as to determine a first estimated pose of the robot;
    a second estimated pose determination module, configured to perform multi-line lidar relocation according to the first estimated pose, so as to determine a second estimated pose of the robot;
    a to-be-corrected estimated pose selection module, configured to select the estimated pose to be corrected from the first estimated pose and the second estimated pose according to the relocation mode; and
    an estimated pose correction module, configured to iteratively correct the selected estimated pose to be corrected with a relocation correction algorithm until the iteratively corrected position covariance converges and is smaller than the preset position-covariance threshold corresponding to the estimated pose to be corrected, and the iteratively corrected angle covariance converges and is smaller than the preset angle-covariance threshold corresponding to the estimated pose to be corrected.
  9. A robot, comprising a memory and a processor, the memory being configured to store a computer program which, when run on the processor, performs the robot relocation method according to any one of claims 1 to 7.
  10. A readable storage medium storing a computer program which, when run on a processor, performs the robot relocation method according to any one of claims 1 to 7.
PCT/CN2021/131147 2020-12-07 2021-11-17 机器人重定位方法、装置、机器人和可读存储介质 WO2022121640A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011440327.2A CN112461230B (zh) 2020-12-07 2020-12-07 机器人重定位方法、装置、机器人和可读存储介质
CN202011440327.2 2020-12-07

Publications (1)

Publication Number Publication Date
WO2022121640A1 true WO2022121640A1 (zh) 2022-06-16

Family

ID=74801853

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131147 WO2022121640A1 (zh) 2020-12-07 2021-11-17 机器人重定位方法、装置、机器人和可读存储介质

Country Status (2)

Country Link
CN (1) CN112461230B (zh)
WO (1) WO2022121640A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112461230B (zh) * 2020-12-07 2023-05-09 优必康(青岛)科技有限公司 机器人重定位方法、装置、机器人和可读存储介质
CN113172658A (zh) * 2021-04-09 2021-07-27 北京猎户星空科技有限公司 一种机器人的定位方法、装置、设备及介质
CN113739819B (zh) * 2021-08-05 2024-04-16 上海高仙自动化科技发展有限公司 校验方法、装置、电子设备、存储介质及芯片
CN113436264B (zh) * 2021-08-25 2021-11-19 深圳市大道智创科技有限公司 基于单目多目混合定位的位姿计算方法及***
CN117804423A (zh) * 2022-09-26 2024-04-02 华为云计算技术有限公司 一种重定位方法以及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106679648B (zh) * 2016-12-08 2019-12-10 东南大学 一种基于遗传算法的视觉惯性组合的slam方法
CN107796397B (zh) * 2017-09-14 2020-05-15 杭州迦智科技有限公司 一种机器人双目视觉定位方法、装置和存储介质
CN107908185A (zh) * 2017-10-14 2018-04-13 北醒(北京)光子科技有限公司 一种机器人自主全局重定位方法及机器人
CN108303096B (zh) * 2018-02-12 2020-04-10 杭州蓝芯科技有限公司 一种视觉辅助激光定位***及方法
CN109307508B (zh) * 2018-08-29 2022-04-08 中国科学院合肥物质科学研究院 一种基于多关键帧的全景惯导slam方法
CN109141437B (zh) * 2018-09-30 2021-11-26 中国科学院合肥物质科学研究院 一种机器人全局重定位方法
CN111145251B (zh) * 2018-11-02 2024-01-02 深圳市优必选科技有限公司 一种机器人及其同步定位与建图方法和计算机存储设备

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105652871A (zh) * 2016-02-19 2016-06-08 深圳杉川科技有限公司 移动机器人的重定位方法
CN106092104A (zh) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 一种室内机器人的重定位方法及装置
WO2018112795A1 (en) * 2016-12-21 2018-06-28 Intel Corporation Large scale cnn regression based localization via two-dimensional map
EP3447448A1 (en) * 2017-07-24 2019-02-27 PerceptIn, Inc. Fault-tolerance to provide robust tracking for autonomous and non-autonomous positional awareness
CN108759844A (zh) * 2018-06-07 2018-11-06 科沃斯商用机器人有限公司 机器人重定位与环境地图构建方法、机器人及存储介质
CN109084732A (zh) * 2018-06-29 2018-12-25 北京旷视科技有限公司 定位与导航方法、装置及处理设备
CN111060101A (zh) * 2018-10-16 2020-04-24 深圳市优必选科技有限公司 视觉辅助的距离slam方法及装置、机器人
CN109556607A (zh) * 2018-10-24 2019-04-02 上海大学 一种快速处理移动机器人定位“绑架”问题的方法
CN109633664A (zh) * 2018-12-29 2019-04-16 南京理工大学工程技术研究院有限公司 基于rgb-d与激光里程计的联合定位方法
CN109579849A (zh) * 2019-01-14 2019-04-05 浙江大华技术股份有限公司 机器人定位方法、装置和机器人及计算机存储介质
CN109974704A (zh) * 2019-03-01 2019-07-05 深圳市智能机器人研究院 一种全局定位与局部定位互校准的机器人及其控制方法
CN111765888A (zh) * 2019-04-01 2020-10-13 阿里巴巴集团控股有限公司 设备定位方法、装置、电子设备及可读存储介质
CN110389348A (zh) * 2019-07-30 2019-10-29 四川大学 基于激光雷达与双目相机的定位与导航方法及装置
CN111045017A (zh) * 2019-12-20 2020-04-21 成都理工大学 一种激光和视觉融合的巡检机器人变电站地图构建方法
CN111402331A (zh) * 2020-02-25 2020-07-10 华南理工大学 基于视觉词袋和激光匹配的机器人重定位方法
CN111337943A (zh) * 2020-02-26 2020-06-26 同济大学 一种基于视觉引导激光重定位的移动机器人定位方法
CN111983639A (zh) * 2020-08-25 2020-11-24 浙江光珀智能科技有限公司 一种基于Multi-Camera/Lidar/IMU的多传感器SLAM方法
CN112461230A (zh) * 2020-12-07 2021-03-09 深圳市优必选科技股份有限公司 机器人重定位方法、装置、机器人和可读存储介质

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUOLAI JIANG, LEI YIN, SHAOKUN JIN, CHAORAN TIAN, XINBO MA, YONGSHENG OU: "A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion", APPLIED SCIENCES, vol. 9, no. 10, pages 2105, XP055740701, DOI: 10.3390/app9102105 *
YANG SHUANG: "Research on SLAM Algorithm Based on Multi-sensor Data Fusion in Complex Scenarios", COMPUTER ENGINEERING AND APPLICATIONS, HUABEI JISUAN JISHU YANJIUSUO, CN, vol. 56, no. 18, 30 September 2020 (2020-09-30), CN , XP055941281, ISSN: 1002-8331, DOI: 10.27029/d.cnki.ggdgu.2020.000661 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115326051A (zh) * 2022-08-03 2022-11-11 广州高新兴机器人有限公司 一种基于动态场景的定位方法、装置、机器人及介质
CN115290098B (zh) * 2022-09-30 2022-12-23 成都朴为科技有限公司 一种基于变步长的机器人定位方法和***
CN115290098A (zh) * 2022-09-30 2022-11-04 成都朴为科技有限公司 一种基于变步长的机器人定位方法和***
CN115375870A (zh) * 2022-10-25 2022-11-22 杭州华橙软件技术有限公司 回环检测优化方法、电子设备及计算机可读存储装置
CN115375870B (zh) * 2022-10-25 2023-02-10 杭州华橙软件技术有限公司 回环检测优化方法、电子设备及计算机可读存储装置
EP4382030A1 (en) 2022-12-07 2024-06-12 Essilor International Method and system for determining a personalized value of an optical feature of a corrective ophthalmic lens
CN117506884A (zh) * 2023-01-06 2024-02-06 奇勃(深圳)科技有限公司 机器人的视觉识别定位方法、装置、设备及存储介质
CN116155814A (zh) * 2023-04-20 2023-05-23 四川汉科计算机信息技术有限公司 一种数字战场信息控制***及传输方法、补偿方法
CN117132648A (zh) * 2023-04-28 2023-11-28 荣耀终端有限公司 一种视觉定位方法、电子设备及计算机可读存储介质
CN117291984B (zh) * 2023-11-22 2024-02-09 武汉理工大学 一种基于位姿约束的多帧描述符匹配重定位方法及***
CN117291984A (zh) * 2023-11-22 2023-12-26 武汉理工大学 一种基于位姿约束的多帧描述符匹配重定位方法及***
CN117589154A (zh) * 2024-01-19 2024-02-23 深圳竹芒科技有限公司 自移动设备的重定位方法、自移动设备和可读存储介质
CN117589154B (zh) * 2024-01-19 2024-05-24 深圳竹芒科技有限公司 自移动设备的重定位方法、自移动设备和可读存储介质
CN117761717A (zh) * 2024-02-21 2024-03-26 天津大学四川创新研究院 一种自动回环三维重建***及运行方法
CN117761717B (zh) * 2024-02-21 2024-05-07 天津大学四川创新研究院 一种自动回环三维重建***及运行方法

Also Published As

Publication number Publication date
CN112461230B (zh) 2023-05-09
CN112461230A (zh) 2021-03-09

Similar Documents

Publication Publication Date Title
WO2022121640A1 (zh) 机器人重定位方法、装置、机器人和可读存储介质
CN107909612B (zh) 一种基于3d点云的视觉即时定位与建图的方法与***
KR101532864B1 (ko) 모바일 디바이스들에 대한 평면 맵핑 및 트래킹
RU2642167C2 (ru) Устройство, способ и система для реконструкции 3d-модели объекта
CN107430686B (zh) 用于移动设备定位的区域描述文件的众包创建和更新
KR102149374B1 (ko) 로컬화 영역 설명 파일에 대한 프라이버시-민감 질의
EP3295129B1 (en) Privacy filtering of area description file prior to upload
CN109544636A (zh) 一种融合特征点法和直接法的快速单目视觉里程计导航定位方法
CN111094895B (zh) 用于在预构建的视觉地图中进行鲁棒自重新定位的***和方法
CN111127524A (zh) 一种轨迹跟踪与三维重建方法、***及装置
EP3326156B1 (en) Consistent tessellation via topology-aware surface tracking
CN114782646A (zh) 房屋模型的建模方法、装置、电子设备和可读存储介质
CN115349140A (zh) 基于多种特征类型的有效定位
CN115564639A (zh) 背景虚化方法、装置、计算机设备和存储介质
CN114463429B (zh) 机器人、地图创建方法、定位方法及介质
CN115294280A (zh) 三维重建方法、装置、设备、存储介质和程序产品
Zieliński et al. 3d dense mapping with the graph of keyframe-based and view-dependent local maps
CN116481516B (zh) 机器人、地图创建方法和存储介质
Krzysztof et al. 3D Dense Mapping with the Graph of Keyframe-Based and View-Dependent Local Maps
Zielinski et al. 3D Dense Mapping with the Graph of Keyframe-Based and View-Dependent Local Maps.
Aguilar-Gonzalez Monocular-SLAM dense mapping algorithm and hardware architecture for FPGA acceleration
Liao et al. High completeness multi-view stereo for dense reconstruction of large-scale urban scenes
CN118397213A (zh) 一种3d点云网格的处理方法及装置
CN117455990A (zh) 基于imu的关键帧追踪方法、装置、头戴式设备及介质
CN114812540A (zh) 一种建图方法、装置和计算机设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21902347; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21902347; Country of ref document: EP; Kind code of ref document: A1)