WO2022261814A1 - Method and system for simultaneous FDD and SLAM under mobile robot failure - Google Patents

Method and system for simultaneous FDD and SLAM under mobile robot failure

Info

Publication number
WO2022261814A1
Authority
WO
WIPO (PCT)
Prior art keywords
time
robot
linear velocity
sample
state mode
Prior art date
Application number
PCT/CN2021/100035
Other languages
English (en)
French (fr)
Inventor
段琢华
Original Assignee
电子科技大学中山学院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 电子科技大学中山学院 filed Critical 电子科技大学中山学院
Priority to PCT/CN2021/100035 priority Critical patent/WO2022261814A1/zh
Publication of WO2022261814A1 publication Critical patent/WO2022261814A1/zh
Priority to ZA2023/07096A priority patent/ZA202307096B/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00

Definitions

  • the invention relates to the field of robot fault diagnosis and mapping, in particular to a method and system for simultaneous FDD and SLAM under the fault of a mobile robot.
  • FDD (Fault Detection and Diagnosis) comprises two processes: detection and diagnosis.
  • Detection determines whether the system is faulty;
  • diagnosis determines the faulty component and the type of fault, that is, the behavior mode of each component.
  • Simultaneous Localization and Mapping (SLAM) is one of the key problems in autonomous navigation of mobile robots. When the left wheel encoder and/or the right wheel encoder of a mobile robot fails, how to perform concurrent localization and mapping is critical. At present, however, no scheme solves mobile-robot fault diagnosis and concurrent localization and mapping simultaneously.
  • the purpose of the present invention is to provide a method and system for simultaneous FDD and SLAM under the failure of a mobile robot.
  • the present invention provides the following scheme:
  • a method for simultaneous FDD and SLAM under mobile robot faults, including:
  • obtaining attitude samples of the robot at time t, where the distribution of the attitude samples over the state modes is determined by the empirical probability of each state mode at time t, and that empirical probability is determined from the robot's state-mode estimate at time t-1 together with the state transition probabilities; within any state mode, the attitude samples at time t follow the normal distribution of the robot attitude in that mode, and the mean of that normal distribution is determined by the corresponding state mode;
  • the estimated left and right wheel linear velocities are determined by the linear velocities measured by the left and right wheel encoders at time t, or by those encoder measurements together with the driving speeds at time t;
  • the standard deviation of the normal distribution is determined by the empirical standard deviation of the corresponding state mode;
  • the state modes comprise a normal mode, a left-wheel-encoder failure mode, a right-wheel-encoder failure mode, and a simultaneous failure mode of both encoders;
  • determining the weight of each attitude sample from the robot's attitude samples at time t, the environment point cloud data at time t, and the environment map estimate at time t-1;
  • the environment point cloud data are actual measurements;
  • calculating the estimated probability of each state mode at time t from the state mode to which each attitude sample belongs and the weight of each attitude sample, and selecting the state mode with the largest estimated probability as the state-mode estimate at time t;
  • taking the attitude sample with the largest posterior probability among the target attitude samples as the robot's pose estimate at time t, where the target attitude samples include the attitude samples corresponding to the state-mode estimate at time t;
  • determining the environment map estimate at time t from the environment point cloud data and the pose estimate at time t.
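Taken together, the claimed steps amount to one update of a mode-aware particle filter. The sketch below shows a single time step in Python; `sample_pose` and `pose_weight` are hypothetical stand-ins for the mode-conditioned sampling and the ICP-based weighting described above, not the patent's exact models.

```python
import random

MODES = ["normal", "left_enc_fail", "right_enc_fail", "both_fail"]

def fdd_slam_step(prior_mode_probs, n_samples, sample_pose, pose_weight):
    # 1. Draw pose samples, allocating them to modes by empirical probability.
    samples = []
    for _ in range(n_samples):
        mode = random.choices(MODES, weights=[prior_mode_probs[m] for m in MODES])[0]
        samples.append((mode, sample_pose(mode)))
    # 2. Weight each sample (in the patent: ICP inlier ratio against the map).
    weights = [pose_weight(pose) for _, pose in samples]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Estimated probability of each mode = sum of its samples' weights.
    mode_probs = {m: 0.0 for m in MODES}
    for (mode, _), w in zip(samples, weights):
        mode_probs[mode] += w
    best_mode = max(mode_probs, key=mode_probs.get)
    # 4. Pose estimate = best-weighted sample within the estimated mode.
    best_pose = max(
        (s for s in zip(samples, weights) if s[0][0] == best_mode),
        key=lambda s: s[1],
    )[0][1]
    return best_mode, best_pose, mode_probs
```

The map update (step S105) would follow by registering the current scan at `best_pose` into the previous map.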
  • determining the weight of each attitude sample according to the attitude sample of the robot at time t, the environment point cloud data of the robot at time t, and the environment map estimation of the robot at time t-1 specifically includes:
  • calculating the estimated probability of each state mode at time t, from the state mode to which each attitude sample belongs at time t and the weight of each attitude sample at time t, specifically includes computing

    P̂(s_t = S_k) = Σ_{i=1}^{N} I(x_t^(i) ∈ S_k) · w̃_t^(i)

  • where S_k denotes the k-th state mode and s_t the state mode at time t;
  • the indicator I(x_t^(i) ∈ S_k) is 1 when the i-th attitude sample x_t^(i) at time t belongs to state mode S_k, and 0 otherwise; w̃_t^(i) is the normalized weight of the i-th attitude sample at time t;
  • N denotes the number of attitude samples at time t.
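In code, this estimate reduces to summing the normalized sample weights per mode. A minimal sketch, with illustrative sample-to-mode assignments and weights:

```python
def mode_probabilities(sample_modes, weights):
    """P_hat(s_t = S_k) = sum over samples in S_k of the normalized weight."""
    total = sum(weights)
    probs = {}
    for mode, w in zip(sample_modes, weights):
        probs[mode] = probs.get(mode, 0.0) + w / total
    return probs

probs = mode_probabilities(
    ["S1", "S1", "S2", "S3"],   # state mode of each attitude sample
    [0.4, 0.4, 0.1, 0.1],       # raw (unnormalized) sample weights
)
best = max(probs, key=probs.get)  # state-mode estimate at time t
```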
  • the normal distribution of the robot attitude comprises a linear-velocity normal distribution and a yaw-rate normal distribution;
  • the mean of the linear-velocity normal distribution is determined by the estimated left and right wheel linear velocities in the corresponding state mode, and its standard deviation by the empirical standard deviation of that mode;
  • the mean of the yaw-rate normal distribution is determined by the estimated left and right wheel linear velocities in the corresponding state mode, and its standard deviation by the empirical standard deviation of that mode.
  • sampling the linear-velocity samples and the yaw-rate samples includes computing the mean yaw rate ω̂_t = (v̂_t^R − v̂_t^L) / W;
  • W denotes the axis length connecting the left and right wheels of the robot, v̂_t^L the estimated linear velocity of the left wheel at time t, and v̂_t^R the estimated linear velocity of the right wheel at time t. In the normal mode, the estimated linear velocity of each wheel is the linear velocity measured by that wheel's encoder. In the left-wheel-encoder failure mode, the left-wheel estimate is the driving speed of the left wheel and the right-wheel estimate is the linear velocity measured by the right-wheel encoder. In the right-wheel-encoder failure mode, the left-wheel estimate is the linear velocity measured by the left-wheel encoder and the right-wheel estimate is the driving speed of the right wheel. In the simultaneous failure mode, both estimates are the driving speeds of the respective wheels.
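The means of the two distributions follow standard two-wheel differential-drive kinematics. The exact expressions below are reconstructed from those kinematics (the patent's formula images are not reproduced in this text), with `W` the wheel-axis length:

```python
def motion_means(v_left, v_right, W):
    """Mean linear velocity and mean yaw rate of a differential-drive robot.

    v_left, v_right: estimated left/right wheel linear velocities (m/s),
    chosen per state mode as described above. W: axis length (m).
    """
    v = (v_left + v_right) / 2.0      # mean of the linear-velocity normal
    omega = (v_right - v_left) / W    # mean of the yaw-rate normal
    return v, omega
```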
  • the present invention also provides a system for simultaneous FDD and SLAM under the failure of a mobile robot, including:
  • the attitude sample acquisition module is used to acquire the attitude samples of the robot at time t; the distribution of the attitude samples over the state modes at time t is determined by the empirical probability of each state mode at time t;
  • the empirical probability of each state mode is determined from the robot's state-mode estimate at time t-1 and the state transition probabilities; within any state mode, the attitude samples at time t follow the normal distribution of the robot attitude in that mode, whose mean is determined by the estimated left and right wheel linear velocities in the corresponding state mode;
  • these estimates are determined by the linear velocities measured by the left and right wheel encoders at time t, or by those measurements together with the driving speeds; the standard deviation of the normal distribution is determined by the empirical standard deviation of the corresponding state mode; the state modes comprise a normal mode, a left-wheel-encoder failure mode, a right-wheel-encoder failure mode, and a simultaneous failure mode of both encoders;
  • the sample weight determination module is used to determine the weight of each attitude sample according to the attitude sample of the robot at time t, the environment point cloud data of the robot at time t, and the environment map estimation of the robot at time t-1; wherein, the environment point cloud data is actual measurement data;
  • the state mode estimation determination module is used to calculate the estimated probability of each state mode at time t, from the state mode to which each attitude sample belongs at time t and the weight of each attitude sample at time t, and to select the state mode with the largest estimated probability as the state-mode estimate at time t;
  • the pose estimation determination module is used to use the pose sample with the largest posterior probability in the target pose sample as the pose estimate of the robot at time t, and the target pose sample includes a pose sample corresponding to the state mode estimation at time t;
  • the environment map estimation determination module is used to determine the environment map estimation at time t according to the environment point cloud data of the robot at time t and the pose estimation of the robot at time t.
  • the sample weight determination module specifically includes:
  • the registration point pair determination unit uses the attitude sample as the initial pose estimate and applies the iterative closest point (ICP) method to determine the registration point pairs between the environment point cloud data at time t and the environment map estimate at time t-1;
  • a distance calculation unit calculates the distance between the two points of each registration point pair;
  • a weight determination unit determines the proportion of registration point pairs whose distance is smaller than a set threshold, and uses this proportion as the weight of the attitude sample at time t.
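The weighting step can be sketched as follows; a simple nearest-neighbour matcher stands in for the pairs produced by a full ICP registration, so this illustrates only the inlier-ratio weight itself:

```python
def sample_weight(scan_pts, map_pts, threshold):
    """Weight = fraction of registration point pairs closer than `threshold`.

    Each 2-D scan point is paired with its nearest map point; a real
    implementation would take the pairs from the final ICP iteration.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    nearest = [min(dist(p, q) for q in map_pts) for p in scan_pts]
    inliers = sum(1 for d in nearest if d < threshold)
    return inliers / len(nearest)
```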
  • the state mode estimation determination module specifically includes:
  • the state-mode estimation probability calculation unit computes the estimated probability of each state mode at time t as P̂(s_t = S_k) = Σ_{i=1}^{N} I(x_t^(i) ∈ S_k) · w̃_t^(i), where S_k denotes the k-th state mode, s_t the state mode at time t, I(x_t^(i) ∈ S_k) equals 1 when the i-th attitude sample x_t^(i) at time t belongs to state mode S_k and 0 otherwise, w̃_t^(i) is the normalized weight of the i-th attitude sample at time t, and N is the number of attitude samples at time t.
  • the system also includes:
  • the linear-velocity sample sampling module samples linear-velocity samples according to the empirical probability of each state mode at time t and the linear-velocity normal distribution in each state mode of the robot;
  • the yaw-rate sample sampling module samples yaw-rate samples according to the empirical probability of each state mode at time t and the yaw-rate normal distribution in each state mode of the robot;
  • the attitude sample determination module determines the attitude samples of the robot at time t from the linear-velocity samples and the yaw-rate samples;
  • the normal distribution of the robot attitude comprises a linear-velocity normal distribution and a yaw-rate normal distribution;
  • the mean of the linear-velocity normal distribution is determined by the estimated left and right wheel linear velocities in the corresponding state mode, and its standard deviation by the empirical standard deviation of that mode;
  • the mean of the yaw-rate normal distribution is determined by the estimated left and right wheel linear velocities in the corresponding state mode, and its standard deviation by the empirical standard deviation of that mode.
  • the system also includes:
  • the linear-velocity normal distribution determination module determines the mean of the linear-velocity normal distribution at time t as v̂_t = (v̂_t^L + v̂_t^R) / 2, where v̂_t^L and v̂_t^R denote the estimated linear velocities of the left and right wheels at time t. In the normal mode, both estimates are the linear velocities measured by the respective encoders. In the left-wheel-encoder failure mode, the left-wheel estimate is the driving speed of the left wheel and the right-wheel estimate is the linear velocity measured by the right-wheel encoder. In the right-wheel-encoder failure mode, the left-wheel estimate is the linear velocity measured by the left-wheel encoder and the right-wheel estimate is the driving speed of the right wheel. In the simultaneous failure mode, both estimates are the driving speeds of the respective wheels.
  • the yaw-rate normal distribution determination module determines the mean of the yaw-rate normal distribution at time t as ω̂_t = (v̂_t^R − v̂_t^L) / W, where W denotes the axis length connecting the left and right wheels of the robot.
  • the embodiment of the present invention first estimates the state mode at the current moment, then selects an accurate pose estimate within that state mode, and finally builds the map at the current moment by map matching, using the pose estimate, the environment map estimate of the previous moment, and the environment point cloud data of the current moment. Fault diagnosis and concurrent localization and mapping are thus realized simultaneously under any state mode of the mobile robot.
  • FIG. 1 is a flow chart of a method for simultaneous FDD and SLAM under a mobile robot failure provided in Embodiment 1 of the present invention
  • Fig. 2 is a schematic diagram of the system structure of simultaneous FDD and SLAM under the failure of the mobile robot provided by Embodiment 2 of the present invention.
  • this embodiment provides a method for simultaneous FDD and SLAM under a mobile robot failure.
  • the mobile robot targeted by this method is a two-wheel differential-drive robot, with encoders installed on the left and right wheels and a two-dimensional lidar installed on the robot body. The method includes the following steps:
  • Step 101 Obtain a pose sample of the robot at time t.
  • the distribution of the attitude samples over the state modes at time t is determined by the empirical probability of each state mode at time t, and this empirical probability is determined from the robot's state-mode estimate at time t-1 and the state transition probabilities.
  • the state mode is shown in Table 1, and the state transition probability can be shown in Table 2.
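Since Tables 1 and 2 are not reproduced in this text, the sketch below uses a hypothetical transition matrix of the same shape. The S1 row matches the worked example below (0.5, 0.2, 0.2, 0.1); the other rows are illustrative assumptions. The empirical probability of each mode at time t is simply the transition row of the mode estimated at time t-1:

```python
MODES = ["S1", "S2", "S3", "S4"]  # normal, left-enc fail, right-enc fail, both fail

# Hypothetical state-transition probabilities P(s_t | s_{t-1});
# the patent's actual values are given in its Table 2.
TRANSITION = {
    "S1": {"S1": 0.5, "S2": 0.2, "S3": 0.2, "S4": 0.1},
    "S2": {"S1": 0.0, "S2": 0.8, "S3": 0.0, "S4": 0.2},
    "S3": {"S1": 0.0, "S2": 0.0, "S3": 0.8, "S4": 0.2},
    "S4": {"S1": 0.0, "S2": 0.0, "S3": 0.0, "S4": 1.0},
}

def empirical_mode_probs(prev_mode_estimate):
    """Empirical probability of each state mode at time t."""
    return TRANSITION[prev_mode_estimate]
```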
  • the empirical probability of each state mode at time t is obtained, and the attitude samples are allocated according to these probabilities.
  • suppose the state mode at time t-1 is estimated to be S_1, and the empirical probabilities of the state modes S_1, S_2, S_3 and S_4 at time t are 0.5, 0.2, 0.2 and 0.1;
  • then the attitude samples belonging to state mode S_1 account for 0.5 of the samples, those belonging to S_2 for 0.2, those belonging to S_3 for 0.2, and those belonging to S_4 for 0.1.
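With N = 100 samples, this allocation can be reproduced deterministically; the largest-remainder rounding used here is one reasonable choice, not necessarily the patent's:

```python
def allocate_samples(n, mode_probs):
    """Split n attitude samples across modes in proportion to their
    empirical probabilities (largest-remainder rounding keeps the total at n)."""
    raw = {m: n * p for m, p in mode_probs.items()}
    counts = {m: int(v) for m, v in raw.items()}
    leftover = n - sum(counts.values())
    for m in sorted(raw, key=lambda m: raw[m] - counts[m], reverse=True)[:leftover]:
        counts[m] += 1
    return counts

counts = allocate_samples(100, {"S1": 0.5, "S2": 0.2, "S3": 0.2, "S4": 0.1})
# 50 samples in S1, 20 in S2, 20 in S3, 10 in S4
```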
  • the attitude sample distribution within any state mode at time t follows the normal distribution of the robot attitude in that mode, and the mean of the normal distribution is determined by the estimated left and right wheel linear velocities in the corresponding state mode;
  • these estimates are determined by the linear velocities measured by the left and right wheel encoders at time t, or by those encoder measurements together with the driving speeds, and the standard deviation of the normal distribution is determined by the empirical standard deviation of the corresponding state mode.
  • the empirical standard deviations of state modes S_1, S_2, S_3 and S_4 are σ_1, σ_2, σ_3 and σ_4, respectively.
  • the 50 attitude samples belonging to state mode S_1 follow the normal distribution N(μ_1, σ_1), the 20 samples belonging to S_2 follow N(μ_2, σ_2), the 20 samples belonging to S_3 follow N(μ_3, σ_3), and the 10 samples belonging to S_4 follow N(μ_4, σ_4).
  • the means μ_1, μ_2, μ_3, μ_4 are determined by the estimated left and right wheel linear velocities in the respective state mode; for example, μ_1 is determined by the estimated left and right wheel linear velocities in state mode S_1, and μ_2 by those in state mode S_2.
  • the estimated left and right wheel linear velocities in the different state modes are determined as follows:
  • in the normal mode, the estimated linear velocity of the robot's left wheel at time t is the linear velocity measured by the left-wheel encoder, and the estimated linear velocity of the right wheel is the linear velocity measured by the right-wheel encoder;
  • in the left-wheel-encoder failure mode, the estimated linear velocity of the left wheel is the driving speed of the left wheel, and the estimated linear velocity of the right wheel is the linear velocity measured by the right-wheel encoder;
  • in the right-wheel-encoder failure mode, the estimated linear velocity of the left wheel is the linear velocity measured by the left-wheel encoder, and the estimated linear velocity of the right wheel is the driving speed of the right wheel;
  • in the simultaneous failure mode, the estimated linear velocity of the left wheel is the driving speed of the left wheel, and the estimated linear velocity of the right wheel is the driving speed of the right wheel.
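The four cases reduce to a simple selection rule. A Python sketch; the mode names and the explicit passing of encoder readings and commanded driving speeds are illustrative:

```python
def wheel_velocity_estimates(mode, enc_left, enc_right, drive_left, drive_right):
    """Estimated left/right wheel linear velocities at time t.

    In each failure mode, the faulty encoder's reading is replaced by the
    commanded driving speed of that wheel.
    """
    v_left = drive_left if mode in ("left_fail", "both_fail") else enc_left
    v_right = drive_right if mode in ("right_fail", "both_fail") else enc_right
    return v_left, v_right
```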
  • Step 102 Determine the weight of each pose sample according to the pose sample of the robot at time t, the environment point cloud data of the robot at time t, and the environment map estimation at time t-1.
  • the environmental point cloud data in this embodiment is obtained by actually scanning and measuring the surrounding environment of the robot by laser radar.
  • step 102 specifically includes:
  • Step 103 Calculate the estimated probability of occurrence of each state mode at time t according to the state mode to which each attitude sample belongs and the weight of each attitude sample at time t, and select the state mode with the highest estimated probability as the state mode estimation at time t.
  • specifically, step 103 computes P̂(s_t = S_k) = Σ_{i=1}^{N} I(x_t^(i) ∈ S_k) · w̃_t^(i), where S_k denotes the k-th state mode, s_t the state mode at time t, I(x_t^(i) ∈ S_k) equals 1 when the i-th attitude sample x_t^(i) at time t belongs to state mode S_k and 0 otherwise, w̃_t^(i) is the normalized weight of the i-th attitude sample, and N is the number of attitude samples at time t.
  • Step 104 Use the pose sample with the highest posterior probability among the target pose samples as the pose estimation of the robot at time t, the target pose sample including the pose sample corresponding to the state mode estimation at time t.
  • Step 105 According to the environment point cloud data of the robot at time t and the pose estimation of the robot at time t, determine the environment map estimation at time t.
  • the normal distribution of the robot attitude comprises a linear-velocity normal distribution and a yaw-rate normal distribution. In one example, before step 101 the method further determines the estimated wheel linear velocities in each state mode:
  • in the normal mode, the estimated linear velocity of the left wheel at time t is the linear velocity measured by the left-wheel encoder, and the estimated linear velocity of the right wheel is the linear velocity measured by the right-wheel encoder;
  • in the left-wheel-encoder failure mode, the estimated linear velocity of the left wheel is the driving speed of the left wheel, and the estimated linear velocity of the right wheel is the linear velocity measured by the right-wheel encoder;
  • in the right-wheel-encoder failure mode, the estimated linear velocity of the left wheel is the linear velocity measured by the left-wheel encoder, and the estimated linear velocity of the right wheel is the driving speed of the right wheel;
  • in the simultaneous failure mode, the estimated linear velocity of the left wheel is the driving speed of the left wheel, and the estimated linear velocity of the right wheel is the driving speed of the right wheel.
  • the mean of the linear-velocity normal distribution is determined by the estimated left and right wheel linear velocities in the corresponding state mode, and its standard deviation by the empirical standard deviation of that mode;
  • the mean of the yaw-rate normal distribution is likewise determined by the estimated left and right wheel linear velocities, and its standard deviation by the empirical standard deviation of the corresponding state mode.
  • v_t and ω_t respectively denote the robot's linear velocity (unit: m/s) and yaw rate (unit: rad/s) at time t, and Δ denotes the sampling interval (unit: s);
  • W denotes the axis length connecting the left and right wheels of the mobile robot (unit: m);
  • r denotes the radius of the left and right wheels of the mobile robot (unit: m);
  • the inputs are the readings of the left-wheel encoder, the right-wheel encoder, and the two-dimensional lidar at time t, the lidar reading denoted M_t;
  • m_t is the environment map at time t, represented by a two-dimensional laser point cloud;
  • s_t is the state mode at time t, with the value range shown in Table 1;
  • the output is the state mode, the pose, and the map at time t.
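Given the linear velocity, yaw rate, and sampling interval defined above, a pose sample can be propagated with a first-order unicycle model. The patent's exact discretization is not reproduced in this text, so the following is a sketch under that standard assumption:

```python
import math

def propagate(x, y, theta, v, omega, dt):
    """First-order unicycle update of a pose sample over one interval dt."""
    return (
        x + v * dt * math.cos(theta),  # advance along current heading
        y + v * dt * math.sin(theta),
        theta + omega * dt,            # integrate yaw rate
    )
```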
  • each sensor has two modes: normal and faulty;
  • a mobile robot with two encoders therefore has four state modes: one normal mode and three failure modes;
  • the number of particles is set to N, and the particle set is initialized; the state mode at time t is then estimated from the weighted particle set.
  • the method uses a particle filter as the framework to realize FDD and SLAM of the mobile robot simultaneously;
  • the FDD results provide kinematic models under the different fault conditions, and a more accurate pose estimate is obtained from the kinematic model adapted to the fault;
  • the perceptual model for fault diagnosis and localization is realized by the map matching method.
  • this embodiment provides a system for simultaneous FDD and SLAM under a mobile robot failure, the system includes:
  • the attitude sample acquisition module 201 acquires the attitude samples of the robot at time t; the distribution of the attitude samples over the state modes at time t is determined by the empirical probability of each state mode at time t;
  • the empirical probability of each state mode is determined from the robot's state-mode estimate at time t-1 and the state transition probabilities; within any state mode, the attitude samples at time t follow the normal distribution of the robot attitude in that mode, whose mean is determined by the estimated left and right wheel linear velocities in the corresponding state mode;
  • these estimates are determined by the linear velocities measured by the left and right wheel encoders at time t, or by those measurements together with the driving speeds, and the standard deviation of the normal distribution is determined by the empirical standard deviation of the corresponding state mode;
  • the state modes comprise a normal mode, a left-wheel-encoder failure mode, a right-wheel-encoder failure mode, and a simultaneous failure mode of both encoders;
  • the sample weight determination module 202 determines the weight of each attitude sample from the robot's attitude samples at time t, the environment point cloud data at time t, and the environment map estimate at time t-1; the environment point cloud is measured by the robot's lidar;
  • the state mode estimation determination module 203 calculates the estimated probability of each state mode at time t, from the state mode to which each attitude sample belongs at time t and the weight of each attitude sample at time t, and selects the state mode with the largest estimated probability as the state-mode estimate at time t;
  • the pose estimation determination module 204 is used to use the pose sample with the largest posterior probability in the target pose sample as the pose estimate of the robot at time t, the target pose sample including the pose sample corresponding to the state mode estimation at time t;
  • the environment map estimation determination module 205 is configured to determine the environment map estimation at time t according to the robot environment point cloud data at time t and the pose estimation of the robot at time t.
  • the sample weight determination module 202 specifically includes:
  • the registration point pair determination unit uses the attitude sample as the initial pose estimate and applies the iterative closest point (ICP) method to determine the registration point pairs between the environment point cloud data at time t and the environment map estimate at time t-1;
  • a distance calculation unit calculates the distance between the two points of each registration point pair;
  • a weight determination unit determines the proportion of registration point pairs whose distance is smaller than a set threshold, and uses this proportion as the weight of the attitude sample at time t.
  • the state mode estimation determination module 203 specifically includes:
  • the state-mode estimation probability calculation unit computes the estimated probability of each state mode at time t as P̂(s_t = S_k) = Σ_{i=1}^{N} I(x_t^(i) ∈ S_k) · w̃_t^(i), where S_k denotes the k-th state mode, s_t the state mode at time t, I(x_t^(i) ∈ S_k) equals 1 when the i-th attitude sample x_t^(i) at time t belongs to state mode S_k and 0 otherwise, w̃_t^(i) is the normalized weight of the i-th attitude sample at time t, and N is the number of attitude samples at time t.
  • the system provided by this embodiment further includes a linear-velocity normal distribution determination module, a yaw-rate normal distribution determination module, a linear-velocity sample sampling module, a yaw-rate sample sampling module, and an attitude sample determination module. Among them:
  • the linear-velocity normal distribution determination module determines the mean of the linear-velocity normal distribution at time t as v̂_t = (v̂_t^L + v̂_t^R) / 2;
  • the yaw-rate normal distribution determination module determines the mean of the yaw-rate normal distribution at time t as ω̂_t = (v̂_t^R − v̂_t^L) / W, where W denotes the axis length connecting the left and right wheels of the robot;
  • the linear-velocity sample sampling module samples linear-velocity samples according to the empirical probability of each state mode at time t and the linear-velocity normal distribution in each state mode of the robot;
  • the yaw-rate sample sampling module samples yaw-rate samples according to the empirical probability of each state mode at time t and the yaw-rate normal distribution in each state mode of the robot;
  • the attitude sample determination module determines the attitude samples of the robot at time t from the linear-velocity samples and the yaw-rate samples.
  • the normal distribution of the robot attitude comprises a linear-velocity normal distribution and a yaw-rate normal distribution;
  • the mean of the linear-velocity normal distribution is determined by the estimated left and right wheel linear velocities in the corresponding state mode, and its standard deviation by the empirical standard deviation of that mode;
  • the mean of the yaw-rate normal distribution is determined by the estimated left and right wheel linear velocities in the corresponding state mode, and its standard deviation by the empirical standard deviation of that mode.
  • W denotes the axis length connecting the left and right wheels of the robot, v̂_t^L the estimated linear velocity of the left wheel at time t, and v̂_t^R the estimated linear velocity of the right wheel at time t. In the normal mode, both estimates are the linear velocities measured by the respective encoders. In the left-wheel-encoder failure mode, the left-wheel estimate is the driving speed of the left wheel and the right-wheel estimate is the linear velocity measured by the right-wheel encoder. In the right-wheel-encoder failure mode, the left-wheel estimate is the linear velocity measured by the left-wheel encoder and the right-wheel estimate is the driving speed of the right wheel. In the simultaneous failure mode, both estimates are the driving speeds of the respective wheels.


Abstract

A method and system for simultaneous FDD and SLAM under mobile robot failure. The method comprises: obtaining attitude samples of the robot at time t (S101); determining the weight of each attitude sample from the robot's attitude samples at time t, the environment point cloud data at time t, and the environment map estimate at time t-1 (S102); calculating the estimated probability of each state mode at time t from the state mode to which each attitude sample belongs and the weight of each attitude sample, and selecting the state mode with the largest estimated probability as the state-mode estimate at time t (S103); taking the attitude sample with the largest posterior probability among the target attitude samples as the robot's pose estimate at time t, the target attitude samples including those corresponding to the state-mode estimate at time t (S104); and determining the environment map estimate at time t from the robot's environment point cloud data and pose estimate at time t (S105). The method enables simultaneous fault diagnosis and concurrent localization and mapping under mobile robot failure.

Description

Method and system for simultaneous FDD and SLAM under mobile robot faults

Technical Field

The present invention relates to the field of robot fault diagnosis and mapping, and in particular to a method and system for simultaneous FDD and SLAM under mobile robot faults.

Background

Fault detection and diagnosis (FDD) comprises two processes: detection determines whether the system has failed, and diagnosis determines which components have failed and the fault type, i.e., the behavior mode of each component. Simultaneous localization and mapping (SLAM) is one of the key problems of autonomous navigation for mobile robots. When the left wheel encoder and/or right wheel encoder of a mobile robot fails, how to perform simultaneous localization and mapping is critical; however, no existing scheme solves mobile robot fault diagnosis and simultaneous localization and mapping at the same time.
Summary of the Invention

The object of the present invention is to provide a method and system for simultaneous FDD and SLAM under mobile robot faults.

To achieve the above object, the present invention provides the following scheme:

A method for simultaneous FDD and SLAM under mobile robot faults, comprising:

obtaining pose samples of the robot at time t; wherein the distribution probability of the pose samples over the state modes at time t is determined by the empirical probability of the robot being in each state mode at time t, and the empirical probability of each state mode at time t is determined from the robot's state-mode estimate at time t-1 and the state transition probabilities; the pose samples under any state mode at time t follow the normal distribution of the robot pose under the corresponding state mode, the mean of the normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, the estimated left and right wheel linear velocities being determined by the encoder-measured linear velocities of the left and right wheels at time t, or by the encoder-measured linear velocities together with the drive speeds, and the standard deviation of the normal distribution being determined by the empirical standard deviation of the corresponding state mode; the state modes comprise a normal mode, a left-wheel-encoder fault mode, a right-wheel-encoder fault mode, and a simultaneous left-and-right-wheel-encoder fault mode;

determining the weight of each pose sample from the robot's pose samples at time t, the robot's environment point cloud data at time t, and the robot's environment map estimate at time t-1; wherein the environment point cloud data are actual measurement data;

computing the estimated probability of each state mode at time t from the state mode to which each pose sample at time t belongs and the weight of each pose sample at time t, and selecting the state mode with the largest estimated probability as the state-mode estimate at time t;

taking the pose sample with the largest posterior probability among the target pose samples as the robot's pose estimate at time t, the target pose samples comprising the pose samples corresponding to the state-mode estimate at time t;

determining the environment map estimate at time t from the robot's environment point cloud data at time t and the robot's pose estimate at time t.
Optionally, determining the weight of each pose sample from the robot's pose samples at time t, the robot's environment point cloud data at time t, and the robot's environment map estimate at time t-1 specifically comprises:

for any pose sample at time t:

using the pose sample as the initial pose estimate, determining, with the iterative closest point method, the registered point pairs between the environment point cloud data at time t and the environment map estimate at time t-1 during registration;

computing the distance between the two points of each registered point pair;

determining the proportion of registered point pairs whose distance is below a set threshold, and using this proportion as the weight of the pose sample at time t.
Optionally, computing the estimated probability of each state mode at time t from the state mode to which each pose sample at time t belongs and the weight of each pose sample at time t specifically comprises:

computing, according to

\hat{p}(s_t = S_k) = \sum_{i=1}^{N} \tilde{w}_t^{(i)} \cdot \mathbf{1}(s_t^{(i)} = S_k),

the estimated probability \hat{p}(s_t = S_k) of each state mode at time t, where S_k denotes the k-th state mode, s_t denotes the state mode at time t, the indicator \mathbf{1}(s_t^{(i)} = S_k) is 1 when the i-th pose sample x_t^{(i)} at time t belongs to state mode S_k and 0 otherwise, \tilde{w}_t^{(i)} denotes the normalized weight of the i-th pose sample at time t, and N denotes the number of pose samples at time t.
Optionally, before obtaining the robot's pose samples at time t, the method further comprises:

sampling linear velocity samples according to the empirical probability of each state mode at time t and the linear velocity normal distribution of the robot under each state mode;

sampling yaw rate samples according to the empirical probability of each state mode at time t and the yaw rate normal distribution of the robot under each state mode;

determining the robot's pose samples at time t from the linear velocity samples and the yaw rate samples;

wherein the normal distribution of the robot pose comprises a linear velocity normal distribution and a yaw rate normal distribution, the mean of the linear velocity normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, the standard deviation of the linear velocity normal distribution being determined by the empirical standard deviation of the corresponding state mode, the mean of the yaw rate normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, and the standard deviation of the yaw rate normal distribution being determined by the empirical standard deviation of the corresponding state mode.
Optionally, before sampling the linear velocity samples and the yaw rate samples, the method further comprises:

obtaining the robot's left wheel linear velocity and right wheel linear velocity;

computing, according to

\bar{v}_t = (\hat{v}_t^{L} + \hat{v}_t^{R}) / 2,    \bar{\omega}_t = (\hat{v}_t^{R} - \hat{v}_t^{L}) / W,

the mean \bar{v}_t of the linear velocity normal distribution at time t and the mean \bar{\omega}_t of the yaw rate normal distribution at time t, where W denotes the axle length connecting the robot's left and right wheels, \hat{v}_t^{L} denotes the robot's estimated left wheel linear velocity at time t, and \hat{v}_t^{R} denotes the robot's estimated right wheel linear velocity at time t; wherein, in the normal mode, the estimated left wheel linear velocity at time t is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the left-wheel-encoder fault mode, the estimated left wheel linear velocity is the drive speed of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the right-wheel-encoder fault mode, the estimated left wheel linear velocity is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the drive speed of the right wheel; in the simultaneous left-and-right-wheel-encoder fault mode, the estimated left and right wheel linear velocities are the drive speeds of the left and right wheels, respectively.
The present invention further provides a system for simultaneous FDD and SLAM under mobile robot faults, comprising:

a pose sample obtaining module, configured to obtain the robot's pose samples at time t; wherein the distribution probability of the pose samples over the state modes at time t is determined by the empirical probability of the robot being in each state mode at time t, and the empirical probability of each state mode at time t is determined from the robot's state-mode estimate at time t-1 and the state transition probabilities; the pose samples under any state mode at time t follow the normal distribution of the robot pose under the corresponding state mode, the mean of the normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, which are determined by the encoder-measured linear velocities of the left and right wheels at time t or by the encoder-measured linear velocities together with the drive speeds, and the standard deviation of the normal distribution being determined by the empirical standard deviation of the corresponding state mode; the state modes comprise a normal mode, a left-wheel-encoder fault mode, a right-wheel-encoder fault mode, and a simultaneous left-and-right-wheel-encoder fault mode;

a sample weight determining module, configured to determine the weight of each pose sample from the robot's pose samples at time t, the robot's environment point cloud data at time t, and the robot's environment map estimate at time t-1, the environment point cloud data being actual measurement data;

a state-mode estimate determining module, configured to compute the estimated probability of each state mode at time t from the state mode to which each pose sample at time t belongs and the weight of each pose sample at time t, and to select the state mode with the largest estimated probability as the state-mode estimate at time t;

a pose estimate determining module, configured to take the pose sample with the largest posterior probability among the target pose samples as the robot's pose estimate at time t, the target pose samples comprising the pose samples corresponding to the state-mode estimate at time t;

an environment map estimate determining module, configured to determine the environment map estimate at time t from the robot's environment point cloud data at time t and the robot's pose estimate at time t.
Optionally, the sample weight determining module specifically comprises:

a registered-point-pair determining unit, configured to use the pose sample as the initial pose estimate and determine, with the iterative closest point method, the registered point pairs between the environment point cloud data at time t and the environment map estimate at time t-1 during registration;

a distance computing unit, configured to compute the distance between the two points of each registered point pair;

a weight determining unit, configured to determine the proportion of registered point pairs whose distance is below a set threshold and to use this proportion as the weight of the pose sample at time t.
Optionally, the state-mode estimate determining module specifically comprises:

a state-mode estimated probability computing unit, configured to compute, according to

\hat{p}(s_t = S_k) = \sum_{i=1}^{N} \tilde{w}_t^{(i)} \cdot \mathbf{1}(s_t^{(i)} = S_k),

the estimated probability \hat{p}(s_t = S_k) of each state mode at time t, where S_k denotes the k-th state mode, s_t denotes the state mode at time t, the indicator \mathbf{1}(s_t^{(i)} = S_k) is 1 when the i-th pose sample at time t belongs to state mode S_k and 0 otherwise, \tilde{w}_t^{(i)} denotes the normalized weight of the i-th pose sample at time t, and N denotes the number of pose samples at time t.
Optionally, the system further comprises:

a linear velocity sample sampling module, configured to sample linear velocity samples according to the empirical probability of each state mode at time t and the linear velocity normal distribution of the robot under each state mode;

a yaw rate sample sampling module, configured to sample yaw rate samples according to the empirical probability of each state mode at time t and the yaw rate normal distribution of the robot under each state mode;

a pose sample determining module, configured to determine the robot's pose samples at time t from the linear velocity samples and the yaw rate samples;

wherein the normal distribution of the robot pose comprises a linear velocity normal distribution and a yaw rate normal distribution, the mean of each being determined by the estimated left and right wheel linear velocities under the corresponding state mode, and the standard deviation of each being determined by the empirical standard deviation of the corresponding state mode.
Optionally, the system further comprises:

a linear velocity normal distribution determining module, configured to compute, according to

\bar{v}_t = (\hat{v}_t^{L} + \hat{v}_t^{R}) / 2,

the mean \bar{v}_t of the linear velocity normal distribution at time t, where \hat{v}_t^{L} denotes the robot's estimated left wheel linear velocity at time t and \hat{v}_t^{R} denotes the robot's estimated right wheel linear velocity at time t; wherein, in the normal mode, the estimated left wheel linear velocity at time t is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the left-wheel-encoder fault mode, the estimated left wheel linear velocity is the drive speed of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the right-wheel-encoder fault mode, the estimated left wheel linear velocity is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the drive speed of the right wheel; in the simultaneous left-and-right-wheel-encoder fault mode, the estimated left and right wheel linear velocities are the drive speeds of the left and right wheels, respectively;

a yaw rate normal distribution determining module, configured to compute, according to

\bar{\omega}_t = (\hat{v}_t^{R} - \hat{v}_t^{L}) / W,

the mean \bar{\omega}_t of the yaw rate normal distribution at time t, where W denotes the axle length connecting the robot's left and right wheels.
According to the specific embodiments provided by the present invention, the following technical effects are disclosed: the embodiments of the present invention first estimate the state mode at the current time, then select an accurate pose estimate under that state mode, and finally build the map at the current time with a map matching method from that pose estimate, the environment map estimate at the previous time, and the environment point cloud data at the current time. Simultaneous fault diagnosis and simultaneous localization and mapping under mobile robot faults are thereby achieved.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

Fig. 1 is a flowchart of the method for simultaneous FDD and SLAM under mobile robot faults provided in Embodiment 1 of the present invention;

Fig. 2 is a schematic structural diagram of the system for simultaneous FDD and SLAM under mobile robot faults provided in Embodiment 2 of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
Embodiment 1

Referring to Fig. 1, this embodiment provides a method for simultaneous FDD and SLAM under mobile robot faults. The method targets a two-wheel differential-drive robot with encoders mounted on the left and right wheels and a 2-D lidar mounted on the robot body, and comprises the following steps:

Step 101: obtain the robot's pose samples at time t.

The distribution probability of the pose samples over the state modes at time t is determined by the empirical probability of the robot being in each state mode at time t, which is in turn determined from the state-mode estimate at time t-1 and the state transition probabilities. The state modes are shown in Table 1, and the state transition probabilities may be as shown in Table 2.
Table 1  State modes

State mode    Faulty component
S_1           normal (no fault)
S_2           left wheel encoder fault
S_3           right wheel encoder fault
S_4           left and right wheel encoders both faulty
Table 2  p(s_t = S_k | s_{t-1} = S_j), k, j ∈ {1, 2, 3, 4}
[Transition probability matrix — an image in the original; the example below gives the row for s_{t-1} = S_1 as (0.5, 0.2, 0.2, 0.1).]
The empirical probability of each state mode at time t is obtained from the state-mode estimate at time t-1 and the state transition probabilities, and the pose samples are drawn according to this empirical probability. For example, when the state-mode estimate at time t-1 is S_1, the transition probabilities of Table 2 give empirical probabilities of 0.5, 0.2, 0.2, and 0.1 for state modes S_1, S_2, S_3, and S_4 at time t. Then, among the pose samples drawn at time t, the fraction belonging to S_1 is 0.5, the fraction belonging to S_2 is 0.2, the fraction belonging to S_3 is 0.2, and the fraction belonging to S_4 is 0.1.
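The mode sampling step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: only the S_1 row (0.5, 0.2, 0.2, 0.1) of the transition matrix is given by the example in the text, so the other rows below are placeholder assumptions.

```python
import numpy as np

# Hypothetical transition matrix p(s_t = S_k | s_{t-1} = S_j).
# Only the first row comes from the example in the text; the rest
# are illustrative assumptions.
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.2, 0.2, 0.5, 0.1],
    [0.1, 0.2, 0.2, 0.5],
])

def sample_modes(prev_mode: int, n: int, rng=None) -> np.ndarray:
    """Draw n state-mode indices (0..3) from the transition-matrix row
    selected by the previous state-mode estimate."""
    rng = rng or np.random.default_rng(0)
    return rng.choice(4, size=n, p=P[prev_mode])

modes = sample_modes(prev_mode=0, n=100)
```

With `prev_mode=0` (mode S_1 at time t-1), roughly half of the drawn samples belong to S_1, matching the worked example.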
The pose samples under any state mode at time t follow the normal distribution of the robot pose under the corresponding state mode. The mean of the normal distribution is determined by the estimated left and right wheel linear velocities under the corresponding state mode, which are determined by the encoder-measured linear velocities of the left and right wheels at time t, or by the encoder-measured linear velocities together with the drive speeds; the standard deviation of the normal distribution is determined by the empirical standard deviation of the corresponding state mode.

For example, let the empirical standard deviations of state modes S_1, S_2, S_3, and S_4 be σ_1, σ_2, σ_3, and σ_4. Then, if there are 100 pose samples at time t, 50 belong to S_1, 20 belong to S_2, 20 belong to S_3, and 10 belong to S_4. The 50 samples of S_1 follow the normal distribution (μ_1, σ_1), the 20 samples of S_2 follow (μ_2, σ_2), the 20 samples of S_3 follow (μ_3, σ_3), and the 10 samples of S_4 follow (μ_4, σ_4).

The means μ_1, μ_2, μ_3, and μ_4 are determined by the estimated left and right wheel linear velocities under the corresponding state mode; for example, μ_1 is determined by the left and right wheel linear velocity estimates under S_1, and μ_2 by the left and right wheel linear velocity estimates under S_2. The estimates under the different state modes are determined as follows:

Under S_1, the robot's estimated left wheel linear velocity at time t is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; under S_2, the estimated left wheel linear velocity is the drive speed of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; under S_3, the estimated left wheel linear velocity is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the drive speed of the right wheel; under S_4, the estimated left and right wheel linear velocities are the drive speeds of the left and right wheels, respectively.
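The mode-dependent substitution of wheel-speed estimates, and the differential-drive means they feed, can be sketched as below. The function names and mode indices are illustrative assumptions; the kinematics are the standard differential-drive relations used in the text.

```python
def wheel_speed_estimates(mode, enc_left, enc_right, drive_left, drive_right):
    """Pick the left/right wheel linear-velocity estimates for a state
    mode: a faulty encoder's measurement is replaced by the commanded
    drive speed. Mode indices (illustrative): 0 = normal,
    1 = left encoder fault, 2 = right encoder fault, 3 = both faulty."""
    v_l = drive_left if mode in (1, 3) else enc_left
    v_r = drive_right if mode in (2, 3) else enc_right
    return v_l, v_r

def pose_distribution_means(v_l, v_r, axle_w):
    """Differential-drive means: linear velocity is the wheel average,
    yaw rate is the wheel difference divided by the axle length W."""
    return (v_l + v_r) / 2.0, (v_r - v_l) / axle_w
```

For instance, under the left-encoder fault mode the left estimate falls back to the drive speed while the right estimate keeps the encoder reading.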
Step 102: from the robot's pose samples at time t, the robot's environment point cloud data at time t, and the environment map estimate at time t-1, determine the weight of each pose sample. In this embodiment, the environment point cloud data are obtained by actually scanning and measuring the robot's surroundings with the lidar.

In one example, step 102 specifically comprises:

for any pose sample at time t:

using the pose sample as the initial pose estimate, determining, with the iterative closest point method, the registered point pairs between the environment point cloud data at time t and the environment map estimate at time t-1 during registration;

computing the distance between the two points of each registered point pair;

determining the proportion of registered point pairs whose distance is below a set threshold, and using this proportion as the weight of the pose sample at time t.
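The inlier-ratio weighting of step 102 can be sketched as follows. This is a simplified illustration: it takes a scan already transformed by the (ICP-refined) pose and uses brute-force nearest-neighbour search, where a real implementation would run ICP and use a KD-tree; all names are assumptions.

```python
import numpy as np

def sample_weight(transformed_scan: np.ndarray, map_points: np.ndarray,
                  dist_thresh: float) -> float:
    """Weight of one pose sample: the fraction of scan points whose
    nearest map point lies within dist_thresh (the registered-pair
    inlier proportion described above)."""
    # pairwise distances, shape (n_scan, n_map)
    d = np.linalg.norm(transformed_scan[:, None, :] - map_points[None, :, :],
                       axis=2)
    nearest = d.min(axis=1)          # distance of each registered pair
    return float((nearest < dist_thresh).mean())
```

A pose sample that aligns the scan well with the previous map gets a weight near 1; a poorly aligned sample gets a weight near 0.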
Step 103: from the state mode to which each pose sample at time t belongs and the weight of each pose sample at time t, compute the estimated probability of each state mode at time t, and select the state mode with the largest estimated probability as the state-mode estimate at time t.

In one example, step 103 comprises:

computing, according to

\hat{p}(s_t = S_k) = \sum_{i=1}^{N} \tilde{w}_t^{(i)} \cdot \mathbf{1}(s_t^{(i)} = S_k),

the estimated probability \hat{p}(s_t = S_k) of each state mode at time t, where S_k denotes the k-th state mode, s_t denotes the state mode at time t, the indicator \mathbf{1}(s_t^{(i)} = S_k) is 1 when the i-th pose sample x_t^{(i)} at time t belongs to state mode S_k and 0 otherwise, \tilde{w}_t^{(i)} denotes the normalized weight of the i-th pose sample at time t, and N denotes the number of pose samples at time t.
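The mode-probability formula of step 103 can be sketched as a weighted count over the samples. Names and the 4-mode default are illustrative.

```python
import numpy as np

def mode_probabilities(modes: np.ndarray, weights: np.ndarray, n_modes: int = 4):
    """Estimated probability of each state mode at time t: the sum of the
    normalized sample weights over the samples belonging to that mode.
    Returns the probability vector and the argmax mode estimate."""
    w = weights / weights.sum()                       # normalize weights
    p = np.bincount(modes, weights=w, minlength=n_modes)
    return p, int(p.argmax())
```

The returned argmax index plays the role of the state-mode estimate selected at the end of step 103.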
Step 104: take the pose sample with the largest posterior probability among the target pose samples as the robot's pose estimate at time t, the target pose samples comprising the pose samples corresponding to the state-mode estimate at time t.

Step 105: from the robot's environment point cloud data at time t and the robot's pose estimate at time t, determine the environment map estimate at time t.
The normal distribution of the robot pose comprises a linear velocity normal distribution and a yaw rate normal distribution. In one example, before step 101, the method further comprises:

(1) Obtain the robot's left wheel linear velocity and right wheel linear velocity at time t.

(2) Compute, according to

\bar{v}_t = (\hat{v}_t^{L} + \hat{v}_t^{R}) / 2,    \bar{\omega}_t = (\hat{v}_t^{R} - \hat{v}_t^{L}) / W,

the mean \bar{v}_t of the linear velocity normal distribution at time t and the mean \bar{\omega}_t of the yaw rate normal distribution at time t, where \hat{v}_t^{L} denotes the robot's estimated left wheel linear velocity at time t, \hat{v}_t^{R} denotes the robot's estimated right wheel linear velocity at time t, and W denotes the axle length connecting the robot's left and right wheels.

In the normal mode, the estimated left wheel linear velocity at time t is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the left-wheel-encoder fault mode, the estimated left wheel linear velocity is the drive speed of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the right-wheel-encoder fault mode, the estimated left wheel linear velocity is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the drive speed of the right wheel; in the simultaneous left-and-right-wheel-encoder fault mode, the estimated left and right wheel linear velocities are the drive speeds of the left and right wheels, respectively.

(3) Sample linear velocity samples according to the empirical probability of each state mode at time t and the linear velocity normal distribution of the robot under each state mode.

(4) Sample yaw rate samples according to the empirical probability of each state mode at time t and the yaw rate normal distribution of the robot under each state mode.

The mean of the linear velocity normal distribution is determined by the estimated left and right wheel linear velocities under the corresponding state mode, and its standard deviation by the empirical standard deviation of the corresponding state mode; likewise, the mean of the yaw rate normal distribution is determined by the estimated left and right wheel linear velocities under the corresponding state mode, and its standard deviation by the empirical standard deviation of the corresponding state mode.

For example, suppose 100 linear velocity samples are needed and the empirical probability of state mode S_1 at time t is 0.5. Then 50 linear velocity samples belonging to S_1 must be drawn, and these 50 samples must follow the normal distribution (μ_1, σ_1), where μ_1 is obtained as above and σ_1 is the empirical linear velocity standard deviation corresponding to S_1, set from experience.

(5) Determine the robot's pose samples at time t from the linear velocity samples and the yaw rate samples.
The features involved above are now described in detail.

Let x_t = [x_t, y_t, θ_t] denote the pose of the mobile robot at time t (2-D position and heading angle);
v_t and ω_t denote the robot's linear velocity (unit: m/s) and yaw rate (unit: rad/s) at time t, and τ the sampling interval (unit: s);
e_t^L and e_t^R denote the readings of the left and right wheel encoders at time t, i.e., the measured left-side and right-side linear velocities (unit: m/s), and M_t the reading of the 2-D lidar at time t;
u_t^L and u_t^R denote the left and right wheel drive speeds at time t (unit: m/s);
W denotes the axle length connecting the mobile robot's left and right wheels (unit: m);
r denotes the radius of the mobile robot's left and right wheels (unit: m);
Θ_t denotes the environment map at time t, represented as a 2-D laser point cloud, and Θ_t^(i) the environment map represented by the i-th sample at time t;
s_t denotes the state mode at time t, with the value range of Table 1;
σ_t^v and σ_t^ω denote the linear velocity and yaw rate standard deviations at time t;
σ_k^v and σ_k^ω (k = 1, ..., 4) denote the linear velocity and yaw rate standard deviations under state mode S_k.

The inputs of the present invention are the lidar scans from time 1 to time T, M_{1..t} = M_1, ..., M_t, the left wheel encoder measurements e_{1..t}^L, the right wheel encoder measurements e_{1..t}^R, the left wheel drive speeds u_{1..t}^L, and the right wheel drive speeds u_{1..t}^R. The outputs are the state-mode estimate \hat{s}_t, the pose estimate \hat{x}_t, and the map estimate \hat{Θ}_t at time t.
(1) Build the fault space

This case considers the diagnosis of left and right wheel encoder faults of a mobile robot. Each sensor has two modes, normal and faulty; considering encoder faults, a mobile robot with two encoders has four state modes: one normal mode and three fault modes.

(2) Initialization

Set the number of particles to N and initialize the particle set — the pose x_0^(i), state mode s_0^(i), map Θ_0^(i), and weight of each particle, i = 1, ..., N (the initialization formulas are images in the original).

(3) Draw discrete samples s_t^(i) according to the discrete state transition probabilities shown in Table 2: the probability that s_t^(i) = S_k is p(s_t = S_k | s_{t-1} = s_{t-1}^(i)).
(4) Obtain the continuous state as follows:

if s_t^(i) = S_1, take \hat{v}_t^L = e_t^L and \hat{v}_t^R = e_t^R;
if s_t^(i) = S_2, take \hat{v}_t^L = u_t^L and \hat{v}_t^R = e_t^R;
if s_t^(i) = S_3, take \hat{v}_t^L = e_t^L and \hat{v}_t^R = u_t^R;
if s_t^(i) = S_4, take \hat{v}_t^L = u_t^L and \hat{v}_t^R = u_t^R.

Compute \bar{v}_t = (\hat{v}_t^L + \hat{v}_t^R)/2 and \bar{ω}_t = (\hat{v}_t^R - \hat{v}_t^L)/W. Sample a linear velocity v_t^(i) from the normal distribution N(\bar{v}_t, σ^v) and a yaw rate ω_t^(i) from N(\bar{ω}_t, σ^ω), where σ^v and σ^ω are the empirical standard deviations of the mode s_t^(i). Then propagate the pose sample:

x_t^(i) = x_{t-1}^(i) + τ · [ v_t^(i) cos θ_{t-1}^(i),  v_t^(i) sin θ_{t-1}^(i),  ω_t^(i) ].
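Step (4) can be sketched as one particle's propagation. This is a minimal sketch of the standard unicycle update under the stated sampling model; names are assumptions, and the original pose-propagation formulas are images.

```python
import numpy as np

def propagate_pose(pose, v_mean, w_mean, sigma_v, sigma_w, tau, rng):
    """Sample a linear velocity and a yaw rate from the mode-dependent
    normal distributions, then advance the pose [x, y, theta] by one
    sampling interval tau (unicycle model)."""
    v = rng.normal(v_mean, sigma_v)
    w = rng.normal(w_mean, sigma_w)
    x, y, th = pose
    return np.array([x + v * np.cos(th) * tau,
                     y + v * np.sin(th) * tau,
                     th + w * tau])
```

With zero standard deviations the update reduces to the deterministic unicycle step, which makes the kinematics easy to check in isolation.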
(5) For each continuous state x_t^(i) and map Θ_{t-1}^(i), compute the weight as follows:

using x_t^(i) as the initial pose estimate, register M_t against Θ_{t-1}^(i) with the iterative closest point method (ICP), and let the resulting pose estimate be \tilde{x}_t^(i). Let \tilde{M}_t be the point cloud obtained by transforming M_t by the pose \tilde{x}_t^(i), and let Δ_t collect the distance from each point of \tilde{M}_t to its nearest point in Θ_{t-1}^(i). With β the number of entries of Δ_t smaller than the threshold Υ, the weight is w_t^(i) = β / |M_t|.

Pose sample correction: x_t^(i) ← \tilde{x}_t^(i).
(6) Weight normalization: \tilde{w}_t^(i) = w_t^(i) / Σ_{j=1}^{N} w_t^(j).

(7) Fault diagnosis

Compute the marginal probability distribution \hat{p}(s_t = S_k) = Σ_{i=1}^{N} \tilde{w}_t^(i) · 1(s_t^(i) = S_k), then take the maximizing mode:

k* = 1
For k = 2 to 4 do
    if \hat{p}(s_t = S_k) > \hat{p}(s_t = S_{k*}) then k* = k
End for

The state mode at time t is \hat{s}_t = S_{k*}.
(8) Map sample update:

for i = 1, ..., N, transform M_t by the pose x_t^(i) and merge the transformed cloud into the map sample to obtain Θ_t^(i).

(9) Among the particles whose state mode equals \hat{s}_t, select the particle with the maximum posterior probability for localization and mapping:

i* = argmax_{i: s_t^(i) = \hat{s}_t} \tilde{w}_t^(i).

Under this maximum a posteriori estimate, the pose estimate at time t is x_t^(i*) and the environment map estimate is Θ_t^(i*).

(10) t = t + 1.

(11) If t <= T, go to step (3); otherwise, terminate.
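The mode-consistent selection of step (9) can be sketched as follows; the names are illustrative assumptions.

```python
import numpy as np

def select_map_particle(modes: np.ndarray, weights: np.ndarray,
                        mode_estimate: int) -> int:
    """Among the particles whose state mode matches the diagnosed mode,
    return the index of the particle with the largest normalized weight;
    its pose and map serve as the estimates at time t."""
    idx = np.flatnonzero(modes == mode_estimate)
    return int(idx[np.argmax(weights[idx])])
```

The returned index selects the particle whose pose and map samples become the time-t pose and environment-map estimates.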
The method uses a particle filter as its framework and achieves mobile robot FDD and SLAM simultaneously: the FDD result supplies the kinematic model appropriate to each fault condition, a more accurate pose estimate is made on the fault-adapted kinematic model, and a map matching method provides the perception model for fault diagnosis and localization.
Embodiment 2

Referring to Fig. 2, this embodiment provides a system for simultaneous FDD and SLAM under mobile robot faults, comprising:

a pose sample obtaining module 201, configured to obtain the robot's pose samples at time t; wherein the distribution probability of the pose samples over the state modes at time t is determined by the empirical probability of the robot being in each state mode at time t, and the empirical probability of each state mode at time t is determined from the robot's state-mode estimate at time t-1 and the state transition probabilities; the pose samples under any state mode at time t follow the normal distribution of the robot pose under the corresponding state mode, the mean of the normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, which are determined by the encoder-measured linear velocities of the left and right wheels at time t or by the encoder-measured linear velocities together with the drive speeds, and the standard deviation of the normal distribution being determined by the empirical standard deviation of the corresponding state mode; the state modes comprise a normal mode, a left-wheel-encoder fault mode, a right-wheel-encoder fault mode, and a simultaneous left-and-right-wheel-encoder fault mode;

a sample weight determining module 202, configured to determine the weight of each pose sample from the robot's pose samples at time t, the robot's environment point cloud data at time t, and the environment map estimate at time t-1, the robot's lidar being used to measure the environment map;

a state-mode estimate determining module 203, configured to compute the estimated probability of each state mode at time t from the state mode to which each pose sample at time t belongs and the weight of each pose sample at time t, and to select the state mode with the largest estimated probability as the state-mode estimate at time t;

a pose estimate determining module 204, configured to take the pose sample with the largest posterior probability among the target pose samples as the robot's pose estimate at time t, the target pose samples comprising the pose samples corresponding to the state-mode estimate at time t;

an environment map estimate determining module 205, configured to determine the environment map estimate at time t from the robot's environment point cloud data at time t and the robot's pose estimate at time t.
In one example, the sample weight determining module 202 specifically comprises:

a registered-point-pair determining unit, configured to use the pose sample as the initial pose estimate and determine, with the iterative closest point method, the registered point pairs between the environment point cloud data at time t and the environment map estimate at time t-1 during registration;

a distance computing unit, configured to compute the distance between the two points of each registered point pair;

a weight determining unit, configured to determine the proportion of registered point pairs whose distance is below a set threshold and to use this proportion as the weight of the pose sample at time t.

The state-mode estimate determining module 203 specifically comprises:

a state-mode estimated probability computing unit, configured to compute, according to

\hat{p}(s_t = S_k) = \sum_{i=1}^{N} \tilde{w}_t^{(i)} \cdot \mathbf{1}(s_t^{(i)} = S_k),

the estimated probability \hat{p}(s_t = S_k) of each state mode at time t, where S_k denotes the k-th state mode, s_t denotes the state mode at time t, the indicator \mathbf{1}(s_t^{(i)} = S_k) is 1 when the i-th pose sample at time t belongs to state mode S_k and 0 otherwise, \tilde{w}_t^{(i)} denotes the normalized weight of the i-th pose sample at time t, and N denotes the number of pose samples at time t.
In one example, the system provided in this embodiment further comprises a linear velocity normal distribution determining module, a yaw rate normal distribution determining module, a linear velocity sample sampling module, a yaw rate sample sampling module, and a pose sample determining module, wherein:

the linear velocity normal distribution determining module is configured to compute, according to

\bar{v}_t = (\hat{v}_t^{L} + \hat{v}_t^{R}) / 2,

the mean \bar{v}_t of the linear velocity normal distribution at time t, where \hat{v}_t^{L} denotes the robot's left wheel linear velocity estimate at time t and \hat{v}_t^{R} denotes the robot's right wheel linear velocity estimate at time t;

the yaw rate normal distribution determining module is configured to compute, according to

\bar{\omega}_t = (\hat{v}_t^{R} - \hat{v}_t^{L}) / W,

the mean \bar{\omega}_t of the yaw rate normal distribution at time t, where W denotes the axle length connecting the robot's left and right wheels;

the linear velocity sample sampling module is configured to sample linear velocity samples according to the empirical probability of each state mode at time t and the linear velocity normal distribution of the robot under each state mode;

the yaw rate sample sampling module is configured to sample yaw rate samples according to the empirical probability of each state mode at time t and the yaw rate normal distribution of the robot under each state mode;

the pose sample determining module is configured to determine the robot's pose samples at time t from the linear velocity samples and the yaw rate samples.

The normal distribution of the robot pose comprises the linear velocity normal distribution and the yaw rate normal distribution; the mean of each is determined by the estimated left and right wheel linear velocities under the corresponding state mode, and the standard deviation of each by the empirical standard deviation of the corresponding state mode. In the normal mode, the robot's estimated left wheel linear velocity at time t is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the left-wheel-encoder fault mode, the estimated left wheel linear velocity is the drive speed of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the right-wheel-encoder fault mode, the estimated left wheel linear velocity is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the drive speed of the right wheel; in the simultaneous left-and-right-wheel-encoder fault mode, the estimated left and right wheel linear velocities are the drive speeds of the left and right wheels, respectively.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts of the embodiments may be referred to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively simple; for relevant details, refer to the description of the method.

Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only meant to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, in light of the idea of the present invention, make changes to the specific implementation and application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. A method for simultaneous FDD and SLAM under mobile robot faults, characterized by comprising:
    obtaining pose samples of a robot at time t; wherein the distribution probability of the pose samples over the state modes at time t is determined by the empirical probability of the robot being in each state mode at time t, and the empirical probability of each state mode at time t is determined from the robot's state-mode estimate at time t-1 and the state transition probabilities; the pose samples under any state mode at time t follow the normal distribution of the robot pose under the corresponding state mode, the mean of the normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, the estimated left and right wheel linear velocities being determined by the encoder-measured linear velocities of the left and right wheels at time t, or by the encoder-measured linear velocities together with the drive speeds, and the standard deviation of the normal distribution being determined by the empirical standard deviation of the corresponding state mode; the state modes comprise a normal mode, a left-wheel-encoder fault mode, a right-wheel-encoder fault mode, and a simultaneous left-and-right-wheel-encoder fault mode;
    determining the weight of each pose sample from the robot's pose samples at time t, the robot's environment point cloud data at time t, and the robot's environment map estimate at time t-1; wherein the environment point cloud data are actual measurement data;
    computing the estimated probability of each state mode at time t from the state mode to which each pose sample at time t belongs and the weight of each pose sample at time t, and selecting the state mode with the largest estimated probability as the state-mode estimate at time t;
    taking the pose sample with the largest posterior probability among target pose samples as the robot's pose estimate at time t, the target pose samples comprising the pose samples corresponding to the state-mode estimate at time t;
    determining the environment map estimate at time t from the robot's environment point cloud data at time t and the robot's pose estimate at time t.
  2. The method for simultaneous FDD and SLAM under mobile robot faults according to claim 1, characterized in that determining the weight of each pose sample from the robot's pose samples at time t, the robot's environment point cloud data at time t, and the robot's environment map estimate at time t-1 specifically comprises:
    for any pose sample at time t:
    using the pose sample as the initial pose estimate, determining, with the iterative closest point method, the registered point pairs between the environment point cloud data at time t and the environment map estimate at time t-1 during registration;
    computing the distance between the two points of each registered point pair;
    determining the proportion of registered point pairs whose distance is below a set threshold, and using this proportion as the weight of the pose sample at time t.
  3. The method for simultaneous FDD and SLAM under mobile robot faults according to claim 1, characterized in that computing the estimated probability of each state mode at time t from the state mode to which each pose sample at time t belongs and the weight of each pose sample at time t specifically comprises:
    computing, according to
    \hat{p}(s_t = S_k) = \sum_{i=1}^{N} \tilde{w}_t^{(i)} \cdot \mathbf{1}(s_t^{(i)} = S_k),
    the estimated probability \hat{p}(s_t = S_k) of each state mode at time t, where S_k denotes the k-th state mode, s_t denotes the state mode at time t, the indicator \mathbf{1}(s_t^{(i)} = S_k) is 1 when the i-th pose sample x_t^{(i)} at time t belongs to state mode S_k and 0 otherwise, \tilde{w}_t^{(i)} denotes the normalized weight of the i-th pose sample at time t, and N denotes the number of pose samples at time t.
  4. The method for simultaneous FDD and SLAM under mobile robot faults according to claim 1, characterized in that, before obtaining the robot's pose samples at time t, the method further comprises:
    sampling linear velocity samples according to the empirical probability of each state mode at time t and the linear velocity normal distribution of the robot under each state mode;
    sampling yaw rate samples according to the empirical probability of each state mode at time t and the yaw rate normal distribution of the robot under each state mode;
    determining the robot's pose samples at time t from the linear velocity samples and the yaw rate samples;
    wherein the normal distribution of the robot pose comprises a linear velocity normal distribution and a yaw rate normal distribution, the mean of the linear velocity normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, the standard deviation of the linear velocity normal distribution being determined by the empirical standard deviation of the corresponding state mode, the mean of the yaw rate normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, and the standard deviation of the yaw rate normal distribution being determined by the empirical standard deviation of the corresponding state mode.
  5. The method for simultaneous FDD and SLAM under mobile robot faults according to claim 4, characterized in that, before sampling the linear velocity samples and the yaw rate samples, the method further comprises:
    obtaining the robot's left wheel linear velocity and right wheel linear velocity;
    computing, according to
    \bar{v}_t = (\hat{v}_t^{L} + \hat{v}_t^{R}) / 2,    \bar{\omega}_t = (\hat{v}_t^{R} - \hat{v}_t^{L}) / W,
    the mean \bar{v}_t of the linear velocity normal distribution at time t and the mean \bar{\omega}_t of the yaw rate normal distribution at time t, where W denotes the axle length connecting the robot's left and right wheels, \hat{v}_t^{L} denotes the robot's estimated left wheel linear velocity at time t, and \hat{v}_t^{R} denotes the robot's estimated right wheel linear velocity at time t; wherein, in the normal mode, the estimated left wheel linear velocity at time t is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the left-wheel-encoder fault mode, the estimated left wheel linear velocity is the drive speed of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the right-wheel-encoder fault mode, the estimated left wheel linear velocity is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the drive speed of the right wheel; in the simultaneous left-and-right-wheel-encoder fault mode, the estimated left and right wheel linear velocities are the drive speeds of the left and right wheels, respectively.
  6. A system for simultaneous FDD and SLAM under mobile robot faults, characterized by comprising:
    a pose sample obtaining module, configured to obtain the robot's pose samples at time t; wherein the distribution probability of the pose samples over the state modes at time t is determined by the empirical probability of the robot being in each state mode at time t, and the empirical probability of each state mode at time t is determined from the robot's state-mode estimate at time t-1 and the state transition probabilities; the pose samples under any state mode at time t follow the normal distribution of the robot pose under the corresponding state mode, the mean of the normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, which are determined by the encoder-measured linear velocities of the left and right wheels at time t or by the encoder-measured linear velocities together with the drive speeds, and the standard deviation of the normal distribution being determined by the empirical standard deviation of the corresponding state mode; the state modes comprise a normal mode, a left-wheel-encoder fault mode, a right-wheel-encoder fault mode, and a simultaneous left-and-right-wheel-encoder fault mode;
    a sample weight determining module, configured to determine the weight of each pose sample from the robot's pose samples at time t, the robot's environment point cloud data at time t, and the robot's environment map estimate at time t-1; wherein the environment point cloud data are actual measurement data;
    a state-mode estimate determining module, configured to compute the estimated probability of each state mode at time t from the state mode to which each pose sample at time t belongs and the weight of each pose sample at time t, and to select the state mode with the largest estimated probability as the state-mode estimate at time t;
    a pose estimate determining module, configured to take the pose sample with the largest posterior probability among the target pose samples as the robot's pose estimate at time t, the target pose samples comprising the pose samples corresponding to the state-mode estimate at time t;
    an environment map estimate determining module, configured to determine the environment map estimate at time t from the robot's environment point cloud data at time t and the robot's pose estimate at time t.
  7. The system for simultaneous FDD and SLAM under mobile robot faults according to claim 6, characterized in that the sample weight determining module specifically comprises:
    a registered-point-pair determining unit, configured to use the pose sample as the initial pose estimate and determine, with the iterative closest point method, the registered point pairs between the environment point cloud data at time t and the environment map estimate at time t-1 during registration;
    a distance computing unit, configured to compute the distance between the two points of each registered point pair;
    a weight determining unit, configured to determine the proportion of registered point pairs whose distance is below a set threshold and to use this proportion as the weight of the pose sample at time t.
  8. The system for simultaneous FDD and SLAM under mobile robot faults according to claim 6, characterized in that the state-mode estimate determining module specifically comprises:
    a state-mode estimated probability computing unit, configured to compute, according to
    \hat{p}(s_t = S_k) = \sum_{i=1}^{N} \tilde{w}_t^{(i)} \cdot \mathbf{1}(s_t^{(i)} = S_k),
    the estimated probability \hat{p}(s_t = S_k) of each state mode at time t, where S_k denotes the k-th state mode, s_t denotes the state mode at time t, the indicator \mathbf{1}(s_t^{(i)} = S_k) is 1 when the i-th pose sample at time t belongs to state mode S_k and 0 otherwise, \tilde{w}_t^{(i)} denotes the normalized weight of the i-th pose sample at time t, and N denotes the number of pose samples at time t.
  9. The system for simultaneous FDD and SLAM under mobile robot faults according to claim 6, characterized in that the system further comprises:
    a linear velocity sample sampling module, configured to sample linear velocity samples according to the empirical probability of each state mode at time t and the linear velocity normal distribution of the robot under each state mode;
    a yaw rate sample sampling module, configured to sample yaw rate samples according to the empirical probability of each state mode at time t and the yaw rate normal distribution of the robot under each state mode;
    a pose sample determining module, configured to determine the robot's pose samples at time t from the linear velocity samples and the yaw rate samples;
    wherein the normal distribution of the robot pose comprises a linear velocity normal distribution and a yaw rate normal distribution, the mean of the linear velocity normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, the standard deviation of the linear velocity normal distribution being determined by the empirical standard deviation of the corresponding state mode, the mean of the yaw rate normal distribution being determined by the estimated left and right wheel linear velocities under the corresponding state mode, and the standard deviation of the yaw rate normal distribution being determined by the empirical standard deviation of the corresponding state mode.
  10. The system for simultaneous FDD and SLAM under mobile robot faults according to claim 9, characterized in that the system further comprises:
    a linear velocity normal distribution determining module, configured to compute, according to
    \bar{v}_t = (\hat{v}_t^{L} + \hat{v}_t^{R}) / 2,
    the mean \bar{v}_t of the linear velocity normal distribution at time t, where \hat{v}_t^{L} denotes the robot's estimated left wheel linear velocity at time t and \hat{v}_t^{R} denotes the robot's estimated right wheel linear velocity at time t; wherein, in the normal mode, the estimated left wheel linear velocity at time t is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the left-wheel-encoder fault mode, the estimated left wheel linear velocity is the drive speed of the left wheel, and the estimated right wheel linear velocity is the encoder-measured linear velocity of the right wheel; in the right-wheel-encoder fault mode, the estimated left wheel linear velocity is the encoder-measured linear velocity of the left wheel, and the estimated right wheel linear velocity is the drive speed of the right wheel; in the simultaneous left-and-right-wheel-encoder fault mode, the estimated left and right wheel linear velocities are the drive speeds of the left and right wheels, respectively;
    a yaw rate normal distribution determining module, configured to compute, according to
    \bar{\omega}_t = (\hat{v}_t^{R} - \hat{v}_t^{L}) / W,
    the mean \bar{\omega}_t of the yaw rate normal distribution at time t, where W denotes the axle length connecting the robot's left and right wheels.
PCT/CN2021/100035 2021-06-15 2021-06-15 Method and system for simultaneous FDD and SLAM under mobile robot faults WO2022261814A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/100035 WO2022261814A1 (zh) 2021-06-15 2021-06-15 Method and system for simultaneous FDD and SLAM under mobile robot faults
ZA2023/07096A ZA202307096B (en) 2021-06-15 2023-07-14 Method and system for simultaneously performing fdd and slam under mobile robot fault

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/100035 WO2022261814A1 (zh) 2021-06-15 2021-06-15 Method and system for simultaneous FDD and SLAM under mobile robot faults

Publications (1)

Publication Number Publication Date
WO2022261814A1 true WO2022261814A1 (zh) 2022-12-22

Family

ID=84526731

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100035 WO2022261814A1 (zh) 2021-06-15 2021-06-15 移动机器人故障下同时fdd和slam的方法及***

Country Status (2)

Country Link
WO (1) WO2022261814A1 (zh)
ZA (1) ZA202307096B (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5739657A (en) * 1995-05-10 1998-04-14 Fujitsu Limited Apparatus for controlling motion of normal wheeled omni-directional vehicle and method thereof
CN103795373A (zh) * 2013-11-29 2014-05-14 电子科技大学中山学院 一种不完备***故障诊断的生成粒子滤波器方法
CN109129574A (zh) * 2018-11-08 2019-01-04 山东大学 服务机器人运动***云端故障诊断***及方法
CN109669350A (zh) * 2017-10-13 2019-04-23 电子科技大学中山学院 一种三轮全向移动机器人车轮滑动定量估计方法
CN110216680A (zh) * 2019-07-05 2019-09-10 山东大学 一种服务机器人云地协同故障诊断***和方法


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DUAN Zhuo-hua, CAI Zi-xing, YU Jin-xia: "Fault Diagnosis and Fault Tolerant Control for Wheeled Mobile Robots under Unknown Environments: A Survey", Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18-22 April 2005, pp. 3428-3433, ISBN: 978-0-7803-8914-4, DOI: 10.1109/ROBOT.2005.1570640 *

Also Published As

Publication number Publication date
ZA202307096B (en) 2023-09-27


Legal Events

Date  Code  Title / Description
- 121 (EP): the EPO has been informed by WIPO that EP was designated in this application — Ref document number: 21945407; Country of ref document: EP; Kind code of ref document: A1
- NENP: Non-entry into the national phase — Ref country code: DE
- 122 (EP): PCT application non-entry in European phase — Ref document number: 21945407; Country of ref document: EP; Kind code of ref document: A1