WO2022086446A1 - Uwb anchor deployment - Google Patents

Uwb anchor deployment

Info

Publication number
WO2022086446A1
Authority
WO
WIPO (PCT)
Prior art keywords
uwb
data
robot
timestamps
vio
Prior art date
Application number
PCT/SG2021/050633
Other languages
French (fr)
Inventor
Hoang Thien NGUYEN
Pham Nhat Thien Minh NGUYEN
Lihua Xie
Original Assignee
Nanyang Technological University
Priority date
Filing date
Publication date
Application filed by Nanyang Technological University
Publication of WO2022086446A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0284 Relative positioning
    • G01S5/0252 Radio frequency fingerprinting
    • G01S5/0294 Trajectory determination or predictive filtering, e.g. target tracking or Kalman filtering
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras

Definitions

  • the present disclosure relates to the field of localization, and more particularly to the deployment of ultra-wideband (UWB) anchors.
  • UWB-based localization is sometimes used indoors if it is possible to first set up multiple UWB anchors in known locations, so that at any moment an autonomous robot may self-localize by triangulating with reference to the known locations of at least three UWB anchors. It can be appreciated that UWB-based localization is traditionally unsuitable for use in unknown environments.
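To make the background concrete, the following sketch shows how a robot with range measurements to at least three anchors at known locations can estimate its own position by nonlinear least squares. This is an illustrative example only, not the patent's method; the function name and the Gauss-Newton formulation are assumptions.

```python
import numpy as np

def trilaterate(anchors, ranges, x0=None, iters=25):
    """Gauss-Newton least-squares position fix from ranges to anchors
    at known locations (illustrative sketch)."""
    A = np.asarray(anchors, dtype=float)
    d = np.asarray(ranges, dtype=float)
    # initialize at the anchor centroid unless a guess is supplied
    x = A.mean(axis=0) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - A                          # (N, dim)
        dist = np.linalg.norm(diff, axis=1)   # predicted ranges
        r = dist - d                          # range residuals
        J = diff / dist[:, None]              # Jacobian of ||x - a_i||
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

# three anchors at known 2D positions, noiseless ranges
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
ranges = [float(np.linalg.norm(true_pos - a)) for a in anchors]
est = trilaterate(anchors, ranges)
```

With noiseless ranges the estimate recovers the true position; with fewer than three anchors the Jacobian becomes rank-deficient, which is why the background above requires at least three.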
  • the present disclosure provides a robot comprising a processor coupled with a memory, a UWB (ultra-wideband) sensor, and a VIO (visual inertial odometry) sensor.
  • the memory is configurable to store instructions executable by the processor to perform a method of calibrating a UWB anchor. The method comprises: using the UWB sensor, acquiring a plurality of UWB data from the UWB anchor over a time period, each respective UWB data from the plurality of UWB data being associated with a respective UWB timestamp in a series of UWB timestamps; and using the VIO sensor, acquiring a plurality of odometric data of the robot, the robot being configured to move independently of the UWB anchor, each of the plurality of odometric data being associated with a respective VIO timestamp from one of at least one time series of VIO timestamps, each of the at least one time series of VIO timestamps being independent of the series of UWB timestamps.
  • the series of UWB timestamps is asynchronous relative to each of the at least one time series of VIO timestamps.
  • the series of UWB timestamps is characterized by a number of UWB timestamps spaced apart at irregular time intervals within the time period.
  • the series of UWB timestamps comprises fewer UWB timestamps than a total number of VIO timestamps in the time period.
  • the method may further comprise determining one or both of a velocity of the robot at the respective UWB timestamp and a displacement of the robot at the respective UWB timestamp, wherein the determining of the location error value of the UWB anchor is performed if the velocity of the robot is greater than a threshold velocity and if the displacement of the robot is greater than a threshold displacement.
  • the method may further comprise selectively forming a dataset, the dataset including the plurality of selected UWB data and the corresponding odometric data corresponding to each of the plurality of selected UWB data, the corresponding odometric data in the dataset being defined with respect to a coordinate frame.
  • the method may further comprise iteratively determining the lowest location error value of the UWB anchor based on the dataset, wherein the lowest location error value is defined with respect to the coordinate frame.
  • the iteratively determining the lowest location error value comprises determining the lowest location error value at time instances corresponding to one of the UWB timestamps in the series of UWB timestamps.
  • the iteratively determining the lowest location error value is terminated in response to any one of the following conditions being satisfied: (i) a maximum value of a covariance matrix is smaller than a threshold value; (ii) a number of iterations of applying optimization rules is equal to a predetermined maximum number of iterations; (iii) a maximum optimization time is reached.
  • an accuracy of the method is further characterized by at least one of: (i) a distance between the UWB anchor and the robot; (ii) a distance of the robot's movement prior to acquiring the UWB data and the odometric data; and (iii) the robot's spatial movement distribution in a three-dimensional space.
  • the VIO sensor comprises a camera configured to acquire image data according to a first time series of VIO timestamps; and an inertial measurement unit coupled to the camera, the inertial measurement unit being configured to acquire velocity data according to a second time series of VIO timestamps, and wherein the series of UWB timestamps is asynchronous relative to at least one of the first time series and the second time series.
  • the method may further comprise using the UWB sensor, acquiring a plurality of second UWB data from a second UWB anchor over a second time period, each respective second UWB data from the plurality of second UWB data being associated with a respective second UWB timestamp in a series of second UWB timestamps; using the VIO sensor, acquiring a plurality of second odometric data of the robot, each of the plurality of second odometric data being associated with a respective second VIO timestamp from one of at least one time series of second VIO timestamps, each of the at least one time series of second VIO timestamps being independent of the series of second UWB timestamps; and for each of a plurality of selected second UWB data, determining one of the plurality of second odometric data to be a corresponding second odometric data such that the respective second VIO timestamp of the corresponding second odometric data is one nearest in time to the respective second UWB timestamp.
  • determining the second estimated location of the second UWB anchor includes determining a lowest second location error value from the plurality of second location error values of the second UWB anchor.
  • the robot is disposed in an indoor setting, the robot configured to perform indoor localization.
  • a system for UWB anchor deployment comprises a UWB anchor, the UWB anchor being configured to be mobile such that the UWB anchor is disposable at an unknown location; and the robot configured to determine the estimated location of the UWB anchor according to the above.
  • a method of UWB anchor deployment comprises: using a UWB sensor, acquiring a plurality of UWB data from a UWB anchor over a time period, each respective UWB data from the plurality of UWB data being associated with a respective UWB timestamp in a series of UWB timestamps; using a VIO sensor coupled to a UAV, acquiring a plurality of odometric data of the UAV, each of the plurality of odometric data being associated with a respective VIO timestamp from one of at least one time series of VIO timestamps, each of the at least one time series of VIO timestamps being independent of the series of UWB timestamps; and for each of a plurality of selected UWB data, selecting one of the plurality of odometric data to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp.
  • the method may further comprise iteratively determining the estimated location of the UWB anchor; and stopping the iteratively determining in response to any one of the following: (i) any of a plurality of estimated variances of the estimated location in any one or more directions in a three-dimensional frame of reference is smaller than a predetermined variance threshold; (ii) a number of iterations to determine the estimated location is larger than or equal to a predetermined iteration limit; and (iii) a time taken to perform the number of iterations is greater than a predetermined ceiling time.
  • an accuracy of the estimated location is dependent on one or more of the following: (i) a distance between the UWB anchor and the UWB sensor; (ii) a radius defined by a trajectory of the UWB sensor during steps of acquiring UWB data and/or odometric data; and (iii) a spatial movement distribution of the UWB sensor in three-dimensional space.
  • the UWB sensor is coupled to an unmanned aerial vehicle (UAV).
  • FIGs. 1 and 2 are schematic diagrams of a system according to an embodiment of the present disclosure
  • Fig. 3 is a timeline showing timing of data acquisition according to the system of Fig. 1;
  • FIG. 4 is a schematic diagram of a system according to another embodiment
  • Fig. 5 is a timeline showing timing of data acquisition according to the system of Fig. 4;
  • Fig. 6 is a schematic flow chart illustrating a method of UWB anchor deployment
  • FIG. 7 is a schematic diagram of a system according to another embodiment.
  • Fig. 8 is a timeline showing timing of data acquisition according to the system of Fig. 7;
  • FIG. 9 is a schematic diagram of a system according to another embodiment.
  • Fig. 10 is a timeline showing timing of data acquisition according to the system of Fig. 9 during operation;
  • FIG. 11 is a schematic diagram of a system according to another embodiment.
  • Fig. 12 is a timeline showing timing of data acquisition according to the system of Fig. 11;
  • Fig. 13 is a schematic flow chart of a robot localization method based on the UWB anchor calibration method of Fig. 6;
  • Fig. 14 is a plot of localization error against the ratio of the distance to a UWB anchor to the measurement radius, based on simulation results for a conventional position-focused method;
  • Fig. 15 is a plot of localization error against the ratio of the distance to a UWB anchor to the measurement radius, based on simulation results for a range-focused method according to embodiments of the present disclosure.
  • robot may refer to a “drone”, a “vehicle”, an “aerial vehicle”, etc., without being limited by the mode of travel or the nature of the task of the robot.
  • Fig. 1 schematically illustrates a robot (200) according to one embodiment of the present disclosure.
  • the robot (200) may be an unmanned aerial vehicle and/or the robot (200) may travel on the ground.
  • the robot (200) may be configured as a vehicle controllable via telemetry by a human operator or by another machine.
  • the robot (200) and one or more UWB anchors (500) may be collectively referred to as a localization system (100).
  • the robot is configured to move in a trajectory independently of the one or more UWB anchors.
  • the robot (200) includes a processor (210) coupled with a memory (220).
  • the memory (220) is configurable to store instructions executable by the processor (210).
  • the processor (210) may be configured to perform a method of deploying a UWB anchor (500) according to various embodiments of the present disclosure.
  • the processor (210) may be configured to determine a location of the UWB anchor (500) with respect to a world frame (80).
  • the processor (210) may be configured to perform online calibration of the UWB anchor, that is, the UWB anchor calibration can be performed concurrently with the robot (200) being in (actual or normal) operation, including the robot self-navigating in a new or unknown environment.
  • the processor may be configured to perform self-localization and mapping of the robot (200) relative to a world frame (80).
  • the world frame (80) may refer to a global and stationary coordinate frame defined by the environment in which the robot (200) operates.
  • the world frame (80) may refer to a coordinate frame that is defined with respect to an observer.
  • the world frame (80) may be defined with reference to the robot (200) such that the world frame (80) is mobile relative to the environment.
  • the robot (200) includes an odometry device (400) suitable for acquiring odometric data of the robot (200), such as an inertial measurement unit (IMU).
  • the odometry device is integrated or otherwise coupled with a camera, and may be referred to generally as a visual inertial odometry (VIO) sensor or device.
  • the term “VIO sensor” will refer to any device or combination of devices suitable for acquiring odometric data (such as velocity and pose) of the robot (200) without reliance on GPS signals.
  • the term “odometric data” includes any combination of data from the camera and/or data from the IMU, including one or any combination of image data, velocity data, displacement data, and data derived therefrom.
  • the robot (200) further includes an ultra-wideband (UWB) sensor (300) configurable to receive UWB signals transmitted from one or more various UWB anchors.
  • Embodiments of the present disclosure are not limited by a particular operating frequency band.
  • the UWB anchor and the UWB sensor may be configured with an operable range between 3.1 and 10.6 GHz (gigahertz), or between 3.1 and 9 GHz, etc.
  • a method of deploying a UWB anchor (500) using only the robot (200) and without prior human intervention will be described with the aid of Fig. 2.
  • although the robot (200) may be configured to be used in conjunction with GPS signals in some examples, in this example it is assumed that GPS signals are heavily attenuated or unavailable.
  • the UWB anchor (500) is initially in an unknown location as the environment is new to the robot (200).
  • the robot (200) is configured to first calibrate the UWB anchor (500), that is, to first establish the location of the UWB anchor with respect to the world frame (80).
  • the robot (200) is configured to move along a trajectory (90) relative to the world frame (80) and relative to the stationary UWB anchor (500) over a time period.
  • the robot (200) may move in three-dimensional (3D) space and is not limited to moving in a specific two-dimensional (2D) plane. In other examples, the robot (200) may move only in two-dimensional (2D) space.
  • information relating to a height axis may be predetermined or input from a source of known data.
  • a user may provide data relating to a height of the UWB anchor relative to the robot, and/or a height of the UWB anchor and a height of the robot relative to the ground.
  • the UWB sensor (300) of the robot (200) is configured to provide the processor (210) with a series of UWB data (350).
  • Each UWB data (350) is associated with (or linked to) a respective UWB timestamp from a series of UWB timestamps.
  • the VIO sensor (400) of the robot (200) is configured to acquire a plurality of odometric data (450), and to transmit the odometric data (450) to the processor (210).
  • Each of the plurality of the odometric data (450) is associated with (or linked to) a respective VIO timestamp from a time series of VIO timestamps.
  • Fig. 3 schematically illustrates the timing of the various time stamps
  • the UWB clock and the clock(s) of the VIO sensor are asynchronous, that is, they are not synchronized with one another.
  • the series of UWB timestamps is asynchronous relative to each of the time series of VIO timestamps.
  • the series of VIO timestamps is independent of the series of UWB timestamps. This may alternatively be described in terms of a time-offset between data from the different sensors of the same robot. In particular, there is a time-offset between the UWB data and the odometric data.
  • the UWB sensor (300) may be described as having a different sampling rate or a different sampling start time relative to the VIO sensor (400).
  • Fig. 3 illustrates an example in which the odometric data (450) is characterized by a higher sampling rate relative to the UWB data (350).
  • the UWB data (350) may have a higher sampling rate relative to the odometric data (450).
  • the series of UWB timestamps may include fewer UWB timestamps than a total number of VIO timestamps in the time period.
  • the UWB signal (550) transmitted from the UWB anchor (500) is not received by the UWB sensor (300) of the robot (200). This may be due to various reasons, including but not limited to, the shape of the terrain or geography being traversed by the robot (200), the presence of building structures, or the pose of the robot (200), etc., such that there is no line-of-sight between the UWB anchor (500) and the UWB sensor (300). It can be appreciated that the time-offset is irregular at different times over the course of the robot’s trajectory, and that synchronization (and hence calibration of the UWB anchor) is not a trivial problem.
  • the robot (200) is configured to employ a “range-focused” method to calibrate the UWB anchor (500).
  • One embodiment of the range-focused method includes selecting timestamps from the series of UWB timestamps, and using data corresponding to the selected UWB timestamps.
  • in this context, range data refers to UWB-related data, and odometric data refers to position-related data.
  • Each of a plurality of UWB data (350) is eventually associated with (related or linked to) a corresponding odometric data (450).
  • one of the plurality of odometric data (450) is determined to be a corresponding odometric data (450a) to form a UWB-odometric data (250).
  • Determining the one of the plurality of odometric data may include selecting or computing the one of the plurality of odometric data.
  • the range-focused method involves selecting from the time series of odometric data (450) the odometric data (450a) having a VIO timestamp that is nearest in time to the respective UWB timestamp (350).
  • the same odometric data (odometric data acquired at the same VIO timestamp) may be associated with different UWB data (UWB data acquired at different UWB timestamps).
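The nearest-in-time association described above can be sketched as follows. The data layout (tuples of timestamp and payload) is an assumption for illustration; note how two UWB samples taken close together reuse the same odometric sample, as the text points out.

```python
import bisect

def pair_nearest(uwb_samples, vio_samples):
    """For each UWB sample (t, range), select the odometric sample
    whose VIO timestamp is nearest in time. Data layout is assumed
    for illustration; vio_samples must be sorted by timestamp."""
    vio_times = [t for t, _ in vio_samples]
    pairs = []
    for t_uwb, rng in uwb_samples:
        i = bisect.bisect_left(vio_times, t_uwb)
        # candidates: the VIO stamps just before and just after t_uwb
        cands = [j for j in (i - 1, i) if 0 <= j < len(vio_times)]
        j = min(cands, key=lambda k: abs(vio_times[k] - t_uwb))
        pairs.append((t_uwb, rng, vio_samples[j]))
    return pairs

# irregular UWB stamps vs. a faster, regular 10 Hz odometry stream
uwb = [(0.13, 5.2), (0.34, 5.0), (0.35, 4.9)]
vio = [(0.1 * k, ("pose", k)) for k in range(6)]
paired = pair_nearest(uwb, vio)
```

In this run the last two UWB samples both map to the odometric sample at 0.3 s, illustrating that the same odometric data may serve different UWB data.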
  • the range-focused method presents a contrast to a “position-focused” method towards localization and positioning.
  • a position-focused method refers to a method in which the location of a robot is determined using either the camera clock or the IMU clock as the basis for distance measurements. Meaningful results often require a discernible displacement of the robot to a new position so that triangulation can be applied.
  • the position-focused method can require a robot displacement of about 5 to 10 m in all three dimensions (e.g., x-y-z dimensions in a Cartesian coordinate system or world frame).
  • the position-focused method could result in relatively large localization errors.
  • Figs. 4 and 5 illustrate an embodiment in which the VIO sensor (400) includes a camera (410) and an inertial measurement unit (IMU) (420).
  • the camera (410) is configured to acquire image data (470) of a visual target (550).
  • the IMU (420) is fixedly coupled to the camera (410), and both the IMU (420) and the camera (410) are coupled to the robot (200).
  • the IMU (420) is configured to provide IMU data (480) corresponding to a velocity of the robot (200).
  • the IMU data (480) may include (but is not limited to) one or more types of data selected from the following: a displacement data, an acceleration data, an angular displacement data, a rotational velocity data, and/or an angular acceleration data.
  • Each of the camera (410) and the IMU (420) is configured with its respective camera clock and IMU clock.
  • this is represented schematically by the respective timing of image data (470) and IMU data (480) along a timeline.
  • image data (470) characterized by a first time series of VIO timestamps
  • IMU data (480) characterized by a second time series of VIO timestamps.
  • the UWB data (350) is characterized by a series of UWB timestamps asynchronous relative to at least one of the first time series of VIO timestamps and the second time series of VIO timestamps.
  • the robot (200) in this example is similarly configured to employ the range-focused method in which the processor (210) is configured to relate a corresponding odometric data (460) to each of the UWB data (350) selected.
  • the robot (200) may be configured to: for each UWB data (350), select a corresponding image data (470) with a respective first VIO timestamp from the first time series of VIO timestamps, in which the respective first VIO timestamp is one nearest in time to the respective UWB timestamp.
  • the robot (200) is further configured to select a corresponding IMU data (480a) with a respective second VIO timestamp from the second time series of VIO timestamps, in which the selection is such that the second VIO timestamp is one nearest in time to the respective UWB timestamp (350).
  • the robot (200) is configured to select the corresponding image data (470) and the corresponding IMU data (480a) based on each of the UWB data (350) selected, to collectively form a UWB-odometric data (250).
  • UWB-odometric data (250₁) is obtained from inputs including UWB data (350₁), IMU data (480₁), and image data (470₁).
  • selection of the IMU data and the image data is determined by the occurrence of the UWB timestamp.
  • the IMU data is selected on the basis that the IMU data has an IMU timestamp (480₁) nearest in time to the UWB timestamp (350₁).
  • the image data is selected on the basis that the image data has an image timestamp (470₁) nearest in time to the UWB timestamp (350₁).
  • the IMU data selected for association with the UWB timestamp (350₂) is one having an IMU timestamp (480₂) closest in time to the UWB timestamp (350₂).
  • the image data having a timestamp nearest in time to the UWB timestamp (350₂) is the image data at image timestamp (470₁).
  • the UWB-odometric data (250₂) associated with the UWB timestamp (350₂) is based on the UWB data at UWB timestamp (350₂), IMU data at IMU timestamp (480₂), and image data at image timestamp (470₁).
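The two-clock variant above, with a separate camera series and IMU series, can be sketched as follows. Sampling rates and names are illustrative assumptions; a real pipeline would likely interpolate or preintegrate rather than pick raw nearest samples.

```python
def nearest(times, t):
    """Index of the timestamp in `times` nearest in time to t."""
    return min(range(len(times)), key=lambda i: abs(times[i] - t))

def fuse(uwb_times, image_times, imu_times):
    """For each UWB stamp, independently pick the nearest camera frame
    and the nearest IMU sample; the two series run on separate,
    asynchronous clocks. Sketch only."""
    return [(t, nearest(image_times, t), nearest(imu_times, t))
            for t in uwb_times]

# camera at 20 Hz, IMU at 100 Hz, UWB at irregular instants
image_times = [k / 20.0 for k in range(10)]
imu_times = [k / 100.0 for k in range(50)]
tuples = fuse([0.123, 0.124], image_times, imu_times)
```

Here two consecutive UWB stamps receive different treatment per series: both reuse camera frame index 2 (at 0.10 s), while each still pairs with the IMU sample nearest to it, mirroring the walkthrough above.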
  • the range-focused method presents a contrast to a “position-focused” method.
  • the image data (470) and the IMU data (480) may first be processed to form an odometric data (460) with a time series of VIO timestamps.
  • the VIO timestamps may be determined according to a data processing method, for example, a visual-inertial odometry (VIO) pipeline.
  • the odometric data (460) may be assumed to have a time series of VIO timestamps according to a first time series of VIO timestamps.
  • the odometric data (460) may have a time series of VIO timestamps according to a second time series of VIO timestamps.
  • the odometric data 460 may have a time series of VIO timestamps selected from more than one time series of VIO timestamps.
  • FIG. 6 schematically illustrates a method of UWB anchor calibration performed by the processor (210) of a robot (200) based on instructions stored in the memory (220), using data input from the respective UWB sensor (300) and VIO sensor (400), according to embodiments of the present disclosure.
  • UWB data (350) and odometric data (450) are acquired as input and the resulting UWB-odometric data (250) is added to a dataset (D) (610).
  • a sufficiency check (620) is performed by the processor (210) based on the UWB-odometric data (250) acquired so as to decide whether or not to proceed with a next step in the UWB anchor calibration process.
  • the sufficiency check (620) saves computation time and resources by determining whether there is an improvement in the dataset that may potentially contribute to a better localization solution before proceeding with the next step.
  • the sufficiency check (620) gives an overview of the trajectory performed by the robot (200), which can inform the user about the robot's movement, e.g., whether the robot's movement exceeds a threshold.
  • the robot (200) may be configured with an interface suitable for a user to predetermine or select the threshold displacement and/or the threshold velocity.
  • the threshold displacement and/or the threshold velocity may be determined according to user requirements, environmental limitations, and/or performance level of the localization system (100).
  • the processor (210) is configured to determine whether a displacement (203) of the robot (200) between consecutive UWB timestamps is greater than a threshold displacement.
  • the threshold displacement may include displacement values in one or more dimensions, e.g., along one or more of the x-y-z coordinates of the world frame (80).
  • the sufficiency check may require sufficient movement along all coordinate axes. If, between consecutive UWB timestamps, the robot (200) undergoes a displacement smaller than the threshold displacement, the processor (210) may be configured to discard one of the UWB-odometric data.
  • the later-acquired UWB-odometric data may be excluded or discarded from the dataset (D). If, between consecutive UWB timestamps, the robot (200) undergoes a displacement smaller than the threshold displacement, the processor (210) may be configured to acquire a next UWB-odometric data instead of proceeding to optimization (630).
  • the sufficiency check may be configured to determine whether a velocity (205) of the robot (200) at the respective UWB timestamp is greater than a threshold velocity.
  • the processor (210) may be configured such that, if the velocity (205) is greater than the threshold velocity, the UWB-odometric data is added to or retained in the dataset (D).
  • the processor (210) may be configured such that, if the velocity (205) is not greater than the threshold velocity, the UWB-odometric data is excluded or discarded from the dataset (D).
  • the threshold velocity may be set at zero or a near-zero positive value, such that if the UWB-odometric data indicates that the velocity of the robot is zero or near zero, then the sufficient conditions are not met, and the processor (210) may be configured to acquire a next UWB-odometric data instead of proceeding to optimization (630).
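A minimal sketch of the sufficiency check (620), assuming simple scalar thresholds on displacement since the last kept sample and on instantaneous speed; the threshold values and data layout are illustrative assumptions.

```python
import math

def sufficient(prev_pos, pos, speed, d_thresh=0.1, v_thresh=0.05):
    """Sufficiency check: keep a new UWB-odometric sample only if the
    robot has moved far enough since the last kept sample AND is
    currently moving fast enough. Thresholds are illustrative."""
    return math.dist(prev_pos, pos) > d_thresh and speed > v_thresh

dataset = []
prev = (0.0, 0.0, 0.0)
stream = [  # (position, speed, range) triples from paired sensor data
    ((0.00, 0.0, 0.0), 0.00, 5.1),  # stationary -> discarded
    ((0.30, 0.1, 0.0), 0.40, 5.0),  # moved and moving -> kept
    ((0.32, 0.1, 0.0), 0.01, 5.0),  # barely moved -> discarded
]
for pos, speed, rng in stream:
    if sufficient(prev, pos, speed):
        dataset.append((pos, rng))
        prev = pos
```

Only the second sample survives the check, so the dataset stays small while still reflecting genuine motion, which is the stated purpose of the check.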
  • the processor (210) may be configured to proceed with the next step of performing optimization-based UWB anchor localization (630) using the dataset (D).
  • the processor (210) includes a calibration module configured to estimate a UWB anchor location by defining an energy function incorporating data input on position, orientation, velocity, and range. The data input on position, orientation, and velocity are acquired from the VIO sensor, which may be an integrated VIO, or a camera coupled with an IMU sensor. The data input on range comes from the UWB sensor.
  • the calibration module of the processor (210) is configured to minimize the energy function.
  • the optimization is configured such that, for each UWB data (350), a location error value of the UWB anchor (500) is determined based on respective UWB-odometric data (250), in which the UWB-odometric data (250) includes the respective UWB data (350) and the corresponding odometric data (450/460).
  • the energy function may be defined in terms of a UWB anchor location error value or a UWB range measurement residual for each UWB data, as shown in equation (1) below:
  • e_k = ‖ p_k + v_k Δt_k − p_a ‖ − d_k (1)
  • where e_k is the UWB anchor location error value or UWB range measurement residual for each UWB data, d_k is the range measurement in the UWB data, p_k is the VIO position output, v_k is the VIO velocity output, Δt_k is the time-offset between the UWB timestamp and the corresponding VIO timestamp, and p_a is the UWB anchor position.
  • an optimization cost function may be employed to optimize the residual; the cost function is shown in equation (2) below:
  • J = Σ_{k ∈ D} ρ(e_k) (2)
  • where D is the dataset, ρ is the Huber loss function to diminish the effect of data outliers, and e_k is the anchor location error value or UWB range measurement residual.
  • the optimization is performed via optimization methods such as Gauss-Newton or Levenberg-Marquardt.
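The optimization step can be sketched as a Gauss-Newton solver with Huber reweighting (iteratively reweighted least squares), assuming a residual of the form e_k = ‖p_k + v_k·Δt_k − p_a‖ − d_k for robot position p_k, velocity v_k, time-offset Δt_k, anchor position p_a, and range d_k. This residual form and all names are a plausible reading of the equations above, not a verbatim implementation of the patent.

```python
import numpy as np

def calibrate_anchor(positions, velocities, offsets, ranges,
                     x0=None, iters=50, delta=1.0):
    """Gauss-Newton with Huber IRLS weights for UWB anchor calibration.
    Assumed residual: e_k = ||p_k + v_k*dt_k - p_a|| - d_k."""
    P = np.asarray(positions, dtype=float)
    V = np.asarray(velocities, dtype=float)
    dt = np.asarray(offsets, dtype=float)[:, None]
    d = np.asarray(ranges, dtype=float)
    Q = P + V * dt                      # robot position at each UWB stamp
    x = Q.mean(axis=0) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - Q
        dist = np.linalg.norm(diff, axis=1)
        e = dist - d                    # per-sample range residuals
        ae = np.abs(e)
        # Huber weights: quadratic near zero, linear influence in tails
        w = np.where(ae <= delta, 1.0, delta / np.maximum(ae, 1e-12))
        J = diff / dist[:, None]        # Jacobian of ||x - q_k|| w.r.t. x
        A = (J * w[:, None]).T @ J
        b = -(J * w[:, None]).T @ e
        dx = np.linalg.solve(A, b)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# synthetic check: anchor at (2, 1, 3), noiseless ranges, zero offsets
gen = np.random.default_rng(0)
P = gen.uniform(-5.0, 5.0, size=(40, 3))
anchor = np.array([2.0, 1.0, 3.0])
d = np.linalg.norm(P - anchor, axis=1)
est = calibrate_anchor(P, np.zeros_like(P), np.zeros(len(P)), d)
```

A Levenberg-Marquardt variant would add a damping term to the normal-equations matrix; the Huber weights here play the outlier-suppression role that ρ plays in the cost function.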
  • the processor (210) is configured to terminate the optimization without human intervention (640).
  • the optimization process may be terminated in response to one or more termination criteria being fulfilled.
  • the termination criteria may be any one or a combination of the following: (i) a maximum value of a covariance matrix is smaller than a threshold value; (ii) a number of iterations of applying the optimization rules is equal to a predetermined maximum number of iterations; and (iii) a maximum optimization time is reached.
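The three termination criteria above can be expressed as a single predicate; the threshold values here are illustrative assumptions.

```python
def should_terminate(cov_max, n_iters, elapsed_s,
                     cov_thresh=0.05, max_iters=100, max_time_s=2.0):
    """Terminate when any criterion is met: (i) the largest covariance
    entry is below a threshold, (ii) the iteration budget is spent, or
    (iii) the time budget is spent. Threshold values are illustrative."""
    return (cov_max < cov_thresh
            or n_iters >= max_iters
            or elapsed_s >= max_time_s)

converged = should_terminate(0.01, 3, 0.1)    # criterion (i) met
budget_hit = should_terminate(0.5, 100, 0.1)  # criterion (ii) met
running = should_terminate(0.5, 3, 0.1)       # no criterion met
```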
  • the robot (200) may be configured with a suitable user interface to enable user reconfiguration of the termination criteria.
  • the robot may move substantially or predominantly in a two-dimensional (2D) plane, such as in the case of a wheeled or legged robot moving on the ground.
  • the termination criteria may not be satisfied due to a lack of informative data, for example, in the height axis.
  • height-related information may be predetermined or provided from an external source.
  • the height-related information may be acquired from a database of height-related data.
  • a user may provide the height-related information.
  • the height-related information may be in the form of one or more of the following: information relating to a height of the UWB anchor relative to the robot, and/or a height of the UWB anchor and a height of the robot relative to the ground. This may reduce the complexity of the termination criteria from a three-dimensional (3D) spatial value to a two-dimensional (2D) spatial value, thus allowing the termination criteria to be met.
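As a minimal sketch of this dimensionality reduction, assuming the anchor and robot heights are known: each 3D slant range can be projected onto the horizontal plane, leaving a 2D estimation problem. The function name below is hypothetical:

```python
import math

def horizontal_range(slant_range, anchor_height, robot_height):
    """Project a 3D UWB slant range onto the horizontal plane using
    known heights, reducing the anchor estimation to 2D (x, y)."""
    dz = anchor_height - robot_height
    # Guard against measurement noise making the radicand slightly negative
    return math.sqrt(max(slant_range ** 2 - dz ** 2, 0.0))
```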
  • the processor (210) is configured such that the optimization process determines a lowest location error value from a plurality of location error values of the UWB anchor, to arrive at an estimated location of the UWB anchor (650).
  • the outputs of the optimization process include the estimated location of the UWB anchor as well as the corresponding variance along each axis (σ_x, σ_y, σ_z).
  • the variance output provides insights into how accurate the estimated location is along each axis.
  • the robot (200) may be configured to report all results in real-time such that online calibration is enabled.
  • the variances are indicators on the reliability and trustworthiness of the optimized output, i.e., the estimated location of the UWB anchor.
  • the processor (210) may be configured to check an output or a result of the optimization process against a previously obtained result to determine if a solution for the estimated location of the UWB anchor has been improved. If it is determined that the solution has not been improved, the solution and related data are discarded or removed from the dataset (𝒟). This keeps the dataset (𝒟) populated with useful data and reduces the memory requirement on the robot. Further, a smaller dataset (𝒟) also improves the computation speed.
  • an accuracy of the estimated location of the UWB anchor is characterized by at least one of: (i) a distance between the UWB anchor (500) and the robot (200); (ii) a distance of the robot's movement prior to acquiring the UWB data (350) and the odometric data (450/460); and (iii) the robot's spatial movement distribution in three-dimensional space.
  • the localization system (100) may include more than one visual target (550).
  • the VIO sensor (400) of the robot (200) may be configured to use the more than one visual target (550) and provide more reliable odometric data by the VIO pipeline.
  • the localization system (100) may include a first UWB anchor (500a) and a second UWB anchor (500b), such that there is more than one unknown UWB anchor.
  • each UWB data (350a/350b) may be received from the respective UWB anchor (500a/500b), and may be processed with the odometric data (460) received from the VIO sensor (400) to calibrate each of the UWB anchors.
  • the robot (200) may be configured to perform online calibration of each of the unknown UWB anchors encountered in turn, such that once the first UWB anchor (500a) is calibrated, it is treated as a known UWB anchor for localization and mapping as the robot (200) continues on its trajectory (90). The robot (200) can next perform online calibration of the second UWB anchor (500b).
  • One benefit of such a localization system (100) is that the robot is able to venture into unknown environments which may be unsafe for humans (e.g., subterranean exploration, hazardous environments, etc.) without the need for human intervention. In instances where multiple robots operate, the robots may be configured with information-sharing capability, enabling communication and control between the robots, and/or between the robots and a user. Information relating to the locations of each UWB anchor therefore allows the user to gain a better understanding of the environment and enables more effective execution of the robot’s mission.
  • the robot 200 may be further configured to determine a position of the robot (200) in the world frame (80).
  • the robot (200) is configured to determine a keyframe (270) based on further UWB data (350) and further odometric data (450).
  • Each keyframe (270) may include a position of the robot (200) relative to the world frame (80); a position change of the robot (200) relative to the world frame (80); a velocity of the robot (200) relative to the world frame (80); an acceleration of the robot (200) relative to the world frame (80); an orientation of the robot (200) relative to the world frame (80); an orientation change of the robot (200) relative to the world frame (80); an angular velocity of the robot (200) relative to the world frame (80); and/or an angular acceleration of the robot (200) relative to the world frame (80).
  • there are more than one keyframe (270k) and the target timestamp (260) may be between two consecutive keyframes (270k).
  • the target timestamp (260) may be a respective timestamp of any UWB data received between two consecutive keyframes (270k).
  • the robot (200) is configured to determine a predicted position of the robot (200) at a target timestamp (260).
  • a corresponding keyframe (270k) is determined based on the target timestamp (260) to obtain a previous position of the robot (200) at the keyframe (270k).
  • the predicted position of the robot (200) at the target timestamp (260) may be based on the previous position of the robot (200) at the keyframe (270k) and a predicted position change of the robot (200).
  • the predicted position of the robot (200) at the target timestamp (260) is determined via equation (3):

ᵂp̂ = ᵂp_k + Δp_U + ᵂv_k·Δt ... (3)

  • ᵂp̂ is the predicted position of the robot (200) relative to world frame (80) at the target timestamp (260)
  • ᵂp_k is the previous position of the robot (200) relative to world frame (80) at keyframe (270k)
  • Δp_U is a predicted position change of the robot based on UWB data (350) and odometric data (450)
  • ᵂv_k is a velocity of the robot (200) relative to world frame (80) at keyframe (270k)
  • Δt is a time gap between the target timestamp and the keyframe (270k).
  • determining a predicted position change (Δp_U) of the robot at the target timestamp includes: for each further UWB data (350), selecting a respective further odometric data (450) nearest in time to the further UWB data (350) and determining a possible position change of the robot (200) based on the further UWB data (350) and the respective odometric data (450); and thereafter determining the predicted position change of the robot (200) based on one of a plurality of possible position changes.
  • the one of the plurality of possible position changes has a position change error value lowest among the plurality of possible position changes and may be computed via optimization of a cost function for UWB data (350) and odometric data (450) over two consecutive keyframes (270k).
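The keyframe-based position prediction of equation (3) can be sketched as follows; this is a minimal illustration and the names are hypothetical:

```python
def predict_position(p_k, v_k, dp_u, dt):
    """Equation (3): predicted position at the target timestamp equals the
    keyframe position, plus the UWB/odometry-derived position change, plus
    the keyframe velocity integrated over the time gap dt."""
    return [p + dp + v * dt for p, dp, v in zip(p_k, dp_u, v_k)]
```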
  • Fig. 13 illustrates a localization method (800) for a robot.
  • the localization method (800) includes using a UWB sensor to acquire a plurality of UWB data from a UWB anchor over a time period (810). Each respective UWB data from the plurality of UWB data is associated with a respective UWB timestamp in a series of UWB timestamps.
  • the method (800) includes using a VIO sensor to acquire a plurality of odometric data of the robot (820). Each of the plurality of odometric data is associated with a respective VIO timestamp from one of at least one time series of VIO timestamps.
  • the method (800) includes, for each of a plurality of selected UWB data, determining one of the plurality of odometric data to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp (830). Determining one of the plurality of odometric data may include selecting or computing the one of the plurality of odometric data.
  • a virtual odometric data with a virtual VIO timestamp may be computed according to equation (3).
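The selection of the odometric data whose VIO timestamp is nearest in time to a given UWB timestamp can be sketched with a binary search. This is an illustrative helper, not the disclosed implementation, and it assumes the VIO timestamps are sorted in ascending order:

```python
import bisect

def nearest_odometry(vio_stamps, uwb_stamp):
    """Return the index of the VIO timestamp nearest in time to uwb_stamp.
    vio_stamps must be a non-empty, ascending-sorted sequence."""
    i = bisect.bisect_left(vio_stamps, uwb_stamp)
    if i == 0:
        return 0                     # UWB stamp precedes all VIO stamps
    if i == len(vio_stamps):
        return len(vio_stamps) - 1   # UWB stamp follows all VIO stamps
    before, after = vio_stamps[i - 1], vio_stamps[i]
    # Pick whichever neighbour is closer in time
    return i if after - uwb_stamp < uwb_stamp - before else i - 1
```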
  • the method (800) includes, for each of the plurality of selected UWB data, determining a location error value of the UWB anchor (840) based on the respective selected UWB data and the corresponding odometric data.
  • the method (800) includes determining an estimated location of the UWB anchor based on a plurality of location error values of the UWB anchor (850).
  • the method (800) may include determining the estimated location of the UWB anchor by determining a lowest location error value from the plurality of location error values of the UWB anchor.
  • the method (800) may further include determining one or both of a velocity of the robot at the respective UWB timestamp and a displacement of the robot at the respective UWB timestamp, wherein the determining of the location error value of the UWB anchor is performed if the velocity of the robot is greater than a threshold velocity and if the displacement of the robot is greater than a threshold displacement.
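The velocity and displacement gating described above could be expressed as a simple predicate; the function name and threshold defaults below are hypothetical:

```python
def admit_for_calibration(speed, displacement,
                          speed_threshold=0.1, displacement_threshold=0.05):
    """Admit a UWB/odometry pair into the calibration only when the robot
    is moving fast enough and has displaced far enough for the range
    measurement to add information."""
    return speed > speed_threshold and displacement > displacement_threshold
```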
  • the method (800) also includes selectively forming a dataset, the dataset including the plurality of selected UWB data and the corresponding odometric data corresponding to each of the plurality of selected UWB data, the corresponding odometric data in the dataset being defined with respect to a coordinate frame.
  • the method (800) may further include iteratively determining the lowest location error value of the UWB anchor based on the dataset, wherein the lowest location error value is defined with respect to the coordinate frame. Further, iteratively determining the lowest location error value includes determining the lowest location error value at time instances corresponding to one of the UWB timestamp in the series of UWB timestamps.
  • each iteration of determining the lowest location error value is performed at a time corresponding to one of the UWB timestamp in the series of UWB timestamps.
  • the dataset includes selected respective UWB data selected from the plurality of UWB data and the corresponding odometric data of the selected respective UWB data.
  • the iterative loop of determining the location error value is terminated in response to any one of the following conditions being satisfied: (i) a maximum value of a covariance matrix is smaller than a threshold value; (ii) a number of iterations of applying optimization rules is equal to a predetermined maximum number of iterations; and (iii) a maximum optimization time is reached.
  • Maximum optimization time may be a predetermined time acquired from an external source, e.g., input by the user.
  • the method of UWB anchor deployment includes using a UWB sensor of an unmanned aerial vehicle to acquire a plurality of UWB data from a UWB anchor over a time period. Each respective UWB data from the plurality of UWB data is associated with a respective UWB timestamp in a series of UWB timestamps.
  • the UAV is secured with a VIO sensor (such as a camera and IMU combination) to acquire a plurality of odometric data of the UAV.
  • Each of the plurality of odometric data is associated with a respective VIO timestamp from one of at least one time series of VIO timestamps.
  • Each of the at least one time series of VIO timestamps is independent of the series of UWB timestamps.
  • one of the plurality of odometric data is determined to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp.
  • a location error value of the UWB anchor is determined based on the respective selected UWB data and the corresponding odometric data, and an estimated location of the UWB anchor is determined based on a plurality of location error values of the UWB anchor.
  • the estimated location of the UWB anchor is iteratively determined.
  • the above-described iterative determination steps are repeated until stopped, in response to any one of the following occurring: (i) any of a plurality of estimated variances of the estimated location in any direction in 3D is smaller than a predetermined variance threshold; (ii) a number of iterations to determine the estimated location is larger than or equal to a predetermined iteration limit; and (iii) a time taken to perform the number of iterations is greater than a predetermined ceiling time.
  • the accuracy of the estimated location of the UWB anchor is dependent on one or more of the following: (i) a distance between the UWB anchor and the UAV; (ii) a radius defined by a trajectory of the UAV during steps of acquiring UWB data and/or odometric data; and (iii) a spatial movement distribution of the UAV in three-dimensional space.
  • Fig. 14 shows the simulation results for the conventional position-focused method as a plot of localization error against the ratio of the distance to the anchor to the measurement radius.
  • Fig. 15 shows the simulation results for the range-focused method as a plot of localization error against the ratio of the distance to the anchor to the measurement radius.
  • the range-focused method according to an embodiment of the present disclosure consistently shows a smaller localization error compared to the conventional position-focused method. It can also be observed that localization error for the position-focused method varies with different time offsets between the UWB data and the odometric data. In contrast, the localization error for the proposed range-focused method is invariant relative to different time offsets.
  • the range-focused method proposed herein demonstrates improved reliability and accuracy of the UWB residual for the same amount of UWB data available to the robot.
  • Another significant benefit evident from the simulation results is that, unlike the position-focused method, the range-focused method is not affected by the time offsets between the various sensors.
  • Position-focused methods typically require large robot displacements (about 5 m to 10 m) in all three dimensions (x-y-z), making localization in constricted spaces (such as in tunnels) practically impossible.
  • the range-focused method can be used even if the robot displacement is smaller (about 2 m to 3 m).
  • the UWB anchor calibration method proposed herein is configured to control the amount of data used and the amount of data retained, as well as to automatically start and stop the UWB anchor calibration process.
  • the localization system proposed herein can thus be deployed in environments that are dangerous, hazardous, or inaccessible to humans.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Manipulator (AREA)

Abstract

Disclosed is a UWB-based localization system and method. A robot is configured to perform a method of: using a UWB sensor, acquiring a plurality of UWB data from a UWB anchor over a time period; using a VIO sensor, acquiring a plurality of odometric data of the robot; for each of a plurality of selected UWB data, determining one of the plurality of odometric data to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp; for each of the plurality of selected UWB data, determining a location error value of the UWB anchor based on the respective selected UWB data and the corresponding odometric data; and determining an estimated location of the UWB anchor based on a plurality of location error values of the UWB anchor.

Description

UWB ANCHOR DEPLOYMENT
The present application claims priority to the Singapore patent application no. 10202010498P, the entire disclosure of which is incorporated herein by reference.
TECHNICAL FIELD
[0001] The present disclosure relates to the field of localization, and more particularly to the deployment of ultra-wideband (UWB) anchors.
BACKGROUND
[0002] There are many situations in which a global positioning system (GPS) is not available for use in localization and navigation, such as indoors, in tunnels, or underground. UWB-based localization is sometimes used indoors if it is possible to first set up multiple UWB anchors in known locations, so that at any moment an autonomous robot may self-localize by triangulating with reference to the known locations of at least three UWB anchors. It can be appreciated that UWB-based localization is traditionally unsuitable for use in unknown environments.
SUMMARY
[0003] In one aspect, the present disclosure provides a robot comprising a processor coupled with a memory, a UWB (ultra-wideband) sensor, and a VIO (visual inertial odometry) sensor. The memory is configurable to store instructions executable by the processor to perform a method of calibrating a UWB anchor, the method comprising: using the UWB sensor, acquiring a plurality of UWB data from the UWB anchor over a time period, each respective UWB data from the plurality of UWB data being associated with a respective UWB timestamp in a series of UWB timestamps; using the VIO sensor, acquiring a plurality of odometric data of the robot, the robot being configured to move independently of the UWB anchor, each of the plurality of odometric data being associated with a respective VIO timestamp from one of at least one time series of VIO timestamps, each of the at least one time series of VIO timestamps being independent of the series of UWB timestamps; for each of a plurality of selected UWB data, determining one of the plurality of odometric data to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp; for each of the plurality of selected UWB data, determining a location error value of the UWB anchor based on the respective selected UWB data and the corresponding odometric data; and determining an estimated location of the UWB anchor based on a plurality of location error values of the UWB anchor. Determining the estimated location of the UWB anchor may include determining a lowest location error value from the plurality of location error values of the UWB anchor.
[0004] Preferably, the series of UWB timestamps is asynchronous relative to each of the at least one time series of VIO timestamps. Preferably, the series of UWB timestamps is characterized by a number of UWB timestamps spaced apart at irregular time intervals within the time period. Preferably, the series of UWB timestamps comprises fewer UWB timestamps than a total number of VIO timestamps in the time period.
[0005] The method may further comprise determining one or both of a velocity of the robot at the respective UWB timestamp and a displacement of the robot at the respective UWB timestamp, wherein the determining of the location error value of the UWB anchor is performed if the velocity of the robot is greater than a threshold velocity and if the displacement of the robot is greater than a threshold displacement.
[0006] The method may further comprise selectively forming a dataset, the dataset including the plurality of selected UWB data and the corresponding odometric data corresponding to each of the plurality of selected UWB data, the corresponding odometric data in the dataset being defined with respect to a coordinate frame.
[0007] The method may further comprise iteratively determining the lowest location error value of the UWB anchor based on the dataset, wherein the lowest location error value is defined with respect to the coordinate frame. Preferably, the iteratively determining the lowest location error value comprises determining the lowest location error value at time instances corresponding to one of the UWB timestamp in the series of UWB timestamps. Preferably, the iteratively determining the lowest location error value is terminated in response to any one of the following conditions being satisfied: (i) a maximum value of a covariance matrix is smaller than a threshold value; (ii) a number of iterations of applying optimization rules is equal to a predetermined maximum number of iterations; (iii) a maximum optimization time is reached.
[0008] Preferably, an accuracy of the method is further characterized by at least one of: (i) a distance between the UWB anchor and the robot; (ii) a distance of the robot's movement prior to acquiring the UWB data and the odometric data; and (iii) the robot's spatial movement distribution in a three-dimensional space.
[0009] Preferably, the VIO sensor comprises a camera configured to acquire image data according to a first time series of VIO timestamps; and an inertial measurement unit coupled to the camera, the inertial measurement unit being configured to acquire velocity data according to a second time series of VIO timestamps, and wherein the series of UWB timestamps is asynchronous relative to at least one of the first time series and the second time series.
[0010] Preferably, the method may further comprise using the UWB sensor, acquiring a plurality of second UWB data from a second UWB anchor over a second time period, each respective second UWB data from the plurality of second UWB data being associated with a respective second UWB timestamp in a series of second UWB timestamps; using the VIO sensor, acquiring a plurality of second odometric data of the robot, each of the plurality of second odometric data being associated with a respective second VIO timestamp from one of at least one time series of second VIO timestamps, each of the at least one time series of second VIO timestamps being independent of the series of second UWB timestamps; and for each of a plurality of selected second UWB data, determining one of the plurality of second odometric data to be a corresponding second odometric data such that the respective second VIO timestamp of the corresponding second odometric data is one nearest in time to the respective second UWB timestamp; for each of the plurality of selected second UWB data, determining a second location error value of the second UWB anchor based on the respective selected second UWB data and the corresponding second odometric data; and determining a second estimated location of the second UWB anchor based on a plurality of second location error values of the second UWB anchor.
[0011] Preferably, determining the second estimated location of the second UWB anchor includes determining a lowest second location error value from the plurality of second location error values of the second UWB anchor.

[0012] Preferably, the robot is disposed in an indoor setting, the robot being configured to perform indoor localization.
[0013] According to another aspect, there is provided a system for UWB anchor deployment, the system comprises a UWB anchor, the UWB anchor being configured to be mobile such that the UWB anchor is disposable at an unknown location; and the robot configured to determine the estimated location of the UWB anchor according to the above.
[0014] According to another aspect, there is provided a method of UWB anchor deployment. The method comprises using a UWB sensor of an unmanned aerial vehicle (UAV), acquiring a plurality of UWB data from a UWB anchor over a time period, each respective UWB data from the plurality of UWB data being associated with a respective UWB timestamp in a series of UWB timestamps; using a VIO sensor coupled to the UAV, acquiring a plurality of odometric data of the UAV, each of the plurality of odometric data being associated with a respective VIO timestamp from one of at least one time series of VIO timestamps, each of the at least one time series of VIO timestamps being independent of the series of UWB timestamps; and for each of a plurality of selected UWB data, selecting one of the plurality of odometric data to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp; for each of the plurality of selected UWB data, determining a location error value of the UWB anchor based on the respective selected UWB data and the corresponding odometric data; and determining an estimated location of the UWB anchor based on a plurality of location error values of the UWB anchor.
[0015] The method may further comprise iteratively determining the estimated location of the UWB anchor; and stopping the iteratively determining in response to any one of the following: (i) any of a plurality of estimated variances of the estimated location in any one or more directions in a three-dimensional frame of reference is smaller than a predetermined variance threshold; (ii) a number of iterations to determine the estimated location is larger than or equal to a predetermined iteration limit; and (iii) a time taken to perform the number of iterations is greater than a predetermined ceiling time. Preferably, an accuracy of the estimated location is dependent on one or more of the following: (i) a distance between the UWB anchor and the UWB sensor; (ii) a radius defined by a trajectory of the UWB sensor during steps of acquiring UWB data and/or odometric data; and (iii) a spatial movement distribution of the UWB sensor in three-dimensional space. Preferably, the UWB sensor is coupled to an unmanned aerial vehicle (UAV).
BRIEF DESCRIPTION OF DRAWINGS
[0016] Figs. 1 and 2 are schematic diagrams of a system according to an embodiment of the present disclosure;
[0017] Fig. 3 is a timeline showing timing of data acquisition according to the system of Fig. 1;
[0018] Fig. 4 is a schematic diagram of a system according to another embodiment;
[0019] Fig. 5 is a timeline showing timing of data acquisition according to the system of Fig. 4;
[0020] Fig. 6 is a schematic flow chart illustrating a method of UWB anchor deployment;
[0021] Fig. 7 is a schematic diagram of a system according to another embodiment;
[0022] Fig. 8 is a timeline showing timing of data acquisition according to the system of Fig. 7;
[0023] Fig. 9 is a schematic diagram of a system according to another embodiment;
[0024] Fig. 10 is a timeline showing timing of data acquisition according to the system of Fig. 9 during operation;
[0025] Fig. 11 is a schematic diagram of a system according to another embodiment;
[0026] Fig. 12 is a timeline showing timing of data acquisition according to the system of Fig. 11; and
[0027] Fig. 13 is a schematic flow chart of a robot localization method based on the UWB anchor calibration method of Fig. 6;
[0028] Fig. 14 is a plot illustrating localization error against a ratio between a distance to a UWB anchor to a measurement radius based on simulation result for a conventional position-focused method; and
[0029] Fig. 15 is a plot illustrating localization error against a ratio between a distance to UWB anchor to a measurement radius based on simulation result for a range-focused method according to embodiments of the present disclosure.
DETAILED DESCRIPTION
[0030] Reference throughout this specification to “one embodiment”, “another embodiment” or “an embodiment” (or the like) means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or the like in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments. One skilled in the relevant art will recognize that the various embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, some or all known structures, materials, or operations may not be shown or described in detail to avoid obfuscation.
[0031] The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. As used herein, the singular ‘a’ and ‘an’ may be construed as including the plural “one or more” unless apparent from the context to be otherwise.
[0032] Terms such as “first” and “second” are used in the description and claims only for the sake of brevity and clarity, and do not necessarily imply a priority or order, unless required by the context. The terms "about" and "approximately" as applied to a stated numeric value encompasses the exact value and a reasonable variance as will be understood by one of ordinary skill in the art, and the terms “generally” and “substantially” are to be understood in a similar manner, unless otherwise specified.

[0033] As will be evident from the various non-limiting examples below, the present disclosure is applicable in a wide variety of situations, including but not limited to UWB anchor deployment for an unmanned aerial vehicle (UAV) in three-dimensional space. Thus, in this disclosure, for the sake of brevity, the terms “autonomous”, “unmanned”, “self-localizing”, “self-navigating”, and the like, are used interchangeably. As used herein, the term “robot” may refer to a “drone”, a “vehicle”, an “aerial vehicle”, etc., without being limited by the mode of travel or the nature of the task of the robot.
[0034] Fig. 1 schematically illustrates a robot (200) according to one embodiment of the present disclosure. The robot (200) may be an unmanned aerial vehicle and/or the robot (200) may travel on the ground. The robot (200) may be configured as a vehicle controllable via telemetry by a human operator or by another machine. The robot (200) and one or more UWB anchors (500) may be collectively referred to as a localization system (100). The robot is configured to move in a trajectory independently of the one or more UWB anchors. The robot (200) includes a processor (210) coupled with a memory (220). The memory (220) is configurable to store instructions executable by the processor (210). For example, the processor (210) may be configured to perform a method of deploying a UWB anchor (500) according to various embodiments of the present disclosure. The processor (210) may be configured to determine a location of the UWB anchor (500) with respect to a world frame (80). The processor (210) may be configured to perform online calibration of the UWB anchor, that is, the UWB anchor calibration can be performed concurrently with the robot (200) being in (actual or normal) operation, including the robot self-navigating in a new or unknown environment. The processor may be configured to perform self-localization and mapping of the robot (200) relative to a world frame (80).
[0035] The world frame (80) may refer to a global and stationary coordinate frame defined by the environment in which the robot (200) operates. Alternatively, the world frame (80) may refer to a coordinate frame that is defined with respect to an observer. As another alternative, the world frame (80) may be defined with reference to the robot (200) such that the world frame (80) is mobile relative to the environment.
[0036] The robot (200) includes an odometry device (400) suitable for acquiring odometric data of the robot (200), such as an inertial measurement unit (IMU). In some examples, the odometry device is integrated or otherwise coupled with a camera, and may be referred to generally as a visual inertial odometry (VIO) sensor or device. For the sake of brevity, as used herein, the term “VIO sensor” will refer to any device or combination of devices suitable for acquiring odometric data (such as velocity and pose) of the robot (200) without reliance on GPS signals. As used herein, the term “odometric data” includes any combination of data from the camera and/or data from the IMU, including one or any combination of image data, velocity data, displacement data, and data derived therefrom.
[0037] The robot (200) further includes an ultra-wideband (UWB) sensor (300) configurable to receive UWB signals transmitted from one or more UWB anchors. Embodiments of the present disclosure are not limited by a particular operating frequency band. For example, the UWB anchor and the UWB sensor may be configured with an operable range between 3.1 and 10.6 GHz (gigahertz), or between 3.1 and 9 GHz, etc.
[0038] A method of deploying a UWB anchor (500) using only the robot (200) and without prior human intervention will be described with the aid of Fig. 2. Although the robot (200) may be configured to be used in conjunction with GPS signals in some examples, in this example it is assumed that GPS signals are heavily attenuated or not available in this situation. Further, in this example, the UWB anchor (500) is initially in an unknown location as the environment is new to the robot (200). The robot (200) is configured to first calibrate the UWB anchor (500), that is, to first establish the location of the UWB anchor with respect to the world frame (80).
[0039] The robot (200) is configured to move along a trajectory (90) relative to the world frame (80) and relative to the stationary UWB anchor (500) over a time period. The robot (200) may move in three-dimensional (3D) space and is not limited to moving in a specific two-dimensional (2D) plane. In other examples, the robot (200) may move only in two-dimensional (2D) space. In such scenarios, information relating to a height axis (alternatively, z-axis or a vertical axis) may be predetermined or input from a source of known data. For example, a user may provide data relating to a height of the UWB anchor relative to the robot, and/or a height of the UWB anchor and a height of the robot relative to the ground. Based on a plurality of UWB signals (550) from the UWB anchor (500), the UWB sensor (300) of the robot (200) is configured to provide the processor (210) with a series of UWB data (350). Each UWB data (350) is associated with (or linked to) a respective UWB timestamp from a series of UWB timestamps. Concurrently, the VIO sensor (400) of the robot (200) is configured to acquire a plurality of odometric data (450), and to transmit the odometric data (450) to the processor (210). Each of the plurality of the odometric data (450) is associated with (or linked to) a respective VIO timestamp from a time series of VIO timestamps.
[0040] Referring also to Fig. 3, which schematically illustrates the timing of the various timestamps, it can be appreciated that the UWB clock and the clock(s) of the VIO sensor are asynchronous, that is, they are not synchronized with one another. If there are multiple time series of VIO timestamps, the series of UWB timestamps is asynchronous relative to each of them. In other words, the series of VIO timestamps is independent of the series of UWB timestamps. This may alternatively be described in terms of a time-offset between data from the different sensors of the same robot. In particular, there is a time-offset between the UWB data and the odometric data.
[0041] The UWB sensor (300) may be described as having a different sampling rate or a different sampling start time relative to the VIO sensor (400). Fig. 3 illustrates an example in which the odometric data (450) is characterized by a higher sampling rate relative to the UWB data (350). In some other examples, the UWB data (350) may have a higher sampling rate relative to the odometric data (450). In another example, there is intermittent transmission or attenuation of the UWB signal (550), such that the series of UWB timestamps may be characterized by a number of UWB timestamps spaced apart at irregular time intervals within a time period. The series of UWB timestamps may include fewer UWB timestamps than a total number of VIO timestamps in the time period.
[0042] Additionally, there may be instances when the UWB signal (550) transmitted from the UWB anchor (500) is not received by the UWB sensor (300) of the robot (200). This may be due to various reasons, including but not limited to, the shape of the terrain or geography being traversed by the robot (200), the presence of building structures, or the pose of the robot (200), etc., such that there is no line-of-sight between the UWB anchor (500) and the UWB sensor (300). It can be appreciated that the time-offset is irregular at different times over the course of the robot’s trajectory, and that synchronization (and hence calibration of the UWB anchor) is not a trivial problem.
[0043] According to embodiments of the present disclosure, the robot (200) is configured to employ a “range-focused” method to calibrate the UWB anchor (500). One embodiment of the range-focused method includes selecting timestamps from the series of UWB timestamps, and using data corresponding to the selected UWB timestamps. Alternatively described, a range data (UWB-related data) is first selected, and based on the UWB timestamp of the range data, odometric data (position-related data) is selected and associated with the range data. Each of a plurality of UWB data (350) is eventually associated with (related or linked to) a corresponding odometric data (450). For each UWB data (350), one of the plurality of odometric data (450) is determined to be a corresponding odometric data (450a) to form a UWB-odometric data (250). Determining the one of the plurality of odometric data may include selecting or computing the one of the plurality of odometric data. In the example of Fig. 3, the range-focused method involves selecting from the time series of odometric data (450) the odometric data (450a) having a VIO timestamp that is nearest in time to the UWB timestamp of the respective UWB data (350). In some instances, the same odometric data (odometric data acquired at the same VIO timestamp) may be associated with different UWB data (UWB data acquired at different UWB timestamps). In this and other aspects, the range-focused method presents a contrast to a “position-focused” method towards localization and positioning.
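As an illustrative sketch only (not part of the claimed subject matter), the nearest-in-time association underlying the range-focused method can be expressed in a few lines. The function name and data layout are hypothetical; the VIO timestamps are assumed to be sorted in increasing order:

```python
import bisect

def associate_nearest(uwb_stamps, vio_stamps):
    """For each UWB timestamp, return the index of the VIO timestamp
    nearest in time (the range-focused association).
    vio_stamps must be sorted in increasing order."""
    pairs = []
    for t_uwb in uwb_stamps:
        i = bisect.bisect_left(vio_stamps, t_uwb)
        # The nearest neighbour is either vio_stamps[i-1] or vio_stamps[i];
        # keep whichever candidate index is valid and closest in time.
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(vio_stamps)),
            key=lambda j: abs(vio_stamps[j] - t_uwb),
        )
        pairs.append((t_uwb, best))
    return pairs
```

Note that two different UWB timestamps may map to the same VIO index, mirroring the observation above that the same odometric data may be associated with different UWB data.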
[0044] As used herein, a position-focused method refers to a method in which the location of a robot is determined using either the camera clock or the IMU clock as the basis for distance measurements. Meaningful results often require a discernible displacement of the robot to a new position so that triangulation can be applied. The position-focused method can require a robot displacement of about 5 to 10 m in all three dimensions (e.g., x-y-z dimensions in a Cartesian coordinate system or world frame). In addition, if there is interruption to the acquisition of UWB signals, or if there is only intermittent acquisition of UWB signals by the robot, the position-focused method could result in relatively large localization errors.
[0045] Figs. 4 and 5 illustrate an embodiment in which the VIO sensor (400) includes a camera (410) and an inertial measurement unit (IMU) (420). The camera (410) is configured to acquire image data (470) of a visual target (550). The IMU (420) is fixedly coupled to the camera (410), and both the IMU (420) and the camera (410) are coupled to the robot (200). The IMU (420) is configured to provide IMU data (480) corresponding to a velocity of the robot (200). In some examples, the IMU data (480) may include (but is not limited to) one or more types of data selected from the following: a displacement data, an acceleration data, an angular displacement data, a rotational velocity data, and/or an angular acceleration data. Each of the camera (410) and the IMU (420) is configured with its respective camera clock and IMU clock. In Fig. 5, this is represented schematically by the respective timing of image data (470) and IMU data (480) along a timeline. In other words, there is a plurality of image data (470) characterized by a first time series of VIO timestamps and a plurality of IMU data (480) characterized by a second time series of VIO timestamps. The UWB data (350) is characterized by a series of UWB timestamps asynchronous relative to at least one of the first time series of VIO timestamps and the second time series of VIO timestamps.
[0046] The robot (200) in this example is similarly configured to employ the range-focused method, in which the processor (210) is configured to relate a corresponding odometric data (460) to each of the UWB data (350) selected. The robot (200) may be configured to: for each UWB data (350), select a corresponding image data (470) with a respective first VIO timestamp from the first time series of VIO timestamps, in which the respective first VIO timestamp is the one nearest in time to the respective UWB timestamp. The robot (200) is further configured to select a corresponding IMU data (480a) with a respective second VIO timestamp from the second time series of VIO timestamps, the selection being such that the second VIO timestamp is the one nearest in time to the respective UWB timestamp. The robot (200) is configured to select the corresponding image data (470) and the corresponding IMU data (480a) based on each of the UWB data (350) selected, to collectively form a UWB-odometric data (250).
[0047] As shown in the timeline of Fig. 5, UWB-odometric data (2501) is obtained from inputs including UWB data (3501), IMU data (4801), and image data (4701). Following the range-focused method, selection of the IMU data and the image data is determined by the occurrence of the UWB timestamp. The IMU data is selected on the basis that the IMU data has an IMU timestamp (4801) nearest in time to the UWB timestamp (3501). The image data is selected on the basis that the image data has an image timestamp (4701) nearest in time to the UWB timestamp (3501). Upon acquisition of another UWB data having another UWB timestamp (3502), the IMU data selected for association with the UWB timestamp (3502) is one having an IMU timestamp (4802) closest in time to the UWB timestamp (3502). The image data having a timestamp nearest in time to the UWB timestamp (3502) is the image data at image timestamp (4701). The UWB-odometric data (2502) associated with the UWB timestamp (3502) is based on the UWB data at UWB timestamp (3502), IMU data at IMU timestamp (4802), and image data at image timestamp (4701). As illustrated by this example, there may be instances where the same odometric data is shared by or forms a basis of multiple UWB-odometric data at different UWB timestamps. In this and other aspects, the range-focused method presents a contrast to a “position-focused” method.
[0048] In some embodiments, the image data (470) and the IMU data (480) may first be processed to form an odometric data (460) with a time series of VIO timestamps. The VIO timestamps may be determined according to a data processing method, for example, a visual-inertial odometry (VIO) pipeline. In another example where there are multiple time series of VIO timestamps, the odometric data (460) may be assumed to have a time series of VIO timestamps according to a first time series of VIO timestamps. Alternatively, the odometric data (460) may have a time series of VIO timestamps according to a second time series of VIO timestamps. The odometric data (460) may have a time series of VIO timestamps selected from more than one time series of VIO timestamps.
[0049] Fig. 6 schematically illustrates a method of UWB anchor calibration performed by the processor (210) of a robot (200) based on instructions stored in the memory (220), using data input from the respective UWB sensor (300) and VIO sensor (400), according to embodiments of the present disclosure.
[0050] UWB data (350) and odometric data (450) are acquired as input and the resulting UWB-odometric data (250) is added to a dataset (𝒟) (610). A sufficiency check (620) is performed by the processor (210) based on the UWB-odometric data (250) acquired so as to decide whether or not to proceed with a next step in the UWB anchor calibration process. The sufficiency check (620) achieves savings in computation time and resources by determining whether there is an improvement in the dataset that may potentially contribute to a better localization solution prior to proceeding with the next step. Further, the sufficiency check (620) gives an overview of the trajectory performed by the robot (200), which can inform the user on the robot’s movement, e.g., whether the robot’s movement exceeds a threshold. The robot (200) may be configured with an interface suitable for a user to predetermine or select the threshold displacement and/or the threshold velocity. The threshold displacement and/or the threshold velocity may be determined according to user requirements, environmental limitations, and/or performance level of the localization system (100).
[0051] One embodiment of the sufficiency check is described with the aid of Figs. 7 and 8. The processor (210) is configured to determine whether a displacement (203) of the robot (200) between consecutive UWB timestamps is greater than a threshold displacement. The threshold displacement may include displacement values in one or more dimensions, e.g., along one or more of the x-y-z coordinates of the world frame (80). In examples where the robot is an unmanned aerial device, the sufficiency check may require sufficient movement in all axes of the coordinates. If, between consecutive UWB timestamps, the robot (200) undergoes a displacement smaller than the threshold displacement, the processor (210) may be configured to discard one of the UWB-odometric data. For example, the later acquired UWB-odometric data may be excluded or discarded from the dataset (𝒟). If, between consecutive UWB timestamps, the robot (200) undergoes a displacement smaller than the threshold displacement, the processor (210) may be configured to acquire a next UWB-odometric data instead of proceeding to optimization (630).
[0052] Additionally, or alternatively, the sufficiency check may be configured to determine whether a velocity (205) of the robot (200) at the respective UWB timestamp is greater than a threshold velocity. The processor (210) may be configured such that, if the velocity (205) is greater than the threshold velocity, the UWB-odometric data is added to or retained in the dataset (𝒟). The processor (210) may be configured such that, if the velocity (205) is not greater than the threshold velocity, the UWB-odometric data is excluded or discarded from the dataset (𝒟). In a non-limiting example, the threshold velocity may be set at zero or a near-zero positive value, such that if the UWB-odometric data indicates that the velocity of the robot is zero or near zero, then the sufficient conditions are not met, and the processor (210) may be configured to acquire a next UWB-odometric data instead of proceeding to optimization (630).
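The displacement and velocity checks above can be sketched as a single predicate. This is an illustrative reading only: the function name, the 3D vector layout, and the threshold values are placeholders, and treating the two conditions as a disjunction is one plausible embodiment of "any one of the sufficient conditions":

```python
def sufficient(prev_pos, curr_pos, curr_vel,
               disp_threshold=0.5, vel_threshold=0.05):
    """Return True if the new UWB-odometric datum should be kept:
    the robot moved far enough since the previous UWB timestamp,
    or is moving fast enough at the current one.
    Positions/velocity are 3-tuples; thresholds are placeholders."""
    disp = sum((a - b) ** 2 for a, b in zip(curr_pos, prev_pos)) ** 0.5
    speed = sum(v * v for v in curr_vel) ** 0.5
    return disp > disp_threshold or speed > vel_threshold
```

A datum failing the check would simply be discarded and the next UWB-odometric datum acquired, rather than proceeding to optimization (630).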
[0053] If the processor (210) determines that any one of the sufficient conditions is fulfilled, the processor (210) may be configured to proceed with the next step of performing optimization-based UWB anchor localization (630) using the dataset (𝒟). In one embodiment, the processor (210) includes a calibration module configured to estimate a UWB anchor location by defining an energy function incorporating data input on position, orientation, velocity, and range. The data input on position, orientation, and velocity are acquired from the VIO sensor, which may be an integrated VIO, or a camera coupled with an IMU sensor. The data input on range comes from the UWB sensor. The calibration module of the processor (210) is configured to minimize the energy function. The optimization is configured such that, for each UWB data (350), a location error value of the UWB anchor (500) is determined based on respective UWB-odometric data (250), in which the UWB-odometric data (250) includes the respective UWB data (350) and the corresponding odometric data (450/460).
[0054] The energy function may be defined in terms of a UWB anchor location error value or a UWB range measurement residual for each UWB data, as shown in equation (1) below:
e_i = d_i − ‖ ᵂp_k + ᵂv_k·Δt_i − ᵂp_a ‖ (1)
[0055] where e_i is a UWB anchor location error value or UWB range measurement residual for each UWB data, d_i is the UWB range measurement, ᵂp_k is the VIO position output, ᵂv_k is the VIO velocity output, Δt_i is the time gap between the VIO timestamp of the selected odometric data and the respective UWB timestamp, and ᵂp_a is the UWB anchor position. In essence, the system is trying to solve the problem of estimating the UWB anchor position in the coordinate world frame W. This range-focused method essentially considers the UWB residual for each range data using the nearest position data and velocity data. In contrast, the conventional position-focused method is based on a UWB residual defined for each position data using the nearest range data. By taking reference from the UWB time frame, the difficulties arising from the lack of clock synchronization between the different sensors (e.g., between the camera, the IMU, and the UWB sensor) become irrelevant. At the same time, any intermittent loss of UWB signal is found to have negligible negative impact on the calibration.
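A minimal sketch of the range residual of equation (1), under the assumption that the robot position at the UWB timestamp is extrapolated from the nearest VIO sample by its velocity over the clock gap; all names are illustrative:

```python
import math

def range_residual(d_i, p_k, v_k, dt, p_anchor):
    """UWB range residual: e_i = d_i - ||(p_k + v_k*dt) - p_anchor||.
    p_k, v_k: nearest-in-time VIO position/velocity (3-tuples);
    dt: time from the VIO timestamp to the UWB timestamp, bridging
    the clock offset between the sensors."""
    pred = [p + v * dt for p, v in zip(p_k, v_k)]
    dist = math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, p_anchor)))
    return d_i - dist
```

A residual of zero indicates the measured range agrees exactly with the extrapolated robot-to-anchor distance.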
[0056] An optimization cost function may be employed to optimize the residual. As an example, the cost function is shown in equation (2) below:
E = Σ_{j ∈ 𝒟} ρ(e_j) (2)
[0057] where 𝒟 is the dataset, ρ is the Huber loss function to diminish the effect of data outliers, and e_j is the anchor location error value or UWB range measurement residual. The optimization is performed via optimization methods such as Gauss-Newton or Levenberg-Marquardt.
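The minimization of equation (2) can be sketched as iteratively reweighted Gauss-Newton, where the Huber loss enters as a per-residual weight. This is a simplified illustration, not the claimed implementation: it assumes the robot positions have already been clock-corrected to the UWB timestamps, and the function names and the Huber threshold are hypothetical:

```python
import numpy as np

def huber_weight(e, delta=0.5):
    """IRLS weight for the Huber loss: 1 in the quadratic zone,
    delta/|e| outside it, which down-weights outlier ranges."""
    a = abs(e)
    return 1.0 if a <= delta else delta / a

def estimate_anchor(ranges, positions, x0, iters=20):
    """Gauss-Newton estimate of the anchor position from range
    measurements taken at known (clock-corrected) robot positions."""
    x = np.asarray(x0, dtype=float)
    P = np.asarray(positions, dtype=float)
    d = np.asarray(ranges, dtype=float)
    for _ in range(iters):
        diff = x - P                        # (N, 3)
        dist = np.linalg.norm(diff, axis=1)
        e = d - dist                        # residuals e_j
        J = -diff / dist[:, None]           # Jacobian d(e_j)/d(x)
        w = np.array([huber_weight(r) for r in e])
        H = (J * w[:, None]).T @ J          # weighted normal equations
        g = (J * w[:, None]).T @ e
        x -= np.linalg.solve(H, g)          # Gauss-Newton step
    return x
```

In practice a Levenberg-Marquardt damping term would be added to H for robustness when the geometry is poor.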
[0058] The processor (210) is configured to terminate the optimization process without human intervention (640). In some embodiments, the optimization process may be terminated in response to one or more termination criteria being fulfilled. The termination criteria may be any one or a combination of the following: (i) a maximum value of a covariance matrix is smaller than a threshold value; (ii) a number of iterations of applying the optimization rules is equal to a predetermined maximum number of iterations; and (iii) a maximum optimization time is reached. The robot (200) may be configured with a suitable user interface to enable user reconfiguration of the termination criteria.
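The three termination criteria reduce to a simple disjunction, sketched below. The threshold values are illustrative placeholders only (in practice they would be user-reconfigurable, as noted above):

```python
def should_terminate(cov_max, n_iters, elapsed,
                     cov_threshold=0.01, max_iters=100, max_time=5.0):
    """Stop the calibration loop when any criterion is met:
    (i) the largest covariance entry is small enough,
    (ii) the iteration budget is exhausted, or
    (iii) the time budget is exhausted. Thresholds are placeholders."""
    return (cov_max < cov_threshold
            or n_iters >= max_iters
            or elapsed >= max_time)
```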
[0059] In some examples, the robot may move substantially or predominantly in a two-dimensional (2D) plane, such as in the case of a wheeled or legged robot moving on the ground. The termination criteria may not be satisfied due to a lack of informative data, for example, in the height axis. In such scenarios, height-related information may be predetermined or provided from an external source. For example, the height-related information may be acquired from a database of height-related data. For example, a user may provide the height-related information. The height-related information may be in the form of one or more of the following: information relating to a height of the UWB anchor relative to the robot, and/or a height of the UWB anchor and a height of the robot relative to the ground. This may reduce the complexity of the termination criteria from a three-dimensional (3D) spatial value to a two-dimensional (2D) spatial value, thus allowing the termination criteria to be met.
[0060] The processor (210) is configured such that the optimization process determines a lowest location error value from a plurality of location error values of the UWB anchor, to arrive at an estimated location of the UWB anchor (650). The outputs of the optimization process include the estimated location of the UWB anchor as well as the corresponding variances of each axis (σx, σy, σz). In other examples where the robot moves substantially or predominantly in a two-dimensional (2D) plane, the outputs of the optimization process include the estimated location of the UWB anchor as well as the corresponding variances of each axis (σx, σy). The variance output provides insights into how accurate the estimated location is along each axis. The higher the variance, the less accurate the estimated UWB anchor location is along that axis. The robot (200) may be configured to report all results in real-time such that online calibration is enabled. The variances are indicators on the reliability and trustworthiness of the optimized output, i.e., the estimated location of the UWB anchor.
[0061] In some embodiments, if the one or more termination criteria is not satisfied (640), the processor (210) may be configured to check an output or a result of the optimization process against a previously obtained result to determine if a solution for the estimated location of the UWB anchor has been improved. If it is determined that the solution has not been improved, the solution and related data are discarded or removed from the dataset (𝒟). This maintains the data in the dataset (𝒟) as useful data and enables a smaller memory requirement on the robot. Further, a smaller dataset (𝒟) also improves the computation speed.
[0062] In some embodiments, an accuracy of the estimated location of the UWB anchor is characterized by at least one of: (i) a distance between the UWB anchor (500) and the robot (200); (ii) a distance of the robot's movement prior to acquiring the UWB data (350) and the odometric data (450/460); and (iii) the robot's spatial movement distribution in three-dimensional space.
[0063] Conventional approaches face a trade-off between (i) including as much historical data as possible in order to provide a more accurate result and (ii) acquiring the minimal amount of data in order to provide a faster result. The method of UWB anchor deployment proposed herein provides an alternative to both of these conventional approaches. It can be appreciated from the examples described that embodiments of the present disclosure provide a way of automatically determining which data to discard from the dataset and when to stop the iterative process, such that the UWB anchor deployment is accurate enough for practical real-life operations.
[0064] In another embodiment as shown in Figs. 9 and 10, the localization system (100) may include more than one visual target (550). The VIO sensor (400) of the robot (200) may be configured to use the more than one visual target (550) to provide more reliable odometric data via the VIO pipeline.
[0065] Referring still to Figs. 9 and 10 for yet another embodiment of the present disclosure, the localization system (100) may include a first UWB anchor (500a) and a second UWB anchor (500b), such that there is more than one unknown UWB anchor. As illustrated in Fig. 10, each UWB data (350a/350b) may be received from the respective UWB anchor (500a/500b), and may be processed with the odometric data (460) received from the VIO sensor (400) to calibrate each of the UWB anchors. The robot (200) may be configured to perform online calibration of each of the unknown UWB anchors encountered in turn, such that once the first UWB anchor (500a) is calibrated, it is treated as a known UWB anchor for localization and mapping as the robot (200) continues on its trajectory (90). The robot (200) can next perform online calibration of the second UWB anchor (500b). One benefit of such a localization system (100) is that the robot is able to venture into unknown environments which may be unsafe for humans (e.g., subterranean exploration, hazardous environments, etc.) without the need for human intervention. In instances where multiple robots operate, the robots are configured with information-sharing capability enabling communication and control between the robots, and/or between the robots and a user. Information relating to the locations of each UWB anchor therefore allows the user to gain a better understanding of the environment and enables more effective execution of the robot’s mission.
[0066] As illustrated in Figs. 11 and 12, in some embodiments, upon determining the estimated location of the UWB anchor, the robot (200) may be further configured to determine a position of the robot (200) in the world frame (80). Upon determining the estimated location of the UWB anchor (500), the robot (200) is configured to determine a keyframe (270) based on further UWB data (350) and further odometric data (450). Each keyframe (270) may include a position of the robot (200) relative to the world frame (80); a position change of the robot (200) relative to the world frame (80); a velocity of the robot (200) relative to the world frame (80); an acceleration of the robot (200) relative to the world frame (80); an orientation of the robot (200) relative to the world frame (80); an orientation change of the robot (200) relative to the world frame (80); an angular velocity of the robot (200) relative to the world frame (80); and/or an angular acceleration of the robot (200) relative to the world frame (80). In some embodiments, there is more than one keyframe (270k), and the target timestamp (260) may be between two consecutive keyframes (270k). In another embodiment, the target timestamp (260) may be a respective timestamp of any UWB data received between two consecutive keyframes (270k).
[0067] In some embodiments, the robot (200) is configured to determine a predicted position of the robot (200) at a target timestamp (260). A corresponding keyframe (270k) is determined based on the target timestamp (260) to obtain a previous position of the robot (200) at the keyframe (270k). The predicted position of the robot (200) at the target timestamp (260) may be based on the previous position of the robot (200) at the keyframe (270k) and a predicted position change of the robot (200). As an example, the predicted position of the robot (200) at the target timestamp (260) is determined via equation (3):
ᵂp̂ = ᵂp_k + Δp_u + ᵂv_k·Δt (3)
[0068] wherein ᵂp̂ is the predicted position of the robot (200) relative to the world frame (80) at the target timestamp (260), ᵂp_k is the previous position of the robot (200) relative to the world frame (80) at keyframe (270k), Δp_u is a predicted position change of the robot based on UWB data (350) and odometric data (450), ᵂv_k is a velocity of the robot (200) relative to the world frame (80) at keyframe (270k), and Δt is a time gap between the target timestamp and the keyframe (270k).
[0069] In some embodiments, determining a predicted position change (Δp_u) of the robot at the target timestamp includes: for each of a plurality of further UWB data (350), selecting a respective further odometric data (450) nearest in time to the further UWB data (350), and determining a possible position change of the robot (200) based on the further UWB data (350) and the respective odometric data (450). Thereafter, the predicted position change of the robot (200) is determined based on one of a plurality of possible position changes. As an example, the one of the plurality of possible position changes has a position change error value lowest among the plurality of possible position changes and may be computed via optimization of a cost function for UWB data (350) and odometric data (450) over two consecutive keyframes (270k).
[0070] Fig. 13 illustrates a localization method (800) for a robot. The localization method (800) includes using a UWB sensor to acquire a plurality of UWB data from a UWB anchor over a time period (810). Each respective UWB data from the plurality of UWB data is associated with a respective UWB timestamp in a series of UWB timestamps. The method (800) includes using a VIO sensor to acquire a plurality of odometric data of the robot (820). Each of the plurality of odometric data is associated with a respective VIO timestamp from one of at least one time series of VIO timestamps. Each of the at least one time series of VIO timestamps is independent of the series of UWB timestamps. The method (800) includes, for each of a plurality of selected UWB data, determining one of the plurality of odometric data to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp (830). Determining one of the plurality of odometric data may include selecting or computing the one of the plurality of odometric data. In the event a timestamp difference between the one of the plurality of odometric data and the UWB data is larger than a threshold, a virtual odometric data with a virtual VIO timestamp may be computed according to equation (3). The method (800) includes, for each of the plurality of selected UWB data, determining a location error value of the UWB anchor (840) based on the respective selected UWB data and the corresponding odometric data. The method (800) includes determining an estimated location of the UWB anchor based on a plurality of location error values of the UWB anchor (850).
[0071] Further, the method (800) may include determining the estimated location of the UWB anchor by determining a lowest location error value from the plurality of location error values of the UWB anchor.
[0072] The method (800) may further include determining one or both of a velocity of the robot at the respective UWB timestamp and a displacement of the robot at the respective UWB timestamp, wherein the determining of the location error value of the UWB anchor is performed if the velocity of the robot is greater than a threshold velocity and if the displacement of the robot is greater than a threshold displacement.
[0073] The method (800) also includes selectively forming a dataset, the dataset including the plurality of selected UWB data and the corresponding odometric data corresponding to each of the plurality of selected UWB data, the corresponding odometric data in the dataset being defined with respect to a coordinate frame. The method (800) may further include iteratively determining the lowest location error value of the UWB anchor based on the dataset, wherein the lowest location error value is defined with respect to the coordinate frame. Further, iteratively determining the lowest location error value includes determining the lowest location error value at time instances corresponding to one of the UWB timestamps in the series of UWB timestamps. In other words, each iteration of determining the lowest location error value is performed at a time corresponding to one of the UWB timestamps in the series of UWB timestamps. The dataset includes the respective UWB data selected from the plurality of UWB data and the corresponding odometric data of each selected UWB data.
[0074] The iterative loop of determining the location error value is terminated in response to any one of the following conditions being satisfied: (i) a maximum value of a covariance matrix is smaller than a threshold value; (ii) a number of iterations of applying optimization rules is equal to a predetermined maximum number of iterations; and (iii) a maximum optimization time is reached. Maximum optimization time may be a predetermined time acquired from an external source, e.g., input by the user.
[0075] Experiments were conducted to verify the achievable performance of the UWB anchor calibration method and localization system employing the range-focused method proposed herein. Comparison is made with a conventional position-focused method. Table 1 below sets out simulation results of anchor localization error with different time offsets between UWB data and odometric (VIO) data over 200 experiments/simulations (mean ± standard deviations in meters). The robot used here is an unmanned aerial vehicle configured to move in a three-dimensional space.
[0076] The method of UWB anchor deployment includes using a UWB sensor of an unmanned aerial vehicle (UAV) to acquire a plurality of UWB data from a UWB anchor over a time period. Each respective UWB data from the plurality of UWB data is associated with a respective UWB timestamp in a series of UWB timestamps. The UAV is equipped with a VIO sensor (such as a camera and IMU combination) to acquire a plurality of odometric data of the UAV. Each of the plurality of odometric data is associated with a respective VIO timestamp from one of at least one time series of VIO timestamps. Each of the at least one time series of VIO timestamps is independent of the series of UWB timestamps. For each UWB data, one of the plurality of odometric data is determined to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp. For each of the plurality of selected UWB data, a location error value of the UWB anchor is determined based on the respective selected UWB data and the corresponding odometric data, and an estimated location of the UWB anchor is determined based on a plurality of location error values of the UWB anchor.
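The nearest-in-time matching between each UWB timestamp and the VIO timestamp series described above can be sketched as a binary search over the VIO series (assumed sorted ascending); the function and variable names are illustrative only:

```python
from bisect import bisect_left

def pair_nearest(uwb_stamps, vio_stamps):
    """For each UWB timestamp, return the index of the VIO timestamp
    nearest in time. Both lists are assumed sorted ascending; the two
    clocks need not be synchronized."""
    pairs = []
    for t in uwb_stamps:
        i = bisect_left(vio_stamps, t)
        # candidates: the VIO stamp just before and just after t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(vio_stamps)]
        j = min(candidates, key=lambda k: abs(vio_stamps[k] - t))
        pairs.append(j)
    return pairs
```

Because only the nearest neighbor is taken, the pairing works even when the UWB and VIO timestamp series are asynchronous and arrive at different rates.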
[0077] The estimated location of the UWB anchor is iteratively determined. The above-described iterative determining steps are repeated until stopped, in response to any one of the following occurring: (i) any of a plurality of estimated variances of the estimated location in any direction in three-dimensional space is smaller than a predetermined variance threshold; (ii) a number of iterations to determine the estimated location is larger than or equal to a predetermined iteration limit; and (iii) a time taken to perform the number of iterations is greater than a predetermined ceiling time. In such a system, the accuracy of the estimated location of the UWB anchor is dependent on one or more of the following: (i) a distance between the UWB anchor and the UAV; (ii) a radius defined by a trajectory of the UAV during the steps of acquiring UWB data and/or odometric data; and (iii) a spatial movement distribution of the UAV in three-dimensional space.
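A minimal sketch of the stopping test of paragraph [0077], which differs from the covariance test of paragraph [0074] in checking the per-axis variances of the estimate; the numeric thresholds are hypothetical, as the disclosure does not specify values:

```python
import numpy as np

def stop_estimation(var_xyz, n_iter, elapsed,
                    var_thresh=0.01, iter_limit=200, ceiling_time=10.0):
    """Stop once any per-axis variance of the anchor estimate drops
    below a threshold, or the iteration or time budget is exhausted.
    var_xyz: estimated variances along x, y, z. Thresholds are
    illustrative assumptions only."""
    return bool(np.any(np.asarray(var_xyz) < var_thresh)  # (i) variance small
                or n_iter >= iter_limit                   # (ii) iteration limit
                or elapsed > ceiling_time)                # (iii) ceiling time
```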
[0078] As is evident from the data presented in Table 1, the range-focused method consistently performed better than the conventional position-focused method.
Table 1.
[Table 1 is reproduced as an image in the original publication: anchor localization error (mean ± standard deviation, in meters) over 200 simulations, comparing the position-focused and range-focused methods under different time offsets between the UWB data and the odometric (VIO) data.]
[0079] Fig. 14 shows the simulation results for the conventional position-focused method as a plot of localization error against the ratio of the distance to the anchor to the measurement radius. Fig. 15 shows the simulation results for the range-focused method as a plot of localization error against the same ratio. The range-focused method according to an embodiment of the present disclosure consistently shows a smaller localization error compared to the conventional position-focused method. It can also be observed that the localization error for the position-focused method varies with different time offsets between the UWB data and the odometric data. In contrast, the localization error for the proposed range-focused method is invariant relative to different time offsets.
[0080] In conventional thinking, improvements in reliability and accuracy come at the cost of reduced efficiency, e.g., by increasing the amount of data collected or by ensuring better clock synchronization between the various sensors. Surprisingly, the range-focused method proposed herein demonstrates improved reliability and accuracy of the UWB residual for the same amount of UWB data available to the robot. Another significant benefit evident from the simulation results is that, unlike the position-focused method, the range-focused method is not affected by the time offsets between the various sensors. Position-focused methods typically require large robot displacements (about 5 m to 10 m) in all three dimensions (x, y, and z), making localization in constricted spaces (such as in tunnels) practically impossible. On the other hand, the range-focused method can be used even if the robot displacement is smaller (about 2 m to 3 m).
[0081] Conventional anchor localization requires humans to set up anchors in known locations and, in some cases, UWB anchor positions are acquired manually, which is time-consuming and prone to human error. The results above demonstrate that the present disclosure provides a method of UWB anchor calibration that can be seamlessly extended to self-localization of the robot, without human intervention at any stage. For example, as described in the foregoing, the UWB anchor calibration method proposed herein is configured to control the amount of data used and the amount of data retained, as well as to automatically start and automatically stop the UWB anchor calibration process. The localization system proposed herein can thus be deployed in environments that are dangerous, hazardous, or inaccessible to humans.
[0082] All examples described herein, whether of apparatus, methods, materials, or products, are presented for the purpose of illustration and to aid understanding, and are not intended to be limiting or exhaustive. Various changes and modifications may be made by one of ordinary skill in the art without departing from the scope of the invention as claimed.

Claims

CLAIMS
1. A robot comprising a processor coupled with a memory, a UWB (ultra-wideband) sensor, and a VIO (visual inertial odometry) sensor, the memory being configurable to store instructions executable by the processor to perform a method of calibrating a UWB anchor comprising:
using the UWB sensor, acquiring a plurality of UWB data from the UWB anchor over a time period, each respective UWB data from the plurality of UWB data being associated with a respective UWB timestamp in a series of UWB timestamps;
using the VIO sensor, acquiring a plurality of odometric data of the robot, the robot being configured to move independently of the UWB anchor, each of the plurality of odometric data being associated with a respective VIO timestamp from one of at least one time series of VIO timestamps, each of the at least one time series of VIO timestamps being independent of the series of UWB timestamps;
for each of a plurality of selected UWB data, determining one of the plurality of odometric data to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp;
for each of the plurality of selected UWB data, determining a location error value of the UWB anchor based on the respective selected UWB data and the corresponding odometric data; and
determining an estimated location of the UWB anchor based on a plurality of location error values of the UWB anchor.
2. The robot according to claim 1, wherein determining the estimated location of the UWB anchor includes determining a lowest location error value from the plurality of location error values of the UWB anchor.
3. The robot according to any one of claims 1 to 2, wherein the series of UWB timestamps is asynchronous relative to each of the at least one time series of VIO timestamps.

4. The robot according to any one of claims 1 to 3, wherein the series of UWB timestamps is characterized by a number of UWB timestamps spaced apart at irregular time intervals within the time period.

5. The robot according to any one of claims 1 to 4, wherein the series of UWB timestamps comprises fewer UWB timestamps than a total number of VIO timestamps in the time period.

6. The robot according to any one of claims 1 to 5, wherein the method further comprises: determining one or both of a velocity of the robot at the respective UWB timestamp and a displacement of the robot at the respective UWB timestamp, wherein the determining of the location error value of the UWB anchor is performed if the velocity of the robot is greater than a threshold velocity and if the displacement of the robot is greater than a threshold displacement.

7. The robot according to claim 2, wherein the method comprises: selectively forming a dataset, the dataset including the plurality of selected UWB data and the corresponding odometric data corresponding to each of the plurality of selected UWB data, the corresponding odometric data in the dataset being defined with respect to a coordinate frame.

8. The robot according to claim 7, wherein the method comprises: iteratively determining the lowest location error value of the UWB anchor based on the dataset, wherein the lowest location error value is defined with respect to the coordinate frame.

9. The robot according to claim 8, wherein the iteratively determining the lowest location error value comprises determining the lowest location error value at time instances corresponding to one of the UWB timestamps in the series of UWB timestamps.

10. The robot according to claim 8 or claim 9, wherein the iteratively determining the lowest location error value is terminated in response to any one of the following conditions being satisfied: (i) a maximum value of a covariance matrix is smaller than a threshold value; (ii) a number of iterations of applying optimization rules is equal to a predetermined maximum number of iterations; and (iii) a maximum optimization time is reached.

11. The robot according to any one of claims 1 to 10, wherein an accuracy of the method is further characterized by at least one of: (i) a distance between the UWB anchor and the robot; (ii) a distance of the robot's movement prior to acquiring the UWB data and the odometric data; and (iii) the robot's spatial movement distribution in a three-dimensional space.

12. The robot according to any one of claims 1 to 11, wherein the VIO sensor comprises: a camera configured to acquire image data according to a first time series of VIO timestamps; and an inertial measurement unit coupled to the camera, the inertial measurement unit being configured to acquire velocity data according to a second time series of VIO timestamps, and wherein the series of UWB timestamps is asynchronous relative to at least one of the first time series and the second time series.

13. The robot according to any one of claims 1 to 12, wherein the method further comprises: using the UWB sensor, acquiring a plurality of second UWB data from a second UWB anchor over a second time period, each respective second UWB data from the plurality of second UWB data being associated with a respective second UWB timestamp in a series of second UWB timestamps; using the VIO sensor, acquiring a plurality of second odometric data of the robot, each of the plurality of second odometric data being associated with a respective second VIO timestamp from one of at least one time series of second VIO timestamps, each of the at least one time series of second VIO timestamps being independent of the series of second UWB timestamps; for each of a plurality of selected second UWB data, determining one of the plurality of second odometric data to be a corresponding second odometric data such that the respective second VIO timestamp of the corresponding second odometric data is one nearest in time to the respective second UWB timestamp; for each of the plurality of selected second UWB data, determining a second location error value of the second UWB anchor based on the respective selected second UWB data and the corresponding second odometric data; and determining a second estimated location of the second UWB anchor based on a plurality of second location error values of the second UWB anchor.

14. The robot according to claim 13, wherein determining the second estimated location of the second UWB anchor includes determining a lowest second location error value from the plurality of second location error values of the second UWB anchor.

15. The robot according to any one of claims 1 to 14, wherein the robot is disposed in an indoor setting, the robot being configured to perform indoor localization.

16. A system for UWB anchor deployment, the system comprising: a UWB anchor, the UWB anchor being configured to be mobile such that the UWB anchor is disposable at an unknown location; and the robot configured to determine the estimated location of the UWB anchor according to any one of claims 1 to 15.

17. A method of UWB anchor deployment, comprising: using a UWB sensor, acquiring a plurality of UWB data from a UWB anchor over a time period, each respective UWB data from the plurality of UWB data being associated with a respective UWB timestamp in a series of UWB timestamps; using a VIO sensor coupled to an unmanned aerial vehicle (UAV), acquiring a plurality of odometric data of the UAV, each of the plurality of odometric data being associated with a respective VIO timestamp from one of at least one time series of VIO timestamps, each of the at least one time series of VIO timestamps being independent of the series of UWB timestamps; for each of a plurality of selected UWB data, selecting one of the plurality of odometric data to be a corresponding odometric data such that the respective VIO timestamp of the corresponding odometric data is one nearest in time to the respective UWB timestamp; for each of the plurality of selected UWB data, determining a location error value of the UWB anchor based on the respective selected UWB data and the corresponding odometric data; and determining an estimated location of the UWB anchor based on a plurality of location error values of the UWB anchor.

18. The method according to claim 17, further comprising: iteratively determining the estimated location of the UWB anchor; and stopping the iteratively determining in response to any one of the following: (i) any of a plurality of estimated variances of the estimated location in any one or more directions in a three-dimensional frame of reference is smaller than a predetermined variance threshold; (ii) a number of iterations to determine the estimated location is larger than or equal to a predetermined iteration limit; and (iii) a time taken to perform the number of iterations is greater than a predetermined ceiling time.

19. The method according to claim 17 or claim 18, wherein an accuracy of the estimated location is dependent on one or more of the following: (i) a distance between the UWB anchor and the UWB sensor; (ii) a radius defined by a trajectory of the UWB sensor during steps of acquiring UWB data and/or odometric data; and (iii) a spatial movement distribution of the UWB sensor in three-dimensional space.

20. The method according to any one of claims 17 to 19, wherein the UWB sensor is coupled to an unmanned aerial vehicle (UAV).
PCT/SG2021/050633 2020-10-22 2021-10-21 Uwb anchor deployment WO2022086446A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10202010498P 2020-10-22

Publications (1)

Publication Number Publication Date
WO2022086446A1 true WO2022086446A1 (en) 2022-04-28

Family

ID=81291762

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2021/050633 WO2022086446A1 (en) 2020-10-22 2021-10-21 Uwb anchor deployment

Country Status (1)

Country Link
WO (1) WO2022086446A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109951830A (en) * 2019-02-01 2019-06-28 湖南格纳微信息科技有限公司 A kind of indoor and outdoor seamless positioning method of multi-information fusion
CN110487267A (en) * 2019-07-10 2019-11-22 湖南交工智能技术有限公司 A kind of UAV Navigation System and method based on VIO&UWB pine combination
CN111665470A (en) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 Positioning method and device and robot


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN WANG; HANDUO ZHANG; THIEN-MINH NGUYEN; LIHUA XIE: "Ultra-Wideband Aided Fast Localization and Mapping System", arXiv, 30 September 2017, XP081292711, DOI: 10.1109/IROS.2017.8205968 *
PEREZ-GRAU, F. J.; CABALLERO, F.; MERINO, L.; VIGURIA, A.: "Multi-modal mapping and localization of unmanned aerial robots based on ultra-wideband and RGB-D sensing", 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 24 September 2017, pages 3495-3502, XP033266342, DOI: 10.1109/IROS.2017.8206191 *
SHI, Q. ET AL.: "Anchor self-localization algorithm based on UWB ranging and inertial measurements", Tsinghua Science and Technology, vol. 24, no. 6, 3 June 2019, pages 728-737, XP055936629, DOI: 10.26599/TST.2018.9010102 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21883431; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21883431; Country of ref document: EP; Kind code of ref document: A1)