WO2020003319A1 - Localization techniques

Localization techniques

Info

Publication number
WO2020003319A1
Authority
WO
WIPO (PCT)
Prior art keywords
particle filter
sensor
input
location
map
Application number
PCT/IL2019/050718
Other languages
French (fr)
Inventor
Boaz Ben-Moshe
Nir Shvalb
Original Assignee
Ariel Scientific Innovations Ltd.
Application filed by Ariel Scientific Innovations Ltd. filed Critical Ariel Scientific Innovations Ltd.
Publication of WO2020003319A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/206 — Instruments for performing navigational calculations specially adapted for indoor navigation

Definitions

  • the present invention, in some embodiments thereof, relates to a method of location using inputs from multiple sensors, and, more particularly, but not exclusively, to a method for converging candidate locations to a more accurate location; and, more particularly, but not exclusively, to a method for fusing image-based navigation with additional location inputs to obtain a more accurate location; and, more particularly, but not exclusively, to a particle filter method for converging candidate locations.
  • a localization method including obtaining a map of a Region Of Interest (ROI), obtaining a first input from a first sensor and a second input from a second sensor, providing the first input and the second input to a processor, using the processor to estimate a location based on the first input and the second input, wherein the processor uses a particle filter method to estimate the location.
  • ROI Region Of Interest
  • the first input includes images from a camera.
  • the second input includes data which is unavailable in some areas of the ROI.
  • the particle filter method is a modified particle filter method which associates a likelihood with a candidate location based upon the first input and the second input.
  • the particle filter method is a modified particle filter method further including performing soft-init.
  • performing soft-init is used to solve a state called "kidnapped-robot".
  • the soft-init includes adding a number of particles, the number in a range of 1-10% of a total number of particles, when the particle filter method performs re-sampling.
  • the adding a number of particles includes adding particles associated with candidate locations having a probability above a threshold probability, the probability based on a consideration selected from a group consisting of the candidate location can image a light source in the ROI, the candidate location can image a sign in the ROI, and the candidate location is in an elevator and an altitude change has been detected.
  • the particle filter method is a modified particle filter method further including removing a fraction of the particles each re-sample, the fraction in a range of 1-25% of a total number of the particles.
  • the particle filter method is a modified particle filter method further including using elevation change data as a particle filter map constraint.
  • the particle filter method is a modified particle filter method further including using one or more distinct environmental features in a particle filter map, the distinct features selected from a group consisting of a light, a ceiling light, a sign.
  • the particle filter method is a modified particle filter method further including using both angular bias and angular drift as part of a particle state.
  • the particle filter method is a modified particle filter method further including adapting a number of initial particles to a navigation scenario.
  • the particle filter method is a modified particle filter method further including using pedometry based on one or more data inputs selected from a group consisting of optical flow, distance-to-object ranging, and device orientation.
  • one of the first input and the second input includes a light level input.
  • At least one of the first input and the second input includes a sensor in a smart phone or tablet.
  • At least one of the first input and the second input includes a sensor installed in a car.
  • At least one of the first input and the second input includes a sensor selected from a group consisting of a GPS receiver, a GNSS receiver, a WiFi receiver, a Bluetooth receiver, a Bluetooth Low Energy (BLE) receiver, a 3G receiver, a 4G receiver, a 5G receiver, an acceleration sensor, a pedometer, an odometer, an attitude sensor, a MEMS sensor, a magnetometer, a pressure sensor, a light sensor, an audio sensor, a microphone, a camera, a multi-lens camera, a Time-Of-Flight (TOF) camera, a range-finder sensor, an ultrasonic range-finder, a Lidar, an RFID sensor, and a NFC sensor.
  • the particle filter method adapts a weight of a candidate location based upon associating a change in light level to proximity to a door of a building or proximity to a window.
  • the particle filter method adapts a weight of a candidate location based upon a map of WiFi reception strength.
  • the particle filter method adapts a weight of a candidate location based upon associating a change in GPS signal reception level to proximity to a door of a building or proximity to a window.
  • the particle filter method adapts a weight of a candidate location based upon associating a vertical acceleration with an elevator or an escalator or stairs.
  • the particle filter method adapts a weight of a candidate location based upon associating a change in pressure with an elevator or an escalator or stairs.
  • the particle filter method adapts a weight of a candidate location based upon associating a change in magnetic field with a magnetometer placed in proximity to a door of a building.
  • the particle filter method includes producing initial candidate locations, and iteratively improving accuracy of the candidate locations, and wherein at least some of the candidate locations are cancelled during at least one iteration.
  • the method is used for navigation in an area where GNSS signals are not received.
  • the method is used for navigation in a car park.
  • the method is used for navigation in a tunnel.
  • a method of mapping a Region Of Interest including obtaining first sensor data from a first sensor and second sensor data from a second sensor, providing the first sensor data and the second sensor data to a processor, using the processor to estimate a location based on the first sensor data and the second sensor data, and sending the location to a mapping application.
  • mapping application further including using the mapping application to display at least one of the first sensor data and the second sensor data.
  • a localization method including a) obtaining a map of a Region Of Interest (ROI), b) obtaining a first input from a first sensor, c) providing the first input to a processor, d) using the processor to estimate a location based on the first input, e) moving from the location and repeating (b)-(d), and further including f) obtaining a second input from a second sensor, g) providing the second input to the processor, h) using the processor to estimate a location based on the second input in addition to the first input, thereby increasing accuracy of the estimating the location.
  • ROI Region Of Interest
  • the processor uses a particle filter method to estimate the location.
  • the second sensor provides input intermittently.
  • the second sensor provides input only in specific areas of the ROI.
  • some embodiments of the present invention may be embodied as a system, method or computer program product. Accordingly, some embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, some embodiments of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Implementation of the method and/or system of some embodiments of the invention can involve performing and/or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of some embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware and/or by a combination thereof, e.g., using an operating system.
  • a data processor such as a computing platform for executing a plurality of instructions.
  • the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
  • a network connection is provided as well.
  • a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium and/or data used thereby may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for some embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • LAN local area network
  • WAN wide area network
  • Internet Service Provider for example, AT&T, MCI, Sprint, EarthLink, MSN, GTE, etc.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • Some of the methods described herein are generally designed only for use by a computer, and may not be feasible or practical for performing purely manually, by a human expert.
  • a human expert who wanted to manually perform similar tasks such as determining his location indoors, or navigating indoors, might be expected to use completely different methods, e.g., making use of expert knowledge and/or the pattern recognition capabilities of the human brain, which may be more efficient than manually going through the steps of the methods described herein.
  • FIGURE 1A is a simplified block diagram illustration of a mobile device for navigation and/or localization according to an example embodiment of the invention.
  • FIGURE 1B is a simplified block diagram illustration of a device for producing maps according to an example embodiment of the invention.
  • FIGURE 1C is a simplified block diagram illustration of a system for producing maps according to an example embodiment of the invention.
  • FIGURE 1D is a simplified block diagram illustration of a system for vehicle navigation according to an example embodiment of the invention.
  • FIGURE 1E is a simplified block diagram illustration of a system for producing maps for vehicles according to an example embodiment of the invention.
  • FIGURE 1F is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention.
  • FIGURE 1G is a simplified flowchart illustration of a method for producing maps according to an example embodiment of the invention.
  • FIGURE 1H is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention.
  • FIGURE 1I is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention.
  • FIGURES 2A-2C are simplified illustrations of progressing from building level to room level and to a higher accuracy of "seat" level;
  • FIGURES 3A-3C are simplified illustrations of using a particle filter to estimate a location according to an example embodiment of the invention.
  • FIGURES 4A-4B are simplified illustrations of applying a threshold to a color image of ceiling lights according to an example embodiment of the invention.
  • FIGURES 4C-4D are simplified illustrations of contour detection of light sources on an image and determining center mass points according to an example embodiment of the invention.
  • FIGURE 5 is a simplified illustration of a map used in mapping RF and visual data according to an example embodiment of the invention.
  • FIGURE 6 is a simplified illustration of light source mapping according to an example embodiment of the invention.
  • FIGURE 7 is a simplified illustration of a screenshot of a navigating application according to an example embodiment of the invention.
  • FIGURES 8A-8B are simplified illustrations of tracking according to an example embodiment of the invention in comparison to the ground truth;
  • FIGURE 9 is a simplified illustration of a screenshot of a navigating application displaying a large area, according to an example embodiment of the invention.
  • FIGURE 10 is a graph of an accuracy evaluation of a path according to an example embodiment of the invention and a path according to a Google map;
  • FIGURES 11A-11C are simplified illustrations of using a particle filter to estimate a location according to an example embodiment of the invention.
  • FIGURE 11D is a graph showing atmospheric pressure measurements over time according to an example embodiment of the invention.
  • FIGURE 12 is a simplified illustration of a multicolor map used according to an example embodiment of the invention.
  • FIGURE 13 is a simplified illustration of a multicolor map used according to an example embodiment of the invention.
  • FIGURE 14 is a simplified illustration of a map showing a comparison of a location determined by Google’s fused location service and a location method used according to an example embodiment of the invention.
  • FIGURES 15A-15B are simplified illustrations of a potential advantage of using indoor/outdoor determination according to an example embodiment of the invention.
  • FIGURE 16 is a simplified illustration of tracking according to an example embodiment of the invention in comparison to a Lidar-based ground truth.
  • FIGURE 17 is a graph showing height value error over time according to an example embodiment of the invention.
  • FIGURE 18 is a graph showing position error over time according to an example embodiment of the invention.
  • FIGURE 19 is a graph showing position error over time according to an example embodiment of the invention.
  • FIGURE 20 is a graph showing position error over a path according to an example embodiment of the invention.
  • FIGURE 21 is a graph showing position error according to an example embodiment of the invention.
  • FIGURE 22 is a graph showing linear acceleration as captured by a smartphone positioned in a car according to an example embodiment of the invention.
  • FIGURE 23 is a color image of a contemporary parking-lot with speed bumps, color column markings, and location codes.
  • FIGURE 24 is an example parking lot map.
  • FIGURES 25A-25C are screen capture illustrations of a mapping tool implemented on a smartphone according to an example embodiment of the invention.
  • FIGURE 26 is a photograph of a tunnel including visual landmarks according to an example embodiment of the invention.
  • FIGURES 27A and 27B are images of a highway according to an example embodiment of the invention.

DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION
  • the present invention, in some embodiments thereof, relates to a method of location using inputs from multiple sensors, and, more particularly, but not exclusively, to a method for converging candidate locations to a more accurate location; and, more particularly, but not exclusively, to a method for fusing image-based navigation with additional location inputs to obtain a more accurate location; and, more particularly, but not exclusively, to a particle filter method for converging candidate locations.
  • An aspect of some embodiments is related to calculating a location of a device by producing candidate locations, followed by iteratively improving accuracy of the candidate locations, or reducing spread of the candidate locations.
  • the location of the device is optionally tracked over time, especially if and/or when the device is moving.
  • device localization is optionally used for indoor navigation, or for navigation in an environment which can provide more than one data source for use in calculating a location.
  • the device moves, collecting sensor data from various locations, and the fact that the data is from different locations potentially improves the accuracy of the location.
  • An aspect of some embodiments is related to fusing data from different sources to potentially reduce time to converge spread of the candidate locations and/or increase accuracy.
  • one of the data sources is an intermittent data source, which is not available all the time and/or not available at every location.
  • a particle filter algorithm is used to evaluate the candidate locations. It may be said that the data sources provide hints as to the correct location of the device. Some hints "nudge" the candidate locations toward a correct location. Some non-limiting examples of such hints are odometer or pedometer readings, which enable calculation of candidate locations similarly to a dead-reckoning (DR) method, by providing information about how far the device has travelled. Some hints can significantly increase the likelihood of a candidate location or rule out a candidate location.
  • DR dead-reckoning
  • Some non-limiting examples of such hints can be sensing when the device passes over a speed bump - that increases the likelihood of candidate locations which are near to locations of one or more known speed bump(s) in an environment, and decreases the likelihood of candidate locations where a map indicates that there are no speed bumps.
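  • Purely as a hedged illustration of how such a hint might adjust candidate likelihoods, the following Python sketch boosts candidates near mapped speed bumps and penalizes the rest; the candidate representation, radius, and boost/penalty factors are assumptions of the sketch, not values taken from this description:

      import math

      def apply_speed_bump_hint(candidates, bump_locations, radius_m=3.0):
          # candidates: list of dicts {'x', 'y', 'w'}; bump_locations: (x, y) pairs.
          # A sensed bump makes candidates near a mapped bump more likely and
          # candidates far from any mapped bump less likely.
          def near_bump(x, y):
              return any(math.hypot(x - bx, y - by) <= radius_m
                         for (bx, by) in bump_locations)
          for c in candidates:
              c['w'] *= 4.0 if near_bump(c['x'], c['y']) else 0.25
          total = sum(c['w'] for c in candidates)
          for c in candidates:
              c['w'] /= total  # re-normalize the weights into a distribution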
  • An aspect of some embodiments is related to calculating a location of a device by starting at "room level" accuracy (~5-10 meters), which is available using some methods (e.g. WiFi maps), and fusing data from additional sensors, such as described herein, to increase accuracy to "seat level" accuracy (sub-meter).
  • a number of initial candidate locations is optionally produced.
  • candidate locations are optionally used to produce an estimated location.
  • the estimated location is optionally calculated by applying one of several methods. Some non-limiting examples of such methods include: using an average; using a weighted average, optionally where a weight of a candidate location may be based on its likelihood; selecting a most-likely candidate; iteratively calculating values of the candidate locations; adding additional candidate locations; and removing candidate locations, optionally based on least-likely or least-weighted.
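  • As a minimal sketch of the first two options above (weighted average and most-likely candidate), assuming candidates are simple (x, y, weight) tuples:

      def weighted_average_location(candidates):
          # candidates: iterable of (x, y, weight) tuples; weight ~ likelihood.
          total = sum(w for (_, _, w) in candidates)
          x = sum(cx * w for (cx, _, w) in candidates) / total
          y = sum(cy * w for (_, cy, w) in candidates) / total
          return (x, y)

      def most_likely_location(candidates):
          # Alternative estimate: simply report the highest-weight candidate.
          best = max(candidates, key=lambda c: c[2])
          return (best[0], best[1])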
  • in the weighting, each particle is evaluated with respect to one or more of the sensory data and/or map constraints, and the weight of the particle is updated accordingly.
  • the weights are optionally sampling-rate dependent.
  • the sensory data constraints include constraints from one or more of the sensors listed herein, or from sensors included in a smartphone, a tablet, vehicle on-board sensors, and so on.
  • the constraint is optionally that a location is more likely to be in an area where elevation change is possible (stairs, elevator, parking lot ramp) and less likely where such elevation change is unlikely (an elevation change of more than 3 meters in a floor of a building where ceiling height is 3 meters).
  • the constraint is optionally that a location is more likely to be in a parking lot area where a speed bump exists and less likely where no such bump exists.
  • each particle’s weight, or grade is updated according to how suitable the particle location and/or orientation is to the sensor input.
  • a weight change is additive. In some embodiments a weight change is multiplicative.
  • a change is optionally effected so that when there are two particles which are close by with respect to location, and optionally also orientation, the change brings their weight closer together, that is, a ratio of their weights is made closer to 1.
  • a number of initial candidate locations is optionally controlled. In some embodiments, the number of initial candidate locations is optionally reduced relative to previously known methods such as described in the above-mentioned article [YBM17] by Roi Yozevitch and Boaz Ben-Moshe titled "Advanced particle filter methods".
  • a number of initial candidate locations is optionally controlled based on a computing power of a device to be used to determine location.
  • By limiting an initial number of particles, the computing load is limited, and the method potentially enables calculating a location at a rate which is useful to a moving user.
  • the number of candidate locations, or particles in a case of a particle filter, is reduced by 1-10 candidate locations per iteration.
  • the number of candidate locations, or particles in a case of particle filter is reduced by 1-25% of the current candidate locations, per iteration.
  • the number of candidate locations is optionally reduced during the process of iteration, relative to previously known methods such as described in the above-mentioned article [YBM17] by Roi Yozevitch and Boaz Ben-Moshe titled "Advanced particle filter methods".
  • a method of calculation includes making a soft-init, or soft initialization.
  • soft-init and soft-initialization are used herein interchangeably, to refer to the process described below.
  • a soft-init is optionally used to re-start calculation of the location of the device.
  • the soft-init includes adding a few, even just 1-10, particles somewhere, optionally even randomly distributed, within the Region-Of-Interest (ROI).
  • ROI Region-Of-Interest
  • the new particles are optionally not dependent on locations of other particles, and can potentially re-start the particle filter in converging to a correct location.
  • in some embodiments, when a source for one or more of the localization inputs, such as a sensor, stops providing data, a soft-init is optionally used to re-start calculation of the location of the device.
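  • A minimal sketch of such a soft-init step, assuming particles are dicts {'x', 'y', 'w'} and using 5% (within the 1-10% range described above) as an illustrative fraction:

      import random

      def soft_init(particles, roi, fraction=0.05):
          # Add a few particles, uniformly distributed over the ROI bounding
          # box and independent of existing particle locations, so the filter
          # can re-converge after a "kidnapped-robot" event or sensor dropout.
          xmin, ymin, xmax, ymax = roi
          n_new = max(1, int(fraction * len(particles)))
          for _ in range(n_new):
              particles.append({'x': random.uniform(xmin, xmax),
                                'y': random.uniform(ymin, ymax),
                                'w': 1.0 / len(particles)})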
  • a method of calculation includes taking into account intermittent data input.
  • a method of calculation optionally includes optionally compensating for intermittent data input.
  • if the device has been calculated to be moving, it is optionally located as if it is continuing its movement during pauses and stops in incoming data.
  • a map is optionally used in relation to localizing a device.
  • a map optionally defines a ROI in which the localization is to be made, or at least started or initialized.
  • a map optionally defines constraints upon possible location of the device, typically in two dimensions (2D) or three dimensions (3D).
  • 2D two dimension
  • 3D three dimensions
  • Localizing or tracking a device is performed by using data input from one or more sensors.
  • such sensors are optionally built into a smart phone, a tablet, a car, or a smart camera, optionally coupled with associated computing abilities.
  • a non-limiting example of such sensors includes:
  • a smart phone or tablet camera, for example capturing image(s) of an environment, including features such as ceiling lights, advertising, billboards, street signs, store signs, doors, stairs, floor markings, wall colors, windows, hydrants, and additional identifiable features.
  • the image(s) is processed to identify such features.
  • a relative direction and/or distance from the camera to the features is optionally calculated.
  • a simple sensing of relative light or darkness can suffice to determine whether the device is indoors, outdoors, or just passing through a door from outside to inside or vice versa.
  • a distance measurement sensor for example a pair of lenses for measuring distance, Lidar (an acronym of light detection and ranging or of light imaging, detection, and ranging), ultrasonic range detector, camera auto-focus, applying camera image deep-learning to an image, and so on.
  • Lidar an acronym of light detection and ranging or of light imaging, detection, and ranging
  • WiFi is often used for localization.
  • WiFi, Bluetooth (BLE), 3G, 4G and 5G signals can be used for locating, optionally based on RF fingerprinting or an RF map.
  • GPS signals are often not received, or not received in sufficient number or clarity to be used for standard GPS localization.
  • partial reception can optionally be used to indicate that the receiving device is close to a specific location, such as under a skylight, close to a window, and so on.
  • locations in a GPS-restricted environment are optionally marked on a map, or reception of a GPS signal is compared to a map, and a constraint of proximity to a skylight or window or door is optionally deduced.
  • Acceleration sensors and/or inertial sensors are optionally used to determine movement or rate of movement from an initial location.
  • Odometry: in some embodiments odometry is optionally used.
  • a rate of advance and/or direction of advance from a location is optionally deduced by counting steps, counting tire rotations (optionally converted to distance), calculating optical flow, and so on.
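  • A minimal dead-reckoning sketch along these lines, assuming a step count and heading are available from the device sensors (the function and parameter names are illustrative):

      import math

      def dead_reckon(x, y, steps, step_length_m, heading_rad):
          # Advance the estimated position by the distance implied by the
          # pedometer (steps * step length) along the sensed heading.
          d = steps * step_length_m
          return (x + d * math.cos(heading_rad), y + d * math.sin(heading_rad))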
  • any one or more of the above-mentioned sensors potentially provides input to a localization system.
  • a camera such as a smartphone camera is selected as a first source of data, potentially providing candidate locations for placing on a map of a ROI, and data from other sensors is optionally used to calculate probability of likelihood of a candidate location or of candidate locations.
  • sensors include: a GPS receiver; a GNSS receiver; a WiFi receiver; a Bluetooth receiver; a Bluetooth Low Energy (BLE) receiver; a 3G receiver; a 4G receiver; a 5G receiver; an acceleration sensor; a pedometer; an odometer; an attitude sensor; a MEMS sensor; a magnetometer; a pressure sensor; a light sensor; an audio sensor; a microphone; a camera; a multi-lens camera; a Time-Of-Flight (TOF) camera; a range-finder sensor; an ultrasonic range-finder; a Lidar; an RFID sensor; and a NFC sensor.
  • Various sensors can provide different types of data.
  • Some data is continuously available, and can be consistent. For example odometer data, pedometer data.
  • Some data is available intermittently, based on location. Such data is available at some locations, and not available at others. Some non-limiting examples of location-dependent data include: more intense light associated with windows and doors, GNSS or GPS signals received when passing near an entrance to a building or when passing under a skylight.
  • Some data is available intermittently, based on time. Such data is available at some times, and not available at others. Some non-limiting examples of time-dependent data include: more intense light associated with windows and doors during daylight hours, and not at night.
  • Some data can be called low quality data.
  • Some non-limiting examples of low quality data include low quality images. Low quality images can be at a lower resolution than the maximal resolution available from current smartphones. However, even low quality images can potentially provide sensory data which can potentially assist in accurate location. By way of a non-limiting example, even a low quality image can provide directions, vectors, to ceiling lights, and assist in evaluating candidate locations.
  • initial maps of a ROI are optionally obtained from a source for such maps.
  • maps may be found on the Internet, as maps of airports, shopping malls, museums, parking lots, cities, and so on.
  • paper maps are optionally scanned and optionally processed in order to provide digital maps.
  • maps are obtained and/or updated according to methods described herein, optionally producing maps of visual landmarks such as ceiling lights, signs, and additional visual and non-visual features described herein.
  • constructing a light source map is optionally done as described below, with reference to the "exemplary positioning embodiments", with reference to the "visual landmark mapping" section therein, and with reference to Figure 6 and its description.
  • a map is produced by a process of Simultaneous Localization And Mapping, optionally as described in SLAM [6].
  • producing a map includes:
  • an initial map, for example a building plan (e.g., a mall map, a fire-escape map, an operator-made map), or even a blank map;
  • Locating landmarks (e.g., lights & signs);
  • ray intersections, optionally as presented in the "exemplary positioning embodiments";
  • if the mapping device has ranging capability, a SLAM-like map is optionally produced;
  • corrections to the path are optionally performed based on loop-closure, potentially increasing map accuracy.
  • a location of the elevation change is optionally recorded, and associated with a corresponding color, in a sense described with reference to Figure 12 below, representing different types of areas in the map, such as accessible areas, fixed inaccessible areas, partially-accessible areas, dynamic inaccessible areas and stairs or elevator.
  • An aspect of some embodiments includes mapping environments.
  • locations of a device are optionally collected, and additional data is optionally also collected.
  • the locations and/or data are optionally used to produce a map and/or update a map.
  • the map is optionally updated as to: more-exact locations where GPS may be received, corrections to locations calculated by WiFi, updating locations of signs or adding locations of new signs to a map, and so on.
  • a map produced by using data from one or more device(s) navigating a ROI is optionally sent to devices for use in localization.
  • signs are used to provide location.
  • signs contain identifying information like text and drawings which make the signs highly identifiable.
  • a number of similar signs in an environment such as a mall or a park may be quite small, sometimes only one or two, compared, for example, with the number of ceiling lights.
  • the signs may provide good candidate locations, as correctly identifying a unique sign can provide quality input to grade candidate locations. Even when there are a few similar signs, correctly identifying the signs can provide quality input to grade candidate locations, although several candidate locations may be provided with a high grade.
  • Some signs provide significant assistance in estimating location - for example room numbers or store numbers, column number in a parking lot, and so on.
  • image processing is optionally used to decipher number or text in a sign.
  • captured images are optionally processed to determine whether a sign appears in the captured image, and optionally processed to determine what is written in the sign.
  • a map which includes data about signs potentially associates each sign with one or more features.
  • Some example features include geometric features which can assist location, such as a location of the sign, an orientation to which the sign is facing or from which it can be seen, and an actual size of the sign.
  • Some example identifying features include text on a sign, font, colors, drawings, and so on.
  • it is noted that when a device or system is used for navigation or localization, the device optionally sends data about signs it sees and/or detects to a central system which can enhance maps with data about signs.
  • a map of an environment can optionally be updated whenever a localization or tracking application is used in that environment, becoming a "collective memory" of the environment.
  • Given an image or a video stream, an object detection model can identify which of a known set of objects might be present and provide information about their positions within the image. Such an object detection model is optionally applied with reference to signs.
  • images are optionally filtered, for example with Python, optionally by the COCO API.
  • the images are optionally filtered to use the images which contain legible text, optionally even filtering for text which is printed by machine.
  • TensorFlow’s Object Detection API is used for image recognition.
  • the TensorFlow API provides several different pre-trained deep learning models (models previously trained over multiple huge datasets).
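  • Purely as a hedged sketch of applying such a model to sign detection, assuming a TensorFlow Object Detection API SavedModel export is available (the model path is a placeholder, and a real deployment would use a model trained to recognize signs):

      import tensorflow as tf

      detect_fn = tf.saved_model.load("path/to/saved_model")  # placeholder path

      def detect_signs(image_rgb, score_threshold=0.5):
          # image_rgb: HxWx3 uint8 array; returns boxes of confident detections.
          input_tensor = tf.convert_to_tensor(image_rgb)[tf.newaxis, ...]
          detections = detect_fn(input_tensor)
          boxes = detections['detection_boxes'][0].numpy()
          scores = detections['detection_scores'][0].numpy()
          return boxes[scores >= score_threshold]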
  • indoors localization and/or tracking is performed, optionally using a camera and associated computing circuitry (e.g. on a smartphone or tablet).
  • the localization a localization and/or tracking algorithm is based on a modified particle filter which combines visual landmarks with additional data inputs such as RF finger printing, odometry, and map constraints.
  • the localization is potentially well suited for using a low resolution camera (e.g. 1 Megapixel) to track dominant landmarks such as lights and potentially achieve an accuracy of sub-meter 2D, 2.5D or 3D positioning at image capture rates of 10-30 Hz, which are typical and even lower than smartphone video rates.
  • a low resolution camera (e.g. 1 Megapixel)
  • Such an algorithm potentially works at a fairly low energy consumption suitable for smartphones.
  • example embodiments of the invention have operated successfully using less than 100 milliwatts.
  • the energy consumption is expected to be hardware dependent, yet easily suitable for operation on smartphones or tablets or other battery-dependent devices without causing undue drainage of the battery.
  • the method optionally uses input of ceiling light(s) images and a ceiling light map.
  • the method detects passing through doors, under a skylight or near a window by detecting changes in one or more of light intensity and light color.
  • the method detects passing through a door using a magnetometer.
  • the method detects passing under a skylight and/or passing near a window by receiving a GPS signal or a partial GPS signal.
  • the method includes image processing to read signs such as shop signs, mall signs, parking lot signs, and so on.
  • the method includes detecting a WiFi signal at an entrance.
  • a parking lot typically includes features which sensors can detect.
  • parking lot localization and/or tracking is performed, optionally using a camera on a smartphone, or a camera serving as a car-mounted system, and associated computing circuitry (e.g. in the smartphone or in the car).
  • some features which can be detected and used as input, optionally combined with visual landmarks include:
  • a tollbooth at an entry (visual detection); speed bumps at specific locations in a parking lot; layout of traffic lanes within a parking lot; relative height difference from known-height locations - e.g. by knowing where an entrance is and measuring barometric pressure difference to detect how many levels up or down, or by sensing an incline and a distance travelled along the incline (odometry plus sensor attitude/direction); image processing for detecting parking-lot-specific markings such as numbers and/or letters and/or colors marked on columns; loss of GPS signal at an entrance; obtaining a WiFi signal at an entrance.
  • Figure 1A is a simplified block diagram illustration of a mobile device for navigation and/or localization according to an example embodiment of the invention.
  • Figure 1A shows a mobile device 102, including a processor 102, sensors 104, and the processor having a communication channel 103 with the sensors 104.
  • the mobile device 102 may be a smartphone, a tablet, or even a dedicated mobile navigation device.
  • the sensors 104 may include a camera 106 (such sensors are very usual in smartphones and such devices), as well as other sensors 108.
  • Figure 1B is a simplified block diagram illustration of a device for producing maps according to an example embodiment of the invention.
  • Figure 1B shows a mobile device 110, including a processor 112, sensors 114, a mapping user interface 120, and the processor 112 having a communication channel 103 with the sensors 114 and a communication channel 119 with the mapping user interface 120.
  • the mobile device 110 may be a smartphone, a tablet, or even a dedicated mobile mapping device.
  • the sensors 114 may include a camera 116 (such sensors are very usual in smartphones and such devices), as well as other sensors 118.
  • Figure 1C is a simplified block diagram illustration of a system for producing maps according to an example embodiment of the invention.
  • Figure 1C shows a mobile device 125, having communication capability 134 with a mapping server 135.
  • Figure 1C shows the mobile device 125 including a processor 127, sensors 128, a mapping user interface 131, and the processor 127 having a communication channel 132 with the sensors 128 and a communication channel 133 with the mapping user interface 131.
  • the mobile device 125 may be a smartphone, a tablet, or even a dedicated mobile mapping device.
  • the sensors 128 may include a camera 129 (such sensors are very usual in smartphones and such devices), as well as other sensors 130.
  • FIG. 1D is a simplified block diagram illustration of a system for vehicle navigation according to an example embodiment of the invention.
  • Figure 1D shows a mobile device 141, having communication capability 142 with a vehicle 140.
  • Figure 1D shows the mobile device 141 including a processor 147, sensors 148, a mapping user interface 150, the processor 147 having a communication channel 149 with the sensors 148 and a communication channel 151 with the mapping user interface 150.
  • Figure 1D also shows the vehicle 140 including a processor 143 and sensors 144, the processor 143 having a communication channel 145 with the sensors 144.
  • mapping user interface 150 and its communication channel may be included in the vehicle 140.
  • mapping user interface and its communication channel may be included in both the mobile device 141 and the vehicle 140.
  • the mobile device 141 may optionally receive images from the vehicle’s camera or cameras, which may have a better view outside the vehicle 140. In some embodiments the mobile device 141 may even be placed in a mobile device mount in the vehicle 140, and be used with images peripheral to the car.
  • Figure 1E is a simplified block diagram illustration of a system for producing maps for vehicles according to an example embodiment of the invention.
  • Figure 1E shows a vehicle 155, having communication capability 157 with a mapping server 156.
  • Figure 1E shows the vehicle 155 including a processor 160, sensors 161, with the processor 160 having a communication channel 162 with the sensors 161.
  • Figure 1E also shows an optional mobile device 158, having communication capability 159 with the vehicle 155.
  • Figure 1E shows the mobile device 158 including a processor 164, sensors 165, an optional mapping user interface 168, the processor 164 having a communication channel 166 with the sensors 165 and a communication channel 167 with the mapping user interface 168.
  • mapping user interface 168 and its communication channel may be included in the vehicle 155.
  • mapping user interface and its communication channel may be included in both the mobile device 158 and the vehicle 155.
  • the mobile device 158 may have communication capability 169 with the mapping server 156.
  • Figure 1F is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention.
  • the method of Figure 1F includes:
  • ROI Region Of Interest
  • the processor uses a particle filter method to estimate the location.
  • Figure 1G is a simplified flowchart illustration of a method for producing maps according to an example embodiment of the invention.
  • the method of Figure 1G includes:
  • mapping application (1878).
  • Figure 1H is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention.
  • the method of Figure 1H includes:
  • ROI Region Of Interest
  • Figure 1I is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention.
  • the method of Figure 1I includes:
  • ROI Region Of Interest
  • the processor uses a particle filter method to estimate the location.
  • the second sensor provides input intermittently.
  • the second sensor provides input only in specific areas of the ROI.
  • the exemplary positioning embodiments below describe a general framework for positioning and navigation which improves expected accuracy over known methods such as using WLAN and cellular information.
  • Figures 2A-2C are simplified illustrations of progressing from building level to room level and to a higher accuracy of "seat" level.
  • Figure 2A is meant to illustrate a "building level" location accuracy, capable of estimating a location at an accuracy corresponding to a portion of a building. This can correspond to ~10 meter accuracy.
  • Figure 2B is meant to illustrate a "room level" location accuracy, capable of estimating a location at an accuracy corresponding to a specific room in a building. This can correspond to 2-5 meter accuracy.
  • Figure 2C is meant to illustrate a "seat level" location accuracy, capable of estimating a location at an accuracy corresponding to, by way of a non-limiting example, a specific seat around a specific table, or a specific location within a room. This can correspond to sub-meter accuracy.
  • An example embodiment method uses a modified particle filter which combines one or more of RF finger-printing, odometry, visual landmarks and map constraints.
  • the potential accuracy improvement is achieved by using a camera, optionally a low resolution camera, to track dominant landmarks such as lights.
  • a camera, optionally a low resolution camera
  • Use of “glowing-markers” potentially allows one to accurately map relatively complex indoor buildings, optionally using a compact representation.
  • an example method as described in above-mentioned U.S. Patent Application Number 14/418,106 titled "Navigation method and device" was implemented and tested on Android-based mobile devices. The tests indicated a robust sub-meter 3D positioning at a frame capture rate of 10-30 Hz with a fairly low energy consumption.
  • High sampling rate: a high sampling rate is typically required for natural and intuitive navigation results, especially for highly dynamic devices.
  • Low energy consumption is an important property for most mobile (or battery operated) devices; it usually requires the use of low computing power methods (e.g., visual-based navigation methods are usually impractical for mobile devices due to the high computing power requirements of image and video processing algorithms).
  • Minimal dedicated infrastructure: preferably, a solution should work without any need for additional infrastructure.
  • BYOD “Bring your own device”
  • a need for reliable indoor positioning and navigation service is motivated by several applications including: Location Based Services, "where have I parked my car?", "Where are my friends", Augmented Reality gaming (for example a game such as "Pokemon Go") and even search and rescue.
  • An indoor positioning system is described which focuses on positioning methods for smart phones, optionally Commercial Off The Shelf (COTS) devices.
  • COTS Commercial Off The Shelf
  • an embodiment is described which is potentially useful for mass market scenarios in which an IPS potentially works on existing mobile devices (i.e., smart-phones), in some embodiments even without additional dedicated infrastructure.
  • WLAN finger-printing: this common method mainly uses WiFi or Bluetooth (BLE) beacon-signals in order to approximate a user’s position.
  • the method uses a preprocessing stage (finger-printing) in which a site-survey is performed - storing the signal strength of received signals in many locations of a Region of Interest (ROI). Then, the location of the user can be approximated by comparing the current set of RF signals with the finger-printing data-set.
  • ROI Region of Interest
  • Pedestrian odometry: this method uses step counting combined with the device’s approximated orientation in order to compute the user’s relative path; see [Har13], [BH13].
  • Map Matching and Dead Reckoning: this method is commonly used for GNSS-based (Global Navigation Satellite System based) road positioning, in which the fact that the vehicle needs to be on the road implies significant constraints which can be used to reduce a search space.
  • a related method can be used for indoor positioning, assuming the map constraints are available; see [BK+08], [LSVW11], [BW13].
  • Some positioning systems combine a few of the above methods in order to allow better accuracy.
  • major IPS providers such as Google and Apple provide a solution for room or building level accuracy with expected error larger than 5 meters; therefore, even a user’s floor is often not automatically determined, see [LL17], [TSJK+17] for IPS accuracy evaluation.
  • This application describes methods and systems for accurate indoor navigation.
  • standard (COTS) Android mobile devices are used.
  • the described system potentially allows a 3D sub-meter accuracy while maintaining one or more of the properties listed above (Accuracy, High sampling rate, Low energy consumption, Privacy and potentially without adding dedicated infrastructure).
  • the term WLAN technologies refers not only to wireless LAN but also to other RF technologies, like Bluetooth/BLE, 3G/4G and RF-ID. Nevertheless, deployment of additional infrastructures such as RF-ID or RF-beacons is time consuming and often has high implementation costs; hence some embodiments of the present invention rely on existing infrastructures.
  • WLAN technology is being deployed in public places, such as: shopping malls, industrial buildings, airports and hospitals.
  • GNSS e.g., GPS
  • WLAN-based positioning systems can provide room level localization, usually with an accuracy of about 5-10 meters. Those systems typically use a WLAN "Finger Printing" signal-map. A simple estimation can use the ratio of the current WLAN signal scan and the RF signal map.
  • the particle filter localization algorithm also known as Monte Carlo Localization [FBDT99] is a variant of the Bayesian filter family [C+03].
  • each particle x_t^i is assigned a corresponding weight w_t^i that describes a belief state, evaluated proportionally to the likelihood of the Bayesian function p(z|x), where z is the sensory measurement.
  • some preliminary terms are defined:
  • Environment: a Region Of Interest (ROI), typically represented by a 2D floor map or a building map, which may optionally contain several floors (i.e., 2.5D); in some embodiments, one can also address the map as a 3D representation of the building.
  • Init: a set P of n particles is distributed on the environment map, and assigned some initial weights.
  • the set P is optionally uniformly distributed.
  • Action: a movement-function that describes a position change of a device. In robotics, for example, odometry is used, while for mobile devices carried by a person a pedometer (step counter with orientation) is optionally used.
  • Sense: a Sense-function maps sensor data to a particle weight.
  • Re-sample: a process where the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights.
  • An actual implementation of the resampling method is usually a key factor in the performance of a particle filter, in particular, the resampling method potentially affects convergence properties of the localization method.
  • the particle filter optionally reports the current optimal position, for example by simply reporting the "best" particle or by performing some kind of weighted average over the particles.
  • reporting just a position or location is insufficient.
  • a localization algorithm is optionally required to estimate an accuracy of the proposed location. Using the particles’ current distribution - one can use a method for estimating the expected accuracy.
  • Algorithm 1: Simplified Particle Filter Algorithm for localization.
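  • The listing itself is not reproduced here; purely as a hedged illustration, a minimal Python sketch of the Init / Action / Sense / Re-sample loop defined by the preliminary terms above might look as follows (the particle representation, noise levels, and function names are assumptions of the sketch, not details taken from this description):

      import math
      import random

      def init_particles(n, roi):
          # Init: distribute n particles uniformly over the ROI bounding box.
          xmin, ymin, xmax, ymax = roi
          return [{'x': random.uniform(xmin, xmax),
                   'y': random.uniform(ymin, ymax),
                   'w': 1.0 / n} for _ in range(n)]

      def act(particles, step_len, heading, noise=0.1):
          # Action: move every particle by the (noisy) movement function,
          # e.g. a pedometer step of length step_len along direction heading.
          for p in particles:
              d = step_len + random.gauss(0.0, noise)
              p['x'] += d * math.cos(heading)
              p['y'] += d * math.sin(heading)

      def sense(particles, likelihood):
          # Sense: re-weight each particle by p(z | x) and normalize.
          for p in particles:
              p['w'] *= likelihood(p['x'], p['y'])
          total = sum(p['w'] for p in particles) or 1.0
          for p in particles:
              p['w'] /= total

      def resample(particles, jitter=0.05):
          # Re-sample: replace negligible-weight particles by new particles
          # drawn (weight-proportionally) near the higher-weight particles.
          weights = [p['w'] for p in particles]
          chosen = random.choices(particles, weights=weights, k=len(particles))
          return [{'x': c['x'] + random.gauss(0.0, jitter),
                   'y': c['y'] + random.gauss(0.0, jitter),
                   'w': 1.0 / len(particles)} for c in chosen]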
  • FIGS. 3A-3C are simplified illustrations of using a particle filter to estimate a location according to an example embodiment of the invention.
  • Figure 3A shows an initial distribution of particles, corresponding to an initial distribution of estimated locations.
  • the initial distribution is a uniform distribution.
  • the initial distribution is limited to being within the Region Of Interest (ROI).
  • ROI Region Of Interest
  • the initial distribution is limited to being, for example, within a building, or to not being in inaccessible areas.
  • Figure 3B shows particle translation due to total action vector marked as an arrow 305.
  • Figure 3C shows a convergence of the particles.
  • a process of extracting visual landmarks includes the following two steps: Image processing and Extraction.
  • Figures 4A-4B are simplified illustrations of applying a threshold to a color image of ceiling lights according to an example embodiment of the invention.
  • Figure 4A shows an example color image which includes lights 305.
  • Figure 4B shows a resulting, optionally binary, optionally black and white, image of Figure 4A, showing the light 305 after a threshold has been applied.
  • a contour γ of a landmark that appears in the binarized image I_b is computed (see [Can86], [XB12]).
  • Figures 4C-4D are simplified illustrations of contour detection of light sources on an image and of determining center mass points according to an example embodiment of the invention.
  • Figure 4C shows an example color image which includes lights, showing optional contours 402 of the lights.
  • Figure 4D shows locations 404 of centers-of-mass of the lights.
  • additional geometric properties of each light source are optionally analyzed, including, for example center, radius and a compact representation of a perimeter (contour) of the light source.
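  • The thresholding, contour detection, and center-of-mass steps illustrated in Figures 4A-4D can be sketched with OpenCV roughly as follows (the threshold value is an assumption; ceiling lights are simply the brightest blobs in the frame):

      import cv2

      def extract_light_centers(image_bgr, thresh=240):
          # Threshold so bright ceiling lights become white blobs (Figs. 4A-4B).
          gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
          _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
          # Detect each light's contour and compute its center of mass from
          # image moments (Figs. 4C-4D).
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          centers = []
          for c in contours:
              m = cv2.moments(c)
              if m['m00'] > 0:
                  centers.append((m['m10'] / m['m00'], m['m01'] / m['m00']))
          return centers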
  • a first step includes calculating an intrinsic matrix K (3 × 3) of the visual sensor, which is known as "Camera Calibration".
  • the intrinsic matrix encapsulates the sensor’s focal length on both axes fx, fy, its center c and the sensor’s skewness [Zha00], [HS97].
  • the relative vector is then v = w · K⁻¹ · (x, y, 1)ᵀ, where w is a scale factor of the mapping from R² to R³ which is unknown. Hence, v ∈ R³ can be calculated up to a scale.
  • one rotates the acquired vector c by the device self-orientation in order to align it with world coordinate system.
  • device orientation is extracted from commonly-used MEMS sensors (e.g. an Android smartphone orientation sensor).
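  • A minimal sketch of this back-projection and rotation, assuming the intrinsic matrix K and a world-from-device rotation matrix are available (the names are illustrative):

      import numpy as np

      def pixel_to_world_ray(pixel_xy, K, R_world_from_device):
          # Back-project the pixel through K^-1 to get a direction vector in
          # the camera frame; the scale factor w remains unknown.
          u, v = pixel_xy
          ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
          # Rotate by the device self-orientation to align the vector with
          # the world coordinate system, then normalize.
          ray_world = R_world_from_device @ ray_cam
          return ray_world / np.linalg.norm(ray_world)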
  • the system described herein optionally implements pedometry (for example using a step-counter) and/or optical flow.
  • the rough position estimation is optionally used to define a region-of-interest.
  • data fusion is addressed by using a (optionally modified) particle filter method.
  • a particle filter is used, with the initial area of the filter covering all of the evaluation map and converging to the most likely Probability Density Function.
  • some particles are periodically spread outside the ROI to overcome the "kidnapped-robot" situation.
  • the algorithm used is described in [YBM17].
  • Algorithm 2: Evaluating high accuracy location.
  • a real world localization scenario often includes complex sensory data: outliers, inaccurate and partial sensory information are common. Moreover, human factors can influence the sensor measurements and therefore contradict the pure Bayesian inference filter. In this section we present several improvements for the generic particle filter that aim to overcome such problems.
  • barometer data is used.
  • the barometer data is optionally smoothed, optionally using a Kalman filter, detecting events of elevation change.
  • the particles can then be moved in the z axis using the filtered data. Such a method allows one to spread the particles filter over a height of a few floors, for example during an initialization stage.
  • “vertical corridors” are optionally defined in the map (stairs, escalators, elevators) through which an elevation change may occur. The fact that the area of such “vertical corridors” is small potentially results in a rapid convergence.
  • Barometric pressure is sampled at a rate suitable for tracking a specific object.
  • sampling may be done every few seconds or even every second.
  • sampling may be done every 1 or more minutes. For example, in a vicinity of the“vertical corridors” sampling may be performed more often than when far from the“vertical corridors”.
  • the algorithm is optionally kept both simple and efficient by using only a 2.5D representation of the building, for example using a 2D map for each floor and an optional absolute world elevation or relative elevation.
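  • By way of a non-limiting illustrative sketch (in Python; the noise parameters, the per-floor pressure threshold, and the pressure-to-height constant are assumed example values), the barometer smoothing and elevation-change detection described above may be implemented, for example, as follows:

```python
# Illustrative sketch: smooth barometric pressure with a scalar Kalman filter
# and report an elevation-change event (in meters) when the smoothed pressure
# drifts from a reference by more than a per-floor threshold.
class BarometerFloorDetector:
    def __init__(self, q=1e-4, r=0.05, event_threshold_hpa=0.4):
        self.x = None            # filtered pressure (hPa)
        self.p = 1.0             # estimate variance
        self.q, self.r = q, r    # process / measurement noise (assumed values)
        self.reference = None    # pressure anchor of the current floor
        self.threshold = event_threshold_hpa

    def update(self, measured_hpa):
        """Feed one pressure sample; return a z-shift in meters (0.0 if none)."""
        if self.x is None:
            self.x = self.reference = measured_hpa
            return 0.0
        self.p += self.q                        # predict
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (measured_hpa - self.x)   # correct
        self.p *= 1.0 - k
        if abs(self.x - self.reference) > self.threshold:
            dz = (self.reference - self.x) * 8.4  # ~8.4 m per hPa (approx.)
            self.reference = self.x               # re-anchor after the event
            return dz                             # move particles along z by dz
        return 0.0
```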
  • the soft init is optionally based on discrete analysis of possible locations - for example using lights or signs patterns.
  • a naive registration of the largest image-light to each relevant light in the database is performed.
  • the naive registration defines a global 3D vector for each such registration. Note that each registration pair allows us to approximate the vector length by matching the size of the light in the image and in the database.
  • this new set of particles includes at least one particle which is rather close to the real location - and therefore has a high probability to increase its weight.
  • a re-sampling simplifies the process: only a few particles are changed/replaced in each re-sample, and the rest of the particles are evaluated and their weights updated accordingly, potentially allowing a shorter convergence time, or a shorter iteration time, and potentially fewer particles.
  • an elevation is added to a particle filter map constraint, potentially enabling more accurate localization, and potentially enabling more rapid convergence, at least in scenarios where elevation is a factor.
  • an enriched map is used - for example a map which includes, in addition to structures such as walls and/or doors, also features which are simple to locate, such as lights and/or signs.
  • a simplified map is used - for example a map which includes only or mostly features which are simple to locate, such as lights and/or signs.
  • the particle filter method includes using an accurate compass input, using one or more of an angular bias and angular drift or change as part of a particle’s state.
  • Compass sensor may have an initial bias.
  • compass angle is optionally maintained using a gyro sensor or optical-vision-based method, and potentially an angular drift may develop.
  • an angle based (bias) shift and an angular drift are computed by the algorithm - just as the location (x, y, z) is computed - as part of the particle state.
  • the particle filter method includes using a flexible number of particles, enabling to adapt the number of particles to a navigation scenario. The number of particles is optionally adapted to the probability space represented by the set of the particles.
  • additional particles are optionally added.
  • the additional particles are optionally added in locations according to sensed data and/or in vicinity of high weight particles.
  • the number of particles may be reduced.
  • a particle with the lowest weight is optionally tested against a particle with the highest weight, or against a particle with the average weight; if the ratio of their weights is under some threshold value, the lowest-weight particle is removed.
  • a lowest weight particle is removed from the set of particles.
  • the lowest weight particle is optionally replaced with a new particle.
  • the new particle is assigned an initial weight of an average particle and/or a 50-th percentile weight.
  • two particles replace a particle removed.
  • no new particle is added.
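  • By way of a non-limiting illustrative sketch (in Python; the attribute names, the jitter magnitude and the ratio threshold are illustrative assumptions), the weight-ratio pruning and replacement described above may look, for example, as follows:

```python
# Illustrative sketch: remove the lowest-weight particle when its weight
# ratio to the highest-weight particle falls below a threshold; optionally
# replace it near a high-weight particle with a median (50th-percentile)
# initial weight, per the description above.
import copy
import random

def adapt_particle_count(particles, ratio_threshold=0.05, replace=True):
    """particles: list of objects with .weight, .x, .y attributes (assumed)."""
    particles.sort(key=lambda p: p.weight)
    lowest, highest = particles[0], particles[-1]
    if highest.weight > 0 and lowest.weight / highest.weight < ratio_threshold:
        particles.pop(0)                        # drop the lowest-weight particle
        if replace:
            new_p = copy.copy(highest)          # spawn near a high-weight particle
            new_p.x += random.gauss(0.0, 0.5)   # small positional jitter (assumed)
            new_p.y += random.gauss(0.0, 0.5)
            new_p.weight = particles[len(particles) // 2].weight  # median weight
            particles.append(new_p)
    return particles
```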
  • pedometry input optionally includes measuring optical flow and/or distance ranging with suitable sensors, which provides a more precise “action” as used in particle filter terminology.
  • pedometry input optionally includes device orientation, which provides a more precise “action” as used in particle filter terminology.
  • a few particles are initialized each time the algorithm re-samples.
  • a few new particles are optionally located at locations where there are no current particles, yet there is a likelihood for a particle to be.
  • Some non-limiting examples include: (i) initializing according to a light pattern; (ii) initializing according to a mapped landmark such as a sign; (iii) initializing in an elevator, based on detecting an altitude change.
  • Soft-init potentially overcomes a situation called the “kidnapped robot problem”, where a localization algorithm loses touch with the ground truth. Adding a few particles in likely places can lead a particle filter method to converge on those particles, and correct its own deviation from the ground truth.
  • Reporting expected error and confidence: the generic particle filter reports some kind of combined position at one or more of the rounds. In some cases both the expected error and the confidence of the positioning are reported, potentially allowing higher level filters to be applied to the reported positions.
  • Some embodiments include collecting benchmark data (e.g. environmental map, RF map and visual landmark map). Such collecting is potentially useful for a system to work efficiently and to achieve high accuracy results as well as robustness.
  • an environmental map data includes a 2D map of a building’s floor (e.g. a mall).
  • the map describes possible locations and the features the map includes, such as walls, shops (in the case of a mall map), doors, steps, and elevator areas. Data about staircases and elevator areas can be used to determine floor changes along with sensor fusion. Wall location descriptions help eliminate false location estimates in two ways: non-realistic positions, such as within a wall, are rejected, and previous estimates cannot be evaluated again if they passed through a wall in the action step of the algorithm.
  • Creating the environmental map is optionally a first step in the pre-processing process. It may be performed manually by measuring the environment, by using an existing map, or by collection from users of a localization app who navigate the environment and send data to a data collector, to produce maps and/or update maps - such a process can be called social mapping.
  • a digital map may be created in any convenient form.
  • a whole map's data is optionally split in a way such that sub-maps each describe a single floor of a multi-floor building.
  • an additional step of the process is collection of RF data.
  • RF data can consist of 3G/4G mobile signals, Bluetooth/BLE beacons, and WiFi transmissions at 2.4/5.0 GHz.
  • RF data collection may optionally be a semi-autonomous process where raw positioning data measurements collected are saved with respect to their collection elapsed time. Given n RF data measurements that were recorded at times t_i along a vector p on an environmental map, the estimated position of each measurement is optionally interpolated along p according to its elapsed time.
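  • By way of a non-limiting example, a minimal sketch (in Python) of one plausible reading of the interpolation described above, assuming measurement positions are interpolated linearly along the vector p by elapsed collection time:

```python
# Illustrative sketch: assign a position to each RF sample collected while
# walking a straight segment from p_start to p_end over total_time seconds,
# by linear interpolation on each sample's elapsed time t_i.
def interpolate_sample_positions(p_start, p_end, times, total_time):
    """p_start, p_end: (x, y) endpoints; times: elapsed times t_i (seconds)."""
    return [
        (p_start[0] + (p_end[0] - p_start[0]) * (t / total_time),
         p_start[1] + (p_end[1] - p_start[1]) * (t / total_time))
        for t in times
    ]
```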
  • Figure 5 is a simplified illustration of a map used in mapping RF and visual data according to an example embodiment of the invention.
  • Figure 5 shows a display 502 of an example mapping application, including a map 504, and controls 506 for a user to provide input.
  • Figure 5 shows additional controls 508 typical of a smartphone or a tablet.
  • Figure 5 illustrates that just as a smartphone or tablet can be used for localization, it can be used for mapping. A synergy is potentially produced when using the same device, or type of device, to map a region in which a similar device will later be used for localization.
  • Figure 5 shows a path 510 along which the mapping device was carried, while mapping visual data and/or RF data.
  • the map 504 displays dots 512 where ceiling lights, for example, were identified and mapped as visual landmarks.
  • the process of visual landmark mapping resembles the RF mapping task.
  • the visual landmarks are then registered (mapped), optionally using real world coordinates, relative to the environmental map. Note that the task of mapping a complex building such as a shopping-mall may include mapping hundreds of light sources, and may be time consuming.
  • Figure 6 is a simplified illustration of light source mapping according to an example embodiment of the invention.
  • Figure 6 shows a device such as, for example, a smartphone 602, carried along a path 604.
  • the smartphone 602 includes a camera 606, which captures images of a light source 608, while also recording time and/or location.
  • the location recorded while mapping may optionally be recorded using various means, such as pedometry, odometry, RF mapping, WiFi location, GPS if available, optical flow odometry, Google’s Tango phones, and so on.
  • Figure 6 also shows a motion vector 610 indicating the direction the smartphone 602 is moving.
  • Figure 6 depicts a semi-automated mapping method that is simple, efficient and applicable for mobile devices.
  • as the light source 608 is spotted and tracked along the path 604, the position of the light source 608 is optionally estimated as a weighted average of intersections of the 3D vectors to that light source 608.
  • a simple tracking algorithm compares two consecutive frames and looks for a minimal distance change between a previously spotted landmark and landmarks spotted in the new frame. In cases where the minimal distance does not exceed some threshold, a match is declared; otherwise a new landmark ID is assigned.
  • a more sophisticated tracking algorithm may include a 2D Kalman filter (see [BW01], [LWWL10]).
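  • By way of a non-limiting illustrative sketch (in Python; the distance threshold is an assumed example value), the simple nearest-landmark tracker described above may be implemented, for example, as follows:

```python
# Illustrative sketch: match each landmark detected in the new frame to the
# nearest landmark of the previous frame; if the nearest distance exceeds a
# threshold, assign a new landmark ID.
import math
from itertools import count

_next_id = count()  # global ID generator for new landmarks

def track_landmarks(prev, current, max_dist=30.0):
    """prev: dict id -> (x, y); current: list of (x, y). Returns a new dict."""
    tracked = {}
    for cx, cy in current:
        best_id, best_d = None, float("inf")
        for lid, (px, py) in prev.items():
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_id, best_d = lid, d
        if best_id is not None and best_d <= max_dist and best_id not in tracked:
            tracked[best_id] = (cx, cy)           # match declared
        else:
            tracked[next(_next_id)] = (cx, cy)    # new landmark ID assigned
    return tracked
```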
  • a direction or vector to the light source 608 is calculated for each image.
  • a location of the light source is optionally calculated by calculating an intersection of the vectors to the light source 608. It is noted that the more images are used, the more accurate the location of the light source can potentially be.
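  • By way of a non-limiting example, a minimal sketch (in Python) of estimating a light-source position as the least-squares intersection of the 3D vectors described above (assuming the rays are not all parallel):

```python
# Illustrative sketch: the point minimizing the summed squared distance to a
# set of rays (origin o_i, unit direction d_i) solves a small 3x3 linear
# system built from the projectors orthogonal to each ray.
import numpy as np

def intersect_rays(origins, directions):
    """origins, directions: sequences of 3-vectors (directions unit length)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        o = np.asarray(o, dtype=float)
        d = np.asarray(d, dtype=float)
        m = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += m
        b += m @ o
    return np.linalg.solve(A, b)        # estimated light-source position
```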
  • Figure 7 is a simplified illustration of a screenshot of a navigating application according to an example embodiment of the invention.
  • Figure 7 shows a screenshot of an example embodiment of a mapping navigating application.
  • images of two detected and tracked lights 702a 702b are shown.
  • the two light sources 702a 702b are mapped to map locations 704a 704b with respect to a user position 706, or the mapping device’s position 706.
  • a barometer sensor is rather common in many smartphones.
  • the current air pressure can be used in order to detect height changes at sub-meter accuracy.
  • Using a Kalman filter we were able to detect changes in height (i.e., detecting movement from one floor to another).
  • convergence and/or accuracy potentially depends on a number of image frames included in a calculation.
  • at a high frame capture and calculation rate, for example 30 frames-per-second (fps), algorithm convergence potentially occurs within 2 seconds or less.
  • convergence rate potentially depends on a processor speed.
  • convergence rate potentially depends on the environment in which the localization is taking place. An environment with many similar features can take longer than an environment with one or more distinct-from-each-other features.
  • Performing data fusion, that is, using input from sensors in addition to a camera, potentially increases the speed of convergence.
  • with a high fps (say 30) sensory input, convergence may take from 2 to 60 seconds.
  • the time is scenario dependent: in case of detectable signs, a sub-second convergence is possible, while in case of no detectable objects, the convergence may optionally wait for a user movement such as a floor change or simply a walk through a mapped region.
  • Figure 8A shows a path as computed by the “GoIn” algorithm.
  • Figure 8B shows the ground truth.
  • Figures 8A and 8B present a part of the actual competition evaluation of the “GoIn” system.
  • mapping a shopping mall (with three floors and about a hundred stores) required about one hour in which the RF fingerprinting and the light mapping were performed simultaneously. Then, in the localization part those maps were used for positioning (as shown in Figures 9, 10).
  • mapping an average size shopping mall can be done in a matter of minutes to hours, depending on the hardware, including testing and map validation.
  • the level of accuracy may vary between light-based high accuracy (0.5-2 meters) and low accuracy (5-10 meters) when the visual sensor is blocked. Using relatively simple image analysis we are able to distinguish between the high accuracy and the low accuracy cases.
  • image analysis typically enables sub-meter accuracy and often enables sub-foot accuracy using multiple images. It is noted that accuracy is potentially affected by the distance from the feature which is imaged; typically, being closer enables better accuracy. In cases of features located far away from the user (say 50 meters) an expected accuracy may be 2-3 meters.
  • Figure 9 is a simplified illustration of a screenshot of a navigating application displaying a large area, according to an example embodiment of the invention.
  • Figure 9 shows a screenshot from the “GoIn” application.
  • Each blue dot 902 represents a single light source (a few hundred lights were mapped in each of three floors).
  • a red line 904 represents a computed path with a meter accuracy (on average).
  • a green circle 906 presents the WiFi location and a red circle 908 represents the light-based position.
  • a Google-maps reported position 910 is presented in light blue - with an average error larger than 15 meter.
  • Figure 10 is a graph of an accuracy evaluation of a path according to an example embodiment of the invention and a path according to Google map.
  • the graph of Figure 10 shows an accuracy evaluation of GoIn’s path 1006 (blue) and Google-map’s path 1008 (orange).
  • the path 1006 computed by the GoIn app was within 1 meter accuracy during at least half of the evaluation, yet whenever the phone’s camera was blocked the expected accuracy went to 3-6 meters.
  • the graph of Figure 10 has an X-axis 1002 in meters, and a Y-axis 1004 in meters.
  • the paths 1006 and 1008 present a specific walk in a real mall (PETAH TIKVA) and the locations reported by Google (less accurate) and by an example embodiment of the invention - the GoIn method (more accurate).
  • the example embodiment of the localization method uses light sources as landmarks.
  • the example embodiment algorithm was able to improve the accuracy from “room-level” to “seat-level” in cases where the visual sensor was able to detect those landmarks.
  • the suggested algorithm was implemented as an Android application and tested in several real-life scenarios both for localization and mapping. In general, the accuracy of the localization depends on the ability of the visual sensor to detect visual landmarks (without being blocked by a user’s body).
  • Lessons learned include performing a“camera-switch”, in which one camera on one side of a smartphone or tablet is replaced by another camera on another side, while continuing to use the tracking method.
  • The“camera-switch” is used for potentially improving visual path tracking (i.e., optical flow or visual pedometry) to allow a smooth and continuous path approximation even in cases of relatively long scenarios of a blocked camera.
  • the exemplary positioning embodiments below describe a vision based navigation system designed for indoor localization.
  • An example embodiment framework works as a standalone 3D positioning system by fusing a sophisticated optical-flow pedometry with map-constraints using an advanced particle filter.
  • An example embodiment method potentially requires no personal calibration and potentially works on standard smart-phones, optionally with a relatively low energy consumption.
  • LBS (location based services)
  • first responders
  • autonomous robotics indoor navigation
  • LBS related applications mainly target smart-phone users navigating in a shopping mall [FNI13], [GLN09]
  • first responders may be using a foot-mounted pedometer (see [JSPG10], [KR08a], [KR08b]).
  • the suggested solution should be able to work in an “off-line” mode (i.e., “flight-mode” or “standalone” mode).
  • This Example presents a smartphone indoor positioning system (IPS) based on software developed using recent AR and MR (Augmented Reality and Mixed Reality) tools such as Google’s ARCore or Apple’s ARKit.
  • the AR tools were used to develop visual pedometry (scaled optical flow) sensors, which were fused with an improved version of a localization particle filter to produce an accurate and robust solution for various indoor positioning applications.
  • the Example method enables a simple and efficient mapping solution that, combined with the improved version of the localization particle filter, allows 1-2 meter positioning accuracy in most standard indoor scenarios.
  • a user's global position can be retrieved from existing geolocation services (e.g., Google Maps Geolocation API). Such a user location is commonly approximated using RF signals (4G/3G, WLAN, BLE) and even global navigation satellite system (GNSS) signals. The accuracy of such methods is considered to be “building level” (10-30 meters) or “room level” (5-10 meters).
  • a user relative position is optionally computed using a pedometer.
  • a smartphone-based pedometer is composed of two virtual sensors: (i) a “step-counter”, which detects discrete step-events; (ii) an orientation sensor, which approximates the user's global/relative direction. Combined, the two sensors enable a step-based relative path computation, as sketched below. Naturally, such a method tends to drift over time and over steps or distance.
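  • By way of a non-limiting illustrative sketch (in Python; the stride length is an assumed constant), the two-virtual-sensor pedometer described above may be combined, for example, as follows:

```python
# Illustrative sketch: each detected step advances the position by an assumed
# stride length along the current heading from the orientation sensor. As
# noted above, such a computation drifts over time and distance.
import math

class StepPedometer:
    def __init__(self, stride_m=0.7):   # stride length is an assumption
        self.x = self.y = 0.0
        self.stride = stride_m

    def on_step(self, heading_rad):
        """Called once per step event with the orientation sensor reading."""
        self.x += self.stride * math.cos(heading_rad)
        self.y += self.stride * math.sin(heading_rad)
        return self.x, self.y
```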
  • Map: The particle filter method estimates the internal state in a given area.
  • an input of this algorithm is a 2D, 2.5D or 3D map of the region.
  • Such a map preferably includes as many constraints as possible (for example walls and tables).
  • the map constraint is one of the parameters that determine each particle’s grade, since particles with an impossible location on the map are usually downgraded.
  • Particle: At the beginning of the localization process we “spread” a set of particles P on the map.
  • Each particle x_i ∈ P has one or more of the following attributes: location ⟨x, y, z⟩, orientation ω and grade g.
  • all particle locations and orientations are optionally modified, as well as their grades.
  • the sum of the particles’ grades in P is optionally normalized to 1 at each step.
  • the grade of each particle is optionally set higher when its location on the map seems more likely to represent the internal state.
  • Move function (Action function): With each step the particles on the map are relocated according to the internal movement. In some embodiments, for each step we calculate a movement vector (optionally in 3D) and a difference in orientation, then move the particles accordingly.
  • the movement of each step is optionally provided by a mobile pedometer (for example a step counter, optionally with orientation) as commonly used in smart-phones.
  • Sense function: Sensors of the device are optionally used to determine each particle’s grade.
  • a sense method or function predicts each particle’s sense for each step, and grades it with respect to a correlation between the particle prediction and the internal sense.
  • the sense function can compute distances to the nearest wall (forward and back, right and left) and then compare the computed distance to the distance of each particle to the nearest wall in the map, optionally changing the particle's grade by an amount corresponding to the correlation.
  • Re-sampling: A process of choosing a new set of particles P’ from P.
  • the re-sampling process can be done by various methods and one purpose of the re-sampling is to choose particles with a high weight or grade over particles with lower weight or grade.
  • Random noise: To prevent the convergence of the particles from happening too fast (and thereby risk missing the true location), particles are optionally moved by a small random noise on the map. In some embodiments this is done by moving each particle within a small radius of its original location.
  • Algorithm 1 described below explains a process of the particle filter method using a mobile pedometry sensor.
  • Input: A black-and-white 2D, 2.5D or 3D map of the navigation area.
  • Algorithm 1 is described as:
  • Init: generate a set P of n particles. For every x_i ∈ P a random location ⟨x, y, z⟩, orientation ⟨ω⟩ and grade g are set, optionally in a uniform distribution over the map. A sketch of one round of the algorithm appears below.
  • a black and white map is used in order to present geo-constraints used by the particle filter.
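  • By way of a non-limiting illustrative sketch (in Python; the map is assumed to be a boolean accessibility grid and the noise magnitude is an example value), one round of the particle filter of Algorithm 1 (move, sense, normalize, re-sample, random noise) may look, for example, as follows:

```python
# Illustrative sketch of one particle filter round: move all particles by the
# pedometer action, downgrade particles at inaccessible map cells, normalize
# grades to sum to 1, and re-sample favoring high-grade particles.
import math
import random

def particle_filter_round(particles, step_len, dtheta, grid, noise=0.2):
    """particles: list of dicts {x, y, w (orientation), g (grade)};
    grid[y][x] is True for accessible cells (black-and-white map)."""
    for p in particles:
        # Move (action) with small random noise, per the description above.
        p["w"] += dtheta
        p["x"] += step_len * math.cos(p["w"]) + random.gauss(0, noise)
        p["y"] += step_len * math.sin(p["w"]) + random.gauss(0, noise)
        # Sense: downgrade particles at impossible (inaccessible) locations.
        ix, iy = int(p["x"]), int(p["y"])
        inside = 0 <= iy < len(grid) and 0 <= ix < len(grid[0])
        if not (inside and grid[iy][ix]):
            p["g"] = 1e-9
    total = sum(p["g"] for p in particles) or 1.0   # normalize grades to 1
    for p in particles:
        p["g"] /= total
    weights = [p["g"] for p in particles]           # re-sample P' from P
    return [dict(random.choices(particles, weights=weights)[0])
            for _ in range(len(particles))]
```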
  • the above algorithm is relatively time efficient, however its precision may be insufficient in some cases, for example in large areas or areas with few constraints.
  • the next section describes a particle filter based algorithm with advanced methods to improve the accuracy of the results.
  • the improved mapping and the adjusted sense function are both possible due to new AR smart phone technology.
  • the next subsections explain the improved mapping process and the advanced particle filter algorithm.
  • AR augmented reality
  • AR algorithms have the following features:
  • one or more of the first two features enable estimating a user movement and orientation in real time, optionally even for each step taken by the user, and are optionally used to improve pedometer-sensed data.
  • the third feature is optionally exploited to improve the mapping and the particle filter sense function, as will be explained in detail in the following subsections.
  • a method for detecting floor change.
  • An error of “wrong floor” is significant for a user and may cause significant errors related to wrong constraints applied by a “wrong map”.
  • one or both of a barometer sensor and a 3D optical-flow were used in order to estimate elevation of a user. Both methods are relatively sensitive to changes in elevation, yet both also tend to drift.
  • 3D optical-flow methods are typically not able to detect vertical movement in an elevator, where the change in elevation is not seen.
  • the following floor-change filter is used, which is based on rapid changes in barometer readings (for simplicity we assume that the barometer sampling rate is fixed):
  • the particles are initially randomly spread among all floors.
  • the improved algorithm may optionally fuse the 3D optical-flow sensor reading with the barometer sensor, in some cases using a Kalman filter.
  • Figures 11A-11C are simplified illustrations of using a particle filter to estimate a location according to an example embodiment of the invention.
  • Figure 11D is a graph showing atmospheric pressure measurements over time according to an example embodiment of the invention.
  • Figures 11A-11C show a map and particles displayed upon the map, at three different stages.
  • Figure 11A shows an initial state, where the particles are uniformly distributed.
  • Figure 11B shows how, using a short motion vector, the particles begin to organize clusters.
  • Figure 11C shows the particles converged to a single position cluster.
  • Figure 11D shows a graph having an X-axis of time and a Y-axis of barometric pressure (in PSI).
  • Figure 11D shows barometric pressure over time. The detection of floor change enabled the algorithm to converge efficiently to the right 3D location.
  • the advanced particle filter algorithm uses a map of the region of interest.
  • a map is assembled by an example embodiment system using the following technique:
  • colors representing constraints are placed on the map.
  • one or more of the following constraints are represented according to the following logic:
  • color A: accessible area.
  • color B: inaccessible area, such as walls, fixed barriers, etc., optionally as sensed by the AR tool.
  • color C: partially accessible area, adjacent to an inaccessible area, optionally having a fixed width.
  • color D: inaccessible area, one that could not have been identified by the AR tool due to its lack of vertical-surfaces shape. This area may optionally be colored manually.
  • the generated map is an input for the particle filter algorithm, and is later also used to determine the particles grade.
  • Figure 12 is a simplified illustration of a multicolor map used according to an example embodiment of the invention.
  • Figure 12 shows a 2D multicolor map example used in the advanced algorithm.
  • the white color (marked by the letter A) represents accessible areas;
  • the black color (marked as B) represents fixed inaccessible areas (in this case walls);
  • the pink color (marked as C) represents adjacent partially-accessible areas (near walls);
  • the brown color (marked as D) represents dynamic inaccessible areas (tables in this case);
  • the yellow part (marked as E) represents stairs.
  • FIG. 13 is a simplified illustration of a multicolor map used according to an example embodiment of the invention.
  • Figure 13 shows a multicolor map example used in the advanced algorithm.
  • the white color (marked by the letter A) represents accessible areas;
  • the black color (marked as B) represents the fixed inaccessible areas (in this case walls);
  • the grey color (marked as C) represents dynamic inaccessible areas (tables in this case);
  • the yellow part (marked as D) represents stairs and elevators.
  • the map of Figure 13 includes two floors.
  • the map of Figure 12 is used as a 2.5D map.
  • Such a map can optionally benefit from elevation-related sensors such as barometric pressure or GPS, if available.
  • the naive and the advanced particle filter algorithm differ in their sense functions. While the naive algorithm simply evaluates the weight of the particles according to their map location (a particle in a B or A area), the advanced algorithm may optionally perform an actual sensing, to determine how far each particle is from a ground truth.
  • the sensing performed by the AR measurement tool measures, for example, a real distance from a nearest vertical obstacle, and compares the sensed distance to a calculated distance of each particle to a nearest B area in the same direction on the map. This comparison provides an ability to re-weight the particles in a more precise way. Note that the existence of the C area provides more flexibility with respect to inaccurate measurements.
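  • By way of a non-limiting illustrative sketch (in Python; the ray_cast helper and the tolerance sigma are illustrative assumptions), the distance-comparison re-weighting described above may be implemented, for example, as follows:

```python
# Illustrative sketch: re-weight each particle by a Gaussian on the
# disagreement between the AR-sensed distance to the nearest vertical
# obstacle and the particle's expected distance to the nearest B
# (inaccessible) area on the map, in the same direction.
import math

def sense_reweight(particles, sensed_dist, ray_cast, sigma=0.5):
    """ray_cast(x, y, w) -> map distance to the nearest B area (assumed helper)."""
    for p in particles:
        expected = ray_cast(p["x"], p["y"], p["w"])
        err = sensed_dist - expected
        p["g"] *= math.exp(-(err * err) / (2.0 * sigma * sigma))
    total = sum(p["g"] for p in particles) or 1.0
    for p in particles:
        p["g"] /= total
    return particles
```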
  • an IMU (inertial measurement unit) based pedometer detects the device’s global orientation and counts “steps”.
  • Such a method potentially introduces inaccuracy both in the distance measured and in the orientation (i.e., some steps are larger than others, and the device orientation is only loosely correlated with the walking orientation).
  • velocity estimation is based on optical flow with plane and range detection, such as described in [VDA+18], in order to estimate the user movement, optionally at a high sampling rate.
  • each particle-state may also include additional data, to estimate the compass original bias, and/or current drift.
  • each particle starts with an initial, optionally Gaussian, random value of compass bias.
  • each new particle is optionally assigned a compass-related state according to values of its nearest neighbors, with some minor noise.
  • Each particle optionally uses the smartphone’s compass-measured data combined with its bias and drift for the move function.
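  • By way of a non-limiting illustrative sketch (in Python; the noise magnitudes are assumed example values), carrying a compass bias and drift in each particle's state may look, for example, as follows:

```python
# Illustrative sketch: each particle carries its own compass bias
# (Gaussian-initialized) and drift rate; its move step corrects the measured
# compass heading by that bias. Particles with a wrong bias/drift hypothesis
# lose weight over time, so the filter estimates them alongside (x, y, z).
import math
import random

def init_compass_state(particle, bias_sigma=0.15):
    particle["bias"] = random.gauss(0.0, bias_sigma)  # initial bias (rad)
    particle["drift"] = 0.0                           # drift rate (rad/s)

def move_with_compass(particle, compass_heading, speed, dt):
    particle["bias"] += particle["drift"] * dt        # drift accumulates
    particle["drift"] += random.gauss(0.0, 0.0005)    # let drift evolve
    heading = compass_heading + particle["bias"]
    particle["x"] += speed * dt * math.cos(heading)
    particle["y"] += speed * dt * math.sin(heading)
```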
  • Particle Filter (PF) algorithms [TBF05] rely on an assumption of a continuous flow of incoming data from sensors. However, the sampling process cannot be guaranteed. Many localization problems include sparse sensing scenarios, i.e., scenarios where data from a sensor is stale or missing. In the context of the current Example, assuming a localization algorithm is vision based, blocking the camera can have very serious ramifications, since re-sampling only works well given correct weights, which are obtained from the real world via the sensors.
  • when such a scenario is detected, the particle filter optionally reacts by adding random noisy movement to one or more of the particles, relative to a previously measured movement or pace. Such a reaction provides more scattered particles, potentially resolving a momentary uncertainty.
  • an adjustable particle filter is used, that adjusts the number of particles according to a size of an expected probabilistic space.
  • for a large region of possible solutions (e.g., an init stage spanning a few floors), a large set of particles may be used; later on, when the particle filter tends to converge, the number is optionally reduced, enabling a better practical runtime and/or lower memory usage.
  • Kidnapped Robot is the name of a well-known problem [TFBD01], which, in the context of localization, navigation and tracking, refers to a situation where the algorithm completely loses track of the real-world location and the evaluation function performs badly.
  • the geolocation services are used as an anchor to the truth; see Figure 14. In case our system reports a location extremely different from the one the geolocation service reported, we reboot the system based on the geolocation service report, according to the service's expected accuracy.
  • Figure 14 is a simplified illustration of a map showing a comparison of a location determined by Google’s fused location service and a location method used according to an example embodiment of the invention
  • Figure 14 shows Google’s fused location service location marked as a blue dot 1402, and its accuracy is marked with a light blue circle 1404.
  • the particle filter location determined by an example localization algorithm used with reference to the exemplary positioning embodiments is marked with a green dot 1406.
  • an estimation is performed whether the phone has LOS (line of sight) to a navigation satellite; see [YMW16] for more details.
  • a method is based on the phone’s light sensor: in daylight the outdoor light is usually stronger than the indoor light (even on cloudy days), while at night-time the opposite happens. Using such a method provides an additional sense evaluation, optionally for use as a constraint and/or particle weight consideration. A small sketch appears below.
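  • By way of a non-limiting example, a minimal sketch (in Python; the lux thresholds are assumed example values) of the light-sensor classification described above:

```python
# Illustrative sketch: by day, outdoor light is usually much stronger than
# indoor light; at night the opposite holds.
def classify_indoor_outdoor(lux, is_daytime,
                            day_outdoor_lux=2000.0, night_outdoor_lux=5.0):
    if is_daytime:
        return "outdoor" if lux > day_outdoor_lux else "indoor"
    return "outdoor" if lux < night_outdoor_lux else "indoor"
```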
  • FIGS 15A-15B are simplified illustrations of a potential advantage of using indoor/outdoor determination according to an example embodiment of the invention.
  • Figure 15A shows particle initialization marked on a map, without use of indoor / outdoor sensing data.
  • Figure 15A shows particles 1502 scattered on a map, within a circle 1504 having a specific initial radius.
  • Figure 15B shows that an ability to classify between indoor and outdoor enables an example embodiment algorithm to dramatically decrease an overall area of possible solutions - potentially leading to a faster convergence time and/or better accuracy.
  • Figure 15B shows particles 1512 scattered on a map, within a circle 1514 having a specific initial radius, but only indoors, with respect to a building 1516 also shown on the map.
  • Figure 16 is a simplified illustration of tracking according to an example embodiment of the invention in comparison to a Lidar-based ground truth.
  • Figure 16 shows a 2D evaluation of a path 1602 (shown in green) as detected by the STEPS system with respect to the ground truth ( GT) path 1604 (shown in blue). The evaluation was performed at specific points in time, and error lines 1606 are shown in red.
  • Figure 17 is a graph showing height value error over time according to an example embodiment of the invention.
  • the graph of Figure 17 has an X-axis 1704 of time in arbitrary units and a Y-axis 1702 of relative elevation in meters.
  • Figure 17 shows a z-convergence process, in which a floor position was accurately found after about 40 seconds.
  • Figure 17 includes a first line 1707 showing floor position, a second line 1708 showing the ground truth, and a third line 1706 showing the z-error, or difference between estimated elevation shown by the first line 1707 and ground truth shown by the second line 1708.
  • Figures 18 and 19, described below, show the different convergence nature of a particle filter regarding a 3D case (when the floor is unknown) and a 2D case (when the floor is given).
  • Figure 18 is a graph showing position error over time according to an example embodiment of the invention.
  • the graph of Figure 18 has an X-axis 1804 of time in arbitrary units and a Y-axis 1802 of relative position in meters.
  • Figure 18 includes a first line 1806 showing X-Y error and a second line 1808 showing Z error.
  • Figure 18 shows a test case lasting about 120 seconds.
  • the elevation, or Z component of the location converged from a height error of about 4 meters to a sub-meter error.
  • the horizontal error also reduced, or converged, over time.
  • Figure 19 is a graph showing position error over time according to an example embodiment of the invention.
  • the graph of Figure 19 has an X-axis 1904 of time in arbitrary units and a Y-axis 1902 of relative position in meters.
  • Figure 19 includes a first line 1906 showing Z error and a second line 1908 showing X-Y error.
  • Figure 19 shows Particle Filter 2D convergence in a case where the correct floor is known. A known value for the Z axis, or elevation, is therefore used.
  • the second line 1908 shows that the X-Y error reduces from an error of 4.5 meters to about 1.3 meters within 10 seconds (about 15 steps). During the rest of the test the horizontal error is about 1 meter, while the vertical error is (on average) below half a meter.
  • An example embodiment implementation of the localization algorithm in phones with a TOF (Time Of Flight) camera differs from the augmented reality based single-camera phones (e.g., Google’s ARCore), potentially providing increased accuracy and/or faster convergence time.
  • a 3D point cloud potentially enables implementing a high-accuracy localization loop using range-error analysis.
  • the TOF cameras typically have a relatively narrow error range - usually smaller than 5cm (see [FPB+ 16]).
  • An example embodiment implementation shows an improvement of expected accuracy.
  • Such an implementation enables improving the expected accuracy down to 10-20 centimeters.
  • a 3D mapping of the region of interest can be performed and the localization particle filter algorithm can work on the 3D map.
  • the particle filter localization algorithm was able to maintain a relative 4-12 meter accuracy (7.2 meter on average), see Figures 20, 21.
  • the overall evaluation of the example embodiment algorithm provided first place in the competition.
  • Figure 20 is a graph showing position error over a path according to an example embodiment of the invention.
  • the graph of Figure 20 has an X-axis 2004 of relative position in meters and a Y-axis 2002 of relative position in meters.
  • Figure 20 includes green dots 2006 showing ground truth, a blue line 2008 showing a position calculated by the example embodiment, and red lines showing the error between them.
  • Figure 21 is a graph showing position error according to an example embodiment of the invention.
  • the graph of Figure 21 has an X-axis 2004 of consecutive waypoints and a Y-axis 2002 of error in meters.
  • Figure 21 shows an error for each of the 70 points along the evaluation process of the IPIN 2018 localization competition. At the last few points a large error was reported, due to a significant compass drift most probably generated by a working elevator which was part of the evaluation path.
  • Figure 21 shows the actual measured accuracy (error) as tested in the IPIN2018 localization competition for the “STEPS” group.
  • the graph shows the error (in meters) with respect to the waypoints (there were about 70 such points).
  • the exemplary parking embodiments below describe a novel approach to vehicle navigation, even in GNSS-denied environments.
  • the approach fuses Dead Reckoning methods obtained from a smartphone and/or a vehicle’s on-board computer, and computes an accurate 2D, 2.5D or 3D position of the vehicle.
  • An example embodiment algorithm is based on an advanced version of a particle filter algorithm which uses road-based events such as speed bumps, turns, altitude change and RF signals, optionally in addition to other sensors described herein, in order to estimate the vehicle’s location, potentially in real-time.
  • An aspect of some example embodiments includes mapping the environment.
  • the present exemplary parking embodiments describe an underground parking lot, but the approach can be applied to other scenarios such as, by way of some non-limiting examples, roads, and more specifically but not exclusively tunnel roads.
  • the terms smartphone and phone are used interchangeably with the terms for a computer, such as a vehicle on-board computer, a tablet, and similar computing devices.
  • GNSS navigation is used everywhere in the vehicle industry. Coupled with Map Matching (MM) and Dead Reckoning (DR) techniques, it apparently enables a fairly accurate vehicle localization on top of mapped roads [1, 8]. However, GNSS-denied environments such as indoors, tunnel roads and parking lots create a real challenge for those navigation algorithms.
  • Vision based navigation is yet another, less common approach, due to its complexity.
  • light sources can serve as landmarks for IPS as was apparently suggested by [3, 4], who developed a vision-based indoor localization system for mobile robots, utilizing ceiling lamps as landmarks.
  • the lion’s share of research in this field is for pedestrian navigation, mainly for shopping malls, since the commercial incentive is clear - Location Based Services (LBS) [2].
  • the present description describes automobile 2.5D navigation in GNSS-denied environments. However, additional devices can benefit from such navigation.
  • a complementary sensor-based mechanism for accurate automobile navigation in underground parking-lots and freeway tunnels is described.
  • the system and methods described harness, amongst other inputs, road-based events (e.g., crossing a speed-bump) as detected by a phone’s sensors in order to estimate the vehicle’s ego-location.
  • the novelty of the system and methods described includes characterization and detection of the road- based events.
  • the implementation offers a GNSS -level of accuracy in GNSS-denied environments, a feature unavailable today.
  • a desired navigation scenario is as follows: a driver enters a vehicle with her phone and starts driving. The phone should always be able to present an exact location on roads, in tunnels and in parking lots.
  • the naive navigation algorithm may rely on at least two information sources: the mobile device’s sensors and the vehicle itself. We start with the latter.
  • Modern vehicle on-board computers can produce valuable information in real-time regarding the vehicle’s state.
  • for example, the exact absolute speed, also known as the Speed Over Ground (SoG) value, and the wheel orientation.
  • a Course Over Ground (CoG) value is reported by a GNSS receiver and the phone’s absolute orientation is calculated from the IMU sensor.
  • the phone orientation relative to the vehicle can be precisely computed. Once this figure is obtained, the vehicle’s orientation can be computed by the phone.
  • the vehicle’s trajectory can be coarsely computed. This is called the Dead Reckoning (DR) approach.
  • one or more of the following factors are optionally used - a Region Of Interest (ROI) map and data fusion of road-based events.
  • a map obtainment, or mapping, aspect of the exemplary parking embodiments is described below.
  • the data fusion is performed by a probabilistic particle filter algorithm.
  • We start by describing the road-based events and their detection:
  • Speed bumps: Nearly all parking lots have speed bumps installed on them (see the description of Figure 23 below).
  • the speed bumps provide at least two characterizations: first, they are relatively easy to map (only a few bumps on each floor) and, second, they can be detected by sensing the accelerometer value.
  • the accelerometer measurements alone may not distinguish between two bumps, so a ROI map is used to compare the location of a bump with a sensory (acceleration) indication of a bump.
  • Figure 22 demonstrates an accelerometer graph of a vehicle passing over a speed bump.
  • Figure 22 is a graph showing linear acceleration as captured by a smartphone positioned in a car according to an example embodiment of the invention.
  • the graph of Figure 22 has an X-axis 2204 of time in arbitrary units and a Y-axis 2202 of linear acceleration.
  • Figure 22 includes a first red line 2206 showing X-axis acceleration, a second green line 2207 showing Y-axis acceleration, and a third blue line 2208 showing Z-axis acceleration.
  • Figure 22 shows linear acceleration in three axes as captured by a smartphone positioned in a car which passed over a speed-bump. It is noted that detecting a speed bump in a vicinity where a speed bump is expected may optionally be done using fewer acceleration measurements, for example in only one or two dimensions, along one or two axes.
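  • By way of a non-limiting illustrative sketch (in Python; the acceleration threshold and window length are assumed example values), speed-bump detection from the vertical acceleration trace of Figure 22 may look, for example, as follows:

```python
# Illustrative sketch: a speed bump shows as a short, strong excursion of the
# z-axis linear acceleration; declare a bump event when enough samples in a
# recent window exceed a magnitude threshold.
def detect_speed_bump(z_accel_window, threshold=3.0, min_samples=3):
    """z_accel_window: recent z-axis linear-acceleration samples (m/s^2)."""
    strong = sum(1 for a in z_accel_window if abs(a) > threshold)
    return strong >= min_samples   # sustained excursion -> bump event
```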
  • An event of losing or retrieving a GNSS signal: A feature of indoor navigation is a lack of GNSS signals. From a different perspective, one can get input and information from events of losing or retrieving the GNSS signal. Losing a GNSS signal is also called a GNSS “lost fix” event. Entering a parking-lot is usually accompanied by a sharp GNSS signal degradation or loss. The parking-lot entrance position can be deduced from the GNSS position just before the signal degradation event. A similar method can also be used when exiting the parking lot. The “fix retrieval” event potentially indicates a location of the exit.
  • a useful and potentially accurate differential sensor is the barometer, also available in many contemporary smartphones.
  • “differential” means that it is not necessary to extract an absolute height from the barometer.
  • a floor shift is easy to spot as a change in atmospheric pressure.
  • the floor of a parking lot can be determined. This is also true for determining levels halfway between floors.
  • a road turn can be detected from the phone’s gyroscope sensor. Many parking-lots have spiral turns, mostly between floors. Those spirals are unmistakable for the gyroscope sensor.
  • A leaving-the-vehicle event implies the vehicle is at a parking spot. This event can be deduced both by a phone and by the vehicle from loss of Bluetooth connectivity between the phone and the vehicle.
  • a road-facing camera, even a low-resolution camera, can produce valuable information regarding position and orientation of visual landmarks such as signs, pillars, colored pillars, coded pillars and similar landmarks.
  • two consecutive similar-looking pillars may be distinguished, for example by pillar numbers, colors and similar markings often used in parking lots.
  • Position of the pillars is optionally determined, optionally in global coordinates, and a vehicle’s position can be computed with a high accuracy level, potentially limited only by the map accuracy.
  • Figure 23 is a color image of a contemporary parking-lot with speed bumps, color column markings, and location codes.
  • The picture of Figure 23 was taken at a typical parking lot in Israel.
  • Figure 23 shows a speed bump 2302; a first column 2304 painted green marking a“green” portion of the parking lot; a second column 2306 painted red marking a“red” portion of the parking lot; markings 2308 on the columns - including unique identification codes on each column; pavement markings 2310; lights 2314 and signs 2311.
  • Figure 23 also shows green rectangles 2312 displaying where a vision system detected the speed bumps 2302.
  • the road-based events are fused using a particle filter to get an accurate position.
  • the Particle Filter is a member of the non-parametric multi-modal Bayesian filter family.
  • a PF estimates the posterior by a finite number of parameters also called particles.
  • Each particle is represented by a belief function bel(x_t). This belief function serves as a weight.
  • the sense function z_{1:t} is not necessarily periodic: road-based events are discrete events that change the probability space. Take a speed-bump event, for example. As demonstrated in Figure 22, the event can be detected with high certainty. This means that the vehicle is located in the vicinity of one of the speed bumps. In other words, all the other guesses (particles) are optionally eliminated, and/or their likelihood is diminished. Usually the detection is not absolute and may be only probabilistic, since there may be two or more speed bump candidates. In some embodiments the algorithm does not eliminate all the non-speed-bump-location particles and preserves a small portion of such particles. This approach can also solve the “kidnapped robot” problem. A sketch of such an event-based update appears below.
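  • By way of a non-limiting illustrative sketch (in Python; the radius and the preserved fraction are assumed example values), the discrete-event update described above may be implemented, for example, as follows:

```python
# Illustrative sketch: on a detected speed-bump event, particles near a
# mapped bump keep their weight, while the rest are diminished rather than
# eliminated, preserving a small portion against false detections and the
# "kidnapped robot" case.
import math

def on_speed_bump_event(particles, bump_locations, radius=3.0, keep=0.05):
    for p in particles:
        near = any(math.hypot(p["x"] - bx, p["y"] - by) <= radius
                   for bx, by in bump_locations)
        if not near:
            p["g"] *= keep           # diminish, do not eliminate
    total = sum(p["g"] for p in particles) or 1.0
    for p in particles:
        p["g"] /= total
    return particles
```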
  • An aspect of some embodiments includes mapping a parking lot. This section addresses a map obtainment problem; how can a parking-lot map be constructed? How can a multi-floor parking-lot map be constructed?
  • Some parking-lots provide a detailed map along with an exact scale. Such a map is shown in Figure 24.
  • Figure 24 is an example parking lot map.
  • the map of Figure 24 shows detail down to a level of an individual parking stall, and may be geometrically accurate to at least that level, that is, ⁇ 1-2 meters.
  • a mapping process is optionally applied.
  • a mapping process is optionally used to update a parking lot map (or other environment map).
  • speed bumps in a parking lot may be moved around, or added; signs may be put up, taken down, changed, and so on.
  • the mapping process optionally updates an electronic map.
  • the mapping process should be as simple and efficient as possible.
  • the mapping process is optionally a Simultaneous Localization And Mapping (SLAM) [6] process, where a portion of the users (drivers) also function as mappers.
  • a refined, more accurate trajectory can optionally be computed, using the detected road-based events as geographically fixed markers, optionally for fine alignment between various trajectories.
  • A 2.5D Parking-Map: The 3D path was used in order to construct a separate map for each floor. The road-based events were positioned in the corresponding floor-map.
  • An advanced mapping algorithm optionally uses a computer vision algorithm.
  • Google published its Vision API.
  • the API apparently enables a user to get an image description, optionally in real time, using a pre-trained Google Artificial Neural Network (ANN).
  • an“Exit” sign can be detected utilizing this framework.
  • the tested parking-lot included three main floors with approximately 3,000 parking spots, three main entrances and a relatively complicated subdivision into about a dozen regions, see Figure 23.
  • the tested parking lot has a complex shape and includes sub-floors: intermediate floors with a 1 meter height difference from the main floors.
  • the evaluation process was started with a 1 hour preliminary mapping stage.
  • in the preliminary stage we performed a 20 minute drive in the parking-lot while logging the car speed (via the OBDII protocol), orientation, and sensory data available on a standard mobile phone (including barometric pressure, gyro, accelerometer and magnetometer).
  • the mobile phone was positioned in a phone holder attached to the car, and during the drive a low resolution video (QVGA) was captured at a rate of 30 fps.
  • a full mapping stage lasted about three hours.
  • We used the Parking-Map for the particle filter, enabling reporting of a 3D car position in real-time with a horizontal accuracy of 3-6 meters and a sub-meter vertical error (floor detection was 100% accurate).
  • the mapping results can be seen in Figures 25A-25C.
  • FIGS. 25A-25C are screen capture illustrations of a mapping tool implemented on a smartphone according to an example embodiment of the invention.
  • Figures 25A-25C show an example embodiment of a display 2502 in a mapping application.
  • the display 2502 includes a top portion where paths 2504a 2504b 2504c (X-Y trajectories) are displayed and optional images of a compass 2506 showing compass directions; a middle portion where a graph shows paths 2508a 2508b 2508c showing elevation during the travel (Z-trajectory); and a bottom portion displaying application controls 2510.
  • Figures 25A-25C show a mapping application based on an android phone with ARCoreSDK.
  • the mapping application includes a vision-based mapping tool which potentially enables a sub-1% error in mapping, even without performing corrections based on minimizing errors whenever a path forms a closed loop.
  • localization errors are further minimized by distributing errors when a path closes a loop.
  • the example embodiment tool enables to perform a rapid mapping by simply driving through the parking lot.
  • a dataset which includes the path is sent to a central server, which, by receiving more than one such dataset for a specific environment, optionally produces an improved map by averaging the data.
  • the preliminary implementation includes a large set of parameters defining the “sense-weight” for the road-based events (e.g., what values define a speed-bump?).
  • the application appears to be robust, simple to use, and suitable for a wide range of scenarios.
  • the coloring and numbering may appear to obviate the need for the location algorithm described here. Nevertheless, even colored and numbered parking lots create confusion. Moreover, the very same method can be generalized to other scenarios.
  • Figure 26 shows a freeway tunnel, including light sources 2602 and signs 2604a 2604b.
  • One of the signs 2604b displays a code number.
  • the coded sign 2604b is shown in the photograph marked by a red circle 2606.
  • the navigation method described herein can be used as an aided lane-detection algorithm, which operates both where GNSS signals are available and where GNSS signals are unavailable, and transitions between the GNSS-available and GNSS-unavailable areas.
  • Many highways have light poles along the sides, sometimes on both sides.
  • a center-of-mass of each light source is optionally detected using a simple brightness threshold as shown in Figures 27 A and 27B.
  • Figures 27A and 27B are images of a highway according to an example embodiment of the invention.
  • Figure 27B shows a color photograph of a road during darkness, when road lights 2702 are lit.
  • Figure 27A shows the photograph of Figure 27B after applying a brightness threshold operation, in black and white.
  • Figure 27A shows the road lights 2702 of Figure 27B as white spots 2704.
  • the term “consisting essentially of” means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
  • the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise.
  • the term “a unit” or “at least one unit” may include a plurality of units, including combinations thereof.
  • description in a range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible sub-ranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed sub-ranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
  • whenever a numerical range is indicated herein (for example “10-15”, “10 to 15”, or any pair of numbers linked by this or another such range indication), it is meant to include any number (fractional or integral) within the indicated range limits, including the range limits, unless the context clearly dictates otherwise.
  • the phrases“range/ranging/ranges between” a first indicate number and a second indicate number and“range/ranging/ranges from” a first indicate number “to”,“up to”,“until” or“through” (or another such range-indicating term) a second indicate number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numbers therebetween.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

A localization method including obtaining a map of a Region Of Interest (ROI), obtaining a first input from a first sensor and a second input from a second sensor, providing the first input and the second input to a processor, using the processor to estimate a location based on the first input and the second input, wherein the processor uses a particle filter method to estimate the location. A method of mapping a Region Of Interest, the method including obtaining first sensor data from a first sensor and second sensor data from a second sensor, providing the first sensor data and the second sensor data to a processor, using the processor to estimate a location based on the first sensor data and the second sensor data, and sending the location to a mapping application. Related apparatus and methods are also described.

Description

LOCALIZATION TECHNIQUES
RELATED APPLICATION/S
This application claims the benefit of priority of U.S. Provisional Patent Application No. 62/690,953 filed 28 June 2018, of U.S. Provisional Patent Application No. 62/690,958 filed 28 June 2018, and of U.S. Provisional Patent Application No. 62/690,955 filed 28 June 2018.
The contents of all of the above applications are incorporated by reference as if fully set forth herein.
FIELD AND BACKGROUND OF THE INVENTION
The present invention, in some embodiments thereof, relates to a method of location using inputs from multiple sensors, and, more particularly, but not exclusively, to a method for converging candidate locations to a more accurate location and, more particularly, but not exclusively, to a method for fusing image-based navigation with additional location inputs to obtain a more accurate location, and more particularly, but not exclusively to a particle filter method for converging candidate locations.
Additional background art includes:
[BFOO08] Francisco Bonin-Font, Alberto Ortiz, and Gabriel Oliver. Visual navigation for mobile robots: A survey. Journal of intelligent and robotic systems, 53(3):263-296, 2008.
[BGVGT+ 17] Ramon F Brena, Juan Pablo Garcia- Vazquez, Carlos E Galvan-Tejada, David Munoz-Rodriguez, Cesar Vargas-Rosales, and James Fangmeyer. Evolution of indoor positioning technologies: A survey. Journal of Sensors, 2017, 2017.
[BH13] Agata Brajdic and Robert Harle. Walk detection and step counting on unconstrained smartphones. In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, pages 225-234. ACM, 2013.
[BK+ 08] Stephane Beauregard, Martin Klepal, et al. Indoor pdr performance enhancement using minimal map information and particle filters. In Position, Location and Navigation Symposium, 2008 IEEE/ION, pages 141-147. IEEE, 2008.
[BmS l5] Boaz Ben-Moshe and Nir Shvalb. Navigation method and device, August 20 2015. U.S. Patent Application Number 14/418,106.
[BW01] Gary Bishop and Greg Welch. An introduction to the Kalman filter. Proc of SIGGRAPH, Course, 8(27599-23175):41, 2001.
[BW13] Haitao Bao and Wai-Choong Wong. An indoor dead-reckoning algorithm with map matching. In Wireless Communications and Mobile Computing Conference (IWCMC), 2013 9th International, pages 1534-1539. IEEE, 2013.
[C+ 03] Zhe Chen et al. Bayesian filtering: From Kalman filters to particle filters, and beyond. Statistics, 182(1): 1-69, 2003.
[Can86] John Canny. A computational approach to edge detection. IEEE Transactions on pattern analysis and machine intelligence, (6):679-698, 1986.
[DV11] Subhankar Dhar and Upkar Varshney. Challenges and business models for mobile location-based services and advertising. Communications of the ACM, 54(5): 121-128, 2011.
[FBDT99] Dieter Fox, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun. Monte Carlo localization: Efficient position estimation for mobile robots. AAAI/IAAI, 1999(343-349):2-2, 1999.
[FNI13] Zahid Farid, Rosdiadee Nordin, and Mahamod Ismail. Recent advances in wireless indoor localization techniques and system. Journal of Computer Networks and Communications, 2013, 2013.
[FPB+ 16] Peter Fursattel, Simon Placht, Michael Balda, Christian Schaller, Hannes Hofmann, Andreas Maier, and Christian Riess. A comparative error analysis of current time-of-flight sensors. IEEE Transactions on Computational Imaging, 2(1):27-41, 2016.
[GFN09] Yanying Gu, Anthony Lo, and Ignas Niemegeers. A survey of indoor positioning systems for wireless personal networks. IEEE Communications Surveys & Tutorials, 11(1):13-32, 2009.
[Gos12] Subrata Goswami. Indoor location technologies. Springer Science & Business Media, 2012.
[Har13] Robert Harle. A survey of indoor inertial positioning systems for pedestrians. IEEE Communications Surveys and Tutorials, 15(3):1281-1293, 2013.
[HS97] Janne Heikkila and Olli Silven. A four-step camera calibration procedure with implicit image correction. In Computer Vision and Pattern Recognition, 1997. Proceedings. 1997 IEEE Computer Society Conference on, pages 1106-1112. IEEE, 1997.
[IPI18] International Conference on Indoor Positioning and Indoor Navigation (IPIN- 2018), Nantes, France, 2018.
[JSPG10] Antonio Ramon Jimenez, Fernando Seco, Jose Carlos Prieto, and Jorge Guevara. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU. In Positioning Navigation and Communication (WPNC), 2010 7th Workshop on, pages 135-143. IEEE, 2010.
[K+ 99] Jack B Kuipers et al. Quaternions and rotation sequences, volume 66. Princeton University Press Princeton, 1999.
[KK04] Kamol Kaemarungsi and Prashant Krishnamurthy. Properties of indoor received signal strength for WLAN location fingerprinting. In Mobile and Ubiquitous Systems: Networking and Services, 2004. MOBIQUITOUS 2004. The First Annual International Conference on, pages 14-23. IEEE, 2004.
[KR08a] Bernhard Krach and Patrick Robertson. Cascaded estimation architecture for integration of foot-mounted inertial sensors. In Position, Location and Navigation Symposium, 2008 IEEE/ION, pages 112-119. IEEE, 2008.
[KR08b] Bernhard Krach and Patrick Robertson. Integration of foot-mounted inertial sensors into a Bayesian location estimation framework. In Positioning, Navigation and Communication, 2008. WPNC 2008. 5th Workshop on, pages 55-61. IEEE, 2008.
[Kup05] Axel Kupper. Location-based services: Fundamentals and operation. John Wiley & Sons, Ltd, 2005.
[KVFF11] Tomas Krajnik, Vojtech Vonasek, Daniel Fiser, and Jan Faigl. Ar-drone as a platform for robotic research and education. In International conference on research and education in robotics, pages 172-186. Springer, 2011.
[LDBL07] Hui Liu, Houshang Darabi, Pat Banerjee, and Jing Liu. Survey of wireless indoor positioning techniques and systems. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 37(6): 1067-1080, 2007.
[LL17] Dimitrios Lymberopoulos and Jie Liu. The Microsoft indoor localization competition: Experiences and lessons learned. IEEE Signal Processing Magazine, 34(5): 125-140, 2017.
[LSVW 11] Jo Agila Bitsch Link, Paul Smith, Nicolai Viol, and Klaus Wehrle. Footpath: Accurate map-based indoor navigation using smartphones. In Indoor Positioning and Indoor Navigation (IPIN), 2011 International Conference on, pages 1-8. IEEE, 2011.
[LWWL10] Xin Li, Kejun Wang, Wei Wang, and Yang Li. A multiple object tracking method using Kalman filter. In Information and Automation (ICIA), 2010 IEEE International Conference on, pages 1862-1866. IEEE, 2010.
[MC15] Vivian Genaro Motti and Kelly Caine. Users' privacy concerns about wearables. In International Conference on Financial Cryptography and Data Security, pages 231-244. Springer, 2015.
[MT11] Rainer Mautz and Sebastian Tilch. Survey of optical indoor positioning systems. In Indoor Positioning and Indoor Navigation (IPIN), 2011 International Conference on, pages 1-7. IEEE, 2011.
[RMT+ 02] Teemu Roos, Petri Myllymaki, Henry Tirri, Pauli Misikangas, and Juha Sievanen. A probabilistic approach to WLAN user location estimation. International Journal of Wireless Information Networks, 9(3): 155-164, 2002.
[TBF05] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT press, 2005.
[TFBD01] Sebastian Thrun, Dieter Fox, Wolfram Burgard, and Frank Dellaert. Robust Monte Carlo localization for mobile robots. Artificial intelligence, 128(1-2):99-141, 2001.
[TSJK+ 17] Joaquin Torres-Sospedra, Antonio R Jimenez, Stefan Knauth, Adriano Moreira, Yair Beer, Toni Fetzer, Viet-Cuong Ta, Raul Montoliu, Fernando Seco, German M Mendoza-Silva, et al. The smartphone-based offline indoor location competition at IPIN 2016: Analysis and future work. Sensors, 17(3):557, 2017.
[VDA+18] Julien Valentin, Ivan Dryanovski, Joao Afonso, Jose Pascoal, Konstantine Tsotsos, Mira Leung, Mirko Schmidt, Onur Guleryuz, Sameh Khamis, Vladimir Tankovitch, Sean Fanello, Adarsh Kowdle, Shahram Izadi, Christoph Rhemann, Jonathan T. Barron, Neal Wadhwa, Max Dzitsiuk, Michael Schoenberg, Vivek Verma, and Eric Turner. Depth from motion for smart phone AR. Volume 37, pages 1-19, December 2018.
[WKM11] Martin Werner, Moritz Kessel, and Chadly Marouane. Indoor positioning using smartphone camera. In Indoor Positioning and Indoor Navigation (IPIN), 2011 International Conference on, pages 1-6. IEEE, 2011.
[XB 12] Ren Xiaofeng and Liefeng Bo. Discriminatively trained sparse code gradients for contour detection. In Advances in neural information processing systems, pages 584- 592, 2012.
[YAS03] Moustafa A Youssef, Ashok Agrawala, and A Udaya Shankar. WLAN location determination via clustering and probability distributions. In Pervasive Computing and Communications, 2003.(PerCom 2003). Proceedings of the First IEEE International Conference on, pages 143-150. IEEE, 2003.
[YBM17] Roi Yozevitch and Boaz Ben-Moshe. Advanced particle filter methods. In Heuristics and Hyper-Heuristics: Principles and Applications. InTech, 2017.
[YMW16] Roi Yozevitch, Boaz Ben Moshe, and Ayal Weissman. A robust GNSS LOS/NLOS signal classifier. Navigation: Journal of The Institute of Navigation, 63(4):429-442, 2016.
[Zha00] Zhengyou Zhang. A flexible new technique for camera calibration. IEEE Transactions on pattern analysis and machine intelligence, 22(11):1330-1334, 2000.
[ZYWL14] Lingling Zhu, Aolei Yang, Dingbing Wu, and Li Liu. Survey of indoor positioning technologies and systems. In International Conference on Life System Modeling and Simulation and International Conference on Intelligent Computing for Sustainable Energy and Environment, pages 400-409. Springer, 2014.
[1] Edward J Krakiwsky, Clyde B Harris, and Richard VC Wong. A Kalman filter for integrating dead reckoning, map matching and GPS positioning. In Position Location and Navigation Symposium, 1988. Record. Navigation into the 21st Century. IEEE PLANS '88, pages 39-46. IEEE, 1988.
[2] Axel Kupper. Location-based services: Fundamentals and operation. John Wiley & Sons, Ltd, 2005.
[3] S Panzieri, F Pascucci, R Setola, and G Ulivi. A low cost vision based localization system for mobile robots. Target, 4:5, 2001.
[4] Hyun Chul Roh, Chang Hun Sung, Min Tae Kang, and Myung Jin Chung. Point pattern matching based visual global localization using ceiling lights. In Control Conference (ASCC), 2011 8th Asian, pages 281-286. IEEE, 2011.
[5] Shaojie Shen, Nathan Michael, and Vijay Kumar. Autonomous multi-floor indoor navigation with a computationally constrained mav. In Robotics and automation (ICRA), 2011 IEEE international conference on, pages 20-25. IEEE, 2011.
[6] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT press, 2005.
[7] Janis Tiemann, Florian Schweikowski, and Christian Wietfeld. Design of an uwb indoor-positioning system for uav navigation in gnss-denied environments. In Indoor Positioning and Indoor Navigation (IPIN), 2015 International Conference on, pages 1-7. IEEE, 2015.
[8] Rafael Toledo-Moreo, David Betaille, and Francois Peyret. Lane-level integrity provision for navigation and map matching with gnss, dead reckoning, and enhanced maps. Intelligent Transportation Systems, IEEE Transactions on, 11(1): 100-112, 2010.
[9] Roi Yozevitch and Boaz Ben-Moshe. Advanced particle filter methods. In Heuristics and Hyper-Heuristics-Principles and Applications. InTech, 2017.
The disclosures of all references mentioned above and throughout the present specification, as well as the disclosures of all references mentioned in those references, are hereby incorporated herein by reference.
SUMMARY OF THE INVENTION
The present invention, in some embodiments thereof, relates to a method of location using inputs from multiple sensors, and, more particularly, but not exclusively, to a method for converging candidate locations to a more accurate location and, more particularly, but not exclusively, to a method for fusing image-based navigation with additional location inputs to obtain a more accurate location, and more particularly, but not exclusively, to a particle filter method for converging candidate locations.
According to an aspect of some embodiments of the present invention there is provided a localization method including obtaining a map of a Region Of Interest (ROI), obtaining a first input from a first sensor and a second input from a second sensor, providing the first input and the second input to a processor, using the processor to estimate a location based on the first input and the second input, wherein the processor uses a particle filter method to estimate the location.
According to some embodiments of the invention, the first input includes images from a camera.
According to some embodiments of the invention, the second input includes data which is unavailable in some areas in the ROI.
According to some embodiments of the invention, the particle filter method is a modified particle filter method which associates a likelihood with a candidate location based upon the first input and the second input.
According to some embodiments of the invention, the particle filter method is a modified particle filter method further including performing soft-init.
According to some embodiments of the invention, performing soft-init is used to solve a state called "kidnapped-robot".
According to some embodiments of the invention, the soft-init includes adding a number of particles, the number in a range of 1-10% of a total number of particles, when the particle filter method performs re-sampling.
According to some embodiments of the invention, the adding a number of particles includes adding particles associated with candidate locations having a probability above a threshold probability, the probability based on a consideration selected from a group consisting of the candidate location can image a light source in the ROI, the candidate location can image a sign in the ROI, and the candidate location is in an elevator and an altitude change has been detected.
According to some embodiments of the invention, the particle filter method is a modified particle filter method further including removing a fraction of the particles each re-sample, the fraction in a range of 1-25% of a total number of the particles.
According to some embodiments of the invention, the particle filter method is a modified particle filter method further including using elevation change data as a particle filter map constraint.
According to some embodiments of the invention, the particle filter method is a modified particle filter method further including using one or more distinct environmental features in a particle filter map, the distinct features selected from a group consisting of a light, a ceiling light, a sign.
According to some embodiments of the invention, the particle filter method is a modified particle filter method further including using both angular bias and angular drift as part of a particle state.
According to some embodiments of the invention, the particle filter method is a modified particle filter method further including adapting a number of initial particles to a navigation scenario.
According to some embodiments of the invention, the particle filter method is a modified particle filter method further including using pedometry based on one or more data inputs selected from a group consisting of optical flow, distance-to-object ranging, and device orientation.
According to some embodiments of the invention, further including using a map of a Region Of Interest for limiting locations of candidate locations.
According to some embodiments of the invention, one of the first input and the second input includes a light level input.
According to some embodiments of the invention, at least one of the first input and the second input includes a sensor in a smart phone or tablet.
According to some embodiments of the invention, at least one of the first input and the second input includes a sensor installed in a car.
According to some embodiments of the invention, further including using input from at least one more sensor in the particle filter method.
According to some embodiments of the invention, at least one of the first input and the second input includes a sensor selected from a group consisting of a GPS receiver, a GNSS receiver, a WiFi receiver, a Bluetooth receiver, a Bluetooth Low Energy (BLE) receiver, a 3G receiver, a 4G receiver, a 5G receiver, an acceleration sensor, a pedometer, an odometer, an attitude sensor, a MEMS sensor, a magnetometer, a pressure sensor, a light sensor, an audio sensor, a microphone, a camera, a multi-lens camera, a Time-Of-Flight (TOF) camera, a range-finder sensor, an ultrasonic range-finder, a Lidar, an RFID sensor, and a NFC sensor.
According to some embodiments of the invention, the particle filter method adapts a weight of a candidate location based upon associating a change in light level to proximity to a door of a building or proximity to a window.
According to some embodiments of the invention, the particle filter method adapts a weight of a candidate location based upon a map of WiFi reception strength.
According to some embodiments of the invention, the particle filter method adapts a weight of a candidate location based upon associating a change in GPS signal reception level to proximity to a door of a building or proximity to a window.
According to some embodiments of the invention, the particle filter method adapts a weight of a candidate location based upon associating a vertical acceleration with an elevator or an escalator or stairs.
According to some embodiments of the invention, the particle filter method adapts a weight of a candidate location based upon associating a change in pressure with an elevator or an escalator or stairs.
According to some embodiments of the invention, the particle filter method adapts a weight of a candidate location based upon associating a change in magnetic field with a magnetometer placed in proximity to a door of a building.
According to some embodiments of the invention, the particle filter method includes producing initial candidate locations, and iteratively improving accuracy of the candidate locations, and wherein at least some of the candidate locations are cancelled during at least one iteration.
According to some embodiments of the invention, the method is used for navigation in an area where GNSS signals are not received.
According to some embodiments of the invention, the method is used for navigation in a car park.
According to some embodiments of the invention, the method is used for navigation in a tunnel.
According to an aspect of some embodiments of the present invention there is provided a method of mapping a Region Of Interest, the method including obtaining first sensor data from a first sensor and second sensor data from a second sensor, providing the first sensor data and the second sensor data to a processor, using the processor to estimate a location based on the first sensor data and the second sensor data, and sending the location to a mapping application.
According to some embodiments of the invention, further including sending at least one of the first sensor data and the second sensor data.
According to some embodiments of the invention, further including using the mapping application to display the location on a map.
According to some embodiments of the invention, further including using the mapping application to display at least one of the first sensor data and the second sensor data.
According to some embodiments of the invention, further including updating the map based on receiving the location.
According to some embodiments of the invention, further including updating the map based on receiving at least one of the first sensor data and the second sensor data.
According to some embodiments of the invention, further including transmitting the location to a map server.
According to some embodiments of the invention, further including transmitting at least one of the first sensor data and the second sensor data to a map server.
According to an aspect of some embodiments of the present invention there is provided a localization method including a) obtaining a map of a Region Of Interest (ROI), b) obtaining a first input from a first sensor, c) providing the first input to a processor, d) using the processor to estimate a location based on the first input, e) moving from the location and repeating (b)-(d), and further including f) obtaining a second input from a second sensor, g) providing the second input to the processor, h) using the processor to estimate a location based on the second input in addition to the first input, thereby increasing accuracy of the estimating the location.
According to some embodiments of the invention, the processor uses a particle filter method to estimate the location.
According to some embodiments of the invention, the second sensor provides input intermittently.
According to some embodiments of the invention, the second sensor provides input only in specific areas of the ROI.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
As will be appreciated by one skilled in the art, some embodiments of the present invention may be embodied as a system, method or computer program product. Accordingly, some embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a“circuit,”“module” or“system.” Furthermore, some embodiments of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Implementation of the method and/or system of some embodiments of the invention can involve performing and/or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of some embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware and/or by a combination thereof, e.g., using an operating system.
For example, hardware for performing selected tasks according to some embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to some embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to some exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
Any combination of one or more computer readable medium(s) may be utilized for some embodiments of the invention. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium and/or data used thereby may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for some embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Some embodiments of the present invention may be described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Some of the methods described herein are generally designed only for use by a computer, and may not be feasible or practical for performing purely manually, by a human expert. A human expert who wanted to manually perform similar tasks, such as determining his location indoors, or navigating indoors, might be expected to use completely different methods, e.g., making use of expert knowledge and/or the pattern recognition capabilities of the human brain, which may be more efficient than manually going through the steps of the methods described herein.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings and images. With specific reference now to the drawings and images in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings and images makes apparent to those skilled in the art how embodiments of the invention may be practiced.
In the drawings:
FIGURE 1A is a simplified block diagram illustration of a mobile device for navigation and/or localization according to an example embodiment of the invention;
FIGURE 1B is a simplified block diagram illustration of a device for producing maps according to an example embodiment of the invention;
FIGURE 1C is a simplified block diagram illustration of a system for producing maps according to an example embodiment of the invention;
FIGURE 1D is a simplified block diagram illustration of a system for vehicle navigation according to an example embodiment of the invention;
FIGURE 1E is a simplified block diagram illustration of a system for producing maps for vehicles according to an example embodiment of the invention;
FIGURE 1F is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention;
FIGURE 1G is a simplified flowchart illustration of a method for producing maps according to an example embodiment of the invention;
FIGURE 1H is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention;
FIGURE 1I is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention;
FIGURES 2A-2C are simplified illustrations of progressing from building level to room level and to a higher accuracy of "seat" level;
FIGURES 3A-3C are simplified illustrations of using a particle filter to estimate a location according to an example embodiment of the invention;
FIGURES 4A-4B are simplified illustrations of applying a threshold to a color image of ceiling lights according to an example embodiment of the invention;
FIGURES 4C-4D are simplified illustrations of contour detection of light sources on an image and determining center mass points according to an example embodiment of the invention;
FIGURE 5 is a simplified illustration of a map used in mapping RF and visual data according to an example embodiment of the invention;
FIGURE 6 is a simplified illustration of light source mapping according to an example embodiment of the invention;
FIGURE 7 is a simplified illustration of a screenshot of a navigating application according to an example embodiment of the invention;
FIGURES 8A-8B are simplified illustrations of tracking according to an example embodiment of the invention in comparison to the ground truth;
FIGURE 9 is a simplified illustration of a screenshot of a navigating application displaying a large area, according to an example embodiment of the invention;
FIGURE 10 is a graph of an accuracy evaluation of a path according to an example embodiment of the invention and a path according to Google map;
FIGURES 11A-12C are simplified illustrations of using a particle filter to estimate a location according to an example embodiment of the invention;
FIGURE 11D is a graph showing atmospheric pressure measurements over time according to an example embodiment of the invention;
FIGURE 12 is a simplified illustration of a multicolor map used according to an example embodiment of the invention;
FIGURE 13 is a simplified illustration of a multicolor map used according to an example embodiment of the invention;
FIGURE 14 is a simplified illustration of a map showing a comparison of a location determined by Google’s fused location service and a location method used according to an example embodiment of the invention;
FIGURES 15A-15B are simplified illustrations of a potential advantage of using indoor/outdoor determination according to an example embodiment of the invention;
FIGURE 16 is a simplified illustration of tracking according to an example embodiment of the invention in comparison to a Lidar-based ground truth;
FIGURE 17 is a graph showing height value error over time according to an example embodiment of the invention;
FIGURE 18 is a graph showing position error over time according to an example embodiment of the invention;
FIGURE 19 is a graph showing position error over time according to an example embodiment of the invention;
FIGURE 20 is a graph showing position error over a path according to an example embodiment of the invention;
FIGURE 21 is a graph showing position error according to an example embodiment of the invention;
FIGURE 22 is a graph showing linear acceleration as captured by a smartphone positioned in a car according to an example embodiment of the invention;
FIGURE 23 is a color image of a contemporary parking-lot with speed bumps, color column markings, and location codes;
FIGURE 24 is an example parking lot map;
FIGURES 25A-25C are screen capture illustrations of a mapping tool implemented on a smartphone according to an example embodiment of the invention;
FIGURE 26 is a photograph of a tunnel including visual landmarks according to an example embodiment of the invention; and
FIGURES 27A and 27B are images of a highway according to an example embodiment of the invention.
DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION
The present invention, in some embodiments thereof, relates to a method of location using inputs from multiple sensors, and, more particularly, but not exclusively, to a method for converging candidate locations to a more accurate location and, more particularly, but not exclusively, to a method for fusing image-based navigation with additional location inputs to obtain a more accurate location, and more particularly, but not exclusively, to a particle filter method for converging candidate locations.
Overview
An aspect of some embodiments is related to calculating a location of a device by producing candidate locations, followed by iteratively improving accuracy of the candidate locations, or reducing spread of the candidate locations. In some embodiments the location of the device is optionally tracked over time, especially if and/or when the device is moving. In some embodiments device localization is optionally used for indoor navigation, or for navigation in an environment which can provide more than one data source for use in calculating a location.
In some embodiments the device moves, collecting sensor data from various locations, and the fact that the data is from different locations potentially improves the accuracy of the location.
An aspect of some embodiments is related to fusing data from different sources to potentially reduce time to converge spread of the candidate locations and/or increase accuracy. In some embodiments, one of the data sources is an intermittent data source, which is not available all the time and/or not available at every location.
In some embodiments a particle filter algorithm is used to evaluate the candidate locations. It may be said that the data sources provide hints as to the correct location of the device. Some hints "nudge" the candidate locations toward a correct location. Some non-limiting examples of such hints are odometer or pedometer readings, which enable calculating candidate locations similarly to a dead-reckoning (DR) method, by providing information about how far the device has travelled. Some hints can significantly increase the likelihood of a candidate location or rule out a candidate location. A non-limiting example of such a hint is sensing when the device passes over a speed bump - that increases the likelihood of candidate locations which are near locations of one or more known speed bump(s) in an environment, and decreases the likelihood of candidate locations where a map indicates that there are no speed bumps.
An aspect of some embodiments is related to calculating a location of a device by starting to fuse data at "room level" accuracy (~5-10 meters), which is available using some methods (e.g. WiFi maps), with data from additional sensors, such as described herein, to increase accuracy to "seat level" accuracy (sub-meter).
Algorithms
In some embodiments, a number of initial candidate locations is optionally produced. In some embodiments candidate locations are optionally used to produce an estimated location. In various embodiments the estimated location is optionally calculated based on applying a method for calculating the estimated location. Some non-limiting examples of such methods include: using an average; using a weighted average, optionally where a weight of a candidate location may be based on its likelihood; selecting a most-likely candidate; iteratively calculating values of the candidate locations; adding additional candidate locations; and removing candidate locations, optionally based on least-likely or least-weighted.
In some embodiments, during the weighting, each particle is evaluated with respect to one or more of the sensory data and/or map constraints, and the weight of the particle is updated accordingly. In some embodiments the weights are optionally sampling-rate dependent.
The sensory data constraints include constraints from any one or more of the sensors listed herein, or of sensors included in a smartphone, tablet, vehicle on-board sensors, and so on. For example, when a sensor indicates a change in elevation, the constraint is optionally that a location is more likely to be in an area where elevation change is possible (stairs, elevator, parking lot ramp) and less likely where such elevation change is unlikely (an elevation change of more than 3 meters within a floor of a building where ceiling height is 3 meters). For example, when a car's onboard sensor(s) sense a bump, the constraint is optionally that a location is more likely to be in a parking lot area where a speed bump exists, and less likely where no such bump exists.
In some embodiments, for each input of sensory data, each particle’s weight, or grade, is updated according to how suitable the particle location and/or orientation is to the sensor input.
In some embodiments a weight change is additive. In some embodiments a weight change is multiplicative.
In some embodiments a change is optionally effected so that when there are two particles which are close by with respect to location, and optionally also orientation, the change brings their weight closer together, that is, a ratio of their weights is made closer to 1.
In some embodiments, a number of initial candidate locations is optionally controlled. In some embodiments, the number of initial candidate locations is optionally reduced relative to previously known methods such as described in the above-mentioned article [YBM17] by Roi Yozevitch and Boaz Ben-Moshe titled "Advanced particle filter methods".
In some embodiments, a number of initial candidate locations is optionally controlled based on a computing power of a device to be used to determine location. By limiting an initial number of particles the computing load is limited, and the method potentially enables calculating a location at a rate which is useful to a moving user.
In some embodiments, the number of candidate locations, or particles in the case of a particle filter, is reduced by 1-10 candidate locations per iteration.
In some embodiments, the number of candidate locations, or particles in a case of particle filter, is reduced by 1-25% of the current candidate locations, per iteration.
In some embodiments, the number of candidate locations is optionally reduced during the process of iteration, relative to previously known methods such as described in the above-mentioned article [YBM17] by Roi Yozevitch and Boaz Ben-Moshe titled "Advanced particle filter methods".
In some embodiments, a method of calculation includes making a soft-init, or soft initialization. The terms soft-init and soft-initialization are used herein interchangeably to refer to the process described below. By way of a non-limiting example, if a calculated location is estimated to be improbable or impossible, for example in relation to a reasonable region where the device should be, a soft-init is optionally used to re-start calculation of the location of the device. The soft-init includes adding a few, even just 1-10, particles somewhere, optionally even randomly distributed, within the Region-Of-Interest (ROI). The new particles are optionally not dependent on locations of other particles, and can potentially re-start the particle filter in converging to a correct location. In some embodiments, if a source for one or more of the localization inputs, such as a sensor, stops providing data, a soft-init is optionally used to re-start calculation of the location of the device.
In some embodiments, a method of calculation includes taking into account intermittent data input.
In some embodiments, a method of calculation optionally includes compensating for intermittent data input. By way of a non-limiting example, if the device has been calculated to be moving, the device is optionally located as if it is continuing its movement during pauses and stops in incoming data.
Maps
In some embodiments a map is optionally used in relation to localizing a device. In some embodiments such a map optionally defines a ROI in which the localization is to be made, or at least started or initialized. In some embodiments such a map optionally defines constraints upon possible location of the device, typically in two dimensions (2D) or three dimensions (3D). By way of some non-limiting examples, a device known to be carried by a person, such as a smartphone, is optionally constrained to be not higher than 2 or 3 meters above a floor; a car is optionally constrained to be a same distance above a surface of a road or parking lot or terrain almost all the time.
Sensors
Localizing or tracking a device is performed by using data input from one or more sensors.
In some embodiments such sensors are optionally built into a smart phone, a tablet, a car, or a smart camera, optionally coupled with associated computing abilities.
A non-limiting example of such sensors includes:
- A smart phone or tablet camera, for example capturing image(s) of an environment, including features such as ceiling lights, advertising, billboards, street signs, store signs, doors, stairs, floor markings, wall colors, windows, hydrants, and additional identifiable features. In some embodiments the image(s) are processed to identify such features. In some embodiments a relative direction and/or distance from the camera to the features is optionally calculated. In some embodiments a simple sensing of relative light or darkness can suffice to determine whether the device is indoors, outdoors, or just passing through a door from outside to inside or vice versa.
- A distance measurement sensor, for example a pair of lenses for measuring distance, Lidar (an acronym of light detection and ranging, or of light imaging, detection, and ranging), an ultrasonic range detector, camera auto-focus, applying deep learning to camera images, and so on.
- WiFi. WiFi is often used for localization. WiFi, Bluetooth (BLE), 3G, 4G and 5G signals can be used for locating, optionally based on RF fingerprinting or an RF map.
- GPS. In some embodiments, such as localization in a building or between buildings, GPS signals are often not received, or not received in sufficient number or clarity to be used for standard GPS localization. However, when the ROI is inside a building, even partial reception can optionally be used to indicate that the receiving device is close to a specific location, such as under a skylight, close to a window, and so on. In some embodiments such locations in a GPS-restricted environment are optionally marked on a map, or reception of a GPS signal is compared to a map, and a constraint of proximity to a skylight or window or door is optionally deduced.
- Acceleration sensors and/or inertial sensors. In some embodiments such sensors are optionally used to determine movement or rate of movement from an initial location.
- Odometry - in some embodiments odometry is optionally used. A rate of advance and/or direction of advance from a location is optionally deduced by counting steps, tire rotation (optionally converted to distance), calculating optical flow, and so on.
Any one or more of the above-mentioned sensors potentially provides input to a localization system. In some embodiments a camera such as a smartphone camera is selected as a first source of data, potentially providing candidate locations for placing on a map of a ROI, and data from other sensors is optionally used to calculate probability of likelihood of a candidate location or of candidate locations.
Some non-limiting examples of sensors include: a GPS receiver; a GNSS receiver; a WiFi receiver; a Bluetooth receiver; a Bluetooth Low Energy (BLE) receiver; a 3G receiver; a 4G receiver; a 5G receiver; an acceleration sensor; a pedometer; an odometer; an attitude sensor; a MEMS sensor; a magnetometer; a pressure sensor; a light sensor; an audio sensor; a microphone; a camera; a multi-lens camera; a Time-Of-Flight (TOF) camera; a range-finder sensor; an ultrasonic range-finder; a Lidar; an RFID sensor; and a NFC sensor.
Types of data
Various sensors can provide different types of data.
Some data is continuously available, and can be consistent, for example odometer data and pedometer data.
Some data is available intermittently, based on location. Such data is available at some locations, and not available at others. Some non-limiting examples of location-dependent data include: more intense light associated with windows and doors, GNSS or GPS signals received when passing near an entrance to a building or when passing under a skylight.
Some data is available intermittently, based on time. Such data is available at some time, and not available at others. Some non-limiting examples of time-dependent data include: more intense light associated with windows and doors during daylight hours, and not at night.
Some data can be called low quality data. Some non-limiting examples of low quality data include low quality images. Low quality images can be at lower resolution than the maximal resolution available from current smartphones. However, even low quality images can potentially provide sensory data which can potentially assist in accurate location. By way of a non-limiting example, even a low quality image can provide directions (vectors) to ceiling lights, and assist in evaluating candidate locations.
Initial maps
In some embodiments initial maps of a ROI are optionally obtained from a source for such maps. By way of some non-limiting examples such maps may be found on the Internet, as maps of airports, shopping malls, museums, parking lots, cities, and so on.
In some embodiments paper maps are optionally scanned and optionally processed in order to provide digital maps. In some embodiments maps are obtained and/or updated according to methods described herein, optionally producing maps of visual landmarks such as ceiling lights, signs, and additional visual and non-visual features described herein.
In some embodiments, constructing a light source map is optionally done as described below, with reference to the "exemplary positioning embodiments", with reference to the "visual landmark mapping" section therein, and with reference to Figure 6 and its description.
In some embodiments, a map is produced by a process of Simultaneous Localization And Mapping (SLAM), optionally as described in [6].
By way of example, producing a map includes:
• Starting with an initial map, for example a building plan (e.g., mall map, fire-escape map, an operator-made map, or even a blank map);
• Locating a current location and orientation and starting to move.
• Using optical flow to periodically determine updated location and orientation. This can be done with low drift, typically 0.5-1.0% of the distance travelled (i.e., about 0.5-1 meters of drift per 100 meters travelled);
• Locating landmarks (e.g., lights & signs), optionally using ray intersections, optionally as presented in the "exemplary positioning embodiments".
• Marking a path as a location suitable for travel. If the mapping device has ranging capability, a SLAM-like map is optionally produced;
• In some embodiments, corrections to the path are optionally performed based on loop-closure, potentially increasing map accuracy;
• If the sensors indicate an elevation change - a location of the elevation change is optionally recorded, and associated with a corresponding color, in the sense described with reference to Figure 12 below, representing different types of areas in the map, such as accessible areas, fixed inaccessible areas, partially-accessible areas, dynamic inaccessible areas, and stairs or elevators (see the grid-map sketch after this list).
• Optionally updating the map, optionally repeating from the second point described above, "Locating a current location and orientation and starting to move".
Mapping
An aspect of some embodiments includes mapping environments. In some embodiments, locations of a device are optionally collected, and additional data is optionally also collected. The locations and/or data are optionally used to produce a map and/or update a map. By way of some non-limiting examples the map is optionally updated as to: more-exact locations where GPS may be received, corrections to locations calculated by WiFi, updating locations of signs or adding locations of new signs to a map, and so on. In some embodiments a map produced by using data from one or more device(s) navigating a ROI is optionally sent to devices for use in localization.
Signs
In some embodiments, signs are used to provide location. Typically, signs contain identifying information like text and drawings which make the signs highly identifiable. Often, a number of similar signs in an environment, such as a mall or a park, may be quite few, sometimes only one or two, compared, for example, with a number of ceiling lights. The signs may provide good candidate locations, as correctly identifying a unique sign can provide quality input to grade candidate locations. Even when there are a few similar signs, correctly identifying the signs can provide quality input to grade candidate locations, although several candidate locations may be provided with a high grade.
Some signs provide significant assistance in estimating location - for example room numbers or store numbers, column numbers in a parking lot, and so on. In some embodiments image processing is optionally used to decipher numbers or text in a sign.
In some embodiments captured images are optionally processed to determine whether a sign appears in the captured image, and optionally processed to determine what is written in the sign.
In some embodiments, a map which includes data about signs potentially associates each sign with one or more features. Some example features include geometric features which can assist location, such as a location of the sign, an orientation to which the sign is facing or from which it can be seen, and an actual size of the sign. Some example identifying features include text on a sign, font, colors, drawings, and so on.
With reference to mapping environments, it is noted that when a device or system is used for navigation or localization, the device optionally sends data about signs it sees and/or detects to a central system which can enhance maps with data about signs. In such a way, for example, a map of an environment can optionally be updated whenever a localization or tracking application is used in that environment, becoming a "collective memory" of the environment.
Given an image or a video stream, an object detection model can identify which of a known set of objects might be present and provide information about their positions within the image. Such an object model is optionally applied with reference to signs.
In some embodiments, images are optionally filtered, for example with Python, optionally using the COCO API.
In some embodiments the images are optionally filtered to use the images which contain legible text, optionally even filtering for text printed by machine. In some embodiments TensorFlow's Object Detection API is used for image recognition.
The TensorFlow API provides several different pre-trained deep learning models (models previously trained over multiple large datasets).
Applications
Some non-limiting examples of applications using devices and methods according to embodiments of the invention are now described.
Indoors localization:
In some embodiments, indoors localization and/or tracking is performed, optionally using a camera and associated computing circuitry (e.g. on a smartphone or tablet). In some embodiments, the localization and/or tracking algorithm is based on a modified particle filter which combines visual landmarks with additional data inputs such as RF fingerprinting, odometry, and map constraints. The localization is potentially well suited for using a low resolution camera (e.g. 1 megapixel) to track dominant landmarks such as lights, and potentially achieves an accuracy of sub-meter 2D, 2.5D or 3D positioning at image capture rates of 10-30 Hz, which are typical and even lower than smartphone video rates. Such an algorithm potentially works at a fairly low energy consumption suitable for smartphones.
It is noted that example embodiments of the invention have operated successfully using less than 100 milliwatts. The energy consumption is expected to be hardware dependent, yet easily suitable for operation on smartphones or tablets or other battery-dependent devices without causing undue drainage of the battery.
In some embodiments the method optionally uses input of ceiling light(s) images and a ceiling light map.
In some embodiments the method detects passing through doors, under a skylight or near a window by detecting changes in one or more of light intensity and light color.
In some embodiments the method detects passing through a door using a magnetometer.
In some embodiments the method detects passing under a skylight and/or passing near a window by receiving a GPS signal or a partial GPS signal.
In some embodiments the method includes image processing to read signs such as shop signs, mall signs, parking lot signs, and so on.
In some embodiments the method includes detecting reception of a WiFi signal at an entrance.
Parking lot localization:
A parking lot typically includes features which sensors can detect. In some embodiments, parking lot localization and/or tracking is performed, optionally using a camera on a smartphone, or a camera serving as a car-mounted system, and associated computing circuitry (e.g. in the smartphone or in the car).
By way of some non-limiting examples, some features which can be detected and used as input, optionally combined with visual landmarks, include:
A tollbooth at an entry (visual detection); speed bumps at specific locations in a parking lot; layout of traffic lanes within a parking lot; relative height difference from known-height locations - e.g. by knowing where an entrance is and measuring the barometric pressure difference to detect how many levels up or down, or by sensing an incline and a distance travelled along the incline (odometry plus sensor attitude/direction); image processing for detecting parking-lot-specific markings such as numbers and/or letters and/or colors marked on columns; loss of GPS signal at an entrance; obtaining a WiFi signal at an entrance.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
EXEMPLARY SYSTEMS
Reference is now made to Figure 1A, which is a simplified block diagram illustration of a mobile device for navigation and/or localization according to an example embodiment of the invention.
Figure 1A shows a mobile device 102, including a processor 102, sensors 104, and the processor having a communication channel 103 with the sensors 104.
In various embodiments, the mobile device 102 may be a smartphone, a tablet, or even a dedicated mobile navigation device.
In various embodiments, the sensors 104 may include a camera 106 (such sensors are very usual in smartphones and such devices), as well as other sensors 108.
Reference is now made to Figure 1B, which is a simplified block diagram illustration of a device for producing maps according to an example embodiment of the invention.
Figure 1B shows a mobile device 110, including a processor 112, sensors 114, a mapping user interface 120, and the processor 112 having a communication channel 113 with the sensors 114 and a communication channel 119 with the mapping user interface 120.
In various embodiments, the mobile device 110 may be a smartphone, a tablet, or even a dedicated mobile mapping device.
In various embodiments, the sensors 114 may include a camera 116 (such sensors are very usual in smartphones and such devices), as well as other sensors 118.
Reference is now made to Figure 1C, which is a simplified block diagram illustration of a system for producing maps according to an example embodiment of the invention.
Figure 1C shows a mobile device 125, having communication capability 134 with a mapping server 135.
Figure 1C shows the mobile device 125 including a processor 127, sensors 128, a mapping user interface 131, and the processor 127 having a communication channel 132 with the sensors 128 and a communication channel 133 with the mapping user interface 131.
In various embodiments, the mobile device 125 may be a smartphone, a tablet, or even a dedicated mobile mapping device.
In various embodiments, the sensors 128 may include a camera 129 (such sensors are very common in smartphones and similar devices), as well as other sensors 130.
Reference is now made to Figure 1D, which is a simplified block diagram illustration of a system for vehicle navigation according to an example embodiment of the invention.
Figure 1D shows a mobile device 141, having communication capability 142 with a vehicle 140.
Figure 1D shows the mobile device 141 including a processor 147, sensors 148, a mapping user interface 150, the processor 147 having a communication channel 149 with the sensors 148 and a communication channel 151 with the mapping user interface 150.
Figure 1D also shows the vehicle 140 including a processor 143 and sensors 144, the processor 143 having a communication channel 145 with the sensors 144.
It is noted that in some embodiments the mapping user interface 150 and its communication channel may be included in the vehicle 140.
It is noted that in some embodiments a mapping user interface and its communication channel may be included in both the mobile device 141 and the vehicle 140.
It is noted that some vehicles have cameras mounted, and that the mobile device 141 may optionally receive images from the vehicle's camera or cameras, which may have a better view of the outside of the vehicle 140. In some embodiments the mobile device 141 may even be placed in a mobile device mount in the vehicle 140, and be used with images of the vehicle's surroundings.
Reference is now made to Figure 1E, which is a simplified block diagram illustration of a system for producing maps for vehicles according to an example embodiment of the invention.
Figure 1E shows a vehicle 155, having communication capability 157 with a mapping server 156.
Figure 1E shows the vehicle 155 including a processor 160, sensors 161, with the processor 160 having a communication channel 162 with the sensors 161.
Figure 1E also shows an optional mobile device 158, having communication capability 159 with the vehicle 155.
Figure 1E shows the mobile device 158 including a processor 164, sensors 165, an optional mapping user interface 168, the processor 164 having a communication channel 166 with the sensors 165 and a communication channel 167 with the mapping user interface 168.
It is noted that in some embodiments the mapping user interface 168 and its communication channel may be included in the vehicle 155.
It is noted that in some embodiments a mapping user interface and its communication channel may be included in both the mobile device 158 and the vehicle 155.
In some embodiments the mobile device 158 may have communication capability 169 with the mapping server 156.
EXEMPLARY METHODS
Reference is now made to Figure 1F, which is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention.
The method of Figure 1F includes:
obtaining a map of a Region Of Interest (ROI) (180);
obtaining a first input from a first sensor and a second input from a second sensor (181); providing the first input and the second input to a processor (182);
using the processor to estimate a location based on the first input and the second input (183).
In some embodiments the processor uses a particle filter method to estimate the location.
Reference is now made to Figure 1G, which is a simplified flowchart illustration of a method for producing maps according to an example embodiment of the invention.
The method of Figure 1G includes:
obtaining first sensor data from a first sensor and second sensor data from a second sensor
(185);
providing the first sensor data and the second sensor data to a processor; (186)
using the processor to estimate a location based on the first sensor data and the second sensor data (187); and
sending the location to a mapping application (188).
Reference is now made to Figure 1H, which is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention. The method of Figure 1H includes:
obtaining a map of a Region Of Interest (ROI) (190);
obtaining a first input from a first sensor and a second input from a second sensor (191); providing the first input and the second input to a processor (192);
using the processor to estimate a location based on the first input and the second input (193); and
using the location to navigate in a location selected from: a building, a shopping mall, a parking lot, and a tunnel.
Reference is now made to Figure 1I, which is a simplified flowchart illustration of a method for navigation and/or localization according to an example embodiment of the invention.
The method of Figure 1I includes:
a) obtaining a map of a Region Of Interest (ROI) (202);
b) obtaining a first input from a first sensor (204);
c) providing the first input to a processor (206);
d) using the processor to estimate a location based on the first input (208);
e) moving from the location and repeating (b)-(d) (210);
and further comprising:
f) obtaining a second input from a second sensor (212);
g) providing the second input to the processor (214);
h) using the processor to estimate a location based on the second input in addition to the first input (216),
thereby increasing accuracy of the estimating the location.
In some embodiments, the processor uses a particle filter method to estimate the location.
In some embodiments, the second sensor provides input intermittently.
In some embodiments, the second sensor provides input only in specific areas of the ROI.
Various embodiments and aspects of the present invention as delineated hereinabove and as claimed in the claims section below find experimental and/or calculated support in the following examples.
ADDITIONAL EXEMPLARY EMBODIMENTS
Reference is now made to the following examples, which together with the above descriptions illustrate some embodiments of the invention in a non-limiting fashion. EXEMPLARY POSITIONING EMBODIMENTS
The exemplary positioning embodiments below describe a general framework for positioning and navigation which improves expected accuracy over known methods such as using WLAN and cellular information. In this example a description is provided of example embodiments of a method which potentially produces an improved accuracy to a sub-meter error rate.
Reference is now made to Figures 2A-2C, which are simplified illustrations of progressing from building level to room level and to a higher accuracy of "seat" level.
Figure 2A is meant to illustrate a "building level" location accuracy, capable of estimating a location at an accuracy corresponding to a portion of a building. This can correspond to ~10 meter accuracy.
Figure 2B is meant to illustrate a "room level" location accuracy, capable of estimating a location at an accuracy corresponding to a specific room in a building. This can correspond to 2-5 meter accuracy.
Figure 2C is meant to illustrate a "seat level" location accuracy, capable of estimating a location at an accuracy corresponding to, by way of a non-limiting example, a specific seat around a specific table, or a specific location within a room. This can correspond to sub-meter accuracy.
An example embodiment method uses a modified particle filter which combines one or more of RF finger-printing, odometry, visual landmarks and map constraints. In some embodiments the potential accuracy improvement is achieved by using a camera, optionally a low resolution camera, to track dominant landmarks such as lights. Use of “glowing-markers” potentially allows one to accurately map relatively complex indoor buildings, optionally using a compact representation.
In some embodiments an example method as described in above-mentioned U.S. Patent Application Number 14/418,106 titled "Navigation method and device" was implemented and tested on Android-based mobile devices. The tests indicated robust sub-meter 3D positioning at a frame capture rate of 10-30 Hz with a fairly low energy consumption.
Introduction
Indoor positioning and navigation has attracted a wide range of researchers. Several navigation technologies have been developed, apparently including: RF finger-printing, pedometry, optic flow, visual SLAM, ultrasound, RF-TOA (Time Of Arrival), RF-DTOA (Differential Time Of Arrival), and lidar navigation. Apparently, a characteristic of some indoor navigation methods is a fusion of various positioning technologies in order to achieve improved positioning results (see above-mentioned references [ZYWL14], [MT11], [Har13] for surveys regarding indoor positioning technologies and systems). Although there are many different types of applications which require indoor positioning, it seems that most try to improve the following properties:
Accuracy: often the main and foremost parameter being tested.
High sampling rate: typically required for natural and intuitive navigation results, especially for highly dynamic devices.
Low energy consumption: an important property for most mobile (or battery-operated) devices; it usually requires the use of low computing-power methods (e.g., visual-based navigation methods are usually impractical for mobile devices due to the high computing-power requirements of image and video processing algorithms).
Robustness: that is, a suggested positioning method should work in a wide range of scenarios and in crowded and dynamic environments such as crowded shopping malls.
Minimal dedicated infrastructure: preferably, a solution should work without any need for additional infrastructure.
Privacy: allowing an off-line mode while avoiding the use of high-resolution video or photos (e.g., wearable devices such as "Google Glass" have raised many privacy concerns - see above-mentioned reference [MC15]).
Auto mapping: allowing simple and efficient crowd-sourcing for a finger-printing process (both for RF and visual mapping).
"Bring your own device" (BYOD): The solution should work on existing COTS (Commercial Off-The-Shelf) devices.
"Keep It Simple": Simplicity is a key factor in the ability to adapt the solution to various types of platforms and applications.
A. Motivation
A need for reliable indoor positioning and navigation services is motivated by several applications including: Location Based Services, "where have I parked my car?", "Where are my friends", Augmented Reality gaming (for example a game such as "Pokemon Go") and even search and rescue. An indoor positioning system is described which focuses on positioning methods for smartphones, optionally Commercial Off The Shelf (COTS) devices.
B. Related Works
The challenge of performing indoor positioning has recently attracted researchers, see for example [LDBL07], [Gos12], [BGVGT+17] for general surveys on IPS (Indoor Positioning Systems).
In the present exemplary positioning embodiments an embodiment is described which is potentially useful for mass-market scenarios, in which an IPS potentially works on existing mobile devices (i.e., smart-phones), in some embodiments even without additional dedicated infrastructure.
In general, researchers have apparently proposed the following indoor positioning methods:
• WLAN finger-printing: this common method mainly uses WiFi or Bluetooth (BLE) beacon signals in order to approximate a user position. The method uses a preprocessing stage (finger-printing) in which a site survey is performed - storing the signal strength of received signals in many locations of a Region of Interest (ROI). Then, the location of the user can be approximated by comparing the current set of RF signals with the finger-printing data set. For a detailed description of such methods see: [GLN09], [KK04], [YAS03], [RMT+02]. (A minimal code sketch of such a fingerprint comparison follows this list.)
• 3G, 4G cellular positioning: this method is often implemented on the "server side" and allows locating a user's approximate position according to the signal strength (and angle of arrival) to a few cellular base stations, see [DV11], [Küp05] for general information regarding cellular Location Based Services (LBS).
• Pedestrian odometry (pedometry): this method uses step counting combined with the device's approximated orientation in order to compute the user's relative path, see: [Har13], [BH13].
• Map Matching and Dead Reckoning: this method is commonly used for GNSS-based (Global Navigation Satellite System based) road positioning, in which the fact that the vehicle needs to be on the road implies significant constraints which can be used to reduce a search space. A related method can be used for indoor positioning - assuming we have the map constraints, see: [BK+08], [LSVW11], [BW13].
• Visual navigation: motivated by bio-inspired navigation and the fact that smart-phones have cameras, researchers have apparently proposed positioning methods based on visual processing - commonly combined with real-time system orientation provided by MEMS sensors, see: [WKM11], [BFOO08], [LSVW11]. It is worth mentioning that in the last few years even toy-grade drones often have an indoor visual navigation system, see for example [KVFF11].
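By way of illustration only, the following is a minimal Python sketch of the nearest-neighbor fingerprint comparison described in the WLAN finger-printing item above. It is a sketch under stated assumptions, not the embodiment's implementation: the names (rssi_distance, approximate_position, missing_penalty) are hypothetical, and treating unseen transmitters as having a weak default power is an assumed heuristic.

import math

def rssi_distance(scan, fingerprint, missing_penalty=-100.0):
    # Euclidean distance between two RSSI maps keyed by transmitter id.
    # Transmitters seen in only one of the two maps are assumed to have
    # a weak default power (missing_penalty) - an assumed heuristic.
    ids = set(scan) | set(fingerprint)
    return math.sqrt(sum(
        (scan.get(i, missing_penalty) - fingerprint.get(i, missing_penalty)) ** 2
        for i in ids))

def approximate_position(scan, signal_map):
    # Return the surveyed location whose stored fingerprint best matches
    # the current scan. signal_map: {(x, y): {transmitter_id: rssi}}.
    return min(signal_map, key=lambda loc: rssi_distance(scan, signal_map[loc]))

For example, approximate_position({"ap1": -48, "ap2": -71}, survey) would return the surveyed (x, y) point whose recorded signal strengths are closest to the current scan.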
Some positioning systems combine a few of the above methods in order to allow a better accuracy. Yet, major IPS providers such as Google and Apple provide a solution for a room or building level accuracy with expected error larger than 5 meters, therefore, even a user’s floor is often not automatically determined, see [LL17], [TSJK+17] for IPS accuracy evaluation.
This application describes methods and systems for accurate indoor navigation. In some embodiments standard (COTS) Android mobile devices are used. To the best of our knowledge, this is the first paper which presents a working implementation of an indoor navigation algorithm based on mapping and identification of standard building lights as landmarks combined with an advanced particle filter algorithm (such as described, by way of a non-limiting example, with reference to Figure 2). The described system potentially allows a 3D sub-meter accuracy while maintaining one or more of the properties listed above (Accuracy, High sampling rate, Low energy consumption, Privacy and potentially without adding dedicated infrastructure).
A Generic Framework For Accurate Indoor Positioning
A. WLAN - Standard Indoor Positioning System
The most common and low-cost solutions for indoor positioning systems are based on WLAN technologies. WLAN technologies here refers not only to wireless LAN but also to other RF technologies, such as Bluetooth/BLE, 3G/4G and RF-ID. Nevertheless, deployment of additional infrastructure such as RF-ID or RF beacons is time consuming and often has high implementation costs, hence some embodiments of the present invention rely on existing infrastructure. In the past few years WLAN technology has been deployed in public places, such as shopping malls, industrial buildings, airports and hospitals. For outdoor navigation the use of GNSS (e.g., GPS) is a habit of almost any smart-phone user, but GNSS devices hardly perform indoors, and when they do, their expected accuracy is at a "building level". WLAN-based positioning systems can provide room-level localization, usually with an accuracy of about 5-10 meters. Those systems typically use a WLAN "finger-printing" signal map. A simple estimation can use the ratio of the current WLAN signal scan and the RF signal map.
B. Particle Filter For Localization
The particle filter localization algorithm, also known as Monte Carlo Localization [FBDT99], is a variant of the Bayesian filter family [C+03]. Particle filters use a finite set P of particles (|P| = n), with each particle representing a system state x at time t, denoted x_t^i for i ∈ [n]. To approximate a posterior probability, each x_t^i is assigned a corresponding weight w_t^i that describes a belief state and is evaluated proportionally to the likelihood of the Bayesian function p(z|x), where z is the sensory measurement. To describe the particle filter algorithm some preliminary terms are defined:
1) Environment (ROI): defines where a particle filter method is applied; in a localization problem the environment will be a map of the environment, for example a floor map (2D) or a building map which may optionally contain several floors (i.e., 2.5D); in some embodiments, one can also address the map as a 3D representation of the building.
2) Initialization: at a first step, a set P of n particles is distributed on the environment map, and assigned some initial weights. In some embodiments the set P is optionally uniformly distributed.
3) Action: a movement function that describes a position change of a device. In robotics, for example, odometry is used, while for mobile devices carried by a person a pedometer (step counter with orientation) is optionally used.
4) Sense: A Sense-function maps sensor data to a particle weight.
5) Re-sample: A process where the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights. An actual implementation of the resampling method is usually a key factor in the performance of a particle filter, in particular, the resampling method potentially affects convergence properties of the localization method.
6) Reporting a best position: at some or even at each stage the particle filter optionally reports the current optimal position, for example by simply reporting the’’best” particle or by performing some kind of weighted average over the particles.
7) Optionally approximating the expected error: in some embodiments, reporting just a position or location is insufficient. In some embodiments a localization algorithm is optionally required to estimate an accuracy of the proposed location. Using the particles’ current distribution - one can use a method for estimating the expected accuracy.
An example embodiment of a particle filter method for localization is now described:
Data: Particles, Environmental Map
Result: State x_t.
Initialization: distribute particles uniformly on the map.
z ← measurements(sensors);
for p ∈ Particles do
state(p) ← action(p);
w ← sense(p, z);
weight(p) ← weight(p)·w;
end
Re-sample(Particles);
x_t ← best-solution(Particles);
optionally return to z ← measurements(sensors) and re-iterate.
Algorithm 1: Simplified Particle Filter Algorithm for localization.
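The following is a minimal, self-contained Python sketch of Algorithm 1. The action and sense callables, the jitter helper and the re-sample fraction are assumptions standing in for the embodiment's movement function, sense function and re-sampling method:

import random

def jitter(state, radius=0.5):
    # Hypothetical helper: perturb a 2D state by a small random offset.
    x, y = state
    return (x + random.uniform(-radius, radius),
            y + random.uniform(-radius, radius))

def particle_filter_step(particles, weights, z, action, sense,
                         resample_fraction=0.1):
    # One iteration of the simplified localization particle filter.
    # particles: list of (x, y) states; weights: matching list of floats;
    # z: current sensor measurements; action(state) applies the measured
    # movement; sense(state, z) returns a likelihood for that state.
    for i in range(len(particles)):
        particles[i] = action(particles[i])
        weights[i] *= sense(particles[i], z)

    total = sum(weights) or 1.0               # normalize the belief state
    weights[:] = [w / total for w in weights]

    # Re-sample: respawn the lowest-weight particles near the best one.
    order = sorted(range(len(particles)), key=lambda i: weights[i])
    best = order[-1]
    for i in order[:int(len(particles) * resample_fraction)]:
        particles[i] = jitter(particles[best])
        weights[i] = weights[best]

    return particles[best]                    # report the "best" particle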
Reference is now made to Figures 3A-3C, which are simplified illustrations of using a particle filter to estimate a location according to an example embodiment of the invention.
Figure 3A shows an initial distribution of particles, corresponding to an initial distribution of estimated locations. In some embodiments the initial distribution is a uniform distribution. In some embodiments the initial distribution is limited to being within the Region Of Interest (ROI). In some embodiments the initial distribution is limited to being, for example, within a building, or to not being in inaccessible areas.
Figure 3B shows particle translation due to total action vector marked as an arrow 305.
Figure 3C shows a convergence of the particles.
C. Extracting Visual Landmarks
In this section we assume that the system includes a visual sensor (camera) and an orientation sensor. In some embodiments a process of extracting visual landmarks includes the following two steps: Image processing and Extraction.
1) Image Processing: considering the ceiling lights as landmarks, one can use a simple threshold filter for their extraction, as depicted in Figures 4A-4B.
Reference is now made to Figures 4A-4B, which are simplified illustrations of applying a threshold to a color image of ceiling lights according to an example embodiment of the invention.
Figure 4A shows an example color image which includes lights 305, and Figure 4B shows a resulting, optionally binary, optionally black and white, image of Figure 4A, showing the light 305 after a threshold has been applied.
In some embodiments, from a resulting, optionally binary, image I_b one optionally extracts a center of mass c of one or more of the remaining visual landmarks. In some embodiments, we first extract a contour γ of a landmark that appears in I_b (see [Can86], [XB12]). Optionally, one then averages over all points g ∈ γ.
Reference is now made to Figures 4C-4D, which are simplified illustrations of contour detection of light sources on an image and of determining center mass points according to an example embodiment of the invention.
Figure 4C shows an example color image which includes lights, showing optional contours 402 of the lights.
Figure 4D shows locations 404 of centers-of-mass of the lights.
In some embodiments additional geometric properties of each light source are optionally analyzed, including, for example center, radius and a compact representation of a perimeter (contour) of the light source.
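For illustration, the threshold-and-contour extraction above might be sketched with OpenCV as follows (a sketch assuming the OpenCV 4 findContours return convention; the threshold value 240 is illustrative):

import cv2

def extract_light_centers(gray_image, threshold=240):
    # Threshold a grayscale frame and return the center of mass and
    # radius of each bright blob (candidate ceiling light).
    _, binary = cv2.threshold(gray_image, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    landmarks = []
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] == 0:                      # degenerate contour, skip
            continue
        center = (m["m10"] / m["m00"], m["m01"] / m["m00"])
        _, radius = cv2.minEnclosingCircle(contour)
        landmarks.append((center, radius))
    return landmarks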
2) Extraction: for each spotted visual landmark center c, represented as a two-dimensional vector c = (c_x, c_y), we determine a relative world vector connecting the visual sensor and the landmark (optionally in world coordinates). A first step includes calculating an intrinsic matrix K_{3×3} of the visual sensor, a step known as "camera calibration". The intrinsic matrix encapsulates the sensor's focal length on both axes f_x, f_y, its center and the sensor's skewness [Zha00], [HS97]. The relative vector is then:
v = w · K^{-1} · (c_x, c_y, 1)^T
where w is a scale factor of the mapping from R^2 to R^3 which is unknown. Hence, v ∈ R^3 can be calculated up to scale.
Optionally, one rotates the acquired vector v by the device's self-orientation in order to align it with the world coordinate system.
In some embodiments device orientation is extracted from commonly-used MEMS sensors (e.g., an Android smartphone orientation sensor). The orientation is typically given in quaternion form q (for a detailed explanation regarding the quaternion representation of orientation see [K+99]). So, the rotated world vector is v_w = q′ * v * q (here v and v_w are given in their quaternion form).
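A sketch of this extraction step is given below, assuming NumPy and a (w, x, y, z) quaternion convention. The rotation-matrix form used here is the standard equivalent, up to convention, of the quaternion sandwich v_w = q′ * v * q mentioned above; pixel_to_world_ray is a hypothetical name:

import numpy as np

def pixel_to_world_ray(c, K, q):
    # Back-project an image point c = (cx, cy) to a world-frame direction.
    # K is the 3x3 intrinsic matrix from camera calibration; q is the
    # device orientation as a unit quaternion (w, x, y, z). The result is
    # known only up to the scale factor w above, so it is normalized.
    v = np.linalg.inv(K) @ np.array([c[0], c[1], 1.0])   # camera-frame ray
    v = v / np.linalg.norm(v)
    w, x, y, z = q                                       # quaternion terms
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ v                                         # world-frame ray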
D. Put It All Together
As described above, systems with 5 to 10 meter location accuracy are available. In some embodiments the system described herein optionally implements pedometry (for example using a step counter) and/or optical flow. The rough position estimation is optionally used to define a region-of-interest.
In some embodiments, data fusion is addressed by using an (optionally modified) particle filter method. In some embodiments one uses a particle filter with the initial area of the filter covering the entire evaluation map and converging to the most likely Probability Density Function.
In some embodiments, we optionally confine our search to the ROI. Once the ROI is reported, the relevant particles’ weights are increased. The true position is likely to be inside the predefined area. To fuse detected visual landmarks each particle’s weight is optionally modified. In some embodiments weights are modified according to some weight function that compares the particle state with a truth state.
In some embodiments, some particles are periodically spread outside the ROI to overcome a "kidnapped robot" situation. In some embodiments, the algorithm used is described in [YBM17].
An example method:
Data: ROIpoint, ROIradius, Particles, Landmarks
Result: Location.
for p ∈ Particles and vs ∈ Landmarks do
state(p) ← action(pedometry, p);
increase p's weight if p ∈ area(ROIpoint, ROIradius);
w ← weight-function(p, vs);
weight(p) ← weight(p)·w;
end
Location ← weighted-sum(Particles).
Algorithm 2: Evaluating high accuracy location
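One possible Python reading of Algorithm 2 is sketched below. The Particle container, the ROI boost factor 2.0 and the helper callables (action, weight_function, in_area) are assumptions, not the embodiment's exact implementation:

from dataclasses import dataclass

@dataclass
class Particle:
    state: tuple      # (x, y) position on the map
    weight: float

def evaluate_location(particles, landmarks, roi_point, roi_radius,
                      action, weight_function, in_area):
    # Boost particles inside the coarse (WLAN-derived) ROI, then refine
    # each weight against every detected visual landmark.
    for p in particles:
        p.state = action(p)                      # pedometry-driven move
        if in_area(p.state, roi_point, roi_radius):
            p.weight *= 2.0                      # illustrative ROI boost
        for vs in landmarks:
            p.weight *= weight_function(p, vs)   # visual-landmark likelihood
    # The reported location is the weighted sum of particle states.
    total = sum(p.weight for p in particles) or 1.0
    x = sum(p.weight * p.state[0] for p in particles) / total
    y = sum(p.weight * p.state[1] for p in particles) / total
    return (x, y)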
ADVANCED POSITIONING ALGORITHM
A real-world localization scenario often includes complex sensory data: outliers and inaccurate and partial sensory information are common. Moreover, human factors can influence the sensor measurements and therefore contradict the pure Bayesian inference filter. In this section we present several improvements to the generic particle filter that aim to overcome such problems.
A. Generalizing to 3D
While estimating horizontal movement of a pedestrian is often highly inaccurate (e.g., using a step counter), a relative elevation change can be approximated using barometric pressure. This sensor is rather common in smart-phones and allows robust sub-meter detection of elevation change.
In some embodiments barometer data is used. In some embodiments the barometer data is optionally smoothed, optionally using a Kalman filter, detecting events of elevation change. The particles can then be moved along the z axis using the filtered data. Such a method allows one to spread the particle filter over a height of a few floors, for example during an initialization stage.
In some embodiments "vertical corridors" are optionally defined in the map (stairs, escalators, elevators) through which an elevation change may occur. The fact that the area of such "vertical corridors" is small potentially results in a rapid convergence.
Events of elevation change are optionally detected by detecting a change in barometric pressure. In some embodiments barometric pressure is sampled at a rate suitable for tracking a specific object. When tracking a device expected to move fast and be able to change elevation rapidly, sampling may be done every few seconds or even every second. When tracking a slow moving device or when tracking in a Region Of Interest where changing elevation is less likely, sampling may be done every 1 or more minutes. For example, in a vicinity of the“vertical corridors” sampling may be performed more often than when far from the“vertical corridors”.
In some embodiments, the algorithm is optionally kept both simple and efficient by using only a 2.5D representation of the building, for example using a 2D map for each floor and an optional absolute world elevation or relative elevation.
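By way of illustration, pressure-to-altitude conversion (the standard barometric formula) followed by a minimal scalar Kalman filter for smoothing may be sketched as follows; the noise parameters q and r are illustrative tuning values, not values from the embodiment:

def barometric_altitude(pressure_hpa, p0_hpa=1013.25):
    # Standard barometric formula; p0 is the reference sea-level pressure.
    return 44330.0 * (1.0 - (pressure_hpa / p0_hpa) ** 0.1903)

class ScalarKalman:
    # Minimal 1D Kalman filter for smoothing barometric altitude.
    def __init__(self, q=0.001, r=0.04):
        self.q, self.r = q, r      # process and measurement noise
        self.x, self.p = None, 1.0

    def update(self, z):
        if self.x is None:         # first sample initializes the state
            self.x = z
            return self.x
        self.p += self.q                   # predict: uncertainty grows
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

Feeding barometric_altitude(sample) through ScalarKalman.update yields a smoothed elevation track in which floor-change events stand out as sustained shifts of roughly one floor height.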
B. Improving The Particle Filter
In some embodiments the following modifications are made to a generic particle filter for localization:
• Mutable number of particles: allowing the application to reduce the number of particles in a convergence state, while adding a considerable number of particles in cases such as loss of visual landmarks.
• Soft init: allows a partial initialization of new particles on one or more algorithm rounds. This way the algorithm is less sensitive to "robot-kidnap" or simply wrong-convergence cases. Note: in some embodiments the "soft-init" new particles are not necessarily spread uniformly in the ROI. In some embodiments the soft-init particles are optionally localized according to data from visual mapping. For example, consider a case where we have several light sources in a current image.
In some embodiments the soft init is optionally based on discrete analysis of possible locations - for example using lights or signs patterns.
In some embodiments a naive registration of the largest image light to each relevant light in the database is performed. The naive registration defines a global 3D vector for each such registration. Note that each registration pair allows us to approximate the vector length by matching the size of the light in the image and in the database. In some embodiments this new set of particles includes at least one particle which is rather close to the real location - and therefore has a high probability of increasing its weight.
In some embodiments a re-sampling simplifies the process - only a few particles are changed/replaced at each re-sample, and the rest of the particles are evaluated and their weights updated accordingly, potentially allowing a shorter convergence time, or a shorter iteration time, and potentially fewer particles.
In some embodiments an elevation is added to a particle filter map constraint, potentially enabling more accurate localization, and potentially enabling more rapid convergence, at least in scenarios where elevation is a factor.
In some embodiments an enriched map is used - for example a map which includes, in addition to structures such as walls and/or doors, also features which are simple to locate, such as lights and/or signs.
In some embodiments a simplified map is used - for example a map which includes only or mostly features which are simple to locate, such as lights and/or signs.
In some embodiments the particle filter method includes using an accurate compass input, with one or more of an angular bias and an angular drift or change as part of a particle's state.
A compass sensor may have an initial bias. In some embodiments, the compass angle is optionally maintained using a gyro sensor or an optical-vision-based method, and potentially an angular drift may develop.
In some embodiments an angle-based (bias) shift and an angular drift are computed by the algorithm - just as the location (x, y, z) is computed - as part of the particle state. In some embodiments the particle filter method includes using a flexible number of particles, enabling adaptation of the number of particles to a navigation scenario. The number of particles is optionally adapted to the probability space represented by the set of particles.
In cases where the set is sparse, for example as measured by a total number of particles or as measured by a number of particles per unit area, additional particles are optionally added. The additional particles are optionally added in locations according to sensed data and/or in vicinity of high weight particles.
In cases where the particles are located in a small (compact and/or dense) area, for example measured as described above, the number of particles may be reduced.
In the re-sample stage: a particle with the lowest weight is optionally tested against a particle with the highest weight, or against a particle with the average weight - if the ratio of their weights is under some threshold value, the lowest-weight particle is removed.
In some embodiments a lowest weight particle is removed from the set of particles.
In some embodiments the lowest weight particle is optionally replaced with a new particle. In some embodiments the new particle is assigned an initial weight of an average particle and/or a 50-th percentile weight.
In some embodiments, for example in a case where the particles are calculated to be sparse, in the sense described herein, two particles (or more) replace a removed particle.
In some embodiments, especially when a compact representation is desired, no new particle is added.
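A sketch of the weakest-particle replacement described above, assuming 2D particle states; the threshold ratio and perturbation radius are illustrative values:

import random
import statistics

def resample_weakest(particles, weights, ratio_threshold=0.05, radius=0.5):
    # If the weakest particle's weight falls below ratio_threshold of the
    # strongest particle's weight, respawn it near the strongest particle
    # and assign it a 50th-percentile weight.
    lo = min(range(len(weights)), key=weights.__getitem__)
    hi = max(range(len(weights)), key=weights.__getitem__)
    if weights[hi] > 0 and weights[lo] / weights[hi] < ratio_threshold:
        x, y = particles[hi]
        particles[lo] = (x + random.uniform(-radius, radius),
                         y + random.uniform(-radius, radius))
        weights[lo] = statistics.median(weights)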
In some embodiments pedometry input optionally includes measuring optical flow and/or distance ranging with suitable sensors, which provides a more precise "action" as used in particle filter terminology. In some embodiments pedometry input optionally includes device orientation, which provides a more precise "action" as used in particle filter terminology.
In some embodiments a few particles (e.g., 0-2) are initialized each time the algorithm re-samples. A few new particles are optionally located at locations where there are no current particles, yet there is a likelihood for a particle to be. Some non-limiting examples include: (i) initializing according to a light pattern; (ii) initializing according to a mapped landmark such as a sign; (iii) initializing in an elevator - based on detecting an altitude change.
Soft-init potentially overcomes a situation called the "kidnapped robot problem", where a localization algorithm loses touch with the ground truth. Adding a few particles in likely places can lead a particle filter method to converge on those particles, and correct its own deviation from the ground truth.
• Reporting expected error and confidence: the generic particle filter reports some kind of combined position at one or more of the rounds. In some cases both the expected error and the confidence of the positioning are reported, potentially allowing higher-level filters to be applied to the reported positions.
IV. MAPPING AND FINGERPRINTING ALGORITHM
Some embodiments include collecting benchmark data (e.g. environmental map, RF map and visual landmark map). Such collecting is potentially useful for a system to work efficiently and to achieve high accuracy results as well as robustness.
A. Environmental Map
In some embodiments environmental map data includes a 2D map of a building's floor (e.g. a mall). The map describes features such as walls, shops (in the case of a mall map), doors, steps and elevator areas. Data about staircases and elevator areas can be used, along with sensor fusion, to determine floor changes. Wall location descriptions help eliminate false location estimations in two ways: non-realistic positions, such as within a wall, are excluded, and previous estimations cannot be evaluated again if they passed through a wall in the action step of the algorithm. Creating the environmental map is optionally a first step in the pre-processing process; it may be performed manually by measuring the environment, by using an existing map, or by collection from users of a localization app who navigate the environment and send data to a data collector, to produce maps and/or update maps - such collection can be called social mapping. A digital map may be created in any convenient form. In some embodiments the whole map data is optionally split such that each sub-map describes a single floor of a multi-floor building.
B. RF mapping
In some embodiments, an additional step of the process is collection of RF data. RF data can consist of 3G/4G mobile signals, Bluetooth/BLE beacons, and WiFi transmissions at 2.4/5.0 GHz. RF data collection may optionally be a semi-autonomous process where raw positioning data measurements collected are saved with respect to their collection elapsed time. Given n RF data measurements recorded at times t_i along a vector p on an environmental map, the estimated i-th position is optionally interpolated along p proportionally to the elapsed time t_i.
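Under the linear-interpolation reading above, the survey positions might be assigned, for example, as follows (a sketch assuming straight-line survey segments; interpolate_survey_positions is a hypothetical name):

def interpolate_survey_positions(start, end, timestamps):
    # Place each RF measurement along the straight survey segment from
    # start to end, proportionally to its elapsed collection time.
    t0, t1 = timestamps[0], timestamps[-1]
    span = (t1 - t0) or 1.0
    return [(start[0] + (end[0] - start[0]) * (t - t0) / span,
             start[1] + (end[1] - start[1]) * (t - t0) / span)
            for t in timestamps]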
Reference is now made to Figure 5, which is a simplified illustration of a map used in mapping RF and visual data according to an example embodiment of the invention.
Figure 5 shows a display 502 of an example mapping application, including a map 504, and controls 506 for a user to provide input.
The example embodiment of Figure 5 shows additional controls 508 typical of a smartphone or a tablet. Figure 5 illustrates that just as a smartphone or tablet can be used for localization, they can be used for mapping. A synergy is potentially produced when using the same device or type of device to map a region when a similar device will later be used for localization.
Figure 5 shows a path 510 along which the mapping device was carried, while mapping visual data and/or RF data. The map 504 displays dots 512 where ceiling lights, for example, were identified and mapped as visual landmarks.
In some embodiments the RF data measured at each point p_i is an n-tuple V_RF := {(Id, RSSI)_0, ..., (Id, RSSI)_n} which optionally includes an identity number of each of the RF transmitters and its corresponding received power - the RSSI.
C. Visual Landmark Mapping
The process of visual landmark mapping resembles the RF mapping task. One preprocesses the visual measured data and extracts the visual landmark vectors v_w^0, ..., v_w^n that correspond to each landmark, for example as described in the section titled Extracting Visual Landmarks. The visual landmarks are then registered (mapped), optionally using real-world coordinates, relative to the environmental map. Note that the task of mapping a complex building such as a shopping mall may include mapping hundreds of light sources, and may be time consuming.
Reference is now made to Figure 6, which is a simplified illustration of light source mapping according to an example embodiment of the invention.
Figure 6 shows a device such as, for example, a smartphone 602, carried along a path 604. The smartphone 602 includes a camera 606, which captures images of a light source 608, while also recording time and/or location. The location recorded while mapping may optionally be recorded using various means, such as pedometry, odometry, RF mapping, WiFi location, GPS if available, optical flow odometry, Google’s Tango phones, and so on. Figure 6 also shows a motion vector 610 indicating the direction the smartphone 602 is moving.
Figure 6 depicts a semi-automated mapping method that is simple, efficient and applicable to mobile devices. The light source 608 is spotted and tracked along the path 604; the position of the light source 608 is optionally estimated as a weighted average of the intersections of the 3D vectors to that light source 608.
In some embodiments a simple tracking algorithm is used, which compares two consecutive frames and looks for a minimal distance change between a previously spotted landmark and landmarks spotted in the new frame. In cases where the minimal distance does not exceed some threshold, a match is declared; otherwise a new landmark Id is assigned.
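A minimal sketch of such a nearest-neighbor frame-to-frame tracker follows; the pixel-distance threshold is an assumed illustrative value:

def track_landmarks(prev, current, max_dist=30.0):
    # Match each new detection to the closest previously spotted landmark
    # if within max_dist pixels; otherwise assign a fresh id.
    # prev: {id: (x, y)}; current: list of (x, y) detections.
    next_id = max(prev, default=-1) + 1
    matched = {}
    for (x, y) in current:
        best_id, best_d = None, max_dist
        for lid, (px, py) in prev.items():
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < best_d:
                best_id, best_d = lid, d
        if best_id is None:          # no close match: new landmark
            best_id = next_id
            next_id += 1
        matched[best_id] = (x, y)
    return matched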
A more sophisticated tracking algorithm may include a 2D Kalman filter (see [BW01], [LWWL10]). In some embodiments a direction or vector to the light source 608 is calculated for each image. Using several images, each taken from a known location, a location of the light source is optionally calculated by computing an intersection of the vectors to the light source 608. It is noted that the more images are used, the more accurate the location of the light source can potentially be.
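The vector-intersection step can be phrased as a least-squares problem: find the point minimizing the summed squared distance to all bearing rays. The sketch below is this standard formulation, not necessarily the embodiment's exact computation, and it requires at least two non-parallel rays:

import numpy as np

def intersect_rays(origins, directions):
    # origins: (n, 3) device positions along the mapping path;
    # directions: (n, 3) vectors toward the light in world coordinates.
    # Accumulate the normal equations sum(I - d d^T) x = sum(I - d d^T) o.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal plane
        A += M
        b += M @ o
    return np.linalg.solve(A, b)         # least-squares intersection point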
Reference is now made to Figure 7, which is a simplified illustration of a screenshot of a navigating application according to an example embodiment of the invention.
Figure 7 shows a screenshot of an example embodiment of a mapping and navigating application. At the lower part of the screenshot, images of two detected and tracked lights 702a, 702b are shown. At the upper part of the screenshot, the two light sources 702a, 702b are mapped to map locations 704a, 704b with respect to a user position 706, or the mapping device's position 706.
V. FIELD EXPERIMENT AND RESULT
In order to validate an example embodiment algorithm we conducted a set of experiments. A first test was performed in a laboratory setup. Next we participated in the Microsoft Indoor Localization Competition (2018), in which our algorithm performed poorly - mainly due to the fact that the front camera was mostly blocked by the evaluator's body (head) and only few lights were detected; several implementation changes were made in order to cope with such blocking cases. Finally, we performed a large-scale experiment in a shopping mall in which hundreds of lights were auto-mapped and then used for accurate 3D navigation.
A. Lab Experiments
The first set of experiments was performed in our lab building (which includes two stories, 20 × 40 meters; see Figures 2, 7, 5). The following experiments were performed:
- WiFi Localization: in general the accuracy of our implementation was about 4-6 meters (after performing RF fingerprinting). Google's localization API allowed a slightly larger error of 5-8 meters - both can be considered "room-level" accuracy - yet in both implementations the altitude estimation was not sufficiently accurate to allow reliable 3-meter floor detection.
- 3D vector accuracy: in this experiment we aimed the smartphone camera at a light source and rotated it while maintaining the light in the camera FOV (and keeping it in the same location) - for each frame, the 3D vector from the phone to the light was computed. We conclude that the expected relative angular inaccuracy of the vectors is within 1-3 degrees, and the expected global inaccuracy is about 3-10 degrees (mostly due to compass inaccuracy).
- 3D mapping of lights: in this technical experiment we auto-mapped the building lights using the mapping algorithm, see Figures 6, 7, 5.
- Pedometer: we tested several smart-phone "step-counter" applications and also implemented our own version of a pedometer. We conclude that in regular walking while holding the smartphone, the expected error is 10-20% of the path length (computed using a simple "loop closing" method).
- Relative height: a barometer sensor is rather common in many smartphones. The current air pressure can be used in order to detect height changes at sub-meter accuracy. Using a Kalman filter we were able to detect changes in height (i.e., detecting movement from one floor to another).
- Navigating using the lights-map: an example embodiment algorithm was implemented and tested in the lab experiment, allowing a fast-converging localization (30-40 seconds) with an expected sub-meter accuracy in most cases where lights were detected.
It is noted that convergence and/or accuracy potentially depends on a number of image frames included in a calculation. At a high frame capture and calculation rate, for example 30 frames-per-second (fps), algorithm convergence potentially occurs within 2 seconds or less.
It is noted that convergence rate potentially depends on a processor speed.
It is noted that convergence rate potentially depends on the environment in which the localization is taking place. An environment with many similar features can take longer than an environment with one or more distinct-from-each-other features.
Performing data fusion, that is, using input from sensors in addition to a camera, potentially increases the speed of convergence.
It is noted that using a high-fps (say 30) sensory input, convergence may take from 2 to 60 seconds. The time is scenario dependent - in the case of detectable signs, sub-second convergence is possible, while in the case of no detectable objects, the convergence may optionally wait for a user movement such as a floor change or simply a walk through a mapped region.
B. Study case: Microsoft Indoor Localization Competition
Since 2014 an annual indoor localization competition has been organized by Microsoft, see [LL17] for the 2014-2017 indoor localization evaluations. In the 2018 competition we implemented an example embodiment of a particle filter method, named "GoIn". The system was designed to work on both light sources and natural light (such as windows and skylights). Overall, the system performed poorly, with an average 2D error of about 4.2 meters. The relatively large errors were due to the way the phone was held (in the evaluator's hand) - the front camera was used for detecting lights but it was often blocked by the evaluator's head.
Reference is now made to Figures 8A-8B, which are simplified illustrations of tracking according to an example embodiment of the invention in comparison to the ground truth.
Figure 8A shows a path as computed by the "GoIn" algorithm, and Figure 8B shows the ground truth.
Figures 8A and 8B present a part of the actual competition evaluation of the "GoIn" system.
C. Full scale field experiments
A full-scale experiment was performed in a crowded shopping mall. The experiment was divided into two steps: (i) mapping; (ii) localization and accuracy evaluation. The mapping of a shopping mall (with three floors and about a hundred stores) required about one hour, in which the RF fingerprinting and the light mapping were performed simultaneously. Then, in the localization part those maps were used for positioning (as shown in Figures 9, 10). We conclude that mapping an average-size shopping mall can be done in a matter of minutes to hours, depending on the hardware, including testing and map validation. The level of accuracy may vary between light-based high accuracy (0.5-2 meters) and low accuracy (5-10 meters) when the visual sensor is blocked. Using relatively simple image analysis we are able to distinguish between the high-accuracy and the low-accuracy cases.
In some embodiments image analysis typically enables sub-meter accuracy and often enables sub-foot accuracy using multiple images. It is noted that accuracy is potentially affected by the distance from the feature being imaged; typically, a closer feature enables better accuracy. In the case of features located far away from the user (say 50 meters) the expected accuracy may be 2-3 meters.
Reference is now made to Figure 9, which is a simplified illustration of a screenshot of a navigating application displaying a large area, according to an example embodiment of the invention.
Figure 9 shows a screenshot from the "GoIn" application. Each blue dot 902 represents a single light source (a few hundred lights were mapped on each of three floors). A red line 904 represents a computed path with meter accuracy (on average). A green circle 906 presents the WiFi location and a red circle 908 represents the light-based position. A Google-maps-reported position 910 is presented in light blue - with an average error larger than 15 meters.
Reference is now made to Figure 10, which is a graph of an accuracy evaluation of a path according to an example embodiment of the invention and a path according to Google map.
The graph of Figure 10 shows an accuracy evaluation of GoIn's path 1006 (blue) and Google-map's path 1008 (orange). The path 1006 computed by the GoIn app was within 1-meter accuracy during at least half of the evaluation, yet whenever the phone's camera was blocked the expected accuracy went to 3-6 meters. The graph of Figure 10 has an X-axis 1002 in meters and a Y-axis 1004 in meters. The paths 1006 and 1008 present a specific walk in a real mall (PETAH TIKVA) and the locations reported by Google (less accurate) and by an example embodiment of the invention - the GoIn method (more accurate).
VI. CONCLUSION OF THE EXEMPLARY POSITIONING EMBODIMENTS
In this example an enhanced indoor positioning framework was presented. The example embodiment of the localization method uses light sources as landmarks. The example embodiment algorithm was able to improve the accuracy from "room-level" to "seat-level" in cases where the visual sensor was able to detect those landmarks. The suggested algorithm was implemented as an Android application and tested in several real-life scenarios, both for localization and mapping. In general, the accuracy of the localization depends on the ability of the visual sensor to detect visual landmarks (without being blocked by a user's body).
Lessons learned include performing a "camera-switch", in which one camera on one side of a smartphone or tablet is replaced by another camera on another side, while continuing to use the tracking method. The "camera-switch" is used for potentially improving visual path tracking (i.e., optical flow or visual pedometry), allowing a smooth and continuous path approximation even in relatively long scenarios of a blocked camera.
EXEMPLARY NAVIGATION SYSTEM EMBODIMENTS
The exemplary navigation system embodiments below describe a vision-based navigation system designed for indoor localization. An example embodiment framework works as a standalone 3D positioning system by fusing sophisticated optical-flow pedometry with map constraints using an advanced particle filter. An example embodiment method potentially requires no personal calibration and potentially works on standard smart-phones, optionally with a relatively low energy consumption.
Preliminary field experiments on Android smart-phones show an example 3D error of about 1-2 meters in many real-life scenarios.
I. INTRODUCTION
Indoor positioning is an important capability for a wide range of applications including: location based services (LBS), public safety (first responders) and autonomous robotics (indoor navigation). While LBS-related applications mainly target smart-phone users navigating in a shopping mall [FNI13], [GLN09], first responders may be using a foot-mounted pedometer (see [JSPG10], [KR08a], [KR08b]). Apparently some such solutions were presented by research groups in the last two decades - yet the robustness and accuracy of existing indoor positioning systems (IPS) are often insufficient [LL17].
Utilizing a particle filter for the localization problem is known [YBM17].
The sense, action and re-sample functions of each implementation differentiate one algorithm from another.
Additional differences include the way example embodiments of the invention harness smart-phone internal sensors, which can be done using a wide range of techniques. Although there are many different types of applications which require indoor pedestrian positioning, it seems that the following properties should be optimized with respect to almost any such method:
Accuracy: Often the foremost parameter which is being tested.
Keep It Simple: Simplicity is a key factor: the system should work automatically with minimal or even no manual overhead operation or calibration.
Real-time: For natural and intuitive positioning results, especially for highly dynamic movements.
Privacy: The suggested solution should be able to work in an "off-line" mode (i.e., "flight mode" or "standalone" mode).
Bring your own device: The suggested solution should work on existing COTS (Commercial Off The Shelf) mobile devices (i.e., common smart-phones - this property implies low energy-consumption and computing-power limitations).
A. Motivation
The need for reliable indoor positioning and navigation services is well motivated by several important applications including: Location Based Services, questions such as "where have I parked my car?" and "Where are my friends", Augmented Reality gaming (e.g., "Pokemon Go") and even search and rescue. In this Example we describe an indoor positioning system which focuses on positioning methods for existing mobile phones for the mass commercial market (COTS devices).
B. Our Contribution
This Example presents a smartphone indoor positioning system (IPS) based on software developed using recent AR and MR (Augmented Reality and Mixed Reality) tools such as Google's ARCore or Apple's ARKit. The AR tools were used to develop visual pedometry (scaled optical flow) sensors, which were fused with an improved version of a localization particle filter to produce an accurate and robust solution for various indoor positioning applications. The Example method enables a simple and efficient mapping solution that, combined with the improved version of the localization particle filter, allows 1-2 meter positioning accuracy in most standard indoor scenarios.
II. A BASIS OF INDOOR POSITION
A user's global position can be retrieved from existing geolocation services (e.g., Google Maps Geolocation API). Such a user location is commonly approximated using RF signals (4G/3G, WLAN, BLE) and even a global navigation satellite system (GNSS). The accuracy of such methods is considered to be "building level" (10-30 meters) or "room level" (5-10 meters).
A user's relative position is optionally computed using a pedometer. A smartphone-based pedometer is composed of two virtual sensors: (i) a "step-counter", which detects discrete step events; (ii) an orientation sensor, which approximates the user's global/relative direction. Combined, the two sensors enable a step-based relative path computation. Naturally, such a method tends to drift over time and distance.
A. Basic Particle Filter for Localization
A possible naive particle filter algorithm for localization estimation is now described. Since the particle filter method represents a posterior distribution of a set of particles P (|P| = n) on a given map, the result of such an algorithm (for each step) is a new set of particles P' with a (slightly) different distribution. The goal of this algorithm is to get all the particles to converge in one area on the map within a few steps (re-samplings). After converging, the internal state (location) is optionally, in some embodiments, an average location of the best particles (the particles with the highest grades).
Before presenting the algorithm some terms are clarified:
• Map: The particle filter method estimates the internal state in a given area. Thus, an input of this algorithm is a 2D, 2.5D or 3D map of the region. Such a map preferably includes as many constraints as possible (for example walls and tables). The map constraint is one of the parameters that determine each particle's grade, since particles with an impossible location on the map are usually downgraded.
• Particle: At the beginning of the localization process we "spread" a set of particles P on the map. Each particle x_i ∈ P has one or more of the following attributes: location <x, y, z>, orientation w and grade g. In some embodiments, for each step all particle locations and orientations are optionally modified, as well as their grades. In some embodiments, the sum of the P particles' grades is optionally 1 at each step. In some embodiments the grade of each particle x_i at the initial step is 1/n. The grade of each particle is optionally set higher when its location on the map seems more likely to represent the internal state.
• Move function (Action function): With each step the particles on the map are relocated according to the internal movement. In some embodiments, for each step we calculate a movement vector (optionally in 3D) and a difference in orientation, then move the particles accordingly. The movement of each step is optionally provided by a mobile pedometer (for example a step counter, optionally with orientation) as commonly used in smart-phones.
• Sense function: Sensors of the device are optionally used to determine each particle's grade. A sense method or function predicts each particle's sense for each step, and grades it with respect to the correlation between the particle's prediction and the internal sense. In the case of the present exemplary embodiments, the sense function can compute distances to the nearest wall (forward and back, right and left), then compare the computed distances to the distance of each particle to the nearest wall on the map, and optionally change the particle's grade by an amount corresponding to the correlation (a code sketch of such a sense function appears after the algorithm listings below).
• Re-sampling: A process of choosing a new set of particles P’ from P. The re-sampling process can be done by various methods and one purpose of the re-sampling is to choose particles with a high weight or grade over particles with lower weight or grade.
• Random noise: To prevent the convergence of the particles from happening too fast (and thereby risking missing the true location), in some embodiments, after re-sampling, particles are optionally moved by a small random noise on the map. In some embodiments this is done by moving each particle within a small radius of its original location.
Algorithm 1 described below explains a process of the particle filter method using a mobile pedometry sensor.
Input: A black and white 2D, 2.5D or 3D map of the navigation area.
Init: generate a set P of n particles, each with a grade of 1/n. For every x_i ∈ P a random location <x, y> is set, optionally in a uniform distribution over the map.
Result: Estimated location: pos = <x, y>
for Each step do
1) Calculate the step vector s
2) Apply the Move function on all particles in P by s
3) Apply the sense function on each particle in P according to the current geo-location likelihood position.
4) Evaluate the weight of each particle according to its new position on the map.
5) Re-sample all particles into P'.
6) Estimate the current position by calculating the particles' average location in P', considering their weights.
end
In some embodiments Algorithm 1 is described as:
Init: generate a set P of n particles. For every x_i ∈ P a random location <x, y, z>, orientation <w> and grade g are set, optionally in a uniform distribution over the map.
Result: Estimated step location and orientation vector: l = <x, y, z, w>
for Each step do
1) Estimate the location for the current step, current = <x, y, z, w> (via a mobile AR measurement tool).
2) Calculate a step vector di = current - prev.
3) Apply the Move function on all particles in P
4) Apply the sense function on each particle in P.
5) Evaluate the weight of each particle according to its new position on the map.
6) Re-sample all particles into P’
7) Estimate the current position by calculating the particles' average location in P', considering their weights.
end
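For illustration, the sense function sketched earlier in this section (grading a particle by comparing predicted and measured wall distances) might look as follows, assuming a boolean occupancy grid with one-meter cells and four AR-measured distances (forward, back, left, right); the Gaussian grading and the sigma value are assumptions:

import math

def wall_distance(pos, heading, walls, max_range=20.0, step=0.1):
    # Ray-march from pos along heading over a boolean grid
    # (walls[y][x] True = wall) and return the distance to the first wall.
    x, y = pos
    dx, dy = math.cos(heading), math.sin(heading)
    d = 0.0
    while d < max_range:
        gx, gy = int(x + dx * d), int(y + dy * d)
        if gy < 0 or gx < 0 or gy >= len(walls) or gx >= len(walls[0]) \
                or walls[gy][gx]:
            return d
        d += step
    return max_range

def sense_grade(particle, measured, walls, sigma=1.0):
    # particle: (x, y, z, w); measured: four wall distances from the AR
    # sensing. The grade is high when predicted and measured agree.
    x, y, _z, w = particle
    grade = 1.0
    for offset, m in zip((0.0, math.pi, math.pi / 2, -math.pi / 2), measured):
        predicted = wall_distance((x, y), w + offset, walls)
        grade *= math.exp(-((predicted - m) ** 2) / (2 * sigma ** 2))
    return grade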
In some embodiments a black and white map is used in order to present geo-constraints used by the particle filter.
The above algorithm is relatively time efficient; however, its precision may be insufficient in some cases, for example in large areas or areas with few constraints. The next section describes a particle filter based algorithm with advanced methods to improve the accuracy of the results.
III. ADVANCED ALGORITHM
In this section an advanced localization algorithm is described: an improved map-constraint combined with an adjusted sense function potentially enables better accuracy. The improved mapping and the adjusted sense function are both possible due to new AR smart phone technology. The next subsections describe the improved mapping process and the advanced particle filter algorithm.
A. Augmented Reality
The field of augmented reality (AR) has evolved dramatically in recent years. Major companies have released powerful tools (e.g., Google’s ARCore, Apple’s ARKit and Qualcomm’s Vuforia), allowing developers an opportunity to harness AR abilities for geographic applications.
AR algorithms have the following features:
1) Feature points detection and tracking.
2) Visual-based optical flow.
3) Plane recognition and tracking.
In some embodiments one or more of the first two features enable estimating a user’s movement and orientation in real time, optionally even for each step taken by the user, and are optionally used to improve pedometer-sensed data. The third feature is optionally exploited to improve the mapping and the particle filter sense function, as will be explained in detail in the following subsections.
B. Methods for detecting floor-change
In order to generalize the localization algorithm from 2D to 3D (or 2.5D) a method is used for detecting floor change. A “wrong floor” error is significant for a user and may cause further significant error through the wrong constraints applied by a “wrong map”. In some embodiments, one or both of a barometer sensor and a 3D optical-flow were used in order to estimate the elevation of a user. Both methods are relatively sensitive to changes in elevation, yet both also tend to drift. Moreover, 3D optical-flow methods are typically not able to detect vertical movement in an elevator, where the change in elevation is not seen. In some embodiments the following floor-change filter is used, which is based on rapid changes in barometer readings (for simplicity we assume that the barometer sampling rate is fixed):
1) On start, let z_0 be an initial elevation, estimated from the barometer pressure. Let Δz_0 = 0, and let ρ < 1 be a positive parameter, usually related to the barometer sampling rate, e.g., ρ = 1/Hz, where Hz is the barometer sampling rate.
2) On each barometer reading (z_i), let Δz_i = ρ(z_i - z_{i-1}) + (1 - ρ)Δz_{i-1}.
3) If Δz_i > C_up (a threshold positive elevation rate), assume the user is going up.
4) Else if Δz_i < C_down (a threshold negative elevation rate), assume the user is going down.
5) Else assume the user is on a flat floor.
If the user was going up or down, estimate the elevation change between the current z and the last flat-floor value.
In some embodiments, when no knowledge about the current floor is provided, the particles are initially randomly spread among all floors.
In some embodiments the improved algorithm may optionally fuse the 3D optical-flow sensor reading with the barometer sensor, in some cases using a Kalman filter.
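The filter above can be sketched in a few lines of Python. The pressure-to-elevation conversion and the threshold rates C_up and C_down are left as assumed inputs, and the default values shown are illustrative only:

def floor_change_filter(readings, pressure_to_elevation, hz, c_up=0.3, c_down=-0.3):
    # readings              : iterable of raw barometer samples (fixed rate)
    # pressure_to_elevation : callable converting a pressure sample to meters
    # hz                    : barometer sampling rate; rho = 1 / hz as in the text
    rho = 1.0 / hz
    z_prev, dz = None, 0.0  # dz is the smoothed elevation rate (Delta-z)
    for sample in readings:
        z = pressure_to_elevation(sample)
        if z_prev is not None:
            # Exponentially smoothed difference of consecutive elevations.
            dz = rho * (z - z_prev) + (1.0 - rho) * dz
        z_prev = z
        if dz > c_up:
            yield "up"
        elif dz < c_down:
            yield "down"
        else:
            yield "flat"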
Reference is now made to Figures 11A-11C, which are simplified illustrations of using a particle filter to estimate a location according to an example embodiment of the invention. Reference is also made to Figure 11D, which is a graph showing atmospheric pressure measurements over time according to an example embodiment of the invention.
Figures 11A-11C show a map and particles displayed upon the map, at three different stages.
Figure 11A shows an initial state, where the particles are uniformly distributed.
Figure 11B shows how, using a short motion vector, the particles begin to organize clusters.
Figure 11C shows the particles converged to a single position cluster.
In the test displayed in Figures 11A-11C a floor change detection occurred.
Figure 11D shows a graph having an X-axis of time and a Y-axis of barometric pressure (in PSI). Figure 11D shows barometric pressure over time. The detection of floor change enabled the algorithm to converge efficiently to the right 3D location.
C. Mapping
The advanced particle filter algorithm uses a map of the region of interest. In some embodiments such a map is assembled by an example embodiment system using the following technique:
1) using an AR measurement tool for surface detection, which enables determining boundaries of a sampled region of interest (ROI).
2) representing a map of the ROI in the form of a “painted” image, using defined colors A, B, C, D, E to represent a variety of different constraints.
The colors are placed on the map. In some embodiments, one or more of the following constraints are represented according to the following logic:
• color A: Accessible area.
• color B: Inaccessible area, such as walls, fixed barriers, etc., optionally as sensed by the AR tool.
• color C: partially accessible area, adjacent to an inaccessible area, optionally having a fixed width.
• color D: Inaccessible area, one that could not have been identified by the AR tool due to its lack of a vertical-surface shape. This area may optionally be colored manually.
• color E: Area near floor changes, such as stairs.
The generated map is an input for the particle filter algorithm, and is later also used to determine the particles grade.
Reference is now made to Figure 12, which is a simplified illustration of a multicolor map used according to an example embodiment of the invention. Figure 12 shows a 2D multicolor map example used in the advanced algorithm. The color white (marked by the letter A) represents accessible areas, the black color (marked as B) represents fixed inaccessible areas (in this case walls), the pink color (marked as C) represents adjacent partially-accessible areas (near walls), the brown color (marked as D) represents dynamic inaccessible areas (tables in this case) and the yellow part (marked as E) represents stairs.
Reference is now made to Figure 13, which is a simplified illustration of a multicolor map used according to an example embodiment of the invention;
Figure 13 shows a multicolor map example used in the advanced algorithm. The color white (marked by the letter A) represents accessible areas, the black color (marked as B) represents the fixed areas (in this case walls), the grey color (marked as C) represents dynamic inaccessible areas (tables in this case) and the yellow part (marked as D) represents stairs and elevators.
It is noted that the map of Figure 13 includes two floors. In some embodiments the map of Figure 13 is used as a 2.5D map. Such a map can optionally benefit from elevation-related sensors such as barometric pressure or GPS, if available.
D. Improved Sense Function
The naive and the advanced particle filter algorithm differ in their sense functions. While the naive algorithm simply evaluates the weight of the particles according to their map location (a particle in a B or A area), the advanced algorithm may optionally perform an actual sensing, to determine how far each particle is from a ground truth. The sensing performed by the AR measurement tool measures, for example, a real distance from a nearest vertical obstacle, and compares the sensed distance to a calculated distance of each particle to a nearest B area in the same direction on the map. This comparison provides an ability to re-weight the particles in a more precise way. Note that the existence of the C area provides more flexibility with respect to inaccurate measurements.
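A minimal sketch of this re-weighting is given below, assuming a hypothetical map query grid.nearest_b_distance(x, y, heading) that ray-casts to the nearest B area on the map; the Gaussian noise parameter sigma is an illustrative assumption:

import math

def sense_weight(measured_dist, particle, heading, grid, sigma=0.5):
    # measured_dist : AR-measured distance to the nearest vertical obstacle
    # grid.nearest_b_distance(x, y, heading) : hypothetical ray-cast map query
    x, y = particle
    expected = grid.nearest_b_distance(x, y, heading)
    error = measured_dist - expected
    # Gaussian re-weighting: a small distance error yields a weight near 1.
    return math.exp(-(error * error) / (2.0 * sigma * sigma))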
E. Velocity Estimation
Indoor navigation methods typically use a device’s inertial measurement unit (IMU) sensor in order to implement a pedometer which detects the device’s global orientation and counts “steps”. Such a method potentially introduces inaccuracy both in the distance measured and in the orientation (i.e., some steps are larger than others, and the device orientation is only loosely correlated with the walking orientation). In some embodiments, velocity estimation is based on optical flow with plane and range detection, such as described in [VDA+18], in order to estimate the user movement, optionally at a high sampling rate. Such a method and sampling rate enable an improved distance approximation by fusing optical features to reduce IMU drifts.
F. Improving Compass Accuracy
The orientation reported by smartphones often suffers from significant errors due to magnetic interference. In order to reduce the orientation inaccuracy related to compass noise and bias, the particle-state may also include additional data, to estimate the compass original bias, and/or current drift. In some embodiments, each particle starts with an initial, optionally Gaussian, random value of compass bias. During the re-sampling process, each new particle is optionally assigned a compass-related state according to values of its nearest neighbors, with some minor noise. Each particle optionally uses the smartphone’s compass-measured data combined with its bias and drift for the move function.
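The compass-state handling described above might be sketched as follows; the bias spread, noise magnitude and linear drift model are all illustrative assumptions, not values from the original:

import random

def init_compass_bias(sigma_bias=0.2):
    # Each particle starts with a Gaussian random compass bias (radians).
    return random.gauss(0.0, sigma_bias)

def inherit_compass_bias(parent_bias, noise=0.02):
    # On re-sampling a new particle inherits the compass state of the
    # particle it was drawn from (its nearest neighbor), plus minor noise.
    return parent_bias + random.gauss(0.0, noise)

def particle_heading(compass_reading, bias, drift_rate, t):
    # Per-particle heading used by the move function: the smartphone's
    # compass measurement corrected by this particle's bias and drift.
    return compass_reading - bias - drift_rate * t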
G. Sparse Sensing
Practically all Particle Filter (PF) algorithms [TBF05] rely on an assumption of a continuous flow of incoming data from sensors. However, the sampling process cannot be guaranteed. Many localization problems include sparse sensing scenarios, i.e., scenarios where data from a sensor is unavailable or stale. In the context of the current Example, assuming a localization algorithm is vision based, blocking the camera can have very serious ramifications, since re-sampling only works well given correct weights, which are obtained from the real world via the sensors.
In some embodiments, when such a scenario is detected, the particle filter optionally reacts accordingly by adding random noisy movement to one or more of the particles, relative to a previously measured movement or pace. Such a reaction provides more scattered particles that potentially solve a momentary uncertainty.
H. Adjustable particle set
In a particle filter algorithm, increasing the number of particles potentially enables improved representation of the probabilistic space, which potentially leads to improved accuracy. On the other hand, the algorithm complexity is correlated with the number of particles. In some embodiments an adjustable particle filter is used, which adjusts the number of particles according to a size of an expected probabilistic space. Thus, in cases where there is a large region of possible solutions (e.g., the init-stage over a few floors) a large set of particles may be used, yet later on, when the particle filter tends to converge, the number is optionally reduced, enabling a better practical runtime and/or lower memory usage.
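One possible sketch of such an adjustment scales the particle count with the bounding-box area of the current particle cloud; all constants are illustrative assumptions:

def adjust_particle_count(particles, n_min=200, n_max=5000, area_per_particle=0.05):
    # Scale the particle count with the bounding-box area (m^2) of the cloud.
    xs = [p[0] for p in particles]
    ys = [p[1] for p in particles]
    area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    target = int(area / area_per_particle)
    return max(n_min, min(n_max, target))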
I. Kidnapped Robot
Kidnapped Robot is the name of a well-known problem [TFBD01], which in the context of localization, navigation and tracking refers to a situation where the algorithm completely loses track of the real-world location and the evaluation function performs badly. In some embodiments, in order to tackle the Kidnapped Robot problem, geolocation services (e.g., Google Maps Geolocation API) are optionally used, which provide input or constraints at “building level” accuracy. The geolocation services are used as an anchor to the truth, see Figure 14. In case our system reports an extremely different location from the one the used geolocation service reported, we reboot the system based on the geolocation service report, according to the service’s expected accuracy.
Reference is now made to Figure 14, which is a simplified illustration of a map showing a comparison of a location determined by Google’s fused location service and a location method used according to an example embodiment of the invention;
Figure 14 shows Google’s fused location service location marked as a blue dot 1402, and its accuracy is marked with a light blue circle 1402. The particle filter location determined by an example localization algorithm used with reference to the exemplary positioning embodiments is marked with a green dot 1406.
J. Indoor and outdoor classification
There are several methods to detect if a user’s phone is indoors or outdoors. In some embodiments, by using GNSS signal strength, an estimation is performed whether the phone has LOS (line of sight) to a navigation satellite, see [YMW16] for more details. In some embodiments, a method is based on the phone’s light sensor, as in daylight the outdoor light is usually stronger than the indoor light (even on cloudy days) while at night-time the opposite happens. Using such a method provides an additional sense evaluation, optionally for use as a constraint and/or particle weight consideration.
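A minimal sketch of such a light-sensor cue follows, assuming a lux reading and a coarse day/night flag are available; the thresholds are illustrative assumptions:

def indoor_outdoor_hint(lux, is_daytime, day_lux=2000.0, night_lux=5.0):
    # In daylight outdoor light is usually much stronger than indoor light;
    # at night-time the opposite typically happens.
    if is_daytime:
        return "outdoor" if lux > day_lux else "indoor"
    return "outdoor" if lux < night_lux else "indoor"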
Reference is now made to Figures 15A-15B, which are simplified illustrations of a potential advantage of using indoor/outdoor determination according to an example embodiment of the invention.
Figure 15A shows particle initialization marked on a map, without use of indoor / outdoor sensing data. Figure 15A shows particles 1502 scattered on a map, within a circle 1504 having a specific initial radius.
Figure 15B shows that an ability to classify between indoor and outdoor enables an example embodiment algorithm to dramatically decrease an overall area of possible solutions, potentially leading to a faster convergence time and/or better accuracy. Figure 15B shows particles 1512 scattered on a map, within a circle 1514 having a specific initial radius, but only indoors, with respect to a building 1516 also shown on the map.
IV. EXPERIMENTAL RESULTS
In this section an accuracy evaluation of an example embodiment indoor positioning method is presented. The results address the Microsoft Indoor Localization Competition IPSN 2018, in which a preliminary version of the example embodiment algorithm was implemented, providing 1-2 meter accuracy in a relatively complicated 3D scenario. It is noted that it is possible to improve the accuracy of the example embodiment algorithm to a sub-feet accuracy level using an improved implementation on smart-phones with a 3D Time Of Flight (TOF) camera.
A. Study case: Microsoft Indoor Localization Competition
Since 2014 an annual Indoor Localization Competition has been organized by Microsoft; see [LL17] for the 2014-2017 indoor localization evaluations. In the 2018 competition a preliminary version of an improved particle filter (named “STEPS”) was implemented by the inventors. The system was designed to improve existing indoor positioning services (such as Google’s indoor maps API) from an expected accuracy of 10-20 meters to an accuracy of 1-2 meters (3D). Overall the system performed as expected, allowing a rapid convergence of the particle filter, within 10-15 seconds (or 15-20 steps).
Reference is now made to Figure 16, which is a simplified illustration of tracking according to an example embodiment of the invention in comparison to a Lidar-based ground truth.
Figure 16 shows a 2D evaluation of a path 1602 (shown in green) as detected by the STEPS system with respect to the ground truth (GT) path 1604 (shown in blue). The evaluation was performed at specific points in time, and error lines 1606 are shown in red.
Reference is now made to Figure 17, which is a graph showing height value error over time according to an example embodiment of the invention;
The graph of Figure 17 has an X-axis 1704 of time in arbitrary units and a Y-axis 1702 of relative elevation in meters.
Figure 17 shows a z-convergence process, in which a floor position was accurately found after about 40 seconds.
Figure 17 includes a first line 1707 showing floor position, a second line 1708 showing the ground truth, and a third line 1706 showing the z-error, or difference between the estimated elevation shown by the first line 1707 and the ground truth shown by the second line 1708.
Figures 18 and 19, described below, show the different convergence nature of a particle filter regarding a 3D case (when the floor is unknown) and a 2D case (when the floor is given).
Reference is now made to Figure 18, which is a graph showing position error over time according to an example embodiment of the invention. The graph of Figure 18 has an X-axis 1804 of time in arbitrary units and a Y-axis 1802 of relative position in meters.
Figure 18 includes a first line 1806 showing X-Y error and a second line 1808 showing Z error. Figure 18 shows a test case lasting about 120 seconds. The elevation, or Z component of the location, converged from a height error of about 4 meters to a sub-meter error. The horizontal error also reduced, or converged, over time.
It is thought that upon converging on a correct elevation, the horizontal position converged as well, and that reducing elevation error potentially assists in horizontal convergence and/or shortens horizontal convergence time.
Reference is now made to Figure 19, which is a graph showing position error over time according to an example embodiment of the invention;
The graph of Figure 19 has an X-axis 1904 of time in arbitrary units and a Y-axis 1902 of relative position in meters.
Figure 19 includes a first line 1906 showing Z error and a second line 1908 showing X-Y error.
Figure 19 shows Particle Filter 2D convergence in a case where the correct floor is known. A known value for the Z axis, or elevation, is therefore used. The second line 1908 shows that the X-Y error reduces from an error of 4.5 meters to about 1.3 meters within 10 seconds (about 15 steps). During the rest of the test the horizontal error is about 1 meter, while the vertical error is (on average) below half a meter.
B. Sub-feet accuracy - using phones with a 3D camera
An example embodiment implementation of the localization algorithm in phones with a TOF (Time Of Flight) camera (e.g., Google’s Tango phones) differs from the augmented reality based single-camera phones (e.g., Google’s ARCore), potentially providing increased accuracy and/or faster convergence.
A 3D point cloud potentially enables implementing a high-accuracy localization loop using range-error analysis. TOF cameras typically have a relatively narrow error range, usually smaller than 5 cm (see [FPB+16]). An example embodiment implementation shows an improvement in expected accuracy: such an implementation enables improving the expected accuracy down to 10-20 centimeters. Moreover, using a TOF camera, a 3D mapping of the region of interest can be performed and the localization particle filter algorithm can work on the 3D map.
It is noted that for many practical applications a 1-2 meter accuracy is acceptable. For such a level of accuracy the use of a single-camera smartphone with ARCore is sufficient.
C. Study case: IPIN 2018
During September 2018 an indoor positioning competition was held in a large shopping mall in Nantes, France, as part of the IPIN 2018 Conference on Indoor Positioning [IPI18]. The on-site competition had two tracks, with and without a camera. The inventors took part in the “Camera based Positioning” track (Track 1). A preliminary version of an example embodiment of the algorithm was implemented on a Tango based Android phone. The initial starting position was given to the competitors. The evaluation was conducted over about 70 known waypoints (each with a known 3D global position). The path was conducted on 3 floors and was over 1 km long. Along the path the example embodiment algorithm used a few momentary GNSS positions (via the mall sky-lights) for a global (inaccurate) position. The particle filter localization algorithm was able to maintain a relative 4-12 meter accuracy (7.2 meters on average), see Figures 20, 21. The overall evaluation of the example embodiment algorithm earned first place in the competition. Another example embodiment, the GoIn algorithm [VL18], got second place.
Reference is now made to Figure 20, which is a graph showing position error over a path according to an example embodiment of the invention;
The graph of Figure 20 has an X-axis 2004 of relative position in meters and a Y-axis 2002 of relative position in meters.
Figure 20 includes green dots 2006 showing ground truth, a blue line 2008 showing the position calculated by the example embodiment, and red lines showing the error between them.
Reference is now made to Figure 21, which is a graph showing position error according to an example embodiment of the invention;
The graph of Figure 21 has an X-axis 2004 of consecutive waypoints and a Y-axis 2002 of error in meters.
Figure 21 shows the actual measured accuracy (error) as tested in the IPIN 2018 localization competition for the “STEPS” group: the error (in meters) for each of the about 70 waypoints along the evaluation process. At the last few points a large error was reported, due to a significant compass drift most probably generated by a working elevator which was part of the evaluation path.
EXEMPLARY PARKING EMBODIMENTS
The exemplary parking embodiments below describe a novel approach to vehicle navigation, even in GNSS-denied environments. The approach fuses Dead Reckoning methods obtained from a smartphone and/or a vehicle’s on-board computer, and computes an accurate 2D, 2.5D or 3D position of the vehicle. An example embodiment algorithm is based on an advanced version of a particle filter algorithm which uses road-based events such as speed bumps, turns, altitude change and RF signals, optionally in addition to other sensors described herein, in order to estimate the vehicle’s location, potentially in real-time. An aspect of some example embodiments includes mapping the environment.
The present exemplary parking embodiments describe an underground parking lot, but the approach can be applied to other scenarios such as, by way of some non-limiting examples, roads, and more specifically but not exclusively tunnel roads.
In the present application, the terms smartphone and phone are used interchangeably with the terms computer such as a vehicle on-board computer, tablet, and similar computing devices.
I. INTRODUCTION
Global Navigation Satellite Systems (GNSS) navigation is used everywhere in the vehicle industry. Coupled with Map Matching (MM) and Dead Reckoning (DR) techniques, it apparently enables a fairly accurate vehicle localization on top of mapped roads [1, 8]. However, GNSS-denied environments such as indoors, tunnel roads and parking lots create a real challenge for those navigation algorithms.
Vision based navigation is yet another, less common approach, due to its complexity. However, light sources can serve as landmarks for IPS as was apparently suggested by [3, 4], who developed a vision-based indoor localization system for mobile robots, utilizing ceiling lamps as landmarks.
A lion’s share of research in this field is for pedestrian navigation, mainly for shopping malls, since the commercial incentive is clear - Location Based Services (LBS) [2]. Some vehicle indoor navigation research is apparently devoted to Unmanned Aerial Vehicle (UAV) navigation [7, 5].
The present description describes automobile 2.5D navigation in GNSS-denied environments. However, additional devices can benefit from such navigation.
In the exemplary parking embodiments, a complementary sensor-based mechanism for accurate automobile navigation in underground parking-lots and freeway tunnels is described. The system and methods described harness, amongst other inputs, road-based events (e.g., crossing a speed-bump) as detected by a phone’s sensors in order to estimate the vehicle’s ego-location. The novelty of the system and methods described includes characterization and detection of the road-based events. The implementation offers a GNSS-level of accuracy in GNSS-denied environments, a feature unavailable today.
II. A NAIVE NAVIGATION ALGORITHM
A common wisdom states that as technology progresses forward, its implementation should be simplified. Thus, a desired navigation scenario is as follows: a driver enters a vehicle with her phone and starts driving. The phone should always be able to present an exact location on roads, in tunnels and in parking lots. The naive navigation algorithm may rely on at least two information sources: the mobile device’s sensors and the vehicle itself. We start with the latter.
Modern vehicle on-board computers can produce valuable real-time information regarding the vehicle’s state; in particular, the exact absolute speed, also known as the Speed Over Ground (SoG) value, and the wheel orientation.
A Course Over Ground (CoG) value is reported by a GNSS receiver and the phone’s absolute orientation is calculated from the IMU sensor. The phone orientation relative to the vehicle can be precisely computed. Once this figure is obtained, the vehicle’s orientation can be computed by the phone.
Given a location of an entrance point to a parking lot and given the vehicle’s own CoG and SoG values, the vehicle’s trajectory can be coarsely computed. This is called the Dead Reckoning (DR) approach.
Alas, DR itself produces unreliable results, mainly due to inherent noise/drift within the sensors and/or the environment.
In some embodiments of the present approach, one or more of the following factors are optionally used - a Region Of Interest (ROI) map and data fusion of road-based events.
The present description continues by assuming that the map is given. Elsewhere herein a description is provided of map obtainment, or mapping aspect of the exemplary parking embodiments.
In some embodiments the data fusion is performed by a probabilistic particle filter algorithm. We start by describing the road-based events and their detection:
A. Road-Based events
Detailed below are several road-based events that help the particle filter algorithm to converge.
1) Speed bumps: Nearly all parking lots have speed bumps installed (see the description of Figure 23 below). The speed bumps provide at least two useful characteristics: first, they are relatively easy to map (only a few bumps on each floor) and, second, they can be detected by sensing the accelerometer value. The accelerometer measurements may not distinguish between two bumps, so a ROI map is used to compare the location of a bump with a sensory (acceleration) indication of a bump. In some embodiments the CoG and SoG values are optionally also used in probabilistic methods like the particle filter method, to locate the right speed bump.
Figure 22 demonstrates an accelerometer graph of a vehicle passing over a speed bump.
Reference is now made to Figure 22, which is a graph showing linear acceleration as captured by a smartphone positioned in a car according to an example embodiment of the invention.
The graph of Figure 22 has an X-axis 2204 of time in arbitrary units and a Y-axis 2202 of linear acceleration.
Figure 22 includes a first red line 2206 showing X-axis acceleration, a second green line 2207 showing Y-axis acceleration, and a third blue line 2208 showing Z-axis acceleration.
Figure 22 shows linear acceleration in three axes as captured by a smartphone positioned in a car which passed over a speed-bump. It is noted that detecting a speed bump in a vicinity where a speed bump is expected may optionally be done using less acceleration measurement, for example in only one or two dimensions, along one or two axes.
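As a rough illustration of detecting the bump signature of Figure 22, the following sketch flags a bump when the peak-to-peak vertical acceleration inside a short sliding window exceeds a threshold; the window length and threshold value are illustrative assumptions:

def detect_speed_bumps(z_accel, window=15, threshold=3.0):
    # Flag a bump when the peak-to-peak vertical acceleration (m/s^2)
    # inside a sliding window exceeds the threshold.
    hits = []
    for i in range(len(z_accel) - window):
        chunk = z_accel[i:i + window]
        if max(chunk) - min(chunk) > threshold:
            hits.append(i)  # sample index where the bump signature starts
    return hits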
2) An event of losing or retrieving a GNSS signal: A feature of indoor navigation is a lack of GNSS signals. From a different perspective, one can get input and information from the event of losing or retrieving the GNSS signal. Losing a GNSS signal is also called a GNSS “lost fix” event. Entering a parking-lot is usually accompanied by a sharp GNSS signal degradation or loss. The parking-lot entrance position can be deduced from the GNSS position just before the signal degradation event. A similar method can also be used when exiting the parking lot. The “fix retrieval” event potentially indicates the location of the exit.
3) Height Changes: A useful and potentially accurate differential sensor is the barometer, also available in many contemporary smartphones. By differential, one means that an absolute height need not be extracted from the barometer. However, a floor shift is easy to spot as a change in atmospheric pressure.
In some embodiments, using an entrance point known from the GNSS position, the floor of a parking lot can be determined. This is also true for determining levels halfway between floors.
4) Road Turns: A road turn can be detected from the phone’s gyroscope sensor. Many parking-lots have spiral turns, mostly between floors. Those spirals are unmistakable for the gyroscope sensor.
5) Leaving/Entering the Vehicle: A leaving-the-vehicle event implies the vehicle is at a parking spot. This event can be deduced both by the phone and by the vehicle from loss of Bluetooth connectivity between them.
6) Camera Based Events: a road-facing camera, even a low-resolution one, can produce valuable information regarding the position and orientation of visual landmarks such as signs, pillars, colored pillars, coded pillars and similar landmarks. In some embodiments even two consecutive similar-looking pillars may be distinguished, for example by pillar numbers, colors and similar markings often used in parking lots. The position of the pillars is optionally determined, optionally in global coordinates, and a vehicle’s position can be computed with a high accuracy level, potentially limited only by the map accuracy.
Reference is now made to Figure 23, which is a color image of a contemporary parking-lot with speed bumps, color column markings, and location codes.
Figure 23 is a picture taken at a typical parking lot in Israel.
Figure 23 shows a speed bump 2302; a first column 2304 painted green marking a“green” portion of the parking lot; a second column 2306 painted red marking a“red” portion of the parking lot; markings 2308 on the columns - including unique identification codes on each column; pavement markings 2310; lights 2314 and signs 2311.
Figure 23 also shows green rectangles 2312 displaying where a vision system detected the speed bumps 2302.
As explained above, in some embodiments the road-based events are fused using a particle filter to get an accurate position.
B. Particle Filter
The Particle Filter (PF) is a member of the non-parametric multi-modal Bayesian filter family. A PF estimates the posterior by a finite number of parameters, also called particles. The particles are represented as X_t = {x_t^(1), x_t^(2), ..., x_t^(N)}, where N is the number of particles.
Each particle is represented by a belief function bel(x_t). This belief function serves as a weight (importance) of each particle, so that the weight w_t^(i) of a particle x_t^(i) is proportional to the likelihood of that specific particle [6, 9]:
bel(x_t) = p(x_t | z_{1:t}, u_{1:t})
where z_{1:t} and u_{1:t} are the sense and action functions respectively.
The action function is periodic and progresses the particles at each time-stamp. Assuming the vehicle SoG and CoG are V and α respectively, the action function can be computed as:
x(t) = x(t-1) + Δt · V · cos α
y(t) = y(t-1) + Δt · V · sin α
The sense function, z_{1:t}, is not necessarily periodic: road-based events are discrete events that change the probability space. Take a speed-bump event for example. As demonstrated in Figure 22, the event can be detected with high certainty. This means that the vehicle is located in the vicinity of one of the speed bumps. In other words, all the other guesses (particles) are optionally eliminated, and/or their likelihood is diminished. Usually the detection is not absolute and may be only probabilistic, since there may be two or more speed-bump candidates. In some embodiments the algorithm does not eliminate all the non-speed-bump-location particles and preserves a small portion of such particles. This approach can also solve the “kidnapped robot” problem.
III. MAPPING A PARKING LOT
An aspect of some embodiments includes mapping a parking lot. This section addresses a map obtainment problem; how can a parking-lot map be constructed? How can a multi-floor parking-lot map be constructed?
Some parking-lots provide a detailed map along with an exact scale. Such a map is shown in Figure 24.
Reference is now made to Figure 24, which is an example parking lot map. The map of Figure 24 shows detail down to a level of an individual parking stall, and may be geometrically accurate to at least that level, that is, ~ 1-2 meters.
In some embodiments, where such map is unavailable, a mapping process is optionally applied.
In some embodiments, a mapping process is optionally used to update a parking lot map (or another environment map). By way of some non-limiting examples, speed bumps in a parking lot may be moved around or added, and signs may be put up, taken down, or changed, and so on. The mapping process optionally updates an electronic map.
It is desirable that the mapping process should be as simple and efficient as possible.
One can think of the mapping process as a Simultaneous Localization And Mapping (SLAM [6]) algorithm where a portion of the users (drivers) also function as mappers.
In some embodiments a trajectory of a vehicle is optionally calculated from the SoG and CoG values and road-based events are optionally marked on the trajectory, or added to the map.
Given a set of such trajectories, a refined, more accurate trajectory can optionally be computed, using the detected road-based events as geographically fixed markers, optionally for fine alignment between various trajectories.
A. Basic Mapping
Motivated by the “Keep It Simple” requirement of the mapping stage, we have used the following COTS devices to map: an Android mobile phone (which can be replaced by a car on-board Android console) and the car OBDII protocol, allowing the mapping algorithm to extract an approximated speed. Using logged data the following GEO-based information was created:
(i) a 3D path, constructed from the car orientation (phone), speed (OBDII) and height (the phone barometer). Using a loop-closing method the path was corrected to reduce drift.
(ii) Road-based events, including:
- Speed-bumps (see the green rectangles 2312 in Figure 23).
- Vibrations and height change.
- Road turns (gyro).
- Loss and/or renewal of GNSS signal.
- Visual markers such as signs or lights (see Figure 23).
(iii) A 2.5D Parking-Map: The 3D path was used in order to construct a separate map for each floor. The road-based events were positioned in a corresponding floor-map.
B. Advanced Mapping
An advanced mapping algorithm optionally uses a computer vision algorithm. Recently, Google published its Vision API. The API apparently enables a user to get an image description, optionally in real time, using a pre-trained Google Artificial Neural Network (ANN). For example, an“Exit” sign can be detected utilizing this framework.
IV. PRELIMINARY RESULTS
This section describes results of several field experiments which were conducted in order to evaluate example embodiments. One experiment included testing an example embodiment algorithm in a large scale (underground) parking-lot. The tested parking-lot included three main floors with approximately 3,000 parking spots, three main entrances and a relatively complicated subdivision into about a dozen regions, see Figure 23. Moreover, the tested parking lot has a complex shape and includes sub-floors, intermediate floors with a 1 meter height difference from main floors.
The evaluation process started with a 1 hour preliminary mapping stage. In the preliminary stage we performed a 20 minute drive in the parking-lot while logging the car speed (via the OBDII protocol), orientation and sensory data available on a standard mobile phone (including barometric pressure, gyro, accelerometer, magnetometer). The mobile phone was positioned in a phone holder attached to the car, and during the drive a low resolution video (QVGA) was captured at a rate of 30 fps. In practice, a full mapping stage lasted about three hours. We used the Parking-Map for the particle filter, enabling the system to report a 3D car position in real-time with a horizontal accuracy of 3-6 meters and a sub-meter vertical error (floor detection was 100% accurate). The mapping results can be seen in Figures 25A-25C.
Reference is now made to Figures 25A-25C, which are screen capture illustrations of a mapping tool implemented on a smartphone according to an example embodiment of the invention.
Figures 25A-25C show an example embodiment of a display 2502 in a mapping application. The display 2502 includes a top portion where paths 2504a 2504b 2504c (X-Y trajectories) are displayed and optional images of a compass 2506 showing compass directions; a middle portion where a graph shows paths 2508a 2508b 2508c showing elevation during the travel (Z-trajectory); and a bottom portion displaying application controls 2510.
Figures 25A-25C show a mapping application based on an Android phone with the ARCore SDK. The mapping application includes a vision-based mapping tool which potentially enables a sub-1% error in mapping, even without performing corrections based on minimizing errors whenever a path forms a closed loop.
In some embodiments, localization errors are further minimized, by distributing errors when a path closes a loop. The example embodiment tool enables to perform a rapid mapping by simply driving through the parking lot.
In some embodiments, a dataset which includes the path is sent to a central server, which, by receiving more than one such dataset for a specific environment, optionally produces an improved map by averaging the data.
The preliminary implementation includes a large set of parameters defining the“sense- weight” for the road-base events (e.g., what values defines a speed-bump?). The application appears to be robust, simple to use, and suitable for a wide range of scenarios.
V. GENERALIZATION
As explained above, nowadays most parking lots both color and number different areas, thus supporting a determination of a location of a vehicle (Figure 23).
The coloring and numbering may appear to obviate the need for the location algorithm described here. Nevertheless, even colored and numbered parking lots create confusion. Moreover, the very same method can be generalized to other scenarios.
For example, the use of light sources as landmarks can be adapted to freeway tunnels. While outdoor freeway light sources are usually relevant only during night time, tunnel lamps are lit during the day as well. Reference is now made to Figure 26, which is a photograph of a tunnel including visual landmarks according to an example embodiment of the invention.
Figure 26 shows a freeway tunnel, including light sources 2602 and signs 2604a 2604b.
One of the signs 2604b displays a code number. The coded sign 2604b is shown in the photograph marked by a red circle 2606.
In some embodiments the navigation method described herein can be used as an aided lane-detection algorithm, which operates both where GNSS signals are available and where they are unavailable, and transitions between the GNSS-available and GNSS-unavailable areas. Many highways have light poles along the sides, sometimes on both sides. In some embodiments a center-of-mass of each light source is optionally detected using a simple brightness threshold as shown in Figures 27A and 27B.
Reference is now made to Figures 27A and 27B, which are images of a highway according to an example embodiment of the invention.
Figure 27B shows a color photograph of a road during darkness, when road lights 2702 are lit.
Figure 27A shows the photograph of Figure 27B after applying a brightness threshold operation, in black and white. Figure 27A shows the road lights 2702 of Figure 27B as white spots 2704.
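A rough sketch of this brightness-threshold detection is given below: binarize a grayscale frame, then take the center of mass of each bright blob via a naive flood fill. The threshold value and the numpy-array input are illustrative assumptions:

import numpy as np

def light_centroids(gray, threshold=230):
    # Binarize the frame, then return the center of mass of each bright blob.
    mask = gray > threshold
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    centroids = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                stack, pixels = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:  # naive 4-connected flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cx, cy))  # one lamp center (pixel coords)
    return centroids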
The terms“comprising”,“including”,“having” and their conjugates mean“including but not limited to”.
The term“consisting of’ is intended to mean“including and limited to”.
The term“consisting essentially of’ means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
As used herein, the singular form“a”,“an” and“the” include plural references unless the context clearly dictates otherwise. For example, the term“a unit” or“at least one unit” may include a plurality of units, including combinations thereof.
The words “example” and “exemplary” are used herein to mean “serving as an example, instance or illustration”. Any embodiment described as an “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word “optionally” is used herein to mean “is provided in some embodiments and not provided in other embodiments”. Any particular embodiment of the invention may include a plurality of “optional” features unless such features conflict.
Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible sub-ranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed sub-ranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein (for example “10-15”, “10 to 15”, or any pair of numbers linked by these or another such range indication), it is meant to include any number (fractional or integral) within the indicated range limits, including the range limits, unless the context clearly dictates otherwise. The phrases “range/ranging/ranges between” a first indicated number and a second indicated number and “range/ranging/ranges from” a first indicated number “to”, “up to”, “until” or “through” (or another such range-indicating term) a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numbers therebetween.
Unless otherwise indicated, numbers used herein and any number ranges based thereon are approximations within the accuracy of reasonable measurement and rounding errors as understood by persons skilled in the art.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
In addition, any priority document(s) of this application is/are hereby incorporated herein by reference in its/their entirety.

Claims

WHAT IS CLAIMED IS:
1. A localization method comprising:
obtaining a map of a Region Of Interest (ROI);
obtaining a first input from a first sensor and a second input from a second sensor;
providing the first input and the second input to a processor;
using the processor to estimate a location based on the first input and the second input, wherein
the processor uses a particle filter method to estimate the location.
2. The method of claim 1 wherein the first input comprises images from a camera.
3. The method of any one of claims 1-2 wherein the second input comprises data which is unavailable in some area in the ROI.
4. The method of any one of claims 1-3 wherein the particle filter method is a modified particle filter method which associates a likelihood with a candidate location based upon the first input and the second input.
5. The method of any one of claims 1 -4 wherein the particle filter method is a modified particle filter method further comprising performing soft-init.
6. The method of claim 5, wherein performing soft-init is used to solve a state called “kidnapped-robot”.
7. The method of any one of claims 5-6 wherein the soft-init comprises adding a number of particles, the number in a range of 1-10% of a total number of particles, when the particle filter method performs re-sampling.
8. The method of claim 7 wherein the adding a number of particles comprises adding particles associated with candidate locations having a probability above a threshold probability, the probability based on a consideration selected from a group consisting of:
the candidate location can image a light source in the ROI;
the candidate location can image a sign in the ROI; and
the candidate location is in an elevator and an altitude change has been detected.
9. The method of any one of claims 1-5 wherein the particle filter method is a modified particle filter method further comprising removing a fraction of the particles at each re-sample, the fraction in a range of 1-25% of a total number of the particles.
10. The method of any one of claims 1-9 wherein the particle filter method is a modified particle filter method further comprising using elevation change data as a particle filter map constraint.
11. The method of any one of claims 1-10 wherein the particle filter method is a modified particle filter method further comprising using one or more distinct environmental features in a particle filter map, the distinct features selected from a group consisting of:
a light;
a ceiling light;
a sign.
12. The method of any one of claims 1-11 wherein the particle filter method is a modified particle filter method further comprising using both angular bias and angular drift as part of a particle state.
13. The method of any one of claims 1-12 wherein the particle filter method is a modified particle filter method further comprising adapting a number of initial particles to a navigation scenario.
14. The method of any one of claims 1-13 wherein the particle filter method is a modified particle filter method further comprising using pedometry based on one or more data inputs selected from a group consisting of:
optical flow;
distance-to-object ranging; and
device orientation.
15. The method of any one of the above claims and further comprising using a map of a Region Of Interest for limiting locations of candidate locations.
16. The method of any one of the above claims wherein one of the first input and the second input comprises a light level input.
17. The method of any one of the above claims wherein at least one of the first input and the second input comprises a sensor in a smart phone or tablet.
18. The method of any one of the above claims wherein at least one of the first input and the second input comprises a sensor installed in a car.
19. The method of any one of the above claims and further comprising using input from at least one more sensor in the particle filter method.
20. The method of any one of the above claims wherein at least one of the first input and the second input comprises a sensor selected from a group consisting of:
a GPS receiver;
a GNSS receiver;
a WiFi receiver;
a Bluetooth receiver;
a Bluetooth Low Energy (BLE) receiver;
a 3G receiver;
a 4G receiver;
a 5G receiver;
an acceleration sensor;
a pedometer;
an odometer;
an attitude sensor;
a MEMS sensor;
a magnetometer;
a pressure sensor;
a light sensor;
an audio sensor;
a microphone;
a camera;
a multi-lens camera;
a Time-Of-Flight (TOF) camera;
a range-finder sensor;
an ultrasonic range-finder;
a Lidar;
an RFID sensor; and
a NFC sensor.
21. The method of any one of the above claims wherein said particle filter method adapts a weight of a candidate location based upon associating a change in light level to proximity to a door of a building or proximity to a window.
22. The method of any one of the above claims wherein said particle filter method adapts a weight of a candidate location based upon a map of WiFi reception strength.
23. The method of any one of the above claims wherein said particle filter method adapts a weight of a candidate location based upon associating a change in GPS signal reception level to proximity to a door of a building or proximity to a window.
24. The method of any one of the above claims wherein said particle filter method adapts a weight of a candidate location based upon associating a vertical acceleration with an elevator or an escalator or stairs.
25. The method of any one of the above claims wherein said particle filter method adapts a weight of a candidate location based upon associating a change in pressure with an elevator or an escalator or stairs.
26. The method of any one of the above claims wherein said particle filter method adapts a weight of a candidate location based upon associating a change in magnetic field with a magnetometer placed in proximity to a door of a building.
27. The method of any one of the above claims wherein the particle filter method comprises: producing initial candidate locations; and
iteratively improving accuracy of the candidate locations; and wherein at least some of the candidate locations are cancelled during at least one iteration.
28. The method of any one of the above claims, used for navigation in an area where GNSS signals are not received.
29. The method of any one of the above claims, used for navigation in a car park.
30. The method of any one of claims 1-28, used for navigation in a tunnel.
31. A method of mapping a Region Of Interest, the method comprising:
obtaining first sensor data from a first sensor and second sensor data from a second sensor;
providing the first sensor data and the second sensor data to a processor;
using the processor to estimate a location based on the first sensor data and the second sensor data; and
sending the location to a mapping application.
32. The method of claim 31, and further comprising sending at least one of the first sensor data and the second sensor data.
33. The method of any one of claims 31-32, and further comprising using the mapping application to display the location on a map.
34. The method of any one of claims 31-33, and further comprising using the mapping application to display at least one of the first sensor data and the second sensor data.
35. The method of any one of claims 31-34, and further comprising updating the map based on receiving the location.
36. The method of any one of claims 32-34, and further comprising updating the map based on receiving at least one of the first sensor data and the second sensor data.
37. The method of any one of claims 31-36, and further comprising transmitting the location to a map server.
38. The method of any one of claims 31-37, and further comprising transmitting at least one of the first sensor data and the second sensor data to a map server.
39. A localization method comprising:
a) obtaining a map of a Region Of Interest (ROI);
b) obtaining a first input from a first sensor;
c) providing the first input to a processor;
d) using the processor to estimate a location based on the first input;
e) moving from the location and repeating (b)-(d);
and further comprising:
f) obtaining a second input from a second sensor;
g) providing the second input to the processor;
h) using the processor to estimate a location based on the second input in addition to the first input,
thereby increasing accuracy of the estimating the location.
40. The method of claim 39, wherein the processor uses a particle filter method to estimate the location.
41. The method of any one of claims 39-40, wherein the second sensor provides input intermittently.
42. The method of any one of claims 39-41, wherein the second sensor provides input only in specific areas of the ROI.
PCT/IL2019/050718 2018-06-28 2019-06-28 Localization techniques WO2020003319A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201862690955P 2018-06-28 2018-06-28
US201862690953P 2018-06-28 2018-06-28
US201862690958P 2018-06-28 2018-06-28
US62/690,953 2018-06-28
US62/690,955 2018-06-28
US62/690,958 2018-06-28

Publications (1)

Publication Number Publication Date
WO2020003319A1 true WO2020003319A1 (en) 2020-01-02

Family

ID=68984715

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2019/050718 WO2020003319A1 (en) 2018-06-28 2019-06-28 Localization techniques

Country Status (1)

Country Link
WO (1) WO2020003319A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112762928A (en) * 2020-12-23 2021-05-07 重庆邮电大学 ODOM and DM landmark combined mobile robot containing laser SLAM and navigation method
DE102021116510A1 (en) 2021-06-25 2022-12-29 Cariad Se Method and computing device for providing a route network map of a multi-storey car park
US20230100851A1 (en) * 2021-09-28 2023-03-30 Here Global B.V. Method, apparatus, and system for mapping a parking facility without location sensor data
CN115965682A (en) * 2022-12-16 2023-04-14 镁佳(北京)科技有限公司 Method and device for determining passable area of vehicle and computer equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130238236A1 (en) * 2012-03-12 2013-09-12 Google Inc. Location correction
WO2014020547A1 (en) * 2012-07-31 2014-02-06 Indoorgo Navigation Systems Ltd. Navigation method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130238236A1 (en) * 2012-03-12 2013-09-12 Google Inc. Location correction
WO2014020547A1 (en) * 2012-07-31 2014-02-06 Indoorgo Navigation Systems Ltd. Navigation method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BOAZ BEN MOSHE ET AL.: "Advanced particle filter methods.", HEURISTICS AND HYPER-HEURISTICS-PRINCIPLES AND APPLICATIONS, 31 December 2017 (2017-12-31), XP055648201, DOI: 10.5772/intechopen.69236 *
BOAZ BEN MOSHE ET AL.: "GoIn-an accurate indoor navigation framework for mobile devices", MICROSOFT INDOOR LOCALIZATION COMPETITION, TECH. REP., 31 December 2017 (2017-12-31) *
LANDA, VLAD ET AL.: "GoIn-An Accurate 3D InDoor Navigation Framework for Mobile Devices", 27 September 2018 (2018-09-27), pages 1 - 8, XP033447082, DOI: 10.1109/IPIN.2018.8533810 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112762928A (en) * 2020-12-23 2021-05-07 重庆邮电大学 ODOM and DM landmark combined mobile robot containing laser SLAM and navigation method
DE102021116510A1 (en) 2021-06-25 2022-12-29 Cariad Se Method and computing device for providing a route network map of a multi-storey car park
DE102021116510B4 (en) 2021-06-25 2024-05-08 Cariad Se Method and computing device for providing a route network map of a parking garage
US20230100851A1 (en) * 2021-09-28 2023-03-30 Here Global B.V. Method, apparatus, and system for mapping a parking facility without location sensor data
CN115965682A (en) * 2022-12-16 2023-04-14 镁佳(北京)科技有限公司 Method and device for determining passable area of vehicle and computer equipment
CN115965682B (en) * 2022-12-16 2023-09-01 镁佳(北京)科技有限公司 Vehicle passable area determining method and device and computer equipment

Similar Documents

Publication Publication Date Title
US11959771B2 (en) Creation and use of enhanced maps
US10845457B2 (en) Drone localization
KR102399591B1 (en) System for determining the location of entrances and areas of interest
WO2020003319A1 (en) Localization techniques
US10281279B2 (en) Method and system for global shape matching a trajectory
US10126134B2 (en) Method and system for estimating uncertainty for offline map information aided enhanced portable navigation
US9146113B1 (en) System and method for localizing a trackee at a location and mapping the location using transitions
CN107110651B (en) Method and apparatus for enhanced portable navigation using map information assistance
US10621861B2 (en) Method and system for creating a lane-accurate occupancy grid map for lanes
US9448072B2 (en) System and method for locating, tracking, and/or monitoring the status of personnel and/or assets both indoors and outdoors
US10444019B2 (en) Generating map data
CN105378431A (en) Indoor location-finding using magnetic field anomalies
EP3848674B1 (en) Location signaling with respect to an autonomous vehicle and a rider
AU2014277724B2 (en) Locating, tracking, and/or monitoring personnel and/or assets both indoors and outdoors
CN117203492A (en) Map matching track
JP6810723B2 (en) Information processing equipment, information processing methods, and programs
Landa et al. GoIn-An Accurate 3D InDoor Navigation Framework for Mobile Devices
Davidson Algorithms for autonomous personal navigation systems
JP2020085783A (en) Pedestrian-purpose positioning device, pedestrian-purpose positioning system, and pedestrian-purpose positioning method
Vourgidis et al. A Prediction-Communication P2V Framework for Enhancing Vulnerable Road Users’ Safety
Wang Indoor navigation for passengers in underground transit stations using smartphones

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19826352

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19826352

Country of ref document: EP

Kind code of ref document: A1