CN111174780A - Road inertial navigation positioning system for blind people - Google Patents

Road inertial navigation positioning system for blind people

Info

Publication number
CN111174780A
Authority
CN
China
Prior art keywords
anchor point
module
algorithm
user
value
Prior art date
Legal status
Granted
Application number
CN201911404433.2A
Other languages
Chinese (zh)
Other versions
CN111174780B (en)
Inventor
许志鑫
刘儿兀
王睿
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University
Priority to CN201911404433.2A
Publication of CN111174780A
Application granted
Publication of CN111174780B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/04 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by terrestrial means
    • G01C21/08 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by terrestrial means involving use of the magnetic field of the earth
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0294 Trajectory determination or predictive filtering, e.g. target tracking or Kalman filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Environmental & Geological Engineering (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geology (AREA)
  • Navigation (AREA)

Abstract

The present invention relates to the field of location-based services. A road inertial navigation positioning system for blind people comprises a road area paved with anchor points, an anchor point position database, and the user's wearable device, which is an embedded device in real-time communication connection with the anchor point position database. The wearable device comprises an acquisition module, an inertial positioning module, a target detection network module, a fusion module and a feedback system. Anchor points are manually placed in the region of interest, and the anchor point position database records the absolute position and ID of each road anchor point image. The system is based on inertial navigation technology and, combined with target detection technology, recognizes specific markers so as to eliminate the accumulated error of the inertial sensors and provide a high-precision positioning service to the user; it recognizes the blind road (tactile paving) and gives warning feedback when the user gradually strays from it while walking; and it recognizes obstacles on the blind road, giving an early warning when an obstacle appears so as to guarantee the user's walking safety.

Description

Road inertial navigation positioning system for blind people
Technical Field
The present invention relates to the field of location-based services.
Background
China has the largest visually impaired population in the world, and difficulty in travelling is the foremost problem this group faces. Without an accompanying person, a visually impaired person can hardly travel alone.
At present, positioning methods based on wearable devices are applied mainly outdoors, for example in positioning alarms and children's smart bands, and rely chiefly on GNSS satellite signals.
Wearable devices that can help visually impaired people travel are still under development, and their development faces many problems, such as the cost of UWB systems, the stability of Bluetooth technology in complex spaces, and system maintenance.
Specifically, the accumulated error in positioning with an inertial sensor is inherent and unavoidable. Correcting the device attitude on the basis of zero-velocity detection is one earlier solution, but zero-velocity intervals during pedestrian motion are short and hard to detect, and detecting them reliably remains an open problem.
Wireless signals likewise bring many problems, such as stability, equipment deployment, system maintenance, interference rejection, energy consumption and privacy.
In addition, because the target group is the visually impaired, the algorithm must have low energy consumption, strong real-time performance and high reliability.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and discloses a road inertial navigation positioning system for blind people. The system is based on inertial navigation technology and, combined with target detection technology, recognizes specific markers so as to eliminate the accumulated error of the inertial sensors and provide a high-precision positioning service to the user; it recognizes the blind road and gives warning feedback when the user gradually strays from it while walking; and it recognizes obstacles on the blind road, giving an early warning when an obstacle appears so as to guarantee the user's walking safety.
The technical scheme provided by the invention is as follows:
A road inertial navigation positioning system for blind people, characterized by comprising a road area paved with anchor points, an anchor point position database, and the user's wearable device, which is an embedded device in real-time communication connection with the anchor point position database;
the wearable device comprises an acquisition module, an inertial positioning module, a target detection network module, a fusion module and a feedback system;
anchor points are manually set in the region of interest, and the anchor point position database records the absolute position and ID of each road anchor point image; that is, anchor point images whose true positions are known are arranged in the region of interest;
the acquisition module comprises a camera, an acceleration sensor, a magnetic field sensor and a gyroscope sensor, wherein the camera acquires road images in front of a user;
the inertial positioning module comprises a step detection module, a step length estimation module and a heading estimation module, wherein:
the step detection module detects whether the user takes a step by monitoring the changes of the acceleration magnitude output by the acceleration sensor, and the user's current position is updated at each detected step;
the step length estimation module estimates the step length at different moments; the step length estimation is given by formula (1), where β is a constant, the difference under the root sign is evaluated over the duration of the current step, and g_a_max and g_a_min are respectively the maximum and minimum of the vertical acceleration component in the earth coordinate system:
L = β·(g_a_max - g_a_min)^(1/4)    (1)
The maximum and minimum of the vertical acceleration component in the earth coordinate system are obtained as follows: let the acceleration measured by the acceleration sensor of the wearable device be x, a vector in the device's own coordinate system. This acceleration vector is mapped into the earth coordinate system through the transformation between the device coordinate system and the earth coordinate system; the maximum and minimum of its component along the direction of gravity during the time in which the step occurs can then be computed, and are denoted g_a_max and g_a_min respectively.
The heading estimation module reads the direction of the magnetic field from the magnetic field sensor and further determines the user's walking direction with the help of the gyroscope sensor; the two are fused using a fusion filtering algorithm (a mature prior-art technique). The fusion filtering formulas are formula (2) and formula (3):
S_t = A·S_(t-1) + B·U_t + w_t    (2)
Z_t = H·S_t + v_t    (3)
where S is the heading estimate, t is time, U is the external input, i.e. the output of the gyroscope sensor in this invention, Z is the heading measurement, i.e. the heading given by the magnetic field sensor, w and v are respectively the prediction error and the measurement error, modelled as Gaussian white noise, and A, B, H are the transformation factors, with A = H = 1 and B = dt, the sampling time interval of the gyroscope;
the target detection network module is connected to the anchor point position database and comprises an artificial neural network; for each image frame provided by the camera, the network performs detection on the input image: if the current frame is detected to contain an anchor point image (detection probability greater than a given threshold), the returned result is that the current image contains an anchor point image; if the current frame contains a blind road (probability greater than a given threshold) but also an obstacle (probability greater than a given threshold), and the image regions of the blind road and the obstacle overlap by more than a given threshold, the algorithm returns that there is an obstacle on the blind road; if the probability of detecting a blind road in the current frame is smaller than a given threshold, the returned result is that the user has deviated from the blind road, and an early-warning signal of leaving the blind road is given; in the remaining case the detection result contains a blind road with no obstacle on it and no anchor point image, and the probability of the blind road in the image is returned.
The fusion module fuses the result of the inertial navigation positioning module with the result of the target detection module and gives the final position coordinates. If the target detection algorithm returns that an anchor point image from the anchor point position database is present, the fusion result is the position coordinates of that anchor point image, and the inertial sensor measurements are cleared at that moment; because the position coordinates of the anchor point image are exactly known, this removes the accumulated error and improves accuracy. If the target detection algorithm returns obstacle early-warning information, the fusion result is the position coordinates given by the inertial navigation positioning algorithm, and at the same time the feedback system sends the user an early warning of the obstacle ahead. If the target detection algorithm returns the probability of the blind road, the fusion algorithm compares this probability with a given threshold: if it is smaller, the blind road occupies only a small proportion of the field of view and the feedback system warns the user of leaving the blind road; if it is greater, the fusion algorithm returns the position coordinates given by inertial navigation positioning.
The wearable device has a camera, CPU computing capability and a certain amount of GPU computing capability, and is worn as smart glasses, head-mounted, or in another relatively fixed position, which makes image collection by the camera convenient. The camera collects road images in front of the user, covering roughly 1-2 m in the direction of travel; the CPU runs the inertial navigation positioning algorithm and the fusion algorithm; the GPU executes the target detection network.
The invention uses no visible-light communication, is not based on zero-velocity detection, needs no UWB base stations, and does not rely on the hardware required by other positioning technologies such as Bluetooth, Wi-Fi and RFID. This inertial navigation positioning system based on a wearable device and target detection depends only on the inertial sensors in the wearable device and a certain amount of computing power. The positioning technique is based on inertial sensors, namely an acceleration sensor, a gyroscope sensor and a magnetic field sensor; to correct the accumulated error caused by inertial sensor measurement error, target detection technology is used as an aid.
The invention has the following features. Inertial sensors are embedded in most mobile terminals, and civilian inertial sensors are inexpensive, so the system cost of realizing the inertial navigation system is low; and because target detection technology requires a corresponding target, matching markers must be placed on the blind road to correct the accumulated error of the inertial navigation system. The invention therefore features immunity to wireless-signal interference, high stability, low cost and low maintenance cost.
Drawings
FIG. 1 Pedestrian dead reckoning principle
FIG. 2 Overall algorithm flow
FIG. 3 Detailed flow of the inertial navigation positioning algorithm
FIG. 4 Detailed flow of the target detection algorithm
FIG. 5 Overall flow of the fusion algorithm
FIG. 6 Schematic view of the system
Detailed Description
The technical solution of the present invention is further explained below with reference to the examples and the accompanying drawings.
The invention discloses, for the first time, a road inertial navigation positioning system for blind people.
The various modules of the system are described in detail below.
The inertial positioning module comprises a step detection algorithm module, a step length estimation algorithm module and a heading estimation algorithm module. The step detection algorithm module detects whether the user takes a step, and the user's current position is updated at each step. Because a person's centre of gravity rises and falls during walking, falling as one leg swings forward and rising as the body passes over it, the magnitude measured by the acceleration sensor rises and falls periodically; the step detection algorithm module therefore judges whether the user has taken a step by detecting these changes in the acceleration magnitude.
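As an illustration of this principle, a minimal Python sketch of step detection follows. The peak-detection scheme, sampling rate, magnitude threshold and minimum step interval are assumptions made for the example; the patent states only that steps are recognised from the periodic rise and fall of the acceleration magnitude.

```python
import numpy as np

def detect_steps(acc_xyz, fs=100.0, min_peak=10.5, min_interval=0.3):
    """Return sample indices at which a step is detected.

    acc_xyz      -- (N, 3) accelerometer samples in m/s^2 (device frame)
    fs           -- sampling rate in Hz (assumed)
    min_peak     -- magnitude a peak must exceed, slightly above g (assumed)
    min_interval -- minimum time between two steps in seconds (assumed)
    """
    mag = np.linalg.norm(acc_xyz, axis=1)   # acceleration magnitude (modulus)
    min_gap = int(min_interval * fs)        # refractory period in samples
    steps = []
    for i in range(1, len(mag) - 1):
        # a local maximum of the magnitude marks one rise-and-fall cycle
        if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1] and mag[i] > min_peak:
            if not steps or i - steps[-1] >= min_gap:
                steps.append(i)
    return steps
```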
The step length estimation algorithm module accounts for the fact that step length differs between users and that even the same user's step length varies from moment to moment. To reduce this error, the invention uses the step length estimation algorithm module to estimate the step length at each moment. The step length estimation is given by formula (1), where β is a constant, the difference under the root sign is evaluated over the duration of the current step, and g_a_max and g_a_min are respectively the maximum and minimum of the vertical acceleration component in the earth coordinate system:
L = β·(g_a_max - g_a_min)^(1/4)    (1)
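A minimal Python sketch of formula (1) follows. The value of β, the source of the device-to-earth rotation matrix and the choice of the earth-frame z axis as the gravity direction are assumptions; the patent gives only the formula and the frame transformation in general terms.

```python
import numpy as np

BETA = 0.5  # β in formula (1); calibration constant, value assumed

def step_length(acc_device, R_device_to_earth):
    """L = BETA * (g_a_max - g_a_min) ** (1/4) over one step.

    acc_device        -- (N, 3) accelerometer samples covering the step
    R_device_to_earth -- 3x3 rotation matrix from the device frame to the
                         earth frame (assumed to come from an attitude filter)
    """
    acc_earth = acc_device @ R_device_to_earth.T  # map samples to earth frame
    vert = acc_earth[:, 2]   # gravity-direction component (z-up assumed)
    return BETA * (vert.max() - vert.min()) ** 0.25
```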
The heading estimation algorithm module reads the direction of the magnetic field from the magnetic field sensor. The magnetometer reading can serve as one reference value, but given the uncertainty of the magnetic field distribution in indoor environments, using it alone gives inaccurate results, so the gyroscope sensor is used to further determine the user's walking direction. The fusion filtering formulas are formula (2) and formula (3):
S_t = A·S_(t-1) + B·U_t + w_t    (2)
Z_t = H·S_t + v_t    (3)
where S is the heading estimate, t is time, U is the external input, i.e. the output of the gyroscope sensor in this invention, Z is the heading measurement, i.e. the heading given by the magnetic field sensor, w and v are respectively the prediction error and the measurement error, modelled as Gaussian white noise, and A, B, H are the transformation factors, with A = H = 1 and B = dt, the sampling time interval of the gyroscope.
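Formulas (2) and (3) with A = H = 1 and B = dt are the standard scalar Kalman filter model, so the predict/correct cycle can be sketched directly; the noise variances q and r and the angle-wrapping convention below are assumed, as the patent does not specify them.

```python
import math

class HeadingFilter:
    """Scalar Kalman filter realizing formulas (2)-(3) with A = H = 1, B = dt."""

    def __init__(self, heading0=0.0, dt=0.01, q=1e-4, r=1e-2):
        self.s = heading0      # heading estimate S_t, in radians
        self.p = 1.0           # variance of the estimate
        self.dt = dt           # gyroscope sampling interval (B = dt)
        self.q, self.r = q, r  # variances of w_t and v_t (assumed values)

    def update(self, gyro_rate, mag_heading):
        # Predict: S_t = S_(t-1) + dt * U_t, with U_t the gyroscope yaw rate
        self.s += self.dt * gyro_rate
        self.p += self.q
        # Correct with the magnetometer heading as measurement Z_t (H = 1),
        # wrapping the innovation into [-pi, pi)
        innov = (mag_heading - self.s + math.pi) % (2 * math.pi) - math.pi
        k = self.p / (self.p + self.r)  # Kalman gain
        self.s += k * innov
        self.p *= 1.0 - k
        return self.s
```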
The principle of the resulting pedestrian dead reckoning algorithm is shown in FIG. 1: the step length estimation algorithm estimates L in the figure, and the heading estimation algorithm estimates θ. When L, θ and the user's initial position are known, the user's position at any time can be computed. Since L and θ carry estimation errors, other means must assist if accurate results are to be obtained; here the target detection algorithm is combined with dead reckoning to improve the estimation accuracy.
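The per-step position update of FIG. 1 can then be written in one line; the axis convention (θ measured clockwise from north, x east, y north) is an assumption, since the patent does not fix the axes.

```python
import math

def dead_reckon(x, y, L, theta):
    """One update of FIG. 1: advance the previous position (x, y) by a step
    of length L along heading theta (radians, clockwise from north)."""
    return x + L * math.sin(theta), y + L * math.cos(theta)
```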
The target detection module and the detailed flow of its algorithm are shown in FIG. 4. The input data of the algorithm come from the camera of the wearable device, and the algorithm detects whether an anchor point image is present in the image, whether there is an obstacle on the blind road, and the probability of the blind road in the image. First, a database is needed that records the absolute positions and IDs of the anchor point images; that is, anchor point images whose true positions are known are arranged in the region of interest as reference signals, by which the accumulated error of the inertial sensors is reduced. The specific principle is as follows:
for each frame of image from the camera, the artificial neural network detects the input image, and if the current frame is detected to contain the anchor point image (the detection probability is greater than a given threshold value), the returned result is that the current image contains the anchor point image; if a blind road exists in the current frame (the probability value obtained by the detection result is greater than a given threshold), but an obstacle also exists (the probability value obtained by the detection result is greater than the given threshold), the areas of the blind road and the obstacle in the image are overlapped, and the overlapped part is greater than the given threshold, the result returned by the algorithm is that the obstacle exists on the blind road; if the probability of detecting the blind track in the current frame is smaller than a given threshold value, returning a result that the track deviates from the blind track, and giving an early warning signal for deviating from the blind track; the final result is that the detection result includes the blind road, no obstacle exists on the blind road, the image does not include the anchor point image, and the probability of the blind road in the image is returned at the moment.
The fusion module and the detailed flow of its algorithm are shown in FIG. 5. The module fuses the result of the inertial navigation positioning algorithm module with the result of the target detection algorithm module and gives the final position coordinates. If the target detection algorithm returns that an anchor point image is present, the fusion result is the position coordinates of that anchor point image, and the inertial sensor measurements are cleared at that moment; because the position coordinates of the anchor point image are exactly known, this removes the accumulated error and improves accuracy. If the target detection algorithm returns obstacle early-warning information, the fusion result is the position coordinates given by the inertial navigation positioning algorithm, and at the same time the feedback system sends the user an early warning of the obstacle ahead. If the target detection algorithm returns the probability of the blind road, the fusion algorithm compares this probability with a given threshold: if it is smaller, the blind road occupies only a small proportion of the field of view and the feedback system warns the user of leaving the blind road; if it is greater, the fusion algorithm returns the position coordinates given by inertial navigation positioning.
The overall flow of the system is shown in FIG. 2, wherein:
the whole flow of the inertial navigation positioning algorithm is shown in fig. 3, the input of the algorithm is the measurement data of the inertial sensor, and when the algorithm is started, three threads, namely a step detection thread, a step estimation thread and a course estimation thread, are started at the same time. The three threads are divided into work, the step detection thread is responsible for detecting whether a user performs step action, and the current position of the user is updated in real time through the step action of the user; the step length estimation thread is responsible for estimating the step length of each step of the user, and the step length is considered to be different from person to person, so that the step length estimation method adds a step length estimation link to reduce the calculation error of the algorithm and improve the accuracy of the algorithm; the course estimation thread is responsible for estimating the walking direction of the user, the course of the user is estimated by adopting a fusion filtering algorithm, and the magnetic field sensor data and the gyroscope sensor data are fused by the fusion filtering algorithm in consideration of the uncertainty of indoor magnetic field distribution so as to reduce estimation errors.
When the step detection thread detects that the user has taken a step, it reads the distance travelled in that step from the step length estimation thread and the walking direction from the heading estimation thread, determines the user's current position from the two, then clears the step length estimate to begin estimating the next step and clears the gyroscope integral to begin the next heading estimate.
The target detection algorithm flow is shown in FIG. 4. The input of the algorithm is the images collected from the camera. On start-up it runs its initialization, then performs detection on every image from the camera and returns one of three results: the image contains an anchor point image; there is an obstacle on the blind road; or the probability that a blind road is present in the image. When an anchor point image appears in the image, the target detection algorithm returns that an anchor point is present; when the blind road in the image carries an obstacle, it returns obstacle early-warning information; when the image contains neither an anchor point image nor an obstacle on the blind road, it returns the probability that a blind road is present in the image (judging and processing these results is left to the subsequent fusion algorithm).
The fusion algorithm flow is shown in FIG. 5. The inputs of the algorithm are the inertial navigation positioning result and the result of the target detection algorithm, which the fusion algorithm combines. It first judges whether the target detection result is an anchor point position, an obstacle early-warning signal, or the probability of a blind road in the image. If it is obstacle early-warning information, the fusion algorithm outputs that early-warning information as the fusion result. If it is that an anchor point image is present in the image, the fusion algorithm matches, from the database and according to the user's current position, the position of the anchor point image closest to the current position; that anchor point position replaces the position obtained by inertial navigation positioning measurement, each sensor of the inertial navigation positioning algorithm is zeroed, and the inertial navigation algorithm is restarted, which reduces the accumulated error of the inertial sensors and improves the algorithm's accuracy. If it is the probability of a blind road, that probability is compared with a set threshold: if it is below the threshold, the probability that a blind road is present in the image is small and the user is probably leaving the blind road, so the fusion result is an early warning of leaving the blind road; if it is above the threshold, the algorithm outputs the measurement given by the inertial navigation positioning algorithm.
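These fusion rules can be sketched as follows; `anchor_db.nearest()`, `feedback.warn()` and `reset_inertial()` are hypothetical interfaces standing in for the database lookup, the feedback system and the zeroing of the inertial integrators described above, and the blind-road threshold is an assumed value.

```python
T_BLIND = 0.5  # assumed threshold for "the blind road still fills the view"

def fuse(pdr_pos, detection, anchor_db, feedback, reset_inertial):
    """Combine the PDR position with one classify_frame() result (FIG. 5)."""
    kind, value = detection
    if kind == 'ANCHOR_IMAGE':
        pos = anchor_db.nearest(pdr_pos)  # known coordinates of the nearest anchor
        reset_inertial()                  # zero the sensors: clears accumulated error
        return pos
    if kind == 'OBSTACLE_ON_BLIND_ROAD':
        feedback.warn('obstacle ahead on the blind road')
        return pdr_pos                    # position from inertial navigation
    if kind == 'OFF_BLIND_ROAD' or value < T_BLIND:
        feedback.warn('leaving the blind road')
    return pdr_pos
```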
System implementation
Equipment: a wearable device, which is required to have a camera, CPU computing capability and a certain amount of GPU computing capability, and to be worn as smart glasses, head-mounted, or in another relatively fixed position so that the camera can conveniently collect images; wearing it on the wrist, for example, does not meet the requirement.
The camera collects road images in front of the user, covering roughly 1-2 m in the direction of travel; the CPU runs the inertial navigation positioning algorithm and the fusion algorithm; the GPU executes the target detection network.
The technical problems solved by the technical scheme of the invention are as follows:
the problem of accumulated errors existing in positioning realized by using an inertial sensor is inherent and inevitable, and the correction of the posture of a mobile phone by a predecessor based on zero speed detection is a solution, but one difficulty of zero speed detection is that the time interval of a zero speed interval in the moving process of a pedestrian is short and difficult to detect, and how to reliably detect the zero speed interval is still an important problem, so that the problem of the accumulated errors of the inertial sensor is solved by the invention, the zero speed detection method is not adopted, and the accumulated errors are eliminated by combining a target detection technology;
in consideration of various problems of wireless signals, such as stability problems, equipment deployment problems, system maintenance problems, anti-interference problems, energy consumption problems, privacy problems and the like, the method does not adopt a method based on the wireless signals, does not relate to wireless signals such as Bluetooth, Wi-Fi, geomagnetism and the like, and relies on hardware which is only an inertial sensor, including an acceleration sensor, a magnetic field sensor, a gyroscope sensor and the computing capability of equipment, so that the possible problems of the wireless signals are fundamentally avoided, and the robustness of the system is enhanced;
and thirdly, considering that the target group of the algorithm is the visually impaired, the algorithm is operated in the wearable device to be more in line with the use habit of the visually impaired, and simultaneously, the requirements are provided for the invention, namely the algorithm is required to be low in energy consumption, strong in real-time performance, high in reliability and the like. Therefore, in order to improve the real-time performance of the algorithm, the invention adopts the artificial neural network specially designed for the embedded equipment, so that the CPU can also meet the ideal speed requirement, and the invention has the characteristic of light weight.

Claims (1)

1. A road inertial navigation positioning system for blind people, characterized by comprising a road area paved with anchor points, an anchor point position database, and the user's wearable device, which is an embedded device in real-time communication connection with the anchor point position database;
the wearable device comprises an acquisition module, an inertial positioning module, a target detection network module, a fusion module and a feedback system;
anchor points are manually set in the region of interest, and the anchor point position database records the absolute position and ID of each road anchor point image; that is, anchor point images whose true positions are known are arranged in the region of interest;
the acquisition module comprises a camera, an acceleration sensor, a magnetic field sensor and a gyroscope sensor, wherein the camera acquires road images in front of a user;
the inertial positioning module comprises a step detection module, a step length estimation module and a heading estimation module, wherein:
the step detection module detects whether the user takes a step by monitoring the changes of the acceleration magnitude output by the acceleration sensor, and the user's current position is updated at each detected step;
the step length estimation module estimates the step length at different moments; the step length estimation is given by formula (1), where β is a constant, the difference under the root sign is evaluated over the duration of the current step, and g_a_max and g_a_min are respectively the maximum and minimum of the vertical acceleration component in the earth coordinate system:
L = β·(g_a_max - g_a_min)^(1/4)    (1)
the heading estimation module reads the direction of the magnetic field from the magnetic field sensor and further determines the user's walking direction with the help of the gyroscope sensor; the outputs of the magnetic field sensor and the gyroscope sensor are fused using a fusion filtering algorithm; the fusion filtering formulas are formula (2) and formula (3):
S_t = A·S_(t-1) + B·U_t + w_t    (2)
Z_t = H·S_t + v_t    (3)
where S is the heading estimate, t is time, U is the external input, i.e. the output of the gyroscope sensor in this invention, Z is the heading measurement, i.e. the heading given by the magnetic field sensor, w and v are respectively the prediction error and the measurement error, modelled as Gaussian white noise, and A, B, H are the transformation factors, with A = H = 1 and B = dt, the sampling time interval of the gyroscope;
the target detection network module is connected to the anchor point position database and comprises an artificial neural network; for each image frame provided by the camera, the network performs detection on the input image: if the current frame is detected to contain an anchor point image (detection probability greater than a given threshold), the returned result is that the current image contains an anchor point image; if the current frame contains a blind road (probability greater than a given threshold) but also an obstacle (probability greater than a given threshold), and the image regions of the blind road and the obstacle overlap by more than a given threshold, the algorithm returns that there is an obstacle on the blind road; if the probability of detecting a blind road in the current frame is smaller than a given threshold, the returned result is that the user has deviated from the blind road, and an early-warning signal of leaving the blind road is given; in the remaining case the detection result contains a blind road with no obstacle on it and no anchor point image, and the probability of the blind road in the image is returned;
and the fusion module fuses the result of the inertial navigation positioning module with the result of the target detection module and gives the final position coordinates. If the target detection algorithm returns that an anchor point image from the anchor point position database is present, the fusion result is the position coordinates of that anchor point image, and the inertial sensor measurements are cleared at that moment; because the position coordinates of the anchor point image are exactly known, this removes the accumulated error and improves accuracy. If the target detection algorithm returns obstacle early-warning information, the fusion result is the position coordinates given by the inertial navigation positioning algorithm, and at the same time the feedback system sends the user an early warning of the obstacle ahead. If the target detection algorithm returns the probability of the blind road, the fusion algorithm compares this probability with a given threshold: if it is smaller, the blind road occupies only a small proportion of the field of view and the feedback system warns the user of leaving the blind road; if it is greater, the fusion algorithm returns the position coordinates given by inertial navigation positioning.
CN201911404433.2A 2019-12-31 2019-12-31 Road inertial navigation positioning system for blind people Active CN111174780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911404433.2A CN111174780B (en) 2019-12-31 2019-12-31 Road inertial navigation positioning system for blind people

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911404433.2A CN111174780B (en) 2019-12-31 2019-12-31 Road inertial navigation positioning system for blind people

Publications (2)

Publication Number Publication Date
CN111174780A (en) 2020-05-19
CN111174780B (en) 2022-03-08

Family

ID=70652306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911404433.2A Active CN111174780B (en) 2019-12-31 2019-12-31 Road inertial navigation positioning system for blind people

Country Status (1)

Country Link
CN (1) CN111174780B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932866A (en) * 2020-08-11 2020-11-13 中国科学技术大学先进技术研究院 Wearable blind person outdoor traffic information sensing equipment
CN112102412A (en) * 2020-11-09 2020-12-18 中国人民解放军国防科技大学 Method and system for detecting visual anchor point in unmanned aerial vehicle landing process
CN113917452A (en) * 2021-09-30 2022-01-11 北京理工大学 Blind road detection device and method combining vision and radar

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130131985A1 (en) * 2011-04-11 2013-05-23 James D. Weiland Wearable electronic image acquisition and enhancement system and method for image acquisition and visual enhancement
WO2014119801A1 (en) * 2013-02-04 2014-08-07 Ricoh Company, Ltd. Inertial device, method, and program
CN106441319A (en) * 2016-09-23 2017-02-22 中国科学院合肥物质科学研究院 System and method for generating lane-level navigation map of unmanned vehicle
CN106595653A (en) * 2016-12-08 2017-04-26 南京航空航天大学 Wearable autonomous navigation system for pedestrian and navigation method thereof
CN106840148A (en) * 2017-01-24 2017-06-13 东南大学 Wearable positioning and path guide method based on binocular camera under outdoor work environment
CN106920260A (en) * 2017-03-02 2017-07-04 万物感知(深圳)科技有限公司 Three-dimensional inertia blind-guiding method and device and system
WO2017215024A1 (en) * 2016-06-16 2017-12-21 东南大学 Pedestrian navigation device and method based on novel multi-sensor fusion technology
CN109579853A (en) * 2019-01-24 2019-04-05 燕山大学 Inertial navigation indoor orientation method based on BP neural network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130131985A1 (en) * 2011-04-11 2013-05-23 James D. Weiland Wearable electronic image acquisition and enhancement system and method for image acquisition and visual enhancement
WO2014119801A1 (en) * 2013-02-04 2014-08-07 Ricoh Company, Ltd. Inertial device, method, and program
WO2017215024A1 (en) * 2016-06-16 2017-12-21 东南大学 Pedestrian navigation device and method based on novel multi-sensor fusion technology
CN106441319A (en) * 2016-09-23 2017-02-22 中国科学院合肥物质科学研究院 System and method for generating lane-level navigation map of unmanned vehicle
CN106595653A (en) * 2016-12-08 2017-04-26 南京航空航天大学 Wearable autonomous navigation system for pedestrian and navigation method thereof
CN106840148A (en) * 2017-01-24 2017-06-13 东南大学 Wearable positioning and path guide method based on binocular camera under outdoor work environment
CN106920260A (en) * 2017-03-02 2017-07-04 万物感知(深圳)科技有限公司 Three-dimensional inertia blind-guiding method and device and system
CN109579853A (en) * 2019-01-24 2019-04-05 燕山大学 Inertial navigation indoor orientation method based on BP neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
G. FUSCO: "Self-Localization at Street Intersections", 2014 Canadian Conference on Computer and Robot Vision *
QU Fayi et al.: "Fault-tolerant relative navigation method for UAVs based on inertial navigation/GPS/vision", Journal of Chinese Inertial Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932866A (en) * 2020-08-11 2020-11-13 中国科学技术大学先进技术研究院 Wearable blind person outdoor traffic information sensing equipment
CN112102412A (en) * 2020-11-09 2020-12-18 中国人民解放军国防科技大学 Method and system for detecting visual anchor point in unmanned aerial vehicle landing process
CN113917452A (en) * 2021-09-30 2022-01-11 北京理工大学 Blind road detection device and method combining vision and radar
CN113917452B (en) * 2021-09-30 2022-05-24 北京理工大学 Blind road detection device and method combining vision and radar

Also Published As

Publication number Publication date
CN111174780B (en) 2022-03-08

Similar Documents

Publication Publication Date Title
CN111174781B (en) Inertial navigation positioning method based on wearable device combined target detection
CN111174780B (en) Road inertial navigation positioning system for blind people
CN105607104B (en) A kind of adaptive navigation alignment system and method based on GNSS and INS
CN104457751B (en) Indoor and outdoor scene recognition method and system
US9146113B1 (en) System and method for localizing a trackee at a location and mapping the location using transitions
CN104180805B (en) Smart phone-based indoor pedestrian positioning and tracking method
CN108489489B (en) Indoor positioning method and system for correcting PDR (product data Rate) with assistance of Bluetooth
CN111006655A (en) Multi-scene autonomous navigation positioning method for airport inspection robot
Ladetto et al. In step with INS navigation for the blind, tracking emergency crews
CN110553648A (en) method and system for indoor navigation
CN111879305B (en) Multi-mode perception positioning model and system for high-risk production environment
KR20060087449A (en) Vehicle position recognizing device and vehicle position recognizing method
CN105865450A (en) Zero-speed update method and system based on gait
CN109855621A (en) A kind of composed chamber's one skilled in the art's navigation system and method based on UWB and SINS
CN113140132B (en) Pedestrian anti-collision early warning system and method based on 5G V2X mobile intelligent terminal
CN108458746A (en) One kind being based on sensor method for self-adaption amalgamation
CN104075718B (en) Pedestrian's track route localization method of fixing circuit
CN108827308B (en) High-precision pedestrian outdoor positioning system and method
CN112550377A (en) Rail transit emergency positioning method and system based on video identification and IMU (inertial measurement Unit) equipment
Bhandari et al. Fullstop: A camera-assisted system for characterizing unsafe bus stopping
JP2020204501A (en) Self position estimation device of vehicle, and vehicle
CN113916221B (en) Self-adaptive pedestrian dead reckoning method integrating visual odometer and BP network
JP6903955B2 (en) Moving object state quantity estimation device and program
CN114674317A (en) Self-correcting dead reckoning system and method based on activity recognition and fusion filtering
CN109099926B (en) Method for collecting indoor positioning fingerprints

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant