CN110458887A - PCA-based weighted-fusion indoor positioning method - Google Patents

PCA-based weighted-fusion indoor positioning method

Info

Publication number
CN110458887A
Authority
CN
China
Prior art keywords
frame image
pca
elm
image
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910636664.XA
Other languages
Chinese (zh)
Other versions
CN110458887B (en)
Inventor
徐岩
李宁宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201910636664.XA
Publication of CN110458887A
Application granted
Publication of CN110458887B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a PCA-based weighted-fusion indoor positioning method, comprising: initializing an ELM model, inputting a training sample set into the ELM neural network, and building a neural network model on the training sample set with the ELM regression algorithm; inputting a test sample set into the trained neural network model, obtaining the relative displacements between consecutive frame images, and integrating the relative displacement results to obtain the position of each frame image; introducing an image-blur judgment, and computing the position of the current frame image from inertial information and the position of the previous frame image as the frame-image positioning result; using visual information to apply drift correction to the raw acceleration measurement signal, and double-integrating the corrected acceleration to obtain the inertial positioning result; and using PCA to assign weights to the frame-image and inertial positioning results to obtain the final positioning result. The method effectively controls the accumulated error of the INS and effectively resolves the susceptibility of the VNS to external disturbance.

Description

PCA-based weighted-fusion indoor positioning method
Technical field
The present invention relates to the fields of indoor positioning, information fusion and signal processing, and in particular to a weighted-fusion indoor positioning method based on PCA (principal component analysis).
Background art
In recent years, demand for indoor positioning services has grown rapidly, and indoor positioning systems have become increasingly popular. The Global Positioning System (GPS) is the most widely used positioning and navigation system; when used outdoors its accuracy can reach a few meters, with good accuracy and high confidence. However, because of wall obstruction and multipath effects, GPS cannot provide reliable service in indoor environments. Indoor positioning plays an important role in many applications, such as indoor item tracking, in-store shopping guidance, and indoor navigation. More effective ways of providing indoor positioning services are therefore needed.
Several indoor positioning technologies have been proposed. Among techniques based on a single information source, the most widely deployed approach uses received signal strength (RSS); the signal source can be Wi-Fi [1], FM, Bluetooth, and so on. Wi-Fi is regarded as the most promising method and can keep positioning error within a few meters, but Wi-Fi systems have drawbacks: an access point typically covers a radius of only about 90 meters and is susceptible to interference from other signals. Bluetooth indoor positioning offers low cost, low power consumption, and small device size, but in complex indoor environments its stability is poorer and it is easily affected by external noise. Existing single-source positioning technologies are thus limited in accuracy and reliability and have not seen wide adoption in daily life.
Recently, inertial navigation systems (INS) [2] have become a focus of indoor positioning research, because they can provide position without external equipment, update quickly, and are small, inexpensive, and highly portable. However, gyroscope and accelerometer errors grow rapidly over time, so an INS can only achieve short-term, short-range positioning [3]. With continuing advances in computer vision, visual navigation systems (VNS) have attracted growing interest from researchers worldwide. A VNS [4] offers a good way to understand and perceive indoor environments from visual data and can achieve high positioning accuracy in scenes rich in matchable, recognizable features. C. Piciarelli [5] proposed a visual indoor positioning technique (referred to here as the VL algorithm) that localizes by comparing images against a reference model of position-tagged visual features. Compared with non-visual navigation systems it carries much information, achieves high accuracy, and is free of signal interference; however, it performs poorly in some cases, such as occlusion, lighting changes, and people entering and leaving the scene.
Summary of the invention
The present invention provides a PCA-based weighted-fusion indoor positioning method. It uses an ELM (extreme learning machine) regression algorithm to obtain position information from visual data, applies drift correction through static feedback from visual information to improve conventional inertial positioning, and finally uses principal component analysis (PCA) [6] to assign weights to the improved inertial and visual positioning results. The method effectively controls the accumulated error of the inertial navigation system (INS) and effectively resolves the susceptibility of the visual navigation system (VNS) to external disturbance, as described below.
A PCA-based weighted-fusion indoor positioning method comprises the following steps:
initializing an ELM model, inputting a training sample set into the ELM neural network, and building a neural network model on the training sample set with the ELM regression algorithm;
inputting a test sample set into the trained neural network model, obtaining the relative displacements between consecutive frame images, and integrating the relative displacements to obtain the position of each frame image;
introducing an image-blur judgment, and computing the position of the current frame image from inertial information and the position of the previous frame image as the frame-image positioning result;
using visual information to apply drift correction to the raw acceleration measurement signal, and double-integrating the corrected acceleration to obtain the inertial positioning result;
using PCA to assign weights to the frame-image and inertial positioning results to obtain the final positioning result.
Here the training sample set is built as follows:
SURF descriptors are extracted from the images, N SURF feature points are extracted per frame image, and feature-point matching is performed; the matching results are processed with the RANSAC algorithm to remove mismatched points and obtain the affine transformation matrix; and the relative displacement between the true coordinates of each frame image and the next frame image is computed.
The affine matrices serve as the training inputs and the relative displacements as the training outputs, forming the training sample set.
Further, introducing the image-blur judgment and computing the position of the current frame image from inertial information and the position of the previous frame image as the frame-image positioning result specifically comprises:
introducing the feature matching rate NS as an index, and setting a threshold on NS that measures whether the current image is blurred;
when NS is below the threshold, judging the frame image to be blurred, and obtaining the position of the current frame from the positioning result of the previous frame given by ELM, combined with inertial information such as acceleration and gyroscope measurements.
The drift correction of the raw acceleration measurement signal using visual information and the double integration of the corrected acceleration specifically comprise:
setting a threshold on the translation between consecutive frame images to distinguish the moving state from the stationary state; when the translation vector is below the threshold, judging the system to be stationary and applying drift correction, otherwise judging it to be moving and keeping the original acceleration value unchanged; and double-integrating the corrected acceleration to obtain the final position.
The beneficial effects of the technical solution provided by the present invention are:
(1) the invention fuses inertial and visual information through PCA, so that the two sources complement each other in accuracy and update rate, improving positioning performance;
(2) by using visual information as static feedback for drift correction of the inertial data, the invention effectively controls the error accumulation of the inertial navigation system (INS);
(3) the invention uses PCA to assign weights to the improved inertial and visual positioning results, yielding a fused positioning result that is more accurate.
Brief description of the drawings
Fig. 1 is a flowchart of the PCA-based weighted-fusion indoor positioning method;
Fig. 2 compares the cumulative error distributions of the positioning algorithm proposed here and the algorithm of reference [5];
Fig. 3 shows the relation between visual positioning accuracy and the number of hidden nodes and the activation function of the ELM.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below.
An embodiment of the invention provides a PCA-based weighted-fusion indoor positioning method consisting of three parts: vision-based positioning, inertia-based positioning, and PCA-based fusion positioning.
1. Vision-based positioning:
The training visual data are preprocessed to create a training sample set containing training inputs X and training outputs T; the training sample set is input into the ELM neural network, and the ELM regression algorithm learns the training sample set to build the ELM neural network model.
The test visual data are preprocessed to create a test sample set, which is input into the trained ELM neural network model to obtain the relative displacements between consecutive frame images; integrating these then yields the position coordinates of each frame.
An image-blur judgment is also introduced: when an image in the video sequence is judged to be blurred, the position of the current frame is computed from inertial information and the position of the previous frame image, and serves as the visual positioning result.
The ELM neural network, the ELM regression algorithm, the steps of building the ELM neural network model, and the image-blur judgment are known to those skilled in the art and are not repeated here.
2. Inertia-based positioning:
The stationary state detected by the camera is fed back to the inertial sensor for drift correction. If the feature-point pixels between successive frames hardly change over a period of time, the object is judged to be in a stationary state. Most inertial readings are nonetheless nonzero in that state, so the vision system feeds this static state back to the inertial sensor for drift correction, removing the accumulated error of the inertial data.
A threshold on the translation between consecutive frame images distinguishes the moving state from the stationary state: when the translation vector is below the threshold, the system is judged to be stationary and drift correction is applied; otherwise it is judged to be moving and the original acceleration value is kept unchanged.
The result after this drift correction is taken as the inertial positioning result.
3. PCA-based fusion positioning:
Taking the visual positioning result and the inertial positioning result as two indicators, PCA determines their corresponding weights; multiplying each positioning result by its weight and summing gives the final positioning result.
Embodiment 1
The technical solution of this embodiment is described in further detail below with reference to specific calculation formulas and the accompanying drawings:
A training sample set containing training inputs X and training outputs T is created by preprocessing the training visual data:
First, the visual information is preprocessed. SURF (speeded-up robust features) features are extracted from every training image, image I_i is matched against image I_{i+1}, the matching results are processed with random sample consensus (RANSAC) to remove mismatched points, and the affine transformation matrix that best fits the matched points is computed:

    H = | A·cos r   -A·sin r   T_x |
        | A·sin r    A·cos r   T_y |
        | 0          0          1  |    (1)

where r is the rotation angle, A is the scale, and T_x, T_y are the translation vector.
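For illustration only, the following sketch shows how such a per-frame-pair affine matrix could be estimated with OpenCV; it is not part of the original disclosure. SURF requires a contrib build of OpenCV (cv2.xfeatures2d), and cv2.estimateAffinePartial2D fits the rotation/scale/translation model of formula (1) under RANSAC. The function name pairwise_affine and the keep-best-matches count are illustrative choices.

```python
import cv2
import numpy as np

def pairwise_affine(img_i, img_j, n_features=30):
    """Match SURF features between two frames and fit an affine matrix with RANSAC."""
    # SURF lives in opencv-contrib (cv2.xfeatures2d); ORB would be a patent-free fallback.
    surf = cv2.xfeatures2d.SURF_create()
    kp1, des1 = surf.detectAndCompute(img_i, None)
    kp2, des2 = surf.detectAndCompute(img_j, None)

    # Brute-force matching on the float descriptors, keeping the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:n_features]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 4-DoF rotation/scale/translation model of formula (1); RANSAC removes mismatches.
    H, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
    return H, len(matches), inliers
```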
Next, the ELM model is initialized, the training samples are input into the ELM neural network, and the ELM regression algorithm builds the neural network model on the training sample set.
The target output vector is constructed from the relative displacement ΔT_i between adjacent frame images:

ΔT_i = (Δx_i, Δy_i) = (x_i - x_{i-1}, y_i - y_{i-1})    (2)

where i ∈ (1, ..., N), x_i and y_i are the true x-axis and y-axis coordinates of each frame image, Δx_i is the displacement between adjacent frame images along the x-axis, and Δy_i is the displacement between adjacent frame images along the y-axis.
The ELM model is initialized, the training sample set is input into the ELM neural network, and the ELM regression algorithm builds the neural network model on the training sample set.
The initialization steps are known to those skilled in the art and are not repeated here.
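As a reference point, a minimal ELM regressor takes only a few lines of NumPy: the hidden-layer weights are drawn at random and kept fixed, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse. This is a generic ELM sketch, not the patented implementation itself; it assumes the flattened affine matrices are the inputs X and the displacements ΔT the targets T.

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine for regression (sigmoid hidden layer)."""

    def __init__(self, n_hidden=450, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activation

    def fit(self, X, T):
        # Random input weights and biases are fixed; only beta is learned.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T  # Moore-Penrose least-squares solution
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# X_train: flattened affine matrices (one row per frame pair); T_train: displacements ΔT.
# elm = ELMRegressor(n_hidden=450).fit(X_train, T_train)
# dT_pred = elm.predict(X_test)
```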
The test visual data are preprocessed to create a test sample set, which is input into the trained ELM neural network model to obtain the corresponding displacement outputs.
Each frame's output position difference to the next frame is ΔT_k = (Δx_k, Δy_k), k ∈ (1, ..., M); integrating ΔT_k by formula (3) then gives the output position of each frame image I_k:

P_k = P_0 + Σ_{j=1}^{k} ΔT_j    (3)
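In code, this integration reduces to a cumulative sum over the predicted displacements; a minimal sketch, assuming the starting position P_0 is known:

```python
import numpy as np

def integrate_displacements(dT, p0=(0.0, 0.0)):
    """Accumulate per-frame displacements ΔT_k into absolute frame positions."""
    return np.asarray(p0) + np.cumsum(np.asarray(dT), axis=0)
```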
To measure the blur level of the current image, the feature matching rate is introduced:

NS = N_c / N_q    (4)

where NS is the feature matching rate, N_c is the number of feature matches between consecutive frame images, and N_q is the total number of features of the reference image.
Setting a reasonable threshold on NS measures whether the current image is blurred. When the image is blurred, i.e. when NS is below the threshold, the current frame is positioned from the inertial data according to formulas (5)-(6), using the positioning result of the previous frame obtained by ELM together with inertial information such as acceleration and gyroscope measurements:

x_k = x_{k-1} + Δs_{k-1} · cos ψ_{k-1}    (5)
y_k = y_{k-1} + Δs_{k-1} · sin ψ_{k-1}    (6)

where x_k and y_k are the position coordinates of the target in the reference frame at time k, Δs_{k-1} is the distance traveled from time k-1 to time k, and ψ_{k-1} is the heading angle of the target at time k-1.
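A sketch of this blur fallback, assuming the matching rate NS from formula (4) and step length and heading from the inertial unit (the function and argument names are illustrative, not from the original disclosure):

```python
import math

def next_position(x_prev, y_prev, ns, step, heading, ns_threshold=0.8):
    """Blur-aware positioning: dead-reckon from inertial data when NS drops below threshold.

    ns      -- feature matching rate NS = N_c / N_q of the current frame (formula (4))
    step    -- distance Δs travelled since the previous frame (from inertial data)
    heading -- heading angle ψ at the previous frame (from the gyroscope)
    """
    if ns < ns_threshold:
        # Frame judged blurred: formulas (5)-(6), inertial dead reckoning.
        return x_prev + step * math.cos(heading), y_prev + step * math.sin(heading)
    return None  # frame is sharp: keep the ELM/vision estimate instead
```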
The drift correction of the raw acceleration measurement signal using visual information and the double integration of the corrected acceleration to obtain the inertia-based positioning result proceed as follows:
Visual information feeds the static motion state back to the inertial data for drift correction. When the feature-point pixels between successive frames hardly change over a period of time, the object is judged to be in a stationary state.
This embodiment sets a threshold on the translation between consecutive frame images to distinguish the moving state from the stationary state. When the translation vector is below the threshold, the system is judged to be stationary and drift correction is applied; otherwise it is judged to be moving and the original acceleration value is kept unchanged. Double-integrating the corrected acceleration gives the final position.
The translation is computed as

F_k = sqrt(T_x² + T_y²)    (7)

where T_x, T_y are the translation vector in the affine transformation matrix.
The drift correction is

a_k = 0 if F_k < th_F; otherwise a_k is kept unchanged    (8)

where a_k is the acceleration measurement at time k, F_k is the translation between consecutive frame images, and th_F is the threshold on the translation between consecutive frame images.
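A sketch of the static feedback of formulas (7)-(8) followed by the double integration, assuming one acceleration sample per video frame and a fixed sampling interval dt (both assumptions, as the original does not specify them); zeroing the acceleration in detected stationary intervals is what removes the accumulated drift:

```python
import numpy as np

def inertial_positions(acc, trans, th_f=0.06, dt=0.02, p0=0.0, v0=0.0):
    """Drift-correct accelerations via visual static feedback, then double-integrate.

    acc   -- raw acceleration samples a_k along one axis
    trans -- per-frame translations F_k = sqrt(Tx**2 + Ty**2) from the affine matrices,
             assumed aligned one-to-one with acc
    """
    acc = np.asarray(acc, dtype=float).copy()
    acc[np.asarray(trans) < th_f] = 0.0   # formula (8): stationary => zero acceleration
    vel = v0 + np.cumsum(acc) * dt        # first integration: velocity
    pos = p0 + np.cumsum(vel) * dt        # second integration: position
    return pos
```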
Finally, PCA assigns weights to the visual positioning result and the inertial positioning result, yielding the final positioning result.
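The patent does not spell out the weighting formula. A common PCA weighting scheme, and one plausible reading of the text, treats the two positioning results as indicator columns and scores each indicator by its variance-weighted loadings on the principal components; the sketch below follows that assumption and should not be read as the disclosed method:

```python
import numpy as np

def pca_weights(indicators):
    """Derive fusion weights for each indicator column via PCA.

    indicators -- array of shape (n_samples, 2): vision and inertial position series.
    """
    Z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)  # standardize
    cov = np.cov(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    var_ratio = eigvals / eigvals.sum()
    # Score each indicator by its variance-weighted loadings, then normalize to sum to 1.
    scores = np.abs(eigvecs) @ var_ratio
    return scores / scores.sum()

# w_vis, w_ins = pca_weights(np.column_stack([vision_x, inertial_x]))
# fused_x = w_vis * vision_x + w_ins * inertial_x
```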
In conclusion the embodiment of the present invention merges inertia by PCA and visual information is more robust to provide, more accurately Positioning result, the embodiment of the present invention introduce drift correction, and effective solution history inertial data accumulated error is melted by PCA Inertia and vision positioning are closed as a result, fusion positioning result is more accurate.
Embodiment 2
The feasibility of the scheme of Embodiment 1 is verified below with reference to Figs. 2-3, Table 1 and a specific example:
Using the algorithm steps of Embodiment 1, a positioning analysis was performed on an experiment of 65 s total duration and 13 m path length, including interference scenarios such as people entering and leaving freely and abrupt scene changes. Parameters were set as follows: 450 hidden-layer nodes; SURF feature count N_q of 30.
Qualitatively, Fig. 2 compares the cumulative error distributions of this method and the positioning algorithm proposed in reference [5]. To let the VL algorithm reach its best locating effect, its SURF count was set to 100 in the comparison experiment. The experimental results show that this method has a clear advantage over the VL algorithm in reliability and stability: error is kept within 2 m, and an accurate positioning result is still provided under personnel interference.
Quantitatively, Table 1 lists the evaluation results obtained by the ELM-based visual positioning, the improved inertial positioning, the PCA fusion positioning, and the VL algorithm.
Table 1. Evaluation results of each method
After drift correction, this method further improves the accuracy of the inertial positioning system, and compared with the single visual positioning system, the RMSE of the proposed method is generally smaller; that is, the fused system is more stable and robust than a single visual positioning system. Compared with VL, this algorithm improves positioning accuracy, and the advantage of the proposed method in real-time performance is obvious.
In practical applications, the relevant parameters of this method must be configured. Fig. 3 shows that the positioning error decreases gradually as the number of hidden-layer nodes increases; once the number of hidden nodes grows beyond a certain point, the positioning error no longer decreases noticeably. It also shows that the locating effect with the sigmoid function as the activation function is better than with the sine and radbas functions.
The optimal parameters of this experiment were set as follows: 450 hidden-layer nodes; sigmoid activation function; the number N_q of SURF features extracted per frame image set to 30; the image-blur judgment threshold set to 0.8; the translation threshold used for static feedback from visual information set to 0.06; and, when PCA weights the inertial and visual positioning results, corresponding weights of 0.65 for the visual positioning result and 0.35 for the inertial positioning result.
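Collected as a configuration block for convenience (the values are taken from the paragraph above; the key names are illustrative):

```python
# Experimental parameter settings reported above (key names are illustrative).
PARAMS = {
    "elm_hidden_nodes": 450,               # hidden-layer nodes
    "elm_activation": "sigmoid",           # activation function
    "surf_features_per_frame": 30,         # N_q, SURF features extracted per frame
    "blur_threshold_ns": 0.8,              # image-blur judgment threshold on NS
    "static_translation_threshold": 0.06,  # th_F for static feedback
    "weight_vision": 0.65,                 # PCA weight for the visual result
    "weight_inertial": 0.35,               # PCA weight for the inertial result
}
```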
Experiments show that, under this parameter setting, the method achieves very good results in terms of real-time performance and stability.
Bibliography
[1] Chen Jiao. Research on WiFi indoor positioning technology based on RSSI ranging [D]. Mianyang: Southwest University of Science and Technology, 2015.
[2] Faulkner W T, Alwood R, Taylor D W A, et al. Altitude accuracy while tracking pedestrians using a boot-mounted IMU [C]. Position Location and Navigation Symposium. IEEE, 2010: 90-96.
[3] Remazeilles A, Chaumette F. Image-based robot navigation from an image memory [J]. Robotics & Autonomous Systems, 2007, 55(4): 345-356.
[4] Foxlin E. Pedestrian tracking with shoe-mounted inertial sensors [J]. IEEE Computer Graphics and Applications, 2005, 25(6): 38-46.
[5] Piciarelli C. Visual indoor localization in known environments [J]. IEEE Signal Processing Letters, 2016, 23(10): 1330-1334.
[6] Fan Y G, Li P, Song Z H. KPCA based on feature samples for fault detection [J]. Control and Decision, 2005, 20(12): 1415-1418, 1422.
Unless otherwise specified, the embodiments of the present invention place no restriction on the models of the devices involved, as long as they can perform the functions described above.
Those skilled in the art will appreciate that the drawings are schematic diagrams of a preferred embodiment, and that the serial numbers of the embodiments are for description only and do not indicate relative merit.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (4)

1. A PCA-based weighted-fusion indoor positioning method, characterized in that the method comprises the following steps:
initializing an ELM model, inputting a training sample set into the ELM neural network, and building a neural network model on the training sample set with the ELM regression algorithm;
inputting a test sample set into the trained neural network model, obtaining the relative displacements between consecutive frame images, and integrating the relative displacements to obtain the position of each frame image;
introducing an image-blur judgment, and computing the position of the current frame image from inertial information and the position of the previous frame image as the frame-image positioning result;
using visual information to apply drift correction to the raw acceleration measurement signal, and double-integrating the corrected acceleration to obtain the inertial positioning result;
using PCA to assign weights to the frame-image and inertial positioning results to obtain the final positioning result.
2. The PCA-based weighted-fusion indoor positioning method according to claim 1, characterized in that the training sample set is built as follows:
SURF descriptors are extracted from the images, N SURF feature points are extracted per frame image, and feature-point matching is performed; the matching results are processed with the RANSAC algorithm to remove mismatched points and obtain the affine transformation matrix; the relative displacement between the true coordinates of each frame image and the next frame image is computed;
the affine matrices serve as the training inputs and the relative displacements as the training outputs, forming the training sample set.
3. The PCA-based weighted-fusion indoor positioning method according to claim 1, characterized in that introducing the image-blur judgment and computing the position of the current frame image from inertial information and the position of the previous frame image as the frame-image positioning result specifically comprises:
introducing the feature matching rate NS as an index, and setting a threshold on NS that measures whether the current image is blurred;
when NS is below the threshold, judging the frame image to be blurred, and obtaining the position of the current frame from the positioning result of the previous frame given by ELM, combined with inertial information such as acceleration and gyroscope measurements.
4. The PCA-based weighted-fusion indoor positioning method according to claim 1, characterized in that using visual information to apply drift correction to the raw acceleration measurement signal and double-integrating the corrected acceleration specifically comprises:
setting a threshold on the translation between consecutive frame images to distinguish the moving state from the stationary state; when the translation vector is below the threshold, judging the system to be stationary and applying drift correction, otherwise judging it to be moving and keeping the original acceleration value unchanged; and double-integrating the corrected acceleration to obtain the final position.
CN201910636664.XA 2019-07-15 2019-07-15 Weighted fusion indoor positioning method based on PCA Active CN110458887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910636664.XA CN110458887B (en) 2019-07-15 2019-07-15 Weighted fusion indoor positioning method based on PCA

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910636664.XA CN110458887B (en) 2019-07-15 2019-07-15 Weighted fusion indoor positioning method based on PCA

Publications (2)

Publication Number Publication Date
CN110458887A true CN110458887A (en) 2019-11-15
CN110458887B CN110458887B (en) 2022-12-06

Family

ID=68481248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910636664.XA Active CN110458887B (en) 2019-07-15 2019-07-15 Weighted fusion indoor positioning method based on PCA

Country Status (1)

Country Link
CN (1) CN110458887B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111873A (en) * 2021-03-25 2021-07-13 贵州电网有限责任公司 PCA-based weighted fusion mobile robot positioning method

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867138A (en) * 2015-05-07 2015-08-26 天津大学 Principal component analysis (PCA) and genetic algorithm (GA)-extreme learning machine (ELM)-based three-dimensional image quality objective evaluation method
CN106447691A (en) * 2016-07-19 2017-02-22 西安电子科技大学 Weighted extreme learning machine video target tracking method based on weighted multi-example learning
US20170103541A1 (en) * 2015-10-12 2017-04-13 Xsens Holding B.V. Integration of Inertial Tracking and Position Aiding for Motion Capture
CN106780570A (en) * 2016-12-21 2017-05-31 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 A kind of real-time modeling method method based on on-line study
CN107016319A (en) * 2016-01-27 2017-08-04 北京三星通信技术研究有限公司 A kind of key point localization method and device
US20170345161A1 (en) * 2016-05-31 2017-11-30 Kabushiki Kaisha Toshiba Information processing apparatus and method
CN107615211A (en) * 2015-05-23 2018-01-19 深圳市大疆创新科技有限公司 Merged using the sensor of inertial sensor and imaging sensor
CN108090921A (en) * 2016-11-23 2018-05-29 中国科学院沈阳自动化研究所 Monocular vision and the adaptive indoor orientation method of IMU fusions
US10043076B1 (en) * 2016-08-29 2018-08-07 PerceptIn, Inc. Visual-inertial positional awareness for autonomous and non-autonomous tracking
US20180266828A1 (en) * 2017-03-14 2018-09-20 Trimble Inc. Integrated vision-based and inertial sensor systems for use in vehicle navigation
CN108716917A (en) * 2018-04-16 2018-10-30 天津大学 A kind of indoor orientation method merging inertia and visual information based on ELM
CN108827341A (en) * 2017-03-24 2018-11-16 快图有限公司 The method of the deviation in Inertial Measurement Unit for determining image collecting device
CN108921893A (en) * 2018-04-24 2018-11-30 华南理工大学 A kind of image cloud computing method and system based on online deep learning SLAM
CN108981687A (en) * 2018-05-07 2018-12-11 清华大学 A kind of indoor orientation method that vision is merged with inertia
CN109615056A (en) * 2018-10-09 2019-04-12 天津大学 A kind of visible light localization method based on particle group optimizing extreme learning machine
CN109945858A (en) * 2019-03-20 2019-06-28 浙江零跑科技有限公司 It parks the multi-sensor fusion localization method of Driving Scene for low speed
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867138A (en) * 2015-05-07 2015-08-26 天津大学 Principal component analysis (PCA) and genetic algorithm (GA)-extreme learning machine (ELM)-based three-dimensional image quality objective evaluation method
CN107615211A (en) * 2015-05-23 2018-01-19 深圳市大疆创新科技有限公司 Merged using the sensor of inertial sensor and imaging sensor
US20170103541A1 (en) * 2015-10-12 2017-04-13 Xsens Holding B.V. Integration of Inertial Tracking and Position Aiding for Motion Capture
CN107016319A (en) * 2016-01-27 2017-08-04 北京三星通信技术研究有限公司 A kind of key point localization method and device
US20170345161A1 (en) * 2016-05-31 2017-11-30 Kabushiki Kaisha Toshiba Information processing apparatus and method
CN106447691A (en) * 2016-07-19 2017-02-22 西安电子科技大学 Weighted extreme learning machine video target tracking method based on weighted multi-example learning
US10043076B1 (en) * 2016-08-29 2018-08-07 PerceptIn, Inc. Visual-inertial positional awareness for autonomous and non-autonomous tracking
CN108090921A (en) * 2016-11-23 2018-05-29 中国科学院沈阳自动化研究所 Monocular vision and the adaptive indoor orientation method of IMU fusions
CN106780570A (en) * 2016-12-21 2017-05-31 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 A kind of real-time modeling method method based on on-line study
US20180266828A1 (en) * 2017-03-14 2018-09-20 Trimble Inc. Integrated vision-based and inertial sensor systems for use in vehicle navigation
CN108827341A (en) * 2017-03-24 2018-11-16 快图有限公司 The method of the deviation in Inertial Measurement Unit for determining image collecting device
CN108716917A (en) * 2018-04-16 2018-10-30 天津大学 A kind of indoor orientation method merging inertia and visual information based on ELM
CN108921893A (en) * 2018-04-24 2018-11-30 华南理工大学 A kind of image cloud computing method and system based on online deep learning SLAM
CN108981687A (en) * 2018-05-07 2018-12-11 清华大学 A kind of indoor orientation method that vision is merged with inertia
CN109615056A (en) * 2018-10-09 2019-04-12 天津大学 A kind of visible light localization method based on particle group optimizing extreme learning machine
CN109945858A (en) * 2019-03-20 2019-06-28 浙江零跑科技有限公司 It parks the multi-sensor fusion localization method of Driving Scene for low speed
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
György Dán, Muhammad Altamash Khan, Viktoria Fodor: "Characterization of SURF and BRISK Interest Point Distribution for Distributed Feature Extraction in Visual Sensor Networks", IEEE Transactions on Multimedia *
Xu Yan et al.: "A traffic sign recognition algorithm fusing weighted ELM and AdaBoost", Journal of Chinese Computer Systems *
Xu Longyang et al.: "A multi-sensor fusion PDR positioning method based on neural networks", Chinese Journal of Sensors and Actuators *
Wang Jianming et al.: "Ground image segmentation algorithm for indoor inertial/visual integrated navigation", Journal of Chinese Inertial Technology *
Wang Zhe et al.: "An RFID indoor positioning algorithm combining grasshopper optimization and extreme learning machine", Computer Science *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111873A (en) * 2021-03-25 2021-07-13 贵州电网有限责任公司 PCA-based weighted fusion mobile robot positioning method

Also Published As

Publication number Publication date
CN110458887B (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN110125928A (en) A kind of binocular inertial navigation SLAM system carrying out characteristic matching based on before and after frames
US20150235367A1 (en) Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
CN112907678B (en) Vehicle-mounted camera external parameter attitude dynamic estimation method and device and computer equipment
CN110553648A (en) method and system for indoor navigation
CN111912416A (en) Method, device and equipment for positioning equipment
CN110617821A (en) Positioning method, positioning device and storage medium
Mseddi et al. YOLOv5 based visual localization for autonomous vehicles
CN108303094A (en) The Position Fixing Navigation System and its positioning navigation method of array are merged based on multiple vision sensor
CN109445599A (en) Interaction pen detection method and 3D interactive system
Deigmoeller et al. Stereo visual odometry without temporal filtering
CN112731503B (en) Pose estimation method and system based on front end tight coupling
CN110458887A (en) A kind of Weighted Fusion indoor orientation method based on PCA
Bonilla et al. Pedestrian dead reckoning towards indoor location based applications
Irmisch et al. Robust visual-inertial odometry in dynamic environments using semantic segmentation for feature selection
Yusefi et al. A Generalizable D-VIO and Its Fusion with GNSS/IMU for Improved Autonomous Vehicle Localization
CN114608560A (en) Passive combined indoor positioning system and method based on intelligent terminal sensor
Jonker et al. Philosophies and technologies for ambient aware devices in wearable computing grids
Jung et al. U-VIO: tightly coupled UWB visual inertial odometry for robust localization
Park et al. A simulation based method for vehicle motion prediction
CN117593650B (en) Moving point filtering vision SLAM method based on 4D millimeter wave radar and SAM image segmentation
Han Optical flow/ins navigation system in four-rotor
Xia et al. YOLO-Based Semantic Segmentation for Dynamic Removal in Visual-Inertial SLAM
Aufderheide et al. A visual-inertial approach for camera egomotion estimation and simultaneous recovery of scene structure
Hashimoto et al. Self-localization from a 360-Degree Camera Based on the Deep Neural Network
CN117576218B (en) Self-adaptive visual inertial navigation odometer output method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant