CN113686335A - Method for performing accurate indoor positioning through IMU data by one-dimensional convolutional neural network - Google Patents

Method for performing accurate indoor positioning through IMU data by one-dimensional convolutional neural network

Info

Publication number
CN113686335A
CN113686335A
Authority
CN
China
Prior art keywords
neural network
imu
inertial navigation
navigation data
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110647858.7A
Other languages
Chinese (zh)
Other versions
CN113686335B (en)
Inventor
刘孜捷
钱宇恒
洪颖
张盛锴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aoou Intelligent Technology Co ltd
Original Assignee
Shanghai Aoou Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aoou Intelligent Technology Co ltd filed Critical Shanghai Aoou Intelligent Technology Co ltd
Priority to CN202110647858.7A priority Critical patent/CN113686335B/en
Publication of CN113686335A publication Critical patent/CN113686335A/en
Application granted granted Critical
Publication of CN113686335B publication Critical patent/CN113686335B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/045 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using resistive elements, e.g. a single continuous surface or two parallel surfaces put in contact
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/08 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers from or to individual record carriers, e.g. punched card, memory card, integrated circuit [IC] card or smart card

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a method for performing accurate indoor positioning with IMU data through a one-dimensional convolutional neural network, comprising the following steps: acquiring inertial navigation data collected by an IMU, preprocessing the collected IMU inertial navigation data, and constructing a reference database from the preprocessed data; constructing a neural network CNN model and training it with the reference database as training samples; preprocessing the IMU inertial navigation data of an object to be positioned, feeding it into the trained neural network CNN model, obtaining the displacement information of the object as output, and estimating the position coordinates of the object from that displacement. The method for performing accurate indoor positioning with IMU data through a one-dimensional convolutional neural network can accurately calculate displacement and thereby perform positioning.

Description

Method for performing accurate indoor positioning through IMU data by one-dimensional convolutional neural network
Technical Field
The invention belongs to the technical field of indoor positioning, and particularly relates to a method for accurate indoor positioning using IMU data through a one-dimensional convolutional neural network.
Background
In recent years, with the popularization of mobile devices such as smartphones, tablets, and notebook computers, positioning applications based on these devices have become an important part of daily life. At present, most mobile devices rely on the Global Positioning System (GPS) or the Beidou system for positioning services, but both schemes can only locate a target effectively outdoors; in complex indoor environments their precision is low or they fail entirely. There are two main reasons: (1) many objects stand between an indoor access point (AP) and the mobile client, so wireless signals are reflected along multiple propagation paths, the so-called multipath phenomenon; (2) GPS provides positioning accuracy of a few meters, which is more than enough at the scale of streets or city blocks outdoors, but far from satisfactory for indoor environments that GPS signals cannot reach.
To locate objects indoors effectively, current indoor positioning technologies are developed mainly around acoustic signals, optical signals (camera and visible light), and electrical signals (UWB, radar, and RFID). Acoustic positioning has precision that is easily degraded by environmental noise, and its range is limited; optical positioning depends excessively on ambient light intensity, cannot work in non-line-of-sight environments, and easily raises privacy concerns for the target; electrical-signal positioning relies mainly on expensive equipment and precision instruments to achieve high accuracy. None of these schemes is therefore generally suitable for daily life.
Inertial navigation is a dead-reckoning navigation method: a gyroscope and an accelerometer mounted on the moving carrier measure angular velocity and linear acceleration, from which the position of the next point is calculated. Its advantages are immunity to external factors and good short-term positioning precision. Its drawbacks are that the gyroscope exhibits random drift error, and that the traditional integration algorithm amplifies noise during the one or two integration steps, producing heavy noise interference or distortion caused by filtering; positioning error therefore grows with time, and the velocity and position obtained by integrating acceleration carry large errors.
A convolutional neural network is a multi-layer supervised learning network whose convolutional and pooling layers in the hidden part are the core modules for feature extraction. The model minimizes a loss function by gradient descent, adjusting the weight parameters in the network layer by layer through backpropagation, and improves its accuracy through repeated iterative training. The lower hidden layers of a convolutional neural network alternate convolutional layers and max-pooling layers; the upper part consists of fully connected hidden layers and a logistic regression classifier, corresponding to a traditional multilayer perceptron. The structure thus comprises convolutional layers, downsampling layers, and fully connected layers. Each layer holds multiple feature maps, each feature map extracts one feature of the input through a convolution filter, and each feature map contains multiple neurons.
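To make the structure described above concrete, the following is a minimal sketch of such a one-dimensional CNN in PyTorch (the patent does not name a framework); the channel count, kernel sizes, window length, and output dimension are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn

class OneDimCNN(nn.Module):
    """Sketch of a 1-D CNN: convolutional layers extract features from
    multi-channel sensor sequences, max pooling downsamples them, and
    fully connected layers map the features to a 2-D displacement."""

    def __init__(self, in_channels=9, window=200, out_dim=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool1d(2),                                       # max-pool sampling layer
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.regressor = nn.Sequential(                            # fully connected layers
            nn.Flatten(),
            nn.Linear(64 * (window // 4), 128),
            nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.regressor(self.features(x))
```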
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to solve the problems in the prior art and provides a method for performing accurate indoor positioning by using IMU data through a one-dimensional convolutional neural network.
2. Technical scheme
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
a method for performing accurate indoor positioning by using IMU data through a one-dimensional convolutional neural network comprises the following steps:
s1: acquiring inertial navigation data acquired by an IMU, preprocessing the acquired IMU inertial navigation data, and constructing a reference database based on the preprocessed IMU inertial navigation data;
s2: constructing a neural network CNN model, and training by using the reference database as a training sample to obtain the neural network CNN model;
s3: IMU inertial navigation data of an object to be positioned are input into a neural network CNN model obtained through training after being preprocessed, displacement information of the object to be positioned is obtained through output, and position coordinates of the object to be positioned are estimated according to the output displacement information.
The preferable technical scheme is as follows:
in the method for performing accurate indoor positioning by using IMU data through a one-dimensional convolutional neural network as described above, in step S1, the inertial navigation data acquired by the IMU include acceleration information, angular velocity information and magnetic information, and the reference database includes one or more sets of inertial navigation data acquired by the IMU.
In the method for performing accurate indoor positioning by using IMU data through the one-dimensional convolutional neural network as described above, in step S2, the training process of the neural network CNN model comprises: the neural network CNN model adopts a forward propagation algorithm to adjust the connection weights between adjacent neurons in the network. The adjustment proceeds as follows: the forward response is propagated to the last layer to obtain a displacement prediction, the error between this prediction and the actual displacement information of the object to be positioned is computed, and the connection weights of the neural network are continuously adjusted according to the obtained error value.
In the method for performing accurate indoor positioning by using IMU data through the one-dimensional convolutional neural network as described above, when the error between the displacement prediction obtained by propagating the forward response to the last layer and the actual displacement information of the object to be positioned falls to the order of 1e-7, the adjustment is stopped.
The method for performing accurate indoor positioning by using IMU data through the one-dimensional convolutional neural network as described above, wherein the displacement information comprises the direction and the magnitude of the displacement.
In the method for performing accurate indoor positioning by using IMU data through the one-dimensional convolutional neural network as described above, before acquiring inertial navigation data acquired by the IMU in step S1, the method further includes: and acquiring the position coordinate information of the initial point of the object to be positioned.
The method for performing accurate indoor positioning by using IMU data through a one-dimensional convolutional neural network as described above, after preprocessing the obtained IMU inertial navigation data in step S1, further includes: and performing behavior state classification and gender classification on the preprocessed IMU inertial navigation data.
The method for performing accurate indoor positioning by using IMU data through the one-dimensional convolutional neural network as described above, wherein the method for behavior state classification is: obtain the acceleration frequency f from the acceleration information a acquired by the IMU; when 0 Hz < f < 2 Hz, the state is judged to be walking; when f = 0 Hz, it is judged to be static; when f ≥ 2 Hz, it is judged to be running. The method for gender classification is: according to the acceleration information a acquired by the IMU, when 4.2 m/s < a < 5.1 m/s, the subject is judged to be male; when 3.4 m/s < a < 4.6 m/s, the subject is judged to be female.
The method for performing accurate indoor positioning by using IMU data through a one-dimensional convolutional neural network as described above, when estimating the position coordinate of an object to be positioned according to the output obtained displacement information, further includes: the behavioral state and gender of the subject to be located are estimated.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
compared with the traditional integration method, which amplifies noise during the one or two integration steps and suffers from heavy noise interference or distortion caused by filtering, the method for accurate indoor positioning using IMU data through a one-dimensional convolutional neural network avoids these problems and can accurately calculate displacement and perform positioning.
Drawings
FIG. 1 is a flow chart of a method of using IMU data for accurate indoor positioning via a one-dimensional convolutional neural network in accordance with the present invention;
FIG. 2 is a schematic diagram of the principle of the neural network CNN model outputting displacement information in the present invention;
FIG. 3 shows the composite function corresponding to a sample point in machine learning;
FIG. 4 shows the composition relationship of (a + b) × (b + 1);
FIG. 5 shows the partial-derivative relations of adjacent nodes between different layers;
FIG. 6 shows the network corresponding to back propagation.
Detailed Description
To facilitate an understanding of the invention, it is described more fully below with reference to the accompanying drawings, in which several embodiments of the invention are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; these embodiments are provided so that the disclosure will be more thorough.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present; when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present; the terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs; the terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention; as used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, the present embodiment provides a method for performing accurate indoor positioning by using IMU data through a one-dimensional convolutional neural network, including the following steps:
s1: acquiring initial point position coordinate information of an object to be positioned;
s2: acquiring inertial navigation data acquired by an IMU, preprocessing the acquired IMU inertial navigation data, performing behavior state classification and gender classification on the preprocessed IMU inertial navigation data, and constructing a reference database based on the IMU inertial navigation data acquired after classification;
s3: constructing a neural network CNN model, and training by using the reference database as a training sample to obtain the neural network CNN modelThe process of training through the network CNN model comprises the following steps: the neural network CNN model adopts a forward propagation algorithm to adjust the connection weight between adjacent neurons in the neural network. The process of adjusting the connection weight between adjacent neurons in the neural network comprises the following steps: predicting displacement information obtained by transmitting forward response to the last layer to obtain an error value between the displacement information and actual displacement information of an object to be positioned, and continuously adjusting the connection weight of the neural network according to the obtained error value; the error value of the displacement information prediction obtained by transmitting the forward response to the last layer and the actual displacement information of the object to be positioned reaches e-7Stage, stopping regulation;
s4: the IMU inertial navigation data of the object to be positioned is input into a neural network CNN model obtained through training after being preprocessed, displacement information of the object to be positioned is obtained through output, and the position coordinate of the object to be positioned is estimated according to the displacement information obtained through output and in combination with the initial point position coordinate information of the object to be positioned, wherein the displacement information comprises the direction and the size of displacement.
In step S2, the behavior state classification method is: obtain the acceleration frequency f from the acceleration information a acquired by the IMU; when 0 Hz < f < 2 Hz, the state is judged to be walking; when f = 0 Hz, it is judged to be static; when f ≥ 2 Hz, it is judged to be running.
The gender classification method is: according to the acceleration information a acquired by the IMU, when 4.2 m/s < a < 5.1 m/s, the subject is judged to be male; when 3.4 m/s < a < 4.6 m/s, the subject is judged to be female.
The steps are described in more detail below.
For step S2, the inertial navigation data collected by the IMU comprise acceleration information, angular velocity information and magnetic information, and the reference database comprises one or more sets of inertial navigation data collected by the IMU. The IMU typically includes three-axis gyroscopes, three-axis accelerometers and three-axis magnetometers to measure the acceleration, angular velocity and magnetic information of the object to be positioned in the indoor space. The IMU may be carried on the object to be positioned, or may be disposed in the indoor environment in which the object is located.
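The nine channels (three-axis acceleration, angular velocity, and magnetic data) are naturally arranged as fixed-length multi-channel windows for a one-dimensional convolution to slide over; in this sketch the window length and stride are illustrative assumptions.

```python
import numpy as np

def make_windows(imu_stream, window=200, stride=100):
    """Slice a (9, T) IMU stream -- 3 acceleration, 3 angular velocity and
    3 magnetic channels -- into overlapping (9, window) segments suitable
    as inputs to a 1-D CNN."""
    segments = [imu_stream[:, s:s + window]
                for s in range(0, imu_stream.shape[1] - window + 1, stride)]
    return np.stack(segments)  # (num_windows, 9, window)

stream = np.zeros((9, 1000))   # placeholder for a real recording
batch = make_windows(stream)   # shape (9, 9, 200): nine windows here
```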
For step S3, the neural network CNN model comprises convolutional layers, downsampling layers and fully connected layers; each layer has a plurality of feature maps, each feature map extracts one feature of the input through a convolution filter, and each feature map has a plurality of neurons. The goal of training is to minimize the cost function by adjusting each weight Wij. In this embodiment, the minimization of the cost function is solved effectively by the gradient descent method, which proceeds as follows: given an iteration point, the gradient vector at that point is computed; a search is made along the negative gradient direction with a certain step length to determine the next iteration point; a new gradient direction is then computed, and the process repeats until the cost converges. The gradient is calculated as follows:
Suppose the cost function is written as H(W11, W12, ..., Wij, ..., Wmn). Its gradient vector is then

∇H = Σi Σj (∂H/∂Wij) eij

where the eij are orthogonal unit vectors. The partial derivative ∂H/∂Wij of the cost function H with respect to each weight Wij is obtained using back propagation.
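The gradient descent procedure just described can be sketched generically as follows; the learning rate, the convergence tolerance, and the toy quadratic cost are illustrative assumptions.

```python
import numpy as np

def gradient_descent(cost, grad, w0, lr=0.1, tol=1e-7, max_iter=10000):
    """From the current iterate, compute the gradient vector and step along
    the negative gradient direction until the cost converges."""
    w = np.asarray(w0, dtype=float)
    prev = cost(w)
    for _ in range(max_iter):
        w = w - lr * grad(w)       # search along the negative gradient
        cur = cost(w)
        if abs(prev - cur) < tol:  # cost has converged
            return w
        prev = cur
    return w

# Example: H(w) = ||w - 3||^2 is minimized at w = [3, 3].
w_star = gradient_descent(lambda w: np.sum((w - 3.0) ** 2),
                          lambda w: 2.0 * (w - 3.0),
                          w0=[0.0, 0.0])
```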
Referring to FIG. 3, which shows the composite function corresponding to a sample point in machine learning.
Here Wij, the weight between two adjacent neurons, is a parameter to be learned, analogous to the parameters k and b to be solved when fitting a straight line y = k × x + b; but in machine learning a cost function is generally adopted, and the goal of training is to adjust each weight Wij so as to minimize the cost. The cost function can be regarded as a complex function taking all the weights Wij to be solved as its arguments; it is essentially non-convex, i.e. it contains many local minima, and the problem of minimizing it can be solved efficiently using the gradient descent method.
The gradient descent method requires an iteration point to be given: the gradient vector at that point is computed, a search is made along the negative gradient direction with a certain step length to determine the next iteration point, a new gradient direction is then computed, and the process repeats until the cost converges. The gradient is calculated as follows:
Suppose the cost function is written as H(W11, W12, ..., Wij, ..., Wmn). Its gradient vector is then

∇H = Σi Σj (∂H/∂Wij) eij

where the eij are orthogonal unit vectors. The partial derivative ∂H/∂Wij of the cost function H with respect to each weight Wij is obtained using the Back Propagation algorithm (hereinafter the BP algorithm). The specific process is as follows:
taking the partial derivative of (a + b) × (b +1) as an example, the complex relationship is shown in fig. 4.
In the figure, intermediate variables c and d are introduced, and in order to obtain the gradient of e when a is 2 and b is 1, the partial derivative relation of adjacent nodes between different layers needs to be obtained first, as shown in fig. 5 of the attached drawings.
Taking the relations shown in FIG. 5 as an example, processing proceeds layer by layer, starting from the topmost node e with an initial value of 1. For every child node one level below e, 1 is multiplied by the partial derivative on the path from e to that node and the result is "stacked" in the child node. After the layer containing e has propagated, each node of the second layer holds stacked values; summing all the values stacked in a node gives the partial derivative of the vertex e with respect to that node. Each second-layer node is then taken in turn as the starting vertex, with its initial value set to that partial derivative, and the propagation is repeated layer by layer, yielding the partial derivatives of e with respect to the nodes of every layer. Concretely, node c receives 1 × 2 from e and stacks it, node d receives 1 × 3 from e and stacks it, and the total stacked at each node is sent on to the next layer once the second layer is finished. Node c sends 2 × 1 to a and 2 × 1 to b; node d sends 3 × 1 to b. By the end of the third layer, node a has stacked 2 and node b has stacked 2 × 1 + 3 × 1 = 5; that is, the partial derivative of the vertex e with respect to a is 2 and with respect to b is 5.
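The forward sweep and the "stacking" reverse sweep just described can be checked directly; this sketch simply mirrors the worked example with a = 2 and b = 1.

```python
# Forward sweep with intermediates c = a + b and d = b + 1.
a, b = 2.0, 1.0
c = a + b   # 3
d = b + 1   # 2
e = c * d   # 6

# Reverse sweep, layer by layer: each node forwards its accumulated value
# times the partial derivative on the outgoing edge, and the receiving
# node "stacks" (sums) what arrives.
de_dc = 1.0 * d                     # e sends 1 x 2 to c
de_dd = 1.0 * c                     # e sends 1 x 3 to d
de_da = de_dc * 1.0                 # c sends 2 x 1 to a
de_db = de_dc * 1.0 + de_dd * 1.0   # b stacks 2 x 1 + 3 x 1 = 5
assert (de_da, de_db) == (2.0, 5.0)
```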
The specific calculation and derivation of back propagation proceed as follows.
The corresponding network is shown in FIG. 6.

The matrices corresponding to this network are:

X = [x1, x2]^T = [0.35, 0.9]^T

Yout = 0.5

W0 = [w31 w32; w41 w42] = [0.1 0.8; 0.4 0.6]

W1 = [w53 w54] = [0.3 0.9]
First, the forward propagation is computed:

z1 = W0 · X = [w31·x1 + w32·x2, w41·x1 + w42·x2]^T = [0.755, 0.680]^T

then, with f the sigmoid function f(z) = 1/(1 + e^(-z)):

y1 = f(z1) = [f(0.755), f(0.680)]^T = [0.680, 0.6637]^T

The same way gives:

z2 = W1 · y1 = 0.3 × 0.680 + 0.9 × 0.6637 = [0.801]

y2 = f(z2) = f(W1 · y1) = f([0.801]) = [0.69]

The final loss can then be calculated as:

C = (1/2)(y2 - Yout)² = (1/2)(0.69 - 0.5)² ≈ 0.01805
This value is further reduced by the back propagation algorithm. Taking the partial derivatives of C with respect to the last-layer weights according to the chain rule (here y3 = 0.680 and y4 = 0.6637 denote the outputs of hidden nodes 3 and 4, i.e. the components of y1):

∂C/∂w53 = (y2 - Yout) · y2(1 - y2) · y3 = 0.19 × 0.69 × 0.31 × 0.680 ≈ 0.0276

The same way gives:

∂C/∂w54 = (y2 - Yout) · y2(1 - y2) · y4 = 0.19 × 0.69 × 0.31 × 0.6637 ≈ 0.0270

The above are the parameter partial derivatives of the last layer; those for w31, w32, w41 and w42 are still required. The formulas, again by the chain rule, are:

∂C/∂w3i = (y2 - Yout) · y2(1 - y2) · w53 · y3(1 - y3) · xi

∂C/∂w4i = (y2 - Yout) · y2(1 - y2) · w54 · y4(1 - y4) · xi

Then, with (y2 - Yout) · y2(1 - y2) ≈ 0.0406, y3(1 - y3) ≈ 0.2176 and y4(1 - y4) ≈ 0.2232:

∂C/∂w31 ≈ 0.0406 × 0.3 × 0.2176 × 0.35 ≈ 0.00093

∂C/∂w32 ≈ 0.0406 × 0.3 × 0.2176 × 0.9 ≈ 0.00239

∂C/∂w41 ≈ 0.0406 × 0.9 × 0.2232 × 0.35 ≈ 0.00286

∂C/∂w42 ≈ 0.0406 × 0.9 × 0.2232 × 0.9 ≈ 0.00735

The final result is the gradient of C with respect to all six weights, with which each weight is updated along the negative gradient direction.
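As an arithmetic check, the following NumPy sketch reproduces the forward and backward passes above, assuming the FIG. 6 values as reconstructed (input X = [0.35, 0.9], weights W0 and W1 as listed) and taking f to be the sigmoid.

```python
import numpy as np

def f(z):
    """Sigmoid activation, consistent with f(0.801) = 0.69 in the example."""
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([0.35, 0.9])      # input x1, x2
Yout = 0.5                     # target output
W0 = np.array([[0.1, 0.8],     # w31, w32
               [0.4, 0.6]])    # w41, w42
W1 = np.array([0.3, 0.9])      # w53, w54

# Forward pass
z1 = W0 @ X                    # [0.755, 0.68]
y1 = f(z1)                     # [0.680, 0.6637]
z2 = W1 @ y1                   # 0.801
y2 = f(z2)                     # 0.69
C = 0.5 * (y2 - Yout) ** 2     # ~0.01805

# Backward pass (chain rule)
delta2 = (y2 - Yout) * y2 * (1 - y2)   # ~0.0406
dC_dW1 = delta2 * y1                   # [~0.0276, ~0.0270]
delta1 = delta2 * W1 * y1 * (1 - y1)   # one delta per hidden node
dC_dW0 = np.outer(delta1, X)           # [[~0.00093, ~0.00239],
                                       #  [~0.00286, ~0.00735]]
```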
The forward propagation algorithm is then carried out continuously with the updated weight parameters, and training continues until the error between the displacement prediction obtained by propagating the forward response to the last layer and the actual displacement information of the object to be positioned falls to the order of 1e-7; the adjustment then stops, and the trained neural network CNN model is obtained.
For step S4, when the position coordinates of the object to be positioned are estimated from the output displacement information: since the IMU inertial navigation data obtained in step S2 were preprocessed and also classified by behavior state and gender, in this embodiment, once a piece of preprocessed real-time IMU inertial navigation data is input to the neural network CNN model, not only can the position coordinates of the object to be positioned be estimated from the output displacement information, but its behavior state and gender can be estimated as well.
For the neural network CNN model in this embodiment, the input data are the inertial navigation data collected by the IMU, specifically the acceleration, angular velocity and magnetic information, and the model outputs the displacement information of the object to be positioned. The position coordinates are estimated by taking the initial position coordinates of the object and adding the displacement, so that the real-time position change is known and the estimation of the position coordinates is finally completed. The reason the neural network CNN model outputs the displacement of the object to be positioned, rather than estimated coordinate information directly, is the following: as shown in FIG. 2, suppose the initial position coordinate of the object is point A. When the final coordinate of point E is estimated with the neural network CNN model of this embodiment, the model only needs to obtain the displacement from point A to point E in order to calculate the position of point E. If the model instead output estimated coordinate information directly, it would have to calculate the coordinates of the intermediate points B, C and D between A and E, and then estimate the coordinates of E from those of D. By outputting displacement information, the neural network CNN model therefore saves a large amount of computation when the object to be positioned moves for a long time, thereby reducing the system load.
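The displacement-based estimation just described is dead reckoning from the initial coordinate; the following minimal sketch accumulates successive predicted displacements onto the starting point (the displacement values here are made up for illustration).

```python
import numpy as np

def track_position(initial_xy, displacements):
    """Accumulate per-step displacement predictions onto the initial
    coordinate, yielding the trajectory A -> B -> C -> ..."""
    path = [np.asarray(initial_xy, dtype=float)]
    for d in displacements:
        path.append(path[-1] + np.asarray(d, dtype=float))
    return np.stack(path)

# Starting at A = (0, 0) with four predicted displacements, the last row
# of the result is the estimated coordinate of point E.
trajectory = track_position((0.0, 0.0),
                            [(1.0, 0.2), (0.9, 0.1), (1.1, -0.1), (1.0, 0.0)])
```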
In summary, compared with the conventional integration method, which amplifies noise during the one or two integration steps and suffers from heavy noise interference or distortion caused by filtering, the method for performing accurate indoor positioning by using IMU data through the one-dimensional convolutional neural network avoids these problems.
The above-described embodiments express only certain implementations of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present invention, all of which fall within the protection scope of the invention; the protection scope of this patent shall therefore be subject to the appended claims.

Claims (9)

1. The method for performing accurate indoor positioning by using IMU data through the one-dimensional convolutional neural network is characterized by comprising the following steps of:
s1: acquiring inertial navigation data acquired by an IMU, preprocessing the acquired IMU inertial navigation data, and constructing a reference database based on the preprocessed IMU inertial navigation data;
s2: constructing a neural network CNN model, and training by using the reference database as a training sample to obtain the neural network CNN model;
s3: IMU inertial navigation data of an object to be positioned are input into a neural network CNN model obtained through training after being preprocessed, displacement information of the object to be positioned is obtained through output, and position coordinates of the object to be positioned are estimated according to the output displacement information.
2. The method for performing indoor accurate localization using IMU data via one-dimensional convolutional neural network as claimed in claim 1, wherein in step S1, the inertial navigation data collected by the IMU comprises acceleration information, angular velocity information and magnetic force information, and the reference database comprises one or more sets of inertial navigation data collected by the IMU.
3. The method for performing accurate indoor positioning by IMU data through a one-dimensional convolutional neural network as claimed in claim 2, wherein in step S2 the training process of the neural network CNN model comprises: the neural network CNN model adopts a forward propagation algorithm to adjust the connection weights between adjacent neurons in the network, wherein the forward response is propagated to the last layer to obtain a displacement prediction, the error between this prediction and the actual displacement information of the object to be positioned is computed, and the connection weights of the neural network are continuously adjusted according to the obtained error value.
4. The method for accurate indoor positioning using IMU data via a one-dimensional convolutional neural network as claimed in claim 3, wherein when the error between the displacement prediction obtained by propagating the forward response to the last layer and the actual displacement information of the object to be positioned falls to the order of 1e-7, the adjustment is stopped.
5. The method for accurate indoor positioning with IMU data via one-dimensional convolutional neural network of claim 1, wherein the displacement information includes direction and magnitude of displacement.
6. The method for performing accurate indoor positioning using IMU data via one-dimensional convolutional neural network as claimed in claim 1, wherein before acquiring inertial navigation data acquired by IMU in step S1, further comprising: and acquiring the position coordinate information of the initial point of the object to be positioned.
7. The method for performing accurate indoor positioning by IMU data through one-dimensional convolutional neural network as claimed in claim 2, wherein after preprocessing the IMU inertial navigation data obtained in step S1, further comprising: and performing behavior state classification and gender classification on the preprocessed IMU inertial navigation data.
8. The method for performing accurate indoor localization using IMU data via a one-dimensional convolutional neural network as claimed in claim 7, wherein the behavior state classification method is: obtain the acceleration frequency f from the acceleration information a acquired by the IMU; when 0 Hz < f < 2 Hz, the state is judged to be walking; when f = 0 Hz, it is judged to be static; when f ≥ 2 Hz, it is judged to be running; and the gender classification method is: according to the acceleration information a acquired by the IMU, when 4.2 m/s < a < 5.1 m/s, the subject is judged to be male; when 3.4 m/s < a < 4.6 m/s, the subject is judged to be female.
9. The method of claim 7, wherein estimating the position coordinates of the object to be located based on the outputted displacement information, further comprises: the behavioral state and gender of the subject to be located are estimated.
CN202110647858.7A 2021-06-10 2021-06-10 Method for carrying out accurate indoor positioning by using IMU data through one-dimensional convolutional neural network Active CN113686335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110647858.7A CN113686335B (en) 2021-06-10 2021-06-10 Method for carrying out accurate indoor positioning by using IMU data through one-dimensional convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110647858.7A CN113686335B (en) 2021-06-10 2021-06-10 Method for carrying out accurate indoor positioning by using IMU data through one-dimensional convolutional neural network

Publications (2)

Publication Number Publication Date
CN113686335A true CN113686335A (en) 2021-11-23
CN113686335B CN113686335B (en) 2024-05-24

Family

ID=78576529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110647858.7A Active CN113686335B (en) 2021-06-10 2021-06-10 Method for carrying out accurate indoor positioning by using IMU data through one-dimensional convolutional neural network

Country Status (1)

Country Link
CN (1) CN113686335B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105628027A (en) * 2016-02-19 2016-06-01 中国矿业大学 Indoor environment precise real-time positioning method based on MEMS inertial device
CN105943013A (en) * 2016-05-09 2016-09-21 安徽华米信息科技有限公司 Heart rate detection method and device, and intelligent wearable device
CN106643715A (en) * 2016-11-17 2017-05-10 天津大学 Indoor inertial navigation method based on bp neural network improvement
CN107907127A (en) * 2017-09-30 2018-04-13 天津大学 A kind of step-size estimation method based on deep learning
CN107958221A (en) * 2017-12-08 2018-04-24 北京理工大学 A kind of human motion Approach for Gait Classification based on convolutional neural networks
US20190079534A1 (en) * 2017-09-13 2019-03-14 TuSimple Neural network architecture system for deep odometry assisted by static scene optical flow
CN109579853A (en) * 2019-01-24 2019-04-05 燕山大学 Inertial navigation indoor orientation method based on BP neural network
WO2020008878A1 (en) * 2018-07-02 2020-01-09 ソニー株式会社 Positioning device, positioning method, and program
CN110766985A (en) * 2019-10-09 2020-02-07 天津大学 Wearable motion sensing interactive teaching system and motion sensing method thereof
US20200355503A1 (en) * 2018-01-10 2020-11-12 Oxford University Innovation Limited Determining the location of a mobile device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105628027A (en) * 2016-02-19 2016-06-01 中国矿业大学 Indoor environment precise real-time positioning method based on MEMS inertial device
CN105943013A (en) * 2016-05-09 2016-09-21 安徽华米信息科技有限公司 Heart rate detection method and device, and intelligent wearable device
CN106643715A (en) * 2016-11-17 2017-05-10 天津大学 Indoor inertial navigation method based on bp neural network improvement
US20190079534A1 (en) * 2017-09-13 2019-03-14 TuSimple Neural network architecture system for deep odometry assisted by static scene optical flow
CN107907127A (en) * 2017-09-30 2018-04-13 天津大学 A kind of step-size estimation method based on deep learning
CN107958221A (en) * 2017-12-08 2018-04-24 北京理工大学 A kind of human motion Approach for Gait Classification based on convolutional neural networks
US20200355503A1 (en) * 2018-01-10 2020-11-12 Oxford University Innovation Limited Determining the location of a mobile device
WO2020008878A1 (en) * 2018-07-02 2020-01-09 ソニー株式会社 Positioning device, positioning method, and program
CN109579853A (en) * 2019-01-24 2019-04-05 燕山大学 Inertial navigation indoor orientation method based on BP neural network
CN110766985A (en) * 2019-10-09 2020-02-07 天津大学 Wearable motion sensing interactive teaching system and motion sensing method thereof

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YARONG LUO; CHI GUO; JINTENG SU; WENFEI GUO; QUAN ZHANG: "Learning-Based Complex Motion Patterns Recognition for Pedestrian Dead Reckoning", vol. 21, no. 4, 31 December 2020 (2020-12-31)
刘宇; 向高林; 王伊冰; 陈燕苹; 吕玲; 黄河明: "Research on an improved pedestrian navigation algorithm", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), no. 02, 15 April 2016 (2016-04-15)
张晓雪: "Research and implementation of a smartphone-based wearable seamless navigation and positioning system", China Masters' Theses Full-text Database (Information Science and Technology), no. 03, 15 March 2018 (2018-03-15)
黄河泽: "Research on multi-mode error compensation and navigation methods for smartphones", China Masters' Theses Full-text Database (Information Science and Technology), no. 07, 15 July 2021 (2021-07-15)

Also Published As

Publication number Publication date
CN113686335B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
CN111693047B (en) Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
Tian et al. Pedestrian dead reckoning for MARG navigation using a smartphone
CN110118560B (en) Indoor positioning method based on LSTM and multi-sensor fusion
CN113074739B (en) UWB/INS fusion positioning method based on dynamic robust volume Kalman
CN106912105B (en) Three-dimensional positioning method based on PSO _ BP neural network
Lv et al. WSN localization technology based on hybrid GA-PSO-BP algorithm for indoor three-dimensional space
CN107727095B (en) 3D indoor positioning method based on spectral clustering and weighted back propagation neural network
Real Ehrlich et al. Indoor localization for pedestrians with real-time capability using multi-sensor smartphones
CN109655786B (en) Mobile ad hoc network cooperation relative positioning method and device
CN110260885B (en) Satellite/inertia/vision combined navigation system integrity evaluation method
Chiang et al. Magnetic field-based localization in factories using neural network with robotic sampling
CN112729301A (en) Indoor positioning method based on multi-source data fusion
CN105674989A (en) Indoor target motion track estimation method based on mobile phone built-in sensors
Wang et al. Modified zeroing neurodynamics models for range-based WSN localization from AOA and TDOA measurements
Hasan et al. Smart phone based sensor fusion by using Madgwick filter for 3D indoor navigation
Deng et al. Heading estimation fusing inertial sensors and landmarks for indoor navigation using a smartphone in the pocket
CN110072192B (en) WiFi indoor positioning method for smart phone
Hassan A performance model of pedestrian dead reckoning with activity-based location updates
CN117516517A (en) Passive fusion positioning method and system in indoor environment and electronic equipment
Wang et al. Recent advances in floor positioning based on smartphone
CN108981689B (en) UWB/INS combined navigation system based on DSP TMS320C6748
CN113686335B (en) Method for carrying out accurate indoor positioning by using IMU data through one-dimensional convolutional neural network
Huang et al. An intelligent and autonomous MEMS IMU/GPS integration scheme for low cost land navigation applications
CN108668254B (en) WiFi signal characteristic area positioning method based on improved BP neural network
Kang et al. Smartphone indoor positioning system based on geomagnetic field

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant