CN112212861B - Track restoration method based on single inertial sensor - Google Patents


Info

Publication number
CN112212861B
CN112212861B (application CN202010994902.7A)
Authority
CN
China
Prior art keywords
track
motion
sequence
points
geometric
Prior art date
Legal status: Active
Application number
CN202010994902.7A
Other languages
Chinese (zh)
Other versions
CN112212861A (en)
Inventor
赵毅
王一峰
汪洋
Current Assignee
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202010994902.7A
Publication of CN112212861A
Application granted
Publication of CN112212861B
Status: Active

Classifications

    • G01C21/16 — Navigation by integrating acceleration or speed, i.e. inertial navigation (dead reckoning executed aboard the object being navigated)
    • G06F16/29 — Geographical information databases
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods (neural networks)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)

Abstract

The invention relates to a track restoration method based on a single inertial sensor. The method first constructs geometric models of various basic motion tracks and forms a preliminary motion track from the acceleration and angular-velocity data measured by the inertial sensor. The preliminary motion track is accurately segmented into a number of basic motion tracks using an LSTM model and a similarity curve; the type of each basic motion track is predicted with a trained 1D-CNN model, and the values of its geometric parameters are accurately determined with a trained deep learning model. Each basic motion track is thereby accurately restored, and the restored basic motion tracks are spliced to finally obtain the restored track of the motion.

Description

Track restoration method based on single inertial sensor
Technical Field
The invention relates to the field of track restoration, and in particular to a track restoration method based on a single inertial sensor.
Background
A survey of related patents and literature in the IEEE database, the Wanfang database, and the CNKI academic library shows that existing motion-track restoration methods are essentially based on images and optical data; such data are acquired by cameras, which are inconvenient to carry.
Inertial sensors common on the market are small, but their accuracy is low and cannot meet the high-precision data requirement of the computation; consequently, there has been no research on restoring arbitrary tracks with a commercially common inertial sensor.
Disclosure of Invention
The invention aims to provide a track restoration method based on a single inertial sensor, which achieves accurate restoration of a motion track using the inertial sensor.
In order to achieve the purpose, the invention provides the following scheme:
a track restoration method based on a single inertial sensor comprises the following steps:
constructing geometric models of various basic motion tracks, and determining geometric parameters of each geometric model;
determining a preliminary motion track of the real-time motion action according to the three-dimensional acceleration and the three-dimensional angular velocity of the real-time motion action measured by the inertial sensor at each moment;
predicting an initial segmentation point between basic motion tracks in the preliminary motion track by using an LSTM model according to the three-dimensional acceleration, the three-dimensional angular velocity and the three-dimensional space coordinate data of the preliminary motion track;
determining a similarity curve of a preset spline function and a part of the preliminary motion trail between two initial segmentation points with the farthest distance;
determining a final segmentation point according to the similarity curve;
obtaining the type of a geometric model corresponding to the preliminary motion track between adjacent final segmentation points by using the trained 1D-CNN model according to the three-dimensional space coordinate data of the preliminary motion track;
obtaining the values of the geometric parameters of the geometric model of the preliminary motion track between adjacent final segmentation points by utilizing a trained deep learning model, according to the three-dimensional acceleration, the three-dimensional angular velocity, and the type of the geometric model corresponding to the preliminary motion track between adjacent final segmentation points;
according to the type of the geometric model corresponding to the preliminary motion track between adjacent final segmentation points and the values of its geometric parameters, completing the restoration of the basic motion track between adjacent final segmentation points, and obtaining the restored final basic motion track between each pair of adjacent final segmentation points;
and sequentially connecting the restored final basic motion tracks end to end to obtain the restored track of the motion action.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a track restoration method based on a single inertial sensor, which comprises the steps of firstly constructing geometric models of various basic motion tracks, forming a preliminary motion track according to acceleration and angular velocity data measured by the inertial sensor, accurately dividing the preliminary motion track by utilizing an LSTM model and a similarity curve, dividing the preliminary motion track into a plurality of basic motion tracks, predicting the type of each basic motion track by using a trained 1D-CNN model, accurately determining the value of a geometric parameter of each basic motion track by using a trained deep learning model, accurately restoring each basic motion track, splicing the basic motion tracks, finally obtaining the restored track of the motion, and realizing accurate restoration of the motion track only according to the measured data of a single inertial sensor.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a track restoration method based on a single inertial sensor according to the present invention;
FIG. 2 is a schematic diagram of a geometric model of a linear trajectory provided by the present invention; wherein, fig. 2(a) is a point on a linear trajectory, and fig. 2(b) is a geometric model of the linear trajectory;
FIG. 3 is a schematic diagram of a geometric model of an elliptical trajectory provided by the present invention;
FIG. 4 is a schematic diagram of a geometric model of a polyline type trajectory provided by the present invention; wherein, fig. 4(a) is a geometric model of a V-shaped track, and fig. 4(b) is a geometric model of an L-shaped track;
FIG. 5 is a schematic diagram of a first set of geometric models of S-shaped trajectories provided by the present invention;
FIG. 6 is a schematic view of the joints of a first set of S-shaped traces provided by the present invention;
FIG. 7 is a schematic diagram of a second set of geometric models of sigmoid trajectories provided by the present invention;
FIG. 8 is a schematic view of the misalignment of the joints of the second set of S-shaped traces provided by the present invention;
FIG. 9 is a schematic diagram of a smoothing process at a junction point when a Z-value changes in a second set of S-shaped traces according to the present invention; fig. 9(a) is a connection diagram after a preset Z value, fig. 9(b) is a diagram of Z value variation between two points simulated by a linear function, and fig. 9(c) is a diagram of variation of Z value simulated by a sine function;
FIG. 10 is a schematic diagram of a geometric model of a third set of S-shaped tracks provided by the present invention;
FIG. 11 is a schematic diagram of the preliminary trajectory restoration provided by the present invention;
FIG. 12 is a graph showing a sliding comparison of a spline function sequence and a trace sequence provided by the present invention; wherein, fig. 12(a) is a sliding comparison graph of a first spline function sequence and a track sequence, and fig. 12(b) is a sliding comparison graph of a second spline function sequence and a track sequence;
FIG. 13 is a similarity graph provided in accordance with the present invention;
FIG. 14 is a similarity curve obtained by fitting a third-order spline according to the present invention;
FIG. 15 is a schematic view of a segmentation point provided by the present invention;
FIG. 16 is a schematic illustration of a restored track provided by the present invention;
FIG. 17 is a schematic illustration of the curve non-smoothness at the connection point provided by the present invention;
FIG. 18 is a schematic diagram of a smoothed trajectory according to the present invention;
FIG. 19 is an original trajectory diagram of a second embodiment provided by the present invention;
FIG. 20 is a diagram of the final base motion trajectory for the second embodiment of the present invention;
FIG. 21 is a diagram illustrating a joint smoothing operation according to a second embodiment of the present invention;
FIG. 22 is a diagram of a restored track according to a second embodiment of the present invention;
fig. 23 is a comparison diagram of a restored track and an original track according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a track restoration method based on a single inertial sensor, which achieves accurate restoration of a motion track using the inertial sensor.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The invention can accurately restore any object motion track using a single, commercially common inertial sensor. Because such an inertial sensor has large errors, the object's motion track is difficult to obtain directly with attitude-solution and inertial-navigation algorithms. Therefore, some basic geometric models are established, any track is decomposed into a sum of several basic tracks, and accurate track restoration is finally achieved by predicting the geometric parameters of the basic tracks.
The invention provides a track restoration method based on a single inertial sensor, which comprises the following steps of:
s101, constructing geometric models of various basic motion tracks, and determining geometric parameters of each geometric model;
s102, determining a preliminary motion track of the real-time motion according to the three-dimensional acceleration and the three-dimensional angular velocity of the real-time motion measured by the inertial sensor at each moment;
s103, predicting an initial segmentation point between basic motion tracks in the preliminary motion track by using an LSTM (Long Short-Term Memory) model according to the three-dimensional acceleration, the three-dimensional angular velocity and the three-dimensional space coordinate data of the preliminary motion track;
s104, determining a similarity curve of a preset spline function and a part of the preliminary motion trail between two initial segmentation points with the farthest distance;
s105, determining a final segmentation point according to the similarity curve;
s106, obtaining the type of a geometric model corresponding to the initial motion track between the adjacent final segmentation points by using a trained 1D-CNN (Convolutional neural network) model according to the three-dimensional space coordinate data of the initial motion track;
s107, obtaining the values of the geometric parameters of the geometric model of the initial motion track between the adjacent final segmentation points by using the trained deep learning model according to the three-dimensional acceleration, the three-dimensional angular velocity and the type of the geometric model corresponding to the initial motion track between the adjacent final segmentation points;
s108, finishing the reduction of the basic action track between the adjacent final segmentation points according to the type of the geometric model corresponding to the initial motion track between the adjacent final segmentation points and the value of the geometric parameter, and obtaining the reduced final basic action track between each adjacent final segmentation point;
and S109, sequentially connecting the final basic motion tracks after reduction end to obtain the reduction tracks of the motion motions.
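The segmentation idea of steps S104 and S105 — sliding a preset spline-sampled sequence along the track sequence and reading candidate segmentation points off the resulting similarity curve — can be sketched in Python. The normalized-correlation score and all names here are illustrative assumptions, since the text does not fix the similarity metric:

```python
import numpy as np

def similarity_curve(track, template):
    # slide the template along the track; record one similarity score per offset
    n, m = len(track), len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    scores = []
    for i in range(n - m + 1):
        w = track[i:i + m]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores.append(float(np.dot(w, t) / m))  # normalized correlation in [-1, 1]
    return np.array(scores)

track = np.sin(np.linspace(0, 4 * np.pi, 200))   # toy 1-D coordinate sequence
template = np.sin(np.linspace(0, np.pi, 50))     # toy spline-sampled template
curve = similarity_curve(track, template)
best = int(np.argmax(curve))                     # candidate segmentation offset
```

In the method itself, peaks of such a curve (after third-order spline fitting, cf. Fig. 14) would indicate the final segmentation points.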
The implementation of the above steps will be described in detail below.
The multiple basic motion trajectories constructed in step S101 include: linear tracks, elliptical tracks, broken line tracks and three S-shaped tracks.
The invention models any motion track of an object by means of spatial 3D geometric models, finally achieving accurate restoration of the motion track. The geometric models should cover all variations of the motion track while keeping the number of model parameters as small as possible: since the parameters are predicted from the inertial-sensor data by a deep learning model, too many parameters and too complex a track model reduce the prediction accuracy.
The basic geometric models can be roughly divided into two main categories according to the complexity of the trajectory: a simple geometric model and a compound geometric model. The "simplex geometric model" includes: straight lines, arcs, conic sections, etc.; the 'composite geometric model' comprises: broken lines, "S" shaped curves, etc.
Although an S-shaped track can be decomposed into a combination of several arcs, and a broken line can likewise be regarded as a combination of straight lines, the final restoration quality depends not only on the prediction accuracy of the geometric-model parameters but also on the accuracy of segmenting the time-series data of the track. If an S-shaped track were divided into many arcs, the number of segmentations would increase and the errors introduced at the connection points would grow, causing unnecessary error. Therefore, frequently occurring shapes such as the S-shaped and broken-line tracks are themselves chosen as basic tracks, and geometric models are established for them.
A. Simplex geometric model:
the geometric models of the straight line, the ellipse and the conical curve are relatively simple, mature geometric equations (such as an ellipse equation) are provided, the parameter quantity is relatively less, and the parameter prediction is relatively easy.
The concrete modeling process of the simple track is illustrated by taking a straight-line track and an elliptic arc-line track as examples:
linear trajectory:
two-point equation of a straight line in space:
two points A (x) in a given space1,y1,z1) And B (x)2,y2,z2) The vector of the straight line passing through A, B is AB ═ x2-x1,y2-y1,z2-z1) Therefore, the two-point equation of the straight line L defined by A, B is:
Figure BDA0002692244100000051
any point M on the line segment AB0(x0,y0,z0) Satisfy the requirement of
Figure BDA0002692244100000061
And x0=αx1+(1-α)x2,0≤α≤1。
Thus, the geometric parameters of a linear trajectory are: m0(x0,y0,z0) As shown in fig. 2, fig. 2(a) shows a point on a linear trajectory, and fig. 2(b) shows a geometric model of the linear trajectory.
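The convex-combination form of a point on segment AB can be sampled directly; a minimal Python sketch (the function name is illustrative):

```python
import numpy as np

def line_segment(A, B, n=100):
    # sample n points M0 = alpha*A + (1 - alpha)*B, 0 <= alpha <= 1;
    # alpha = 1 gives A itself, alpha = 0 gives B
    alphas = np.linspace(0.0, 1.0, n)[:, None]
    return alphas * np.asarray(A, float) + (1.0 - alphas) * np.asarray(B, float)

pts = line_segment((0, 0, 0), (1, 2, 3), n=5)
```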
Elliptical trajectory, arc trajectory:
the arc-shaped trajectory can be considered as a part of the elliptical arc-shaped trajectory, and therefore, a geometric model of the elliptical trajectory is mainly described here.
An elliptical track can be described by the geometric equation of the ellipse.
The parametric equation of the ellipse model is determined by the centre point C(Cx, Cy, Cz) of the ellipse, the major-axis vector a = (ax, ay, az), and the minor-axis vector b = (bx, by, bz). For any point M(x(t), y(t), z(t)) on the ellipse, the parametric equation can be expressed as:
M(t) = C + a·cos t + b·sin t
namely:
x(t) = Cx + ax·cos t + bx·sin t
y(t) = Cy + ay·cos t + by·sin t
z(t) = Cz + az·cos t + bz·sin t
where 0 ≤ t < 2π. The elliptical geometric model finally obtained is shown in Fig. 3.
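A minimal Python sketch of this vector-form parametric model (the function name is illustrative):

```python
import numpy as np

def ellipse_points(C, a_vec, b_vec, n=360):
    # M(t) = C + a*cos(t) + b*sin(t), 0 <= t < 2*pi
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    C = np.asarray(C, float)
    a = np.asarray(a_vec, float)
    b = np.asarray(b_vec, float)
    return C + np.cos(t)[:, None] * a + np.sin(t)[:, None] * b

pts = ellipse_points((0, 0, 1), (2, 0, 0), (0, 1, 0), n=4)
```

An arc track is obtained simply by restricting t to a sub-interval of [0, 2π).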
B. A composite geometric model:
broken line type track:
the broken-line trajectory can be viewed as a combination of two straight lines in space, and thus the basic equation is derived from the equation of a spatial straight line.
Given a point M'0(x0, y0, z0) in space and a nonzero vector v = (m, n, p), there is one and only one straight line L passing through the point with v as its direction vector. If M'(x, y, z) is any point on L, then the vector M'0M' is parallel to v, so their coordinates are proportional; the point-direction equation of the straight line L is:
(x − x0)/m = (y − y0)/n = (z − z0)/p
The geometric model of the broken-line track consists of two line segments emitted from the same point M'0(x0, y0, z0); the point-direction equations of the lines containing the two segments are:
L1: (x − x0)/m1 = (y − y0)/n1 = (z − z0)/p1
L2: (x − x0)/m2 = (y − y0)/n2 = (z − z0)/p2
With the default M'0(x0, y0, z0) = (0, 0, 0), the two segments can be represented by the vectors (m1, n1, p1) and (m2, n2, p2), so the geometric parameters of the broken-line track are m1, n1, p1, m2, n2, p2.
The broken-line track occurs frequently in object motion tracking: for example, a user wearing an inertial sensor on the hand can write L- or V-shaped motions in the air. The resulting L and V geometric models are shown in Fig. 4, where Fig. 4(a) is the geometric model of a V-shaped track and Fig. 4(b) that of an L-shaped track.
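The broken-line model with the default vertex (0, 0, 0) can be sampled as follows (a sketch; the names and the traversal order are illustrative):

```python
import numpy as np

def polyline_track(v1, v2, n=50):
    # V/L-shaped model: two segments leaving the common vertex (0, 0, 0)
    # along direction vectors (m1, n1, p1) and (m2, n2, p2)
    s = np.linspace(0.0, 1.0, n)[:, None]
    seg1 = s * np.asarray(v1, float)
    seg2 = s * np.asarray(v2, float)
    # traverse segment 1 back to the vertex, then out along segment 2
    return np.vstack([seg1[::-1], seg2[1:]])

track = polyline_track((1, 0, 0), (0, 1, 1), n=10)
```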
S-shaped track:
the composite geometric model is generally complex, so a fixed geometric equation is not available for direct use, the complex geometric model is often required to be constructed, and meanwhile, different construction modes also have great influence on the final track reduction effect, so a plurality of sets of geometric models are constructed for complex track types.
The S-shaped trajectory in the three-dimensional space belongs to a more complicated type in the trajectory model, and therefore, a geometric model structure of the S-shaped trajectory will be described as an example.
And modeling the S curve in the space in the xoy plane, and mapping the S curve to the three-dimensional space in a three-dimensional coordinate system transformation mode after the shape is basically determined. Obviously, the euler angle during the rotation of the coordinate system is one of the model parameters.
The invention establishes three sets of 3D geometric models, and the following process is established for each model.
In the first set of models, the S-shaped curve is modeled in a rectangular plane coordinate system and divided into 6 segments, each a quarter arc with a specific arc length and radius, as shown in Fig. 5, where a1a2 is segment 1 (arc 1), a2a3 is segment 2 (arc 2), a3a4 is segment 3 (arc 3), a4a5 is segment 4 (arc 4), a5a6 is segment 5 (arc 5), and a6a7 is segment 6 (arc 6).
Modeling the S curve through a circular parameter equation, wherein the modeling steps are as follows:
the method comprises the following steps: generating a group of radius data by a radius parameter r and a radius variation parameter k, wherein the generation idea is as follows:
if n data are to be generated, i.e. r1,r2,…,ri,…,rn
First, using the numpy's distance function in python, from [ -1,1]In (iii) n values are recorded as a'1,a'2,…,a'i,…,a'n. Then by the formula
Figure BDA0002692244100000074
N radius data values are obtained.
Step two: parametric equation of combined circle
Figure BDA0002692244100000081
Data for one quarter circle is acquired. Theta is a point on the circle (x)i,yi) And the included angle between the line of the circle center and the horizontal direction.
Step three: establish six quarter circles following the above steps.
Step four: rotate the quarter circles into different orientations by rotations about the x-axis or the y-axis.
Step five: translate arcs 4, 5 and 6 to the right half of S by a translation operation, i.e. add the required offsets to the x and y data values of each quarter arc, forming the S figure in Fig. 5.
In this way, the size of each quarter arc of S can differ by adjusting its r parameter, and each quarter arc is made more convex or concave by increasing or decreasing its k parameter, so the arcs remain smooth while the curvature of the concave and convex parts stays adjustable. Under this geometric model, however, modifying the track by adjusting the radius parameter can itself cause problems: if the radius is adjusted too far, the curve becomes too flat or too convex, and the connection between the current arc and the next arc becomes problematic, as shown in Fig. 6.
Nevertheless, the method can simulate various S shapes to a certain extent and basically meets the later model-training requirement. Since the S-shaped track is divided into 6 segments, many parameters are needed: 12 parameters must be determined during simulation.
This set of geometric models is suitable for simulating S-shaped tracks with a gentle middle part; it also restores well S-shaped tracks that are complete and rounded.
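Steps one and two of the first model can be sketched in Python; the combination ri = r + k·a'i is an assumed reading of the radius formula:

```python
import numpy as np

def quarter_arc(r=1.0, k=0.2, n=64):
    # radius data from radius parameter r and variation parameter k
    # (r_i = r + k*a'_i is an assumed reading of the formula)
    a = np.linspace(-1.0, 1.0, n)
    radii = r + k * a
    # circle parametric equation over one quarter: theta in [0, pi/2]
    theta = np.linspace(0.0, np.pi / 2.0, n)
    return radii * np.cos(theta), radii * np.sin(theta)

x, y = quarter_arc()
```

Six such arcs, rotated and translated as in steps three to five, would assemble the S of Fig. 5.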
In order to improve the training effect of the subsequent deep learning model, a second set of S-shaped geometric trajectory models is established, as shown in FIG. 7.
In the second set of S-shaped geometric track models, a semi-ellipse parametric equation is used to fit the two large arcs, arc 1 and arc 3, directly. The steps are as follows:
the method comprises the following steps: using the numpy's linspace function in python, from [0, π]Uniformly taking n values, and recording as a1,a2,…,ai,…,an
Step two: by means of a width parameter wid and a height parameter high, using the parameter equation of an ellipse
Figure BDA0002692244100000082
A set of semi-elliptical data is generated.
Step three: and repeating the first step and the second step through different wid parameters and high parameters to obtain data required by two groups of semi-elliptical arcs.
Step four: the data for one of the semi-elliptical arcs is rotated about the y-axis and translated to the right, yielding arc 3.
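Steps one to four can be sketched in Python (the mirror offset used for arc 3 is illustrative):

```python
import numpy as np

def semi_ellipse(wid, high, n=100):
    # x = wid*cos(a_i), y = high*sin(a_i), a_i uniform on [0, pi]
    a = np.linspace(0.0, np.pi, n)
    return wid * np.cos(a), high * np.sin(a)

x1, y1 = semi_ellipse(1.0, 1.5)        # arc 1
x3, y3 = semi_ellipse(0.8, 1.2)        # second semi-ellipse for arc 3
x3 = -x3 + 2.0                         # mirror about the y-axis, then shift right
                                       # (the offset 2.0 is illustrative)
```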
The semi-elliptical shape also facilitates a smooth transition at the junctions. For the middle arc 2, S-shaped functions were first considered, such as the Sigmoid function
σ(x) = 1/(1 + e^(−x))
and the tanh function
tanh(x) = (e^x − e^(−x))/(e^x + e^(−x))
During modeling, however, it was found that the two ends of the Sigmoid and tanh curves lie too close to their asymptotes to fit the arc in S: if only the middle part is cut out, a misalignment appears at the junction with the semi-elliptical arcs, as shown in Fig. 8.
To solve the misalignment problem of the joint, the curve of the rotated cubic function is selected as the geometric locus model of the middle part of the S. The operation steps are as follows:
the method comprises the following steps: using the numpy's linspace function in python, from [ -1,1]In (iii) n values are recorded as a'1,a'2,…,a'i,…,a'n
Step two: abscissa data x of starting to construct arc 2i
Order to
Figure BDA0002692244100000093
Then, carrying out normalization operation: x is the number ofi=x'i/x1,i=n,n-1,...,1。
Step three: the degree of curvature of the arc 2 to be constructed, i.e., the abscissa data, is adjusted.
The adjustment is performed by using a parameter arc, and the operation formula is as follows:
Figure BDA0002692244100000094
step four: symmetrically inverting the obtained abscissa data, wherein the operation formula is as follows:
Figure BDA0002692244100000095
step five: starting to construct the ordinate data of arc 2, there is the formula:
Figure BDA0002692244100000096
step six: and (3) translating the data of the arc 2 to the right, and splicing the data into the S curve, so that the construction of the data of the arc 2 is completed.
With the geometric model basically determined, some parameters are added so that more forms of S can be simulated.
Since the eccentricity of an ellipse is limited to [0, 1) and cannot be made arbitrarily high, the model cannot be controlled through the eccentricity; instead, the wid and high parameters (for the formula see step two of the construction of arcs 1 and 3) allow the semi-elliptical arcs to be more or less curved. In addition, scaling parameters for arc 2 are introduced: in the design of the middle section, the cubic function is compressed by the normalization, so the size of the middle section is relatively fixed and hard to vary. Two stretching ratios size_1 and size_2 are therefore set, and the formula
xi = size_1·xi, yi = size_2·yi
magnifies the horizontal and vertical coordinate values of arc 2 by the respective factors.
At the same time, the arc parameter is designed to change the curvature of arc 2: owing to the normalization (see step two of the construction of arc 2), the x values all lie in [−1, 1], so the x value is adjusted through the arc parameter by a power operation (for the formula see step three of the construction of arc 2).
In summary, a second set of geometric models of S-shaped trajectories was obtained.
An S-shaped pattern in space rarely lies strictly within one plane, so a z-axis parameter is introduced on top of the existing S curve in the xoy plane. Seven characteristic points of S are selected, namely points a1, a2, a3, a4, a5, a6 and a7 in Fig. 7. Their z values are set separately and the points are then connected, as shown in Fig. 9(a). A first-order function was first used to simulate the z-value change between two points, as shown in Fig. 9(b); however, sharp points appear at the connection points, which clearly does not correspond to a continuous, smooth S-shaped track. To keep both ends continuous, the z-value variation is instead simulated with a sine function, as shown in Fig. 9(c), so that the junctions become smooth.
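The sine-based smoothing can be sketched as follows; the specific form sin²(πt/2), which has zero slope at both ends, is an assumed instance of the sine interpolation described:

```python
import numpy as np

def sine_interp(z0, z1, n=50):
    # z(t) = z0 + (z1 - z0) * sin(pi*t/2)**2 has zero slope at t = 0 and t = 1,
    # so adjacent segments join without the sharp points a linear ramp produces
    t = np.linspace(0.0, 1.0, n)
    return z0 + (z1 - z0) * np.sin(np.pi * t / 2.0) ** 2

z = sine_interp(0.0, 1.0, n=5)
```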
The position and orientation of S are then easily matched through coordinate translation and rotation transformation of the coordinate system.
To better simulate the geometric variation of S-shaped tracks in space, a track-cutting function and a stretching function are introduced. The track-cutting function cuts off the initial or end part of the S-shaped track proportionally, so S shapes lacking a semicircular arc can also be simulated; the stretching function scales the x, y and z coordinates of S up or down to fit S shapes of different sizes.
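The cutting and stretching functions can be sketched as follows (names and signatures are illustrative):

```python
import numpy as np

def cut(track, head=0.0, tail=0.0):
    # drop the leading/trailing fractions of the track, proportionally
    n = len(track)
    return track[int(n * head): n - int(n * tail)]

def stretch(track, sx=1.0, sy=1.0, sz=1.0):
    # scale x, y, z coordinates to fit S shapes of different sizes
    return np.asarray(track, float) * np.array([sx, sy, sz])

track = np.ones((10, 3))
out = stretch(cut(track, head=0.2), sx=2.0)
```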
The first and second sets of models have numerous parameters. Although they can essentially cover the variation space of the S-shaped trajectory, the complicated parameters and complex geometric configurations make prediction very difficult. Moreover, different users have different writing habits: an ordinary way of writing does not necessarily correspond to ordinary parameter values of S, and likewise an S-shaped trajectory generated by ordinary parameter values is not necessarily a habitually written shape. The present invention therefore provides a third set of geometric models of S-shaped trajectories, as shown in FIG. 10.
The basic process of the step S102 is as follows:
resolving the motion attitude of the real-time motion action in real time according to the three-dimensional angular velocity;
determining an object coordinate system according to the motion attitude;
decomposing the gravity from a ground coordinate system to an object coordinate system by using a coordinate system transformation matrix;
subtracting the gravity acceleration under the object coordinate system from the three-dimensional acceleration to obtain the linear acceleration at each moment under the object coordinate system;
performing double integration on the linear acceleration to determine the displacement at each moment;
sequentially connecting the displacements at all moments together according to a time sequence to form a first motion track of the real-time motion action;
and correcting the first motion track by using a particle filter algorithm, and determining a preliminary motion track of the real-time motion action.
The specific process of the step S102 is as follows:
the accelerometer measures data in a relative coordinate system established at the position of the moving object, and the measurements include the acceleration due to gravity (i.e., the acceleration reading is [0 0 g] when the object rests on the XoY plane). To restore the whole motion track of the object, the linear acceleration of the object's motion is obtained by subtracting the gravity component from the accelerometer measurements, and the motion track can then be fully converted to the ground coordinate system through the attitude matrix. Trajectory restoration therefore centers on attitude updating and displacement conversion.
① Obtaining the attitude matrix
Using the accelerometer data a = [ax ay az] and gyroscope data g = [gx gy gz] at time t, together with the attitude quaternion Qt−1 = [q0 q1 q2 q3]t−1 at time t−1, the quaternion Qt = [q0 q1 q2 q3]t and the quaternion transformation matrix Mt at time t are obtained by updating. The specific steps are as follows:
(1) The accelerometer data are normalized so that the values lie between −1 and +1:

ax = ax/√(ax² + ay² + az²), ay = ay/√(ax² + ay² + az²), az = az/√(ax² + ay² + az²)
(2) Extract the gravity components from the quaternion equivalent cosine matrix. With the quaternion known, the quaternion matrix is derived as:

M = [ q0²+q1²−q2²−q3²   2(q1q2−q0q3)      2(q1q3+q0q2)   ]
    [ 2(q1q2+q0q3)      q0²−q1²+q2²−q3²   2(q2q3−q0q1)   ]
    [ 2(q1q3−q0q2)      2(q2q3+q0q1)      q0²−q1²−q2²+q3² ]

Using the normalized gravity vector [0 0 1], the gravity component, i.e. the third row of the quaternion matrix, can be extracted:

vx = 2(q1q3 − q0q2), vy = 2(q0q1 + q2q3), vz = q0² − q1² − q2² + q3²
(3) The cross product of vectors can be used to measure how far two directions differ: for two unit vectors its magnitude is the sine of the angle between them, so it is 0 when they are parallel and reaches 1 when they are perpendicular, and the smaller the angle between the two vectors, the smaller the cross product. The cross product therefore represents the direction error of the two normalized vectors. Taking the cross product of the accelerometer data and the gravity component gives:

ex = ay·vz − az·vy, ey = az·vx − ax·vz, ez = ax·vy − ay·vx
(4) Integrate (accumulate) the gravity error with integral coefficient Ki to obtain the accumulated error:

exInt = exInt + ex·Ki, eyInt = eyInt + ey·Ki, ezInt = ezInt + ez·Ki
The gravity error at the current moment, multiplied by a coefficient Kp, is then added to the gyroscope data, together with the Ki-accumulated gravity error from the previous step, to correct the gyroscope data:

gx = gx + Kp·ex + exInt, gy = gy + Kp·ey + eyInt, gz = gz + Kp·ez + ezInt
(5) Solve the quaternion with first-order Runge-Kutta integration, fusing the accelerometer-corrected gyroscope data into the quaternion:

q0 = q0 + (−q1·gx − q2·gy − q3·gz)·Δt/2
q1 = q1 + (q0·gx + q2·gz − q3·gy)·Δt/2
q2 = q2 + (q0·gy − q1·gz + q3·gx)·Δt/2
q3 = q3 + (q0·gz + q1·gy − q2·gx)·Δt/2
The calculated quaternion is normalized to obtain the new quaternion of the object after rotation:

Q = Q/√(q0² + q1² + q2² + q3²)
The new quaternion then gives the quaternion transformation matrix at time t:

Mt = [ q0²+q1²−q2²−q3²   2(q1q2−q0q3)      2(q1q3+q0q2)   ]
     [ 2(q1q2+q0q3)      q0²−q1²+q2²−q3²   2(q2q3−q0q1)   ]
     [ 2(q1q3−q0q2)      2(q2q3+q0q1)      q0²−q1²−q2²+q3² ]
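The five attitude-update steps above can be sketched in Python as one complementary-filter update (a hedged sketch: the gains kp and ki and the period dt are illustrative values, not the patent's):

```python
import numpy as np

def attitude_update(q, a, g, e_int, kp=2.0, ki=0.005, dt=0.01):
    """One accelerometer-corrected quaternion update, following steps (1)-(5).
    q: attitude quaternion [q0,q1,q2,q3]; a: accelerometer sample;
    g: gyroscope sample (rad/s); e_int: accumulated gravity error."""
    q0, q1, q2, q3 = q
    a = np.asarray(a, float)
    a = a / np.linalg.norm(a)                       # (1) normalize accelerometer data
    v = np.array([2*(q1*q3 - q0*q2),                # (2) gravity direction predicted by
                  2*(q0*q1 + q2*q3),                #     the quaternion (matrix third row)
                  q0*q0 - q1*q1 - q2*q2 + q3*q3])
    e = np.cross(a, v)                              # (3) direction error (cross product)
    e_int = np.asarray(e_int, float) + e * ki       # (4) integral feedback, then
    gx, gy, gz = np.asarray(g, float) + kp * e + e_int  # proportional+integral correction
    dq = 0.5 * dt * np.array([-q1*gx - q2*gy - q3*gz,   # (5) first-order Runge-Kutta
                               q0*gx + q2*gz - q3*gy,
                               q0*gy - q1*gz + q3*gx,
                               q0*gz + q1*gy - q2*gx])
    q = np.asarray(q, float) + dq
    return q / np.linalg.norm(q), e_int             # renormalized quaternion
```

For a stationary sensor with gravity along z, the predicted and measured gravity directions coincide, the error term is zero, and the quaternion is unchanged.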
② Track restoration
From the matrix Mt obtained in the previous step, the attitude of the object at each moment is known, so the component of gravitational acceleration in the object coordinate system can be subtracted to obtain the linear acceleration of the object's actual motion. Approximately, the sensor can be considered to be in uniform motion during the interval between data samples. Physical quantities can then be converted between the object coordinate system and the ground coordinate system using the attitude quaternion matrix.
Using the accelerometer data at^object at time t and the attitude quaternion matrix Mt, the velocity Vt^earth can be calculated, and then the displacement St. Finally, the displacement points at each moment are connected together to obtain the motion track of the object. The specific steps are as follows:

(1) At time t, the velocity Vt−1^object of time t−1 is converted into Vt−1^earth through the matrix Mt corresponding to the quaternion Qt of time t. At the same time, the component of gravity G on the object is subtracted from the accelerometer data at^object to obtain the linear acceleration in the object coordinate system, which the transformation matrix Mt then converts to at^earth. The relationship is:

at^earth = Mt·(at^object − Mt^T·G)

(2) The calculated acceleration at^earth is used to update the velocity Vt^earth at time t, which, combined with the velocity Vt−1^earth at time t−1 and the displacement St−1, gives the displacement St at time t. It is assumed that the velocity of the object in its own object coordinate system remains unchanged when the attitude of the object changes. The quaternion matrices before and after the attitude change differ, so although the velocity in the object coordinate system remains unchanged, its projection in the ground coordinate system differs. Therefore, in order to reflect the change of velocity in the ground coordinate system after the attitude changes, Vt^earth is converted back to the object coordinate system as Vt^object through the attitude matrix Mt; on the next cycle t+1, Vt^object is restored to the ground coordinate system as Vt^earth with the updated attitude matrix Mt+1. The relationship is:

Vt^earth = Vt−1^earth + at^earth·Δt
St = St−1 + Vt−1^earth·Δt + (1/2)·at^earth·Δt²
Vt^object = Mt^T·Vt^earth
The displacement point is thus calculated, and the process is repeated so that the displacement at every moment is obtained; when the displacements St at all moments are connected together, the motion track of the object is obtained, as shown in fig. 11.
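One cycle of the displacement recursion above can be sketched as follows (a sketch assuming constant acceleration within each sampling period; the function name is illustrative):

```python
import numpy as np

def step_trajectory(M_t, a_obj, G, v_obj_prev, s_prev, dt):
    """One displacement update of step ②: remove gravity in the object frame,
    rotate the linear acceleration to the ground frame, integrate velocity and
    displacement, then hand the velocity back in the object frame for the next
    cycle (where it will be re-projected with the updated attitude matrix)."""
    M_t = np.asarray(M_t, float)
    g_obj = M_t.T @ np.asarray(G, float)                # gravity seen in the object frame
    a_earth = M_t @ (np.asarray(a_obj, float) - g_obj)  # linear acceleration, ground frame
    v_prev_earth = M_t @ np.asarray(v_obj_prev, float)  # previous velocity, current attitude
    v_earth = v_prev_earth + a_earth * dt
    s_t = np.asarray(s_prev, float) + v_prev_earth * dt + 0.5 * a_earth * dt ** 2
    return M_t.T @ v_earth, s_t                         # velocity stored in the object frame
```

Running this over all samples and collecting the s_t values yields the displacement points that are connected into the track.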
The above related variable symbolic meanings are:
1. The triaxial accelerometer data measured by the sensor are a = [ax ay az];
2. The triaxial gyroscope data measured by the sensor are g = [gx gy gz];
3. The measurement period of the sensor is Δt, i.e. the frequency is 1/Δt;
4. The attitude quaternion is recorded as Q = [q0 q1 q2 q3];
5. The acceleration of gravity is recorded as G = [Gx Gy Gz];
6. The gravity error integral is eInt = [exInt eyInt ezInt];
7. The attitude quaternion matrix is denoted M (ground-coordinate values = M × object-coordinate values); from the properties of an orthogonal matrix, M^T represents the inverse transformation (object-coordinate values = M^T × ground-coordinate values);
8. a ground coordinate system: an absolute coordinate system, i.e., a geographical NED (North East Down) coordinate system, does not change with the movement of the object;
9. an object coordinate system: establishing a three-dimensional coordinate system by using the position of the object where the sensor is located;
10. Quantities are written in the form Xt^system, where X represents a physical quantity, the subscript t represents a time point, and the superscript system represents the coordinate system (earth, object) in which the quantity is expressed.
In step S103, the output vector of the LSTM model is y = (y1, y2, y3 … ym, −2000, −2000 … −2000); when all the values appearing subsequently in the LSTM output are found to be less than zero (a run of consecutive −2000 values), the segmentation is judged to be over;
where y represents the output vector, and y1, y2, y3 and ym respectively represent the first, second, third and m-th time positions of the initial segmentation points.
The segmentation points predicted by the LSTM model give an approximate range of the segmentation points but are not necessarily accurate; they are therefore adjusted on this basis, and steps S104 and S105 constitute the refinement process for the segmentation nodes.
In step S102, a preliminary motion track of the object's motion has already been calculated. Track calculation requires double integration of the acceleration, and the acceleration itself requires the current attitude to be calculated from the angular-velocity data and the gravity component removed, so the final track is very sensitive to the sensor data: even a tiny data error can cause large distortion of the track. Although this creates a huge obstacle to track restoration, the same sensitivity is an advantage for data segmentation. At the junction between one basic motion and the next, the acceleration and angular-velocity data often change markedly, and the change is very conspicuous in the preliminary restored track. Such changes can therefore be detected, and an accurate segmentation position determined from the detection result.
The invention sets two spline functions of straight line and arc line, and respectively executes the steps S104 and S105 by the straight line spline function and the arc line spline function, and takes the total segmentation point determined by the two spline functions as the final segmentation point.
The basic process of step S104 is:
the preset spline function slides along the preliminary motion trajectory, starting from an initial segmentation point at a first time point on part of the preliminary motion trajectory (sliding comparison graphs of a spline-function sequence and a trajectory sequence are shown in fig. 12: fig. 12(a) for the first spline-function sequence and fig. 12(b) for the second), and the Frechet distance between the preset spline-function sequence after each slide and the part of the preliminary motion-trajectory sequence is calculated, which specifically includes:
generating a distance-value matrix F of size p × q, with all matrix elements initialized to −1, according to the sequence length p of the preset spline-function sequence and the sequence length q of the partial preliminary motion-trajectory sequence;
calculating the distance value F[i, j] for the length-value parameters i and j, and judging whether F[i, j] is larger than −1, giving a first judgment result;
if the first judgment result shows yes, taking F[i, j] as the Frechet distance between the preset spline-function sequence and the partial preliminary motion-trajectory sequence;
if the first judgment result shows no, respectively judging whether i is equal to 0 and j is equal to 0 to obtain a second judgment result;
if the second judgment result indicates that i is greater than 0 and j is greater than 0, the distance value F[i−1, j] of length-value parameters i−1 and j, the distance value F[i, j−1] of i and j−1, and the distance value F[i−1, j−1] of i−1 and j−1 are calculated, together with the Euclidean distance D(ai, bj) between the i-th spatial coordinate point of the preset spline-function sequence and the j-th spatial coordinate point of the partial preliminary motion-trajectory sequence; F[i, j] is determined using the formula F[i, j] = max(min(F[i−1, j], F[i, j−1], F[i−1, j−1]), D(ai, bj)), and F[i, j] is taken as the Frechet distance Fd(P, Q) of the preset spline-function sequence and the partial preliminary motion-trajectory sequence;
if the second judgment result indicates that i is greater than 0 and j is equal to 0, F[i−1, 0] and D(ai, b0) are calculated; F[i, 0] is determined using the formula F[i, 0] = max(F[i−1, 0], D(ai, b0)) and taken as the Frechet distance Fd(P, Q);
if the second judgment result indicates that i is equal to 0 and j is greater than 0, F[0, j−1] and D(a0, bj) are calculated; F[0, j] is determined using the formula F[0, j] = max(F[0, j−1], D(a0, bj)) and taken as the Frechet distance Fd(P, Q);
if the second judgment result indicates that i is equal to 0 and j is equal to 0, D(a0, b0) is calculated, giving F[0, 0] = D(a0, b0), and F[0, 0] is taken as the Frechet distance Fd(P, Q);
wherein 0 ≤ i ≤ p−1 and 0 ≤ j ≤ q−1; ai is the i-th spatial coordinate point of the preset spline-function sequence; bj is the j-th spatial coordinate point of the partial preliminary motion-trajectory sequence; F[i−1, 0] is the distance value of length-value parameters i−1 and 0; D(ai, b0) is the Euclidean distance between the i-th spatial coordinate point ai of the preset spline-function sequence and the 0-th spatial coordinate point b0 of the partial preliminary motion-trajectory sequence; F[0, j−1] is the distance value of length-value parameters 0 and j−1; D(a0, bj) is the Euclidean distance between the 0-th spatial coordinate point of the preset spline-function sequence and the j-th spatial coordinate point of the partial preliminary motion-trajectory sequence; D(a0, b0) is the Euclidean distance between the 0-th spatial coordinate point a0 of the preset spline-function sequence and the 0-th spatial coordinate point b0 of the partial preliminary motion-trajectory sequence.
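The memoized recursion described above can be written directly as follows (a sketch; for very long sequences an iterative table fill would avoid Python's recursion limit):

```python
import numpy as np

def frechet_distance(P, Q):
    """Discrete Frechet distance between two point sequences P and Q, following
    the recursion above: F is initialized to -1 and filled on demand."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    p, q = len(P), len(Q)
    F = np.full((p, q), -1.0)

    def D(i, j):
        # Euclidean distance between the i-th point of P and j-th point of Q
        return np.linalg.norm(P[i] - Q[j])

    def c(i, j):
        if F[i, j] > -1:                       # already computed
            return F[i, j]
        if i == 0 and j == 0:
            F[i, j] = D(0, 0)
        elif i > 0 and j == 0:
            F[i, j] = max(c(i - 1, 0), D(i, 0))
        elif i == 0 and j > 0:
            F[i, j] = max(c(0, j - 1), D(0, j))
        else:
            F[i, j] = max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)), D(i, j))
        return F[i, j]

    return c(p - 1, q - 1)
```

For two parallel horizontal polylines one unit apart, the distance is exactly 1.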
According to the Frechet distance, determining the similarity between the preset spline function sequence and part of the preliminary motion trail sequence at each moment, which specifically comprises the following steps:
using the formula

R = Σ (from i = 0 to p−2) D(ai, ai+1)

the modular length R of the preset spline-function sequence is calculated;
according to the modular length R of the preset spline-function sequence and the Frechet distance Fd(P, Q) between the preset spline-function sequence and the partial preliminary motion-trajectory sequence, the similarity Accuracy of the preset spline-function sequence and the partial preliminary motion-trajectory sequence at each moment is calculated using the formula

Accuracy = 1 − Fd(P, Q)/R
wherein ai+1 is the (i+1)-th spatial coordinate point of the preset spline-function sequence, and D(ai, ai+1) is the Euclidean distance between the i-th spatial coordinate point ai and the (i+1)-th spatial coordinate point ai+1 of the preset spline-function sequence.
The similarity at all times constitutes a similarity curve, as shown in fig. 13.
And S105, specifically comprising:
invoking the interp1d function (one-dimensional data interpolation function) of scipy.interpolate with the data points on the similarity curve and the parameter 'cubic', and fitting to obtain a function of the similarity curve; the similarity curve fitted by the cubic spline function is shown in fig. 14;
the abscissa interval [xd, xd+1] of the similarity curve is divided into (xd+1 − xd)/δ small intervals, the first derivative of the function is calculated on each small interval, and whether all the first derivatives in the interval [xd, xd+1] are larger than a threshold is judged, giving a third judgment result;
if the third judgment result shows yes, xd is labeled derivitive_x[d] = 1 and xd is discarded;
if the third judgment result shows no, xd is labeled derivitive_x[d] = 0, and the point corresponding to xd is taken as a segmentation candidate point;
using the linspace function in numpy, D′/δ points are uniformly selected in the abscissa interval [xd, xd + D′] starting from the segmentation candidate point; the ordinates of these D′/δ points are obtained from the function of the similarity curve, and the maximum value maxy and minimum value miny of those ordinates are selected;
if the difference between the maximum value maxy and the minimum value miny is larger than a threshold A, the segmentation candidate point is deleted;
if the variance of the ordinates corresponding to the D′/δ points is larger than a threshold VAR, the segmentation candidate point is deleted, the variance being calculated with the var function of numpy;
using the quad function (integral function) of scipy, the integral of the difference between the similarity-curve function and the minimum value miny over the interval [xd, xd + D′] is calculated; if the integral is larger than a threshold B, the segmentation candidate point is deleted;
the segmentation candidate points that are not deleted are taken as final segmentation points, and the time positions of the final segmentation points are determined, as shown in fig. 15.
wherein xd and xd+1 are the abscissas of two points on the similarity curve, δ is a constant, and D′ is the distance between the abscissas of two candidate points.
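The three rejection tests can be sketched as follows (a numpy-only sketch: the trapezoidal sum stands in for the scipy quad call, and the threshold values A, VAR and B are illustrative placeholders, not the patent's values):

```python
import numpy as np

def keep_candidate(y, delta, A=0.5, VAR=0.05, B=2.0):
    """Screen one segmentation candidate; y holds the similarity values sampled
    every `delta` over the interval [x_d, x_d + D'] around the candidate."""
    y = np.asarray(y, float)
    if y.max() - y.min() > A:          # range test: maxy - miny
        return False
    if y.var() > VAR:                  # variance test
        return False
    z = y - y.min()                    # integral of (curve - miny), trapezoidal rule
    if delta * float((z[:-1] + z[1:]).sum()) / 2.0 > B:
        return False
    return True
```

A flat similarity plateau survives all three tests, while a steeply varying stretch of the curve is rejected.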
The principle of step S106 is: although the preliminary track-restoration result obtained in step S102 is far from the real motion track of the object, its rough form and trend can substantially reflect the type of the motion track. A 1D-CNN model is therefore established; 80% of the preliminary track-restoration results obtained in step S102 are randomly selected as the training set and the remaining 20% as the test set, and the model is trained on the training set, finally yielding a classification model capable of identifying the track type. The classification model takes as input the track-restoration data obtained in step S102 (with dimensions 3 × n) and outputs the track type (A1, A2, A3 … Ak), where A is one of the geometric models of the basic motion tracks.
Step S107, the method also comprises the following steps:
acquiring a training data set; the training data set comprises a plurality of groups of basic motion track data, wherein the basic motion track data comprise a geometric model of a basic motion track, geometric parameters of the geometric model and three-dimensional acceleration and three-dimensional angular velocity of the basic motion track measured by the inertial sensor at each moment;
taking an ALM neural network as a deep learning model, and taking a Frechet distance as a loss function of the deep learning model;
and training the ALM neural network by using the training data set to obtain the trained ALM neural network.
The manufacturing process of the training data set comprises the following 4 steps:
Step 1: fix the inertial sensor to the wrist, and attach a light-sensitive reflective ball to it (the accurate motion trajectory of the object in space must be captured with the aid of a three-dimensional optical positioning device, which is used for labeling the collected inertial-sensor data).
The optical sensor and the inertial sensor are adjusted to the same sampling frequency (the MEMS six-axis inertial sensor can measure the three-dimensional acceleration and three-dimensional angular velocity of a moving object, and is small and easy to wear).
After switching on both sensors, a specific and fast "marking action" is first performed, for example striking the ground. This action is distinctive in both sensor data records and, owing to its short execution time, is easy to locate. It helps align the data of the two sensors and establish a frame-to-frame correspondence. After the marking action, arbitrary hand movement begins, drawing arbitrary curves in space, with each group lasting no more than 10 seconds. After each group is completed, the inertial sensor is restarted to reduce the accumulated error in the gyroscope data.
Suppose the inertial sensor is turned on at time t1, the optical sensor is turned on at time t2, and the marking action occurs at time t3; obviously t3 > t2 and t3 > t1. In the data collected by the two sensors, all parts before time t3 are deleted, and all data after time t3 correspond to each other frame by frame.
Step 1 outputs the data collected by the inertial sensor (three-dimensional acceleration and three-dimensional angular-velocity data) and by the optical sensor (three-dimensional spatial coordinate data), with the two types of sensor data in frame-to-frame correspondence. There are 5000 sets of motion data, and the valid time (i.e., the time after t3) of each set does not exceed 10 seconds.
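The frame alignment via the marking action might be sketched as follows (hypothetical: the magnitude-jump detector and its threshold are assumptions for illustration, not the patent's method of locating the marking action):

```python
import numpy as np

def align_streams(inertial, optical, threshold=3.0):
    """Align two recordings on the 'marking action': find the first sample in
    each stream whose frame-to-frame magnitude jump exceeds the threshold,
    delete everything before it, and truncate both to a common length."""
    def mark_index(x):
        jump = np.linalg.norm(np.diff(x, axis=0), axis=1)
        return int(np.argmax(jump > threshold))
    i, j = mark_index(inertial), mark_index(optical)
    a, b = inertial[i:], optical[j:]
    n = min(len(a), len(b))
    return a[:n], b[:n]
```

After trimming, frame k of the inertial stream corresponds to frame k of the optical stream, which is the frame-to-frame correspondence used for labeling.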
Step 2: motion segmentation (data segmentation) is done from the optical sensor data.
Since the two sensor data obtained from step 1 correspond exactly at each instant, the video data collected by the optical sensor is observed. The 5000 sets of motion data collected in step 1 are operated as follows:
The motion track of each group of actions is segmented manually from the video data, so that each segment approximately satisfies one of the geometric models preset in the first part. The segmented trajectories are called basic motion trajectories or basic trajectories. For the M-th action (M = 1, 2, 3 … 5000), let its time-series data be N frames (1, 2 … n … N); the action is divided into k basic motion trajectories, i.e. k segments of original data, and the relationship between each segment and the original data is:
1st basic motion trajectory: 1, 2 … n1 (n1 being the termination frame of the data);
2nd basic motion trajectory: n1, n1+1, n1+2 … n2 (n2 being the termination frame of the data);
……
k-th basic motion trajectory: nk−1, nk−1+1, nk−1+2 … N (N being the termination frame of the data).
After the division, the M-th action yields k sets of time-series data of different lengths. Note that during data segmentation the trajectory type must also be determined manually for each segment:
Type of the 1st basic motion trajectory: A1;
Type of the 2nd basic motion trajectory: A2;
……
Type of the k-th basic motion trajectory: Ak;
wherein Ak is one of the geometric models pre-formulated in the first part.
And step 3: and (6) processing sensor data.
Optical sensor data processing:
Due to occlusion by the human body, the track data collected by the camera may have missing values; in addition, moving the hand too fast also causes missing values. There are two processing methods:
When few values are missing (fewer than 10 frames), cubic-spline interpolation is used to fill them in. Suppose the missing data start from the n-th frame and i frames are missing consecutively, i.e. the data corresponding to tn, tn+1, tn+2 …, tn+i−1 (i < 10) are missing. The valid data from tn−80 to tn+20 are then used with a cubic spline function, calling the interpolate function of scipy in python: inputting the data from tn−80 to tn+20 yields the interpolation function f, and the values of f at tn, tn+1, tn+2 …, tn+i−1 (i < 10) complete the missing data.
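The gap-filling step can be sketched with scipy (a simplification: the whole valid record is used as the interpolation support here, instead of the tn−80 to tn+20 window described above):

```python
import numpy as np
from scipy.interpolate import interp1d

def fill_gap(t, x):
    """Fill NaN gaps in one coordinate channel by cubic interpolation over the
    surrounding valid frames. t: frame times; x: values with NaN where missing."""
    t = np.asarray(t, float)
    x = np.asarray(x, float).copy()
    valid = ~np.isnan(x)
    f = interp1d(t[valid], x[valid], kind='cubic')  # cubic spline through valid frames
    x[~valid] = f(t[~valid])                        # evaluate at the missing frames
    return x
```

For a short gap in a smooth curve, the reconstruction error is small; gaps longer than 10 frames are instead handled by discarding the whole action, as stated below.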
If more than 10 frames are missing, the action corresponding to the data is deleted as a whole.
Inertial sensor data processing:
The six-axis motion data collected by the inertial sensor contain a great deal of noise and therefore require Gaussian filtering to remove part of it. See the examples for the specific parameters.
And 4, step 4: the geometric label of the underlying trajectory is obtained from the optical sensor data.
The advantage of the optical sensor is that its "what you see is what you get" track data are very accurate, so the tracks it provides are used to compute "labels", which are then associated with the inertial-sensor data.
The specific steps are as follows: after the optical-sensor data are segmented and screened in the previous steps, the 5000 sets of initial motion data finally generate N sets of basic track data. Let the optical-sensor data corresponding to the n-th set of basic tracks be W, and let the geometric model corresponding to this basic track be M(p1, p2, p3 … pj), where p1, p2, p3 … pj are the parameters of the geometric model M to be determined. The parameter space is searched with a grid-search algorithm (prior art), yielding a large number of models (M1, M2, M3 … Mk) corresponding to different parameter combinations. The Frechet distance between each of (M1, M2, M3 … Mk) and the track measurement W is then calculated, and the parameter combination of the model with the minimum distance,

(p1, p2, p3 … pj)* = argmin over (M1 … Mk) of Fd(Mi, W),

is taken as the label of the n-th set of basic tracks.
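The grid-search labeling can be sketched as follows (the half_circle model family and the grid values are hypothetical stand-ins for the geometric models M(p1 … pj); the Frechet distance is computed with an iterative table fill):

```python
import numpy as np
from itertools import product

def frechet(P, Q):
    """Iterative discrete Frechet distance between two point sequences."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    p, q = len(P), len(Q)
    F = np.zeros((p, q))
    for i in range(p):
        for j in range(q):
            d = np.linalg.norm(P[i] - Q[j])
            if i == 0 and j == 0:
                F[i, j] = d
            elif i == 0:
                F[i, j] = max(F[0, j - 1], d)
            elif j == 0:
                F[i, j] = max(F[i - 1, 0], d)
            else:
                F[i, j] = max(min(F[i - 1, j], F[i, j - 1], F[i - 1, j - 1]), d)
    return F[-1, -1]

def label_track(W, model, grid):
    """Grid-search the model's parameter space; return the parameter combination
    whose generated trajectory is closest to the optical measurement W."""
    best, best_d = None, np.inf
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        d = frechet(model(**params), W)
        if d < best_d:
            best, best_d = params, d
    return best, best_d

# Hypothetical one-parameter model family: half-circles of radius r.
def half_circle(r):
    th = np.linspace(0.0, np.pi, 20)
    return np.c_[r * np.cos(th), r * np.sin(th)]
```

Given a measurement generated by the model at r = 2, the search recovers that parameter as the label.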
The deep learning model is the core part for predicting the geometric parameters of the basic track. In geometric-parameter prediction, up to a dozen or so parameters must be predicted, each of which affects the formation of the final trajectory. Meanwhile, the training samples are clearly insufficient relative to the difficulty of the problem; facing such a small-sample problem, the information in the limited data must be exploited to the greatest extent. The model is therefore constructed as follows:
Model main body: a predictive learning-machine method oriented to small-sample time-series prediction is applied. The method combines the random-distribution embedding theory of dynamics and adopts a new spatio-temporal information conversion method, thereby establishing the ALM neural network. The ALM (Augmented Lagrange multiplier) neural network is a deep learning model for short, small-sample time-series analysis. Compared with the approximate mapping of conventional learning, ALM has excellent nonlinear function-learning capability and can therefore better simulate STI (Spatial-Temporal Information transformation) mapping. In addition, Dropout (random inactivation) in ALM simulates the random-sampling process well, so ALM can synthesize the dynamical information of multiple sub-sampling systems for accurate prediction. As a recent time-series prediction method and a deep learning model aimed specifically at short-time, small-sample data, the ALM neural network forms the main body of the model.
Model input: the input is the six-axis motion data collected by the inertial sensor (data segmentation having been completed in step 2) and the coarse trajectory-restoration result (three axes) obtained in step S102; combining them gives nine-axis input data, i.e. a 9 × n input matrix (n being the number of time-series frames corresponding to the action).
Output and loss function of the model: the Frechet distance is introduced as the loss function of the deep learning model. The Frechet distance measures the similarity between two tracks while taking the order of positions in time into account, so it gauges the accuracy of the model output more effectively; using it as the training loss markedly improves the model's performance on the track-restoration task.
In this step, a deep learning model for geometric-parameter prediction is trained: given the segmented sensor time-series data, it can accurately predict the geometric parameters and thereby complete the restoration of the basic trajectory, as shown in fig. 16.
Step S108, integrating and splicing the basic motion tracks: after the basic motion tracks are restored, they are still independent of each other. To form the final track, they must be spliced end to end. The overall spatial position of the whole track cannot be obtained (inertial-sensor data cannot provide the absolute spatial position of the object), so only the shape of the track is of concern during restoration. Therefore, the initial point of the whole track (i.e., the initial point of the first basic track) is set as the origin (0, 0, 0) of the three-dimensional coordinate system, and the initial point of each subsequent basic track is set to the end point of the previous one.
Assuming the trajectory is decomposed into k basic motion trajectories, their initial and end points are shown in Table 1.
TABLE 1 initial and terminal points of the basic motion trajectory
Basic track number | Spatial start coordinate | Spatial end coordinate
1 | (0, 0, 0) | (x1, y1, z1)
2 | (x1, y1, z1) | (x2, y2, z2)
…… | …… | ……
k | (xk−1, yk−1, zk−1) | (xk, yk, zk)
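The end-to-end splicing of Table 1 can be sketched as:

```python
import numpy as np

def splice(tracks):
    """Chain the restored basic trajectories: the whole track starts at the
    origin, and each basic trajectory is translated so that its first point
    lands on the previous trajectory's end point."""
    out = []
    anchor = np.zeros(3)
    for seg in tracks:
        seg = np.asarray(seg, float)
        seg = seg - seg[0] + anchor   # translate the segment's start to the anchor
        anchor = seg[-1]              # next segment starts at this segment's end
        out.append(seg)
    return np.vstack(out)
```

Only relative shape is preserved, consistent with the fact that inertial data carry no absolute position.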
The connection manner of step S108 inevitably leaves the connection points insufficiently smooth, as shown in fig. 17. Therefore, after step S108, a corresponding smoothing process is also required:
on the restored track, an equal number of points are selected on each side of every head-to-tail connection point, the connection point and the points on its two sides totaling N points;
a B-spline interpolation function is constructed from the coordinates of each connection point and of the points on its two sides;
N points are re-selected on the B-spline interpolation function to replace the N points on the restored track, completing the smoothing of the track; the smoothed track is shown in FIG. 18.
For example, let the connection point be P(xi, yi, zi). Take 20 points before and after P, and establish a B-spline interpolation function through these 41 points. Compared with piecewise linear interpolation, this function is a continuous analytical model and is differentiable at the nodes, and therefore smooth. Let the B-spline interpolation function be f; after f is generated from the 41 known points, 41 points are re-selected on f to replace the original known points, and the track can then be smoothed.
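The 41-point resampling can be sketched with scipy (splprep/splev serve as the B-spline interpolation; s=0 makes the spline pass through all window points, and half=20 gives the 41-point window of the example above):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_joint(track, idx, half=20):
    """Replace the 2*half+1 points around the joint track[idx] with points
    resampled from an interpolating B-spline fitted through them."""
    lo, hi = idx - half, idx + half + 1
    window = np.asarray(track[lo:hi], float)
    tck, u = splprep(window.T, s=0)                 # interpolating B-spline (s=0)
    new = np.array(splev(np.linspace(0.0, 1.0, len(window)), tck)).T
    out = np.asarray(track, float).copy()
    out[lo:hi] = new                                # re-selected points replace originals
    return out
```

The window endpoints are preserved (the spline starts and ends on them), so the smoothed window still joins seamlessly with the rest of the track.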
At this point, the motion track of the object is successfully restored.
In order to measure the overall effect of the track restoration, the track segmentation process is adjusted on that basis.
The final restoration effect is determined by two parts: track segmentation and track restoration. The segmentation effect obviously determines the final restoration effect, so the parameters of the segmentation model can be re-adjusted.
The track segmentation model is an LSTM model whose parameters include the number of network layers (num_layers), the number of hidden-layer nodes (hidden_size), the number of training epochs (epoch) and the training batch size (batch). The parameters of the segmentation model are varied continuously, the data generated by the segmentation model under each parameter setting are collected, and the subsequent parameter prediction model is trained and tested with each data set, finally yielding the parameter-prediction effect corresponding to each parameter combination of the LSTM model, which together form a "parameter combination library". The trend of how different parameter combinations influence the prediction effect can then be observed from this library, and finally a more suitable parameter combination, i.e. the parameters set for the track segmentation (LSTM) model, is found.
The four most important problems solved by the present invention are:
Segmentation of arbitrary-trajectory inertial sensor data: after the inertial sensor collects acceleration and angular velocity data, the data must be accurately divided into several parts, each corresponding to one basic geometric track. If the segmentation is wrong, it is difficult to compute an accurate track from the wrongly divided data in subsequent steps. A corresponding technical scheme is therefore established for accurate data segmentation.
Accurate restoration of the segmented basic tracks: after segmentation, several sections of raw inertial sensor data are obtained, from which the motion track of the object must be accurately restored. Because of the limited precision of inertial sensors, several parameterized geometric models are set up to complete the restoration; changes in the geometric parameters correspond to changes in the track, and the variations of the geometric models essentially cover the whole variation space of tracks of each type, so a geometric model can serve as the restored track. The crucial point is then the accurate prediction of the geometric parameters, which is completed with an LSTM model.
The effect of data segmentation obviously determines the effect of subsequent track restoration, so when the segmentation model is built, the subsequent restoration effect is taken as a reference factor; data segmentation and track restoration are integrated into one organic whole, and the segmentation model is designed specifically for this.
Since each basic track section is restored independently, the final restoration result would otherwise be a simple concatenation of the individual sections, which clearly causes severe distortion, e.g. the joint between two sections is not smooth enough. A related technical scheme is therefore designed to solve this problem.
The invention also provides two embodiments of the track restoration method based on the single inertial sensor.
The first embodiment comprises the steps of:
step 1: experiment design and data acquisition;
the motion track of the inertial motion sensor is restored, and due to the small volume, if only one reflective ball is pasted on the sensor body, the optical sensor can easily lose frames during capturing, so that the inertial sensor and the reflective ball are pasted on the table tennis bat. Thus, a plurality of reflective balls can be arranged on the table tennis bat to prevent the optical sensor from losing objects when capturing motion. The inertial sensor and the optical sensor are started in sequence to complete the action of 'hitting the ground'. A total of 5000 sets of motion data were collected.
Step 2: performing motion segmentation (data segmentation) from the optical sensor data;
and step 3: processing sensor data;
optical sensor data processing: the data collected by the optical sensor contains a large amount of default data, and after the data is cleaned, the data is supplemented and deleted, and finally the obtained data does not have any default value.
Inertial sensor data processing: at this time, it is necessary to perform gaussian filtering on the inertial sensor data, and a window having a width of 21 frames is used for the acceleration data, and a window having a width of 11 frames is used for the angular velocity data.
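A sketch of this filtering step, assuming `scipy.ndimage.gaussian_filter1d` and mapping the stated window widths to sigmas via window ≈ 2·truncate·σ + 1 (the patent gives only the window widths, so the sigma values below are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filter_imu(acc, gyro, truncate=2.0):
    """Gaussian-filter raw IMU streams along the time axis (axis 0).
    acc, gyro: (T, 3) arrays. With truncate=2.0, a 21-frame window
    corresponds to sigma=5.0 and an 11-frame window to sigma=2.5 (assumed)."""
    acc_f = gaussian_filter1d(acc, sigma=5.0, axis=0, truncate=truncate)
    gyro_f = gaussian_filter1d(gyro, sigma=2.5, axis=0, truncate=truncate)
    return acc_f, gyro_f

rng = np.random.default_rng(0)
acc = rng.normal(size=(500, 3))     # stand-in for raw acceleration
gyro = rng.normal(size=(500, 3))    # stand-in for raw angular velocity
acc_f, gyro_f = filter_imu(acc, gyro)
```

The wider window on the acceleration channel reflects that the double integration downstream amplifies acceleration noise more than angular-velocity noise.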
And 4, step 4: obtaining a geometric label of the base trajectory from the optical sensor data;
the basic trajectory is a definite geometric model, and a circular arc trajectory is taken as an example for explanation.
The circular-arc track has 5 parameters: the radius r of the arc, the angle α of the arc, and the three-dimensional normal vector n(x, y, z) of the plane containing the arc; these form a 5-dimensional vector (r, α, x, y, z). The 5-dimensional space of this vector is divided, each dimension into 10 equal parts, and every dimension is traversed over its values, with the initial value of the vector set to (0.2, 0, −1, −1, −1); the radius r thus takes the value sequence 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, …, 1.2, and the other parameters likewise. In each traversal an arc is generated and compared with the track measured by the optical sensor to obtain the Frechet distance. Finally, the parameter combination with the minimum Frechet distance gives the geometric parameters of the track measured by the optical sensor.
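The search described above can be sketched as a discrete Fréchet distance plus a brute-force sweep; for brevity this sketch sweeps only the radius while holding the other four parameters fixed (the patent traverses all five):

```python
import numpy as np

def frechet(P, Q):
    """Discrete Frechet distance between point sequences P (p,3) and Q (q,3),
    filled bottom-up over the distance-value matrix F."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    p, q = len(P), len(Q)
    F = np.empty((p, q))
    for i in range(p):
        for j in range(q):
            if i == 0 and j == 0:
                F[i, j] = D[0, 0]
            elif j == 0:
                F[i, j] = max(F[i - 1, 0], D[i, 0])
            elif i == 0:
                F[i, j] = max(F[0, j - 1], D[0, j])
            else:
                F[i, j] = max(min(F[i - 1, j], F[i, j - 1], F[i - 1, j - 1]),
                              D[i, j])
    return F[-1, -1]

def arc(r, alpha, n=50):
    """Planar circular arc of radius r and angle alpha, embedded in 3-D."""
    t = np.linspace(0.0, alpha, n)
    return np.stack([r * np.cos(t), r * np.sin(t), np.zeros(n)], axis=1)

measured = arc(0.5, np.pi / 2)           # stand-in for the optically measured track
radii = np.arange(0.2, 1.2001, 0.1)      # the sweep 0.2, 0.3, ..., 1.2
best_r = min(radii, key=lambda r: frechet(arc(r, np.pi / 2), measured))
```

Sweeping all five parameters is the same loop nested five deep; the candidate arc whose Fréchet distance to the measured track is smallest yields the geometric label.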
And 5: carrying out preliminary track restoration by using the data of the inertial sensor;
This step uses the data measured by the inertial sensor for preliminary trajectory restoration. The angular velocity and acceleration measured by the sensor are combined with attitude calculation, inertial navigation and particle-filter algorithms to obtain an approximate track of the object's displacement.
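A toy sketch of the double-integration core of this step (the attitude calculation and particle-filter corrections are omitted; the frame is assumed fixed, so gravity removal reduces to a constant subtraction):

```python
import numpy as np

def dead_reckon(acc_body, gravity_body, dt):
    """Subtract gravity expressed in the object frame, then integrate the
    linear acceleration twice (rectangle rule) to get per-frame displacement."""
    lin_acc = acc_body - gravity_body          # remove the gravity component
    vel = np.cumsum(lin_acc * dt, axis=0)      # first integration: velocity
    pos = np.cumsum(vel * dt, axis=0)          # second integration: position
    return pos

# constant 2 m/s^2 along x for 1 s should give roughly 0.5*a*t^2 = 1 m
dt = 1e-3
acc = np.tile([2.0, 0.0, 9.8], (1000, 1))      # 9.8 on z mimics gravity
g = np.array([0.0, 0.0, 9.8])
track = dead_reckon(acc, g, dt)
```

In the real pipeline the gravity vector is rotated into the object frame at every step using the attitude solution, and the particle filter then corrects the drift that the double integration accumulates.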
Step 6: time series data segmentation point prediction based on an LSTM model;
An LSTM model is established based on the PyTorch library, with the following initial parameters: input dimension (input_size) 9, maximum input sequence length (seq_len) 2000, number of network layers (num_layers) 2, number of hidden-layer nodes (hidden_size) 10, number of training epochs (epoch) 20, training batch size (batch_size) 5, and output dimension 9.
The 9 output nodes allow the input data to be divided into 1-10 segments, completing the data partitioning.
From the 5000 collected groups of data, 4000 are randomly selected as the training set and the remaining 1000 as the test set to train and test the LSTM model. Among the test samples, those with an error within 10 frames account for 87% of the total, and those within 20 frames account for 94%.
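Under the hyper-parameters listed above, the segmentation network can be sketched in PyTorch as follows (only the hyper-parameters come from the text; the linear output head and reading the prediction off the final time step are assumptions):

```python
import torch
import torch.nn as nn

class SegmentLSTM(nn.Module):
    """LSTM segment-point predictor: input_size=9 (3 acceleration + 3 angular
    velocity + 3 position channels), hidden_size=10, num_layers=2, and 9
    outputs interpreted as up to 9 split positions (hence 1-10 segments)."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=9, hidden_size=10,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(10, 9)

    def forward(self, x):                  # x: (batch, seq_len, 9)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from the final time step

model = SegmentLSTM()
pred = model(torch.randn(5, 200, 9))       # batch of 5 sequences, 200 frames each
```

Training would regress `pred` against the ground-truth split positions (padded with the sentinel value for unused slots), with batches of 5 as stated.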
And 7: finely determining the segmentation nodes;
The segmentation points predicted by the LSTM model in step 6 give an approximate range, but are not necessarily accurate, and must be adjusted on this basis.
The length of the preset spline function is 40 frames and its shape is a parameterized circular arc. Where there is no obvious abrupt point, the arc spline and the preliminarily restored track can reach very high similarity; however, once the actions are discontinuous, the track turns noticeably at the connection point of the two actions, and the similarity there drops sharply, so the accurate segmentation node can be located precisely.
And 8: identifying a basic track type based on a preliminary track restoration result;
In this step, a 1D-CNN model is established to identify the basic track type, with the parameters set as follows:
First one-dimensional convolutional layer: input dimension (in_channels) = data input dimension (input_size) = 3, output dimension (out_channels) = 64, convolution kernel size (kernel_size) = 80;
Second one-dimensional convolutional layer: input dimension (in_channels) = data input dimension (input_size) = 3, output dimension (out_channels) = 64, convolution kernel size (kernel_size) = 80;
Third one-dimensional convolutional layer: input dimension (in_channels) = data input dimension (input_size) = 3, output dimension (out_channels) = 64, convolution kernel size (kernel_size) = 80;
Fully connected layer: input dimension (in_channels) = 57088, output dimension (out_channels) = 12 (the number of basic track types).
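A PyTorch sketch of such a classifier. The text does not specify the pooling between layers, and the listed flatten size of 57088 cannot be reproduced without it, so a global average pool is assumed here to make the classifier head independent of sequence length; the channel counts, kernel size and 12-way output follow the parameters above:

```python
import torch
import torch.nn as nn

class TrackTypeCNN(nn.Module):
    """Three 1-D conv layers (64 channels, kernel 80) followed by a 12-way
    classifier over the basic track types. The global average pool is an
    assumption standing in for the unspecified pooling/flatten stage."""
    def __init__(self, n_types=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=80), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=80), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=80), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, n_types)

    def forward(self, x):                      # x: (batch, 3, seq_len)
        return self.fc(self.features(x).squeeze(-1))

# the 3 input channels are the (x, y, z) coordinates of the preliminary track
logits = TrackTypeCNN()(torch.randn(2, 3, 400))
```

The input sequence must be at least 238 frames long for three unpadded kernel-80 convolutions to produce a nonempty output.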
And step 9: predicting basic track geometric parameters based on a deep learning model;
In this step, the geometric parameters of the basic track are predicted more accurately by means of the ALM model; the track can be regarded as a composite of an S-shaped track and a parabolic track.
Since suitable geometric parameters are predicted accurately, the track restoration effect is good.
Step 10: integrating and splicing basic tracks;
After the basic tracks are restored in the previous step, they are still independent of one another. At the connection points the track is not smooth and clearly inconsistent with the actual motion track, so a B-spline interpolation method is adopted to construct a smoother track near the connection points to replace the original one, finally yielding a result closer to the original motion track.
Step 11: and searching a parameter combination library.
The above steps are repeated, the relevant parameters in the process are recorded, and the model with the better-performing parameters is selected as the segmentation model for the time-series data, which facilitates the subsequent geometric-parameter prediction task and improves the final efficiency of arbitrary track restoration.
The two major innovation points of the invention are:
1. intelligently dividing any motion track into a plurality of combinations of basic geometric tracks;
2. and converting the basic geometric locus restoration task into a geometric parameter prediction task.
Moreover, both points are supported by rigorous mathematics: for example, any track can be approximated by a sequence of arcs with specific curvatures, and the finer the segmentation, the higher the approximation precision. Of course, arbitrarily fine segmentation is impractical in engineering, so more kinds of "basic geometric models" are designed to approximate the original trajectory, and it can essentially be guaranteed that any track can be restored with at most 10 segmentations.
Second embodiment:
the tester wears the inertial sensor and executes the trajectory shown in fig. 19, which is the original trajectory to be restored.
The track is restored with the track restoration method provided by the invention. The LSTM model divides the preliminary motion track into two sections, which the 1D-CNN identifies as a V-shaped track and an S-shaped track respectively. Finally, the trained ALM model performs parameter prediction, and the resulting final basic motion track is shown in FIG. 20. Clearly, the splice is not smooth and differs noticeably from the original real trajectory, so the splice point is smoothed, as shown in fig. 21. The smoothed track is the restored track obtained by the track restoration method of the present invention, as shown in fig. 22.
The restored track finally obtained by the track restoration method of the present invention is compared with the original track in FIG. 23. Clearly, the track restoration method based on a single inertial sensor restores the motion track well.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and its core concept; meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application range. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (6)

1. A track restoration method based on a single inertial sensor, characterized by comprising the following steps:
constructing geometric models of multiple basic motion trajectories and determining the geometric parameters of each geometric model; the multiple basic motion trajectories include: a linear track, an elliptical track, a broken-line track and three S-shaped tracks; the geometric model of the linear track is
(x − x1)/(x2 − x1) = (y − y1)/(y2 − y1) = (z − z1)/(z2 − z1),
where A(x1, y1, z1) and B(x2, y2, z2) are two given points in space; the geometric parameter of the linear track is M0(x0, y0, z0), an arbitrary point on the line segment AB; the geometric model of the elliptical track is
x(t) = Cx + ax cos t + bx sin t, y(t) = Cy + ay cos t + by sin t, z(t) = Cz + az cos t + bz sin t,
where 0 ≤ t < 2π, M(x(t), y(t), z(t)) is an arbitrary point on the ellipse, C(Cx, Cy, Cz) is the center point of the ellipse, and t represents the arc angle on the ellipse; the geometric parameters of the elliptical track are the major-axis vector a = (ax, ay, az) and the minor-axis vector b = (bx, by, bz); the geometric model of the broken-line track consists of two line segments emitted from the same point M'0(x0, y0, z0), the point-direction equations of the straight lines on which the two segments lie being
L1: (x − x0)/m1 = (y − y0)/n1 = (z − z0)/p1 and L2: (x − x0)/m2 = (y − y0)/n2 = (z − z0)/p2,
so the two segments can be represented as the vectors v1 = (m1, n1, p1) and v2 = (m2, n2, p2); the geometric parameters of the broken-line track are m1, n1, p1, m2, n2, p2; the geometric model of the first S-shaped track divides an S-shaped curve into 6 sections, each section being a quarter arc with a specific arc length and radius, and the geometric parameters of the first S-shaped track are the radius parameter r and the radius-variation parameter k of each quarter arc; the geometric model of the second S-shaped track comprises a large arc formed by fitting arcs 1 and 3 with a semi-elliptical parametric equation, and a rotated cubic-function curve is selected as the geometric model of the middle part of the S; the geometric parameters of the second S-shaped track are the radius parameter r, the radius-variation parameter k and the parameter arc of the quarter arc; the geometric model of the third S-shaped track comprises a cubic function, and the geometric parameters of the third S-shaped track are the coefficients of the cubic function;
determining a preliminary motion track of the real-time motion action according to the three-dimensional acceleration and the three-dimensional angular velocity of the real-time motion action measured by the inertial sensor at each moment;
predicting an initial segmentation point between basic motion tracks in the preliminary motion track by using an LSTM model according to the three-dimensional acceleration, the three-dimensional angular velocity and the three-dimensional space coordinate data of the preliminary motion track;
determining a similarity curve between a preset spline function and the part of the preliminary motion track lying between the two farthest-apart initial segmentation points, specifically:
sliding the preset spline function along the preliminary motion track, starting from the initial segmentation point at the first time point on the partial preliminary motion track, and computing at each slide the Frechet distance between the preset spline function sequence and the partial preliminary motion track sequence, specifically: according to the sequence length p of the preset spline function sequence and the sequence length q of the partial preliminary motion track sequence, generating a p × q distance-value matrix F with all elements initialized to −1; computing the distance value F[i, j] for the length parameters i and j, and judging whether F[i, j] is greater than −1 to obtain a first judgment result; if the first judgment result is yes, taking F[i, j] as the Frechet distance between the preset spline function sequence and the partial preliminary motion track sequence; if the first judgment result is no, judging whether i equals 0 and whether j equals 0 to obtain a second judgment result; if the second judgment result indicates i > 0 and j > 0, computing the distance values F[i−1, j], F[i, j−1] and F[i−1, j−1], computing the Euclidean distance D(a_i, b_j) between the i-th spatial coordinate point of the preset spline function sequence and the j-th spatial coordinate point of the partial preliminary motion track sequence, and determining F[i, j] by the formula F[i, j] = max(min(F[i−1, j], F[i, j−1], F[i−1, j−1]), D(a_i, b_j)), with F[i, j] as the Frechet distance Fd(P, Q) between the preset spline function sequence and the partial preliminary motion track sequence; if the second judgment result indicates i > 0 and j = 0, computing F[i−1, 0] and D(a_i, b_0) and determining F[i, j] by the formula F[i, j] = max(F[i−1, 0], D(a_i, b_0)), with F[i, j] as the Frechet distance Fd(P, Q); if the second judgment result indicates i = 0 and j > 0, computing F[0, j−1] and D(a_0, b_j) and determining F[i, j] by the formula F[i, j] = max(F[0, j−1], D(a_0, b_j)), with F[i, j] as the Frechet distance Fd(P, Q); if the second judgment result indicates i = 0 and j = 0, computing D(a_0, b_0) so that F[i, j] = D(a_0, b_0), with F[i, j] as the Frechet distance Fd(P, Q); where 0 ≤ i ≤ p−1, 0 ≤ j ≤ q−1, a_i is the i-th spatial coordinate point of the preset spline function sequence, b_j is the j-th spatial coordinate point of the partial preliminary motion track sequence, F[i−1, 0] is the distance value for length parameters i−1 and 0, D(a_i, b_0) is the Euclidean distance between the i-th spatial coordinate point a_i of the preset spline function sequence and the 0-th spatial coordinate point b_0 of the partial preliminary motion track sequence, F[0, j−1] is the distance value for length parameters 0 and j−1, D(a_0, b_j) is the Euclidean distance between the 0-th spatial coordinate point of the preset spline function sequence and the j-th spatial coordinate point of the partial preliminary motion track sequence, and D(a_0, b_0) is the Euclidean distance between the 0-th spatial coordinate point a_0 of the preset spline function sequence and the 0-th spatial coordinate point b_0 of the partial preliminary motion track sequence;
according to the Frechet distance, determining the similarity between the preset spline function sequence and the partial preliminary motion track sequence at each moment, specifically: using the formula
R = Σ_{i=0}^{p−2} D(a_i, a_{i+1})
to calculate the modular length R of the preset spline function sequence; then, according to the modular length R and the Frechet distance Fd(P, Q) between the preset spline function sequence and the partial preliminary motion track sequence, using the formula
Accuracy = 1 − Fd(P, Q)/R
to obtain the similarity Accuracy of the preset spline function sequence and the partial preliminary motion track sequence at each moment; where a_{i+1} is the (i+1)-th spatial coordinate point of the preset spline function sequence and D(a_i, a_{i+1}) is the Euclidean distance between the i-th spatial coordinate point a_i and the (i+1)-th spatial coordinate point a_{i+1};
constructing a similarity curve according to the similarity of the preset spline function sequence and part of the preliminary motion trail sequence at each moment;
determining a final segmentation point according to the similarity curve;
obtaining the type of a geometric model corresponding to the preliminary motion track between adjacent final segmentation points by using the trained 1D-CNN model according to the three-dimensional space coordinate data of the preliminary motion track;
obtaining the values of the geometric parameters of the geometric model of the initial motion track between the adjacent final segmentation points by utilizing a trained deep learning model according to the type of the geometric model corresponding to the initial motion track between the three-dimensional acceleration, the three-dimensional angular velocity and the adjacent final segmentation points;
according to the type of a geometric model corresponding to the initial motion track between the adjacent final segmentation points and the value of a geometric parameter, finishing the reduction of the basic motion track between the adjacent final segmentation points, and obtaining the final basic motion track reduced between each adjacent final segmentation point;
and sequentially connecting the final basic motion tracks after reduction end to obtain the reduction tracks of the motion motions.
2. The single inertial sensor-based track restoration method according to claim 1, wherein the determining a preliminary motion track of the real-time motion according to the three-dimensional acceleration and the three-dimensional angular velocity of the real-time motion measured by the inertial sensor at each moment specifically comprises:
calculating the motion attitude of the real-time motion action in real time according to the three-dimensional angular velocity;
determining an object coordinate system according to the motion attitude;
decomposing the gravity from a ground coordinate system to an object coordinate system by using a coordinate system transformation matrix;
subtracting the gravity acceleration under the object coordinate system from the three-dimensional acceleration to obtain the linear acceleration at each moment under the object coordinate system;
performing quadratic integration on the linear acceleration to determine the displacement at each moment;
sequentially connecting the displacements at all moments together according to a time sequence to form a first motion track of the real-time motion action;
and correcting the first motion track by using a particle filtering algorithm, and determining a preliminary motion track of the real-time motion action.
3. The track restoration method based on a single inertial sensor according to claim 1, characterized in that the output vector of the LSTM model is y = (y1, y2, y3, …, ym, −2000, −2000, …, −2000); when the output of the LSTM model is a run of consecutive −2000 values, the segmentation is judged to be finished;
where y represents the output vector, and y1, y2, y3 and ym respectively represent the first, second, third and m-th time positions of the initial segmentation points.
4. The track restoration method based on a single inertial sensor according to claim 1, wherein determining the final segmentation points according to the similarity curve specifically comprises:
obtaining a function of the similarity curve by function fitting;
dividing the abscissa interval [x_d, x_{d+1}] of the similarity curve into (x_{d+1} − x_d)/δ subintervals, calculating the first derivative of the function on each subinterval, and judging whether all first derivatives on [x_d, x_{d+1}] are greater than a threshold to obtain a third judgment result;
if the third judgment result is yes, setting the label derivative_x[d] = 1 and discarding x_d;
if the third judgment result is no, setting the label derivative_x[d] = 0 and taking the point corresponding to x_d as a segmentation candidate point;
uniformly selecting D'/δ points from the abscissa interval [x_d, x_d + D'] of the segmentation candidate points, obtaining the ordinates corresponding to the D'/δ points from the function of the similarity curve, and selecting the maximum value maxy and the minimum value miny of these ordinates;
if the difference between the maximum value maxy and the minimum value miny is greater than a threshold A, deleting the segmentation candidate point;
if the variance of the ordinates corresponding to the D'/δ points is greater than a threshold VAR, deleting the segmentation candidate point;
calculating the integral of the difference between the function of the similarity curve and the minimum value miny over the interval [x_d, x_d + D'], and if the integral is greater than a threshold B, deleting the segmentation candidate point;
taking the undeleted segmentation candidate points as the final segmentation points;
where x_d and x_{d+1} are the abscissas of two points on the similarity curve, δ is a constant, and D' is the distance between the abscissas of two candidate points.
5. The single inertial sensor-based track restoration method according to claim 1, wherein the obtaining of the values of the geometric parameters of the geometric model of the preliminary motion track between the adjacent final segmentation points by using the trained deep learning model according to the type of the geometric model corresponding to the three-dimensional acceleration, the three-dimensional angular velocity and the preliminary motion track between the adjacent final segmentation points further comprises:
acquiring a training data set; the training data set comprises a plurality of groups of basic motion track data, wherein the basic motion track data comprise a geometric model of a basic motion track, geometric parameters of the geometric model and three-dimensional acceleration and three-dimensional angular velocity of the basic motion track measured by an inertial sensor at each moment;
taking an ALM neural network as a deep learning model, and taking a Frechet distance as a loss function of the deep learning model;
and training the ALM neural network by using the training data set to obtain the trained ALM neural network.
6. The track restoration method based on a single inertial sensor according to claim 1, characterized in that, after sequentially connecting the restored final basic motion tracks end to end to obtain the restored track of the motion, the method further comprises:
selecting the same number of points from both sides of each end-to-end connection point on the restored track, the total number of each connection point and the points on its two sides being N;
constructing a B-spline interpolation function from the coordinates of each connection point and the coordinates of the points on its two sides;
re-selecting N points on the B-spline interpolation function to replace the N points on the restored track, completing the smoothing of the track.
CN202010994902.7A 2020-09-21 2020-09-21 Track restoration method based on single inertial sensor Active CN112212861B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010994902.7A CN112212861B (en) 2020-09-21 2020-09-21 Track restoration method based on single inertial sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010994902.7A CN112212861B (en) 2020-09-21 2020-09-21 Track restoration method based on single inertial sensor

Publications (2)

Publication Number Publication Date
CN112212861A CN112212861A (en) 2021-01-12
CN112212861B true CN112212861B (en) 2022-05-06

Family

ID=74050147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010994902.7A Active CN112212861B (en) 2020-09-21 2020-09-21 Track restoration method based on single inertial sensor

Country Status (1)

Country Link
CN (1) CN112212861B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113407046A (en) * 2021-06-29 2021-09-17 北京字节跳动网络技术有限公司 User action recognition method and device, electronic equipment and storage medium
CN114878130B (en) * 2022-07-08 2022-10-11 西南交通大学 Informatization ground disaster power protection comprehensive test platform
CN116358562B (en) * 2023-05-31 2023-08-01 氧乐互动(天津)科技有限公司 Disinfection operation track detection method, device, equipment and storage medium
CN116558513B (en) * 2023-07-06 2023-10-03 中国电信股份有限公司 Indoor terminal positioning method, device, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443288A (en) * 2019-07-19 2019-11-12 浙江大学城市学院 A kind of track similarity calculation method based on sequence study
CN110889335A (en) * 2019-11-07 2020-03-17 辽宁石油化工大学 Human skeleton double-person interaction behavior recognition method based on multi-channel space-time fusion network
CN111133448A (en) * 2018-02-09 2020-05-08 辉达公司 Controlling autonomous vehicles using safe arrival times
CN111178331A (en) * 2020-01-20 2020-05-19 深圳大学 Radar image recognition system, method, apparatus, and computer-readable storage medium

Non-Patent Citations (3)

Title
Advanced motion-tracking system with multi-layers deep learning framework for innovative car-driver drowsiness monitoring;Francesca Trenta et al.;《European Union》;2019-12-31;1-5 *
Intelligent Assessment of Percutaneous Coronary Intervention Based on GAN and LSTM Models;ZI-ZHANG ZOU et al.;《IEEE Access》;2020-05-27;90640-90651 *
Research on post-event data fusion methods for target motion trajectories characterized by spline functions;Gong Zhihua et al.;《Acta Armamentarii》;2014-01-31;Vol. 35, No. 1;120-127 *

Also Published As

Publication number Publication date
CN112212861A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN112212861B (en) Track restoration method based on single inertial sensor
CN107564061B (en) Binocular vision mileage calculation method based on image gradient joint optimization
Whelan et al. Deformation-based loop closure for large scale dense RGB-D SLAM
Akhter et al. Trajectory space: A dual representation for nonrigid structure from motion
Newcombe et al. KinectFusion: Real-time dense surface mapping and tracking
CN103003846B Joint region display device, joint region detection device, joint region membership degree calculation device, and joint region display method
CN109146935B (en) Point cloud registration method and device, electronic equipment and readable storage medium
CN106446815A Simultaneous localization and mapping method
Pons-Moll et al. Model-based pose estimation
CN102750704B (en) Step-by-step video camera self-calibration method
Wi et al. Virdo: Visio-tactile implicit representations of deformable objects
CN112750198B (en) Dense correspondence prediction method based on non-rigid point cloud
CN112308110B (en) Hand motion recognition method and system capable of achieving man-machine interaction
CN113139996A (en) Point cloud registration method and system based on three-dimensional point cloud geometric feature learning
CN112767546B (en) Binocular image-based visual map generation method for mobile robot
US20050185834A1 (en) Method and apparatus for scene learning and three-dimensional tracking using stereo video cameras
Jo et al. Mixture density-PoseNet and its application to monocular camera-based global localization
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
Lee et al. Bidirectional invariant representation of rigid body motions and its application to gesture recognition and reproduction
CN116109778A (en) Face three-dimensional reconstruction method based on deep learning, computer equipment and medium
CN114612545A (en) Image analysis method and training method, device, equipment and medium of related model
CN109358316B (en) Line laser global positioning method based on structural unit coding and multi-hypothesis tracking
Bilodeau et al. Generic modeling of 3d objects from single 2d images
Abdelrahman et al. Data-Based dynamic haptic interaction model with deformable 3D objects
Pettersson Localization with Time-of-Flight cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant