CN110501008B - Autonomous evolution method of human motion model - Google Patents

Autonomous evolution method of human motion model

Info

Publication number
CN110501008B
CN110501008B (application CN201910687480.6A)
Authority
CN
China
Prior art keywords
motion model
acceleration
parameters
rule
model parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910687480.6A
Other languages
Chinese (zh)
Other versions
CN110501008A (en)
Inventor
史凌峰
刘公绪
董亚军
于淼鑫
何瑞
辛东金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Chuyun Communication Technology Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201910687480.6A
Publication of CN110501008A
Application granted
Publication of CN110501008B
Status: Active
Anticipated expiration

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G06N20/20 - Ensemble learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an autonomous evolution method for a human motion model, comprising the following steps: spatio-temporal registration of the sensor system; topological wearing of the sensor system; data acquisition and data processing; acquisition of motion model parameters; synthesis of new motion model parameters; and screening of the new motion model parameters. The method extracts motion model parameters with the proposed ensemble learning method with variable weak-classifier weights; ensures the diversity of the motion model through the proposed synthesis rules; ensures the interpretability of the motion model through the proposed screening rules; and realizes autonomous evolution of the motion model parameters through iterative synthesis and screening of those parameters. The method is robust and general, and has considerable theoretical research value and engineering significance.

Description

Autonomous evolution method of human motion model
Technical Field
The invention belongs to the technical field of motion model evolution methods, and particularly relates to an autonomous evolution method of a human motion model.
Background
Combined positioning based on accelerometers, gyroscopes and magnetometers is autonomous, covert, all-weather and information-complete, and is widely applied in wearable navigation and positioning of personnel. The positioning result is constrained by the human motion model: a reliable motion model can often restrain divergence of the positioning solution and thereby achieve higher positioning accuracy.
Traditional human motion modeling proceeds as follows. Broadly, a human motion model describes the motion state of the human body in a given space and time, such as sitting still, standing, walking, running, jumping, going up and down stairs, falling, or cycling. Normal walking is the most basic and most studied motion model, with variants including two-step, four-step and eight-step models. To obtain more complex models such as running, jumping, lateral movement, cycling, walking up and down stairs, running up and down stairs, or taking an elevator, volunteers are usually recruited to wear the corresponding sensors and perform prescribed motions; the data are then collected and classified, key attributes are sought with one or several classification methods, and the motion models are distinguished as far as possible. Common classification methods include general machine learning, support vector machines, Bayesian classification, PCA-based classification and hidden Markov methods.
Because actual human motion is flexible, variable and hard to predict, researchers have studied human motion models in terms of gait characteristics and the topology of sensor installation or wearing, but the resulting models are difficult to extend and to transplant. Even for typical motion states such as walking, running, stopping, side-stepping and going up and down stairs, effective and general models are still lacking. An autonomous evolution method for the human motion model is therefore needed to give the motion model scalability and portability.
This application provides a decomposition-and-synthesis view of motion models: any complex motion model can be decomposed into several simple motion models and, conversely, several simple motion models can be synthesized into a complex one. On this basis, the parameters of several simple motion models are extracted by suitable technical means, several model synthesis rules and screening rules are proposed, and iterative synthesis and screening under these rules realize autonomous evolution of the models and yield new, complex motion models. Compared with traditional human motion modeling, the method is robust and general, and better solves the problem of modeling human motion under complex conditions.
Disclosure of Invention
The invention aims to provide an autonomous evolution method for a human motion model that is robust and general, effectively solves the problems identified in the background art, and provides guidance for the modeling and wide application of complex human motion models.
To achieve this purpose, the invention adopts the following technical scheme:
the autonomous evolution method of the human motion model comprises the following steps:
(1) temporal and spatial registration of the sensor system;
(2) topological wear of the sensor system;
(3) data acquisition and data processing;
(4) obtaining parameters of a motion model;
(5) synthesizing new motion model parameters;
(6) screening of the new motion model parameters.
Further, the sensor system in step (1) comprises at least one ARM/DSP/FPGA platform together with an accelerometer, a gyroscope, a magnetometer and a Beidou signal receiver, and is used for acquiring the motion acceleration, velocity and attitude of a person or other carrier.
Further, the spatio-temporal registration in step (1) compensates the calibration errors, attitude errors, position errors and timing errors of the accelerometer, gyroscope and magnetometer into a unified spatio-temporal frame; the spatio-temporal frame comprises a time frame and a spatial frame, the time frame uses Beidou timing, and the spatial frame uses the CGCS2000 coordinate system.
Further, the topological wearing in step (2) means wearing the sensor system on typical parts of the human body, including the instep, ankle, lower leg, waist, wrist and head. The topological wearing mode is flexible and requires no special mold.
Further, the data processing in step (3) processes the raw data from the accelerometer, gyroscope and magnetometer with complementary filtering and Kalman filtering so as to obtain the three-dimensional acceleration, three-dimensional velocity and three-dimensional attitude of the human body; the raw data are the acceleration, angular velocity and magnetic field strength acquired by the sensor system.
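The patent names complementary filtering and Kalman filtering but gives no equations. As an illustration only, a minimal complementary filter that fuses gyroscope and accelerometer data into a pitch estimate might look like the following sketch; the gain alpha = 0.98 and the 100 Hz update rate are assumptions, not from the source:

```python
import math

def complementary_pitch(gyro_rate_y, accel_x, accel_z, pitch_prev,
                        dt=0.01, alpha=0.98):
    """One complementary-filter update of the pitch angle (radians).

    The gyroscope rate is integrated for short-term accuracy, while the
    gravity direction seen by the accelerometer corrects long-term drift.
    """
    pitch_gyro = pitch_prev + gyro_rate_y * dt   # short-term: integrate angular rate
    pitch_acc = math.atan2(-accel_x, accel_z)    # long-term: gravity-referenced pitch
    return alpha * pitch_gyro + (1 - alpha) * pitch_acc

# Stationary and level sensor (gravity on +z, no rotation): a deliberately
# wrong initial estimate decays toward the accelerometer-derived value of 0.
pitch = 0.5
for _ in range(1000):
    pitch = complementary_pitch(0.0, 0.0, 9.81, pitch)
```

In practice one such filter runs per attitude axis, and a Kalman filter would additionally track velocity and bias states; this sketch only shows the blending idea.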
Further, the motion model in step (4) includes walking, running, stopping, side-stepping, and going up and down stairs; the motion model parameters in step (4) are the magnitude and direction of the acceleration, velocity and attitude, together with a semantic expression of their internal relations.
Furthermore, the motion model parameters in step (4) are obtained, after a large amount of three-dimensional acceleration, three-dimensional velocity and three-dimensional attitude sample data has been acquired, by an ensemble learning method with variable weak-classifier weights: several weak classifiers hi(x) are designed, the weight Wi(x) of each classifier is obtained by training on a training set, and all weak classifiers are combined into an integrated classifier according to formula (1),
H(x) = sign(∑ Wi(x)·hi(x))    (1).
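Formula (1) can be sketched in code. The weak learners and weights below are hypothetical stand-ins; the patent does not specify how hi(x) or Wi(x) are constructed:

```python
def ensemble_predict(weak_classifiers, weights, x):
    """Integrated classifier of formula (1): H(x) = sign(sum_i W_i(x) * h_i(x)).

    weak_classifiers : list of functions x -> {-1, +1}
    weights          : list of functions x -> float; making each weight a
                       function of x is what "variable weight" means here
    """
    s = sum(w(x) * h(x) for w, h in zip(weights, weak_classifiers))
    return 1 if s >= 0 else -1

# Hypothetical threshold classifiers on a scalar feature (thresholds made up):
h = [lambda x: 1 if x > 0 else -1,
     lambda x: 1 if x > 2 else -1,
     lambda x: 1 if x > -2 else -1]
w = [lambda x: 0.5, lambda x: 0.2, lambda x: 0.3]  # constant weights for brevity

label = ensemble_predict(h, w, 1.0)  # 0.5 - 0.2 + 0.3 = 0.6 >= 0, so label = 1
```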
Further, the new motion model parameters in step (5) are produced by synthesis rules. Synthesis rule 1, the parameter linear superposition rule: the acceleration and velocity of the parameters of each sub-motion model are added as vectors, the attitude vectors are added and the modulus is taken, the semantic attributes are combined with the Boolean "or" operation, and the sub-motion models are thereby used to synthesize a new motion model. Synthesis rule 2, the parameter localized randomization rule: the attributes of some parameters are perturbed by a small random amount. Synthesis rule 3, the parameter multiplication rule: the acceleration and velocity are multiplied as vectors, the spatial vectors corresponding to the attitude are multiplied, and the semantic attributes are combined with the Boolean "and" operation.
Further, the order of each execution of the three synthesis rules is random.
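A minimal sketch of the three synthesis rules, treating a motion model as a small dictionary of attributes. The concrete vector operations, the perturbation size `eps`, and the example "walk"/"climb" models are assumptions for illustration, not the patent's definitions:

```python
import random

def rule1_linear(a, b):
    """Rule 1: vector addition of acceleration/velocity, Boolean OR of semantics."""
    return {"acc": [x + y for x, y in zip(a["acc"], b["acc"])],
            "vel": [x + y for x, y in zip(a["vel"], b["vel"])],
            "sem": a["sem"] | b["sem"]}

def rule2_perturb(a, eps=0.05):
    """Rule 2: perturb some parameter attributes by a small random amount."""
    return {"acc": [x + random.uniform(-eps, eps) for x in a["acc"]],
            "vel": list(a["vel"]),
            "sem": set(a["sem"])}

def rule3_product(a, b):
    """Rule 3: element-wise vector product, Boolean AND of semantics."""
    return {"acc": [x * y for x, y in zip(a["acc"], b["acc"])],
            "vel": [x * y for x, y in zip(a["vel"], b["vel"])],
            "sem": a["sem"] & b["sem"]}

walk = {"acc": [0.1, 0.0, 9.8], "vel": [1.2, 0.0, 0.0], "sem": {"forward", "cyclic"}}
climb = {"acc": [0.1, 0.0, 10.2], "vel": [0.5, 0.0, 0.3], "sem": {"forward", "ascending"}}
combined = rule1_linear(walk, climb)  # semantics become the OR: forward, cyclic, ascending
```

Per the patent, the order in which the rules are applied during iteration is random.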
Further, the new motion model parameters in step (6) are screened by screening rules. The idea is to determine in advance the parameter attribute range of the desired model and to search the synthesized new models for parameters that fall within this predetermined range. Screening rule 1, the modulus-threshold minimum-variance matching rule: the acceleration, velocity and attitude attributes are subtracted from the corresponding preset attributes, the variance is computed, and the attribute with the smallest variance is selected as a candidate. Screening rule 2, the semantic fuzzy matching rule: fuzzy matching is performed on the semantic attributes, and the semantic attribute closest to the preset attribute is taken as a candidate.
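Screening rule 1 might be sketched as follows. The variance computation and the preset attributes are one plausible reading of the "minimum variance of the modulus threshold" rule, not the patent's definitive formula:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def screen_min_variance(candidates, preset):
    """Screening rule 1 (sketch): pick the candidate whose numeric attributes
    deviate least, in variance, from the preset attributes."""
    def score(c):
        diffs = [x - y for x, y in zip(c["acc"] + c["vel"],
                                       preset["acc"] + preset["vel"])]
        return variance(diffs)
    return min(candidates, key=score)

# Hypothetical preset range for a "walking-like" model and two synthesized candidates:
preset = {"acc": [0.0, 0.0, 9.8], "vel": [1.0, 0.0, 0.0]}
cands = [{"acc": [0.1, 0.0, 9.9], "vel": [1.1, 0.0, 0.0], "name": "walk-like"},
         {"acc": [3.0, 0.0, 12.0], "vel": [4.0, 0.0, 0.0], "name": "run-like"}]
best = screen_min_variance(cands, preset)  # the walk-like candidate wins
```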
The beneficial effects of the invention are that steps (5) and (6) realize autonomous evolution of the motion model, i.e. evolution from simple motion models to arbitrarily complex ones. Following the six steps greatly improves the robustness and generality of the human motion model.
Drawings
FIG. 1 is a flow chart of an autonomous evolution method of a human motion model according to the present invention;
FIG. 2 is a spatiotemporal registration of a sensor system;
FIG. 3 is a topological wear of the sensor system;
FIG. 4 is data acquisition and processing;
FIG. 5 is an acquisition of motion model parameters;
FIG. 6 is a synthesis and screening of motion model parameters;
FIG. 7 is an example of motion model evolution 1;
FIG. 8 is an example of motion model evolution 2;
FIG. 9 is motion model evolution example 3.
Detailed Description
For a better understanding of the present invention, the following examples are given to illustrate the present invention, but the present invention is not limited to the following examples.
The invention provides an autonomous evolution method for a human motion model. It extracts motion model parameters with the proposed ensemble learning method with variable weak-classifier weights; ensures the diversity of the motion model through the proposed synthesis rules; ensures the interpretability of the motion model through the proposed screening rules; and realizes autonomous evolution of the motion model parameters through iterative synthesis and screening of those parameters. The method is robust and general, and has considerable theoretical research value and engineering significance.
Example 1
As shown in fig. 1, the method for autonomous evolution of a human motion model includes the following steps:
(1) temporal and spatial registration of the sensor system;
(2) topological wear of the sensor system;
(3) data acquisition and data processing;
(4) obtaining parameters of a motion model;
(5) synthesizing new motion model parameters;
(6) screening of the new motion model parameters.
Example 2
As shown in fig. 2, the sensor system is built on an ARM/DSP/FPGA platform with an accelerometer, a gyroscope, a magnetometer, a Beidou signal receiver, and so on. Spatio-temporal registration ensures that the outputs of all sensors are projected accurately and objectively into a unified spatio-temporal frame, which facilitates the subsequent steps; the time frame uses Beidou timing and the spatial frame uses the CGCS2000 coordinate system. The registration takes the calibration, attitude, position and timing errors of the sensors into account.
Example 3
As shown in fig. 3, the sensor system is worn on the instep, ankle, calf, waist, wrist, head, and so on. The topological wearing requires no special mold and does not strictly prescribe the wearing position and attitude, making it flexible and user-friendly.
Example 4
As shown in fig. 4, the original data from the accelerometer, gyroscope and magnetometer are processed by complementary filtering and kalman filtering methods, so as to obtain the three-dimensional acceleration, three-dimensional velocity and three-dimensional attitude (pitch angle, roll angle and course angle) information of the human body; the raw data are the acceleration, angular velocity and magnetic field strength collected by the sensor system.
Example 5
As shown in fig. 5, a block diagram of the ensemble learning classification method with variable weak-classifier weights. After a large amount of sample data is acquired, motion model parameters (acceleration, velocity, attitude, semantic expression, etc.) are extracted with this method and used as data features. Suppose the number of data features is N = 100, i is the index of a weak classifier, x is the data feature vector, and the number of samples is 10. The samples are divided equally into K folds (e.g. K = 10) and the data features are evaluated by K-fold cross-validation. The method designs several weak classifiers hi(x), obtains each classifier weight Wi(x) by training on the training set, and combines all weak classifiers into a strong classifier according to formula (1).
H(x)=sign(∑Wi(x) hi(x)) (1)
The adopted ensemble learning classification method with variable weak-classifier weights ensures a high utilization rate of the samples, improves classification efficiency, and reduces the coupling between typical motion model parameters, where the motion model parameters are the magnitude and direction of the acceleration, velocity and attitude, together with a semantic expression of their internal relations.
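The K-fold evaluation mentioned in example 5 can be sketched as follows; with 10 samples and K = 10, as in the example, it degenerates to leave-one-out cross-validation:

```python
def kfold_indices(n_samples, k):
    """Split sample indices 0..n_samples-1 into k roughly equal folds."""
    folds, start = [], 0
    for i in range(k):
        size = n_samples // k + (1 if i < n_samples % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def kfold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    folds = kfold_indices(n_samples, k)
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# 10 samples, K = 10 as in the example: each fold holds exactly one test sample.
splits = list(kfold_splits(10, 10))
```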
example 6
FIG. 6 shows a block diagram of the synthesis and screening of motion model parameters. First, the sequence of weight functions (W1(x), W2(x), ...) of the classifiers corresponding to a motion model is obtained as in example 5. The weight functions are functions of the data features and can be regarded as generalized vectors: the modulus of a generalized vector represents the weight of the corresponding weak classifier among all classifiers, and its direction represents the superposition of the data features. The model synthesis rules are as follows. Rule 1, the parameter linear superposition rule: the acceleration and velocity of the parameters of each sub-motion model are added as vectors, the attitude vectors are added and the modulus is taken, and the semantic attributes are combined with the Boolean "or" operation. Rule 2, the parameter localized randomization rule: the attributes of some parameters are perturbed by a small random amount. Rule 3, the parameter multiplication rule: the acceleration and velocity are multiplied as vectors, the spatial vectors corresponding to the attitude are multiplied, and the semantic attributes are combined with the Boolean "and" operation. The order in which the three rules are executed is random, which keeps the synthesized models as diverse as possible within a semantically interpretable range. Under these synthesis rules, each weight function undergoes self-crossover, crossover and mutation, generating new weight functions, and iterative synthesis generates ever more complex ones. Finally, the desired new motion model is obtained with the proposed screening rules.
The basic idea of screening is to determine in advance the parameter attribute range of the desired model and to search the synthesized new models for parameters that fall within this predetermined range. Screening rule 1, the modulus-threshold minimum-variance matching rule: the acceleration, velocity and attitude attributes are subtracted from the corresponding preset attributes, the variance is computed, and the attribute with the smallest variance is selected as a candidate. Screening rule 2, the semantic fuzzy matching rule: fuzzy matching is performed on the semantic attributes, and the semantic attribute closest to the preset one is taken as a candidate. Screening thus finds a new complex motion model with a specific physical meaning.
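The iterative synthesize-then-screen loop described above can be sketched as an evolutionary procedure. The generation count, offspring count, and the toy `synthesize`/`screen` stand-ins below are assumptions; they only illustrate the loop structure, not the patent's actual rules:

```python
import random

def evolve(population, synthesize, screen, generations=20, offspring=30):
    """Iteratively synthesize new candidate models from the current population
    and keep only those that pass screening (sketch of the patent's loop)."""
    for _ in range(generations):
        children = []
        for _ in range(offspring):
            a, b = random.sample(population, 2)  # pick two parent models
            children.append(synthesize(a, b))    # apply a synthesis rule
        survivors = [c for c in children if screen(c)]
        population = population + survivors      # the evolved pool grows over time
    return population

# Toy stand-ins: a "model" is a single number; synthesis averages the parents
# with a small random perturbation, and screening bounds the admissible range.
pop = evolve([0.0, 10.0],
             synthesize=lambda a, b: (a + b) / 2 + random.uniform(-0.5, 0.5),
             screen=lambda m: 0.0 <= m <= 10.0)
```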
Example 7
As shown in fig. 7, after the parameters of the stair-climbing motion model are obtained, the weight-function sequences containing the attitude-angle data feature are screened through evolution of the weight functions, and the sequence with the largest modulus yields the climbing motion model.
Example 8
As shown in fig. 8, after the motion model parameters of walking and running are obtained, the weight-function sequences containing the velocity data feature are screened through evolution of the weight functions, and the sequence with the largest modulus yields the jogging model.
Example 9
As shown in fig. 9, after the static motion model parameters are obtained, the weight-function sequences with zero horizontal velocity and non-zero vertical velocity are screened through evolution of the weight functions, and the sequence with the largest modulus yields the elevator model.
The above is only a specific embodiment of the invention, not all embodiments; any equivalent modification of the technical solutions of the invention made by a person skilled in the art after reading this specification falls within the scope of the claims.

Claims (7)

1. An autonomous evolution method of a human motion model is characterized by comprising the following steps:
(1) temporal and spatial registration of the sensor system;
(2) topological wear of the sensor system;
(3) data acquisition and data processing;
(4) obtaining parameters of a motion model;
(5) synthesizing new motion model parameters;
(6) screening new motion model parameters;
the sensor system comprises at least one ARM/DSP/FPGA platform, an accelerometer, a gyroscope, a magnetometer and a Beidou signal receiver, and is used for acquiring the motion acceleration, the speed and the attitude of a person or other carriers;
the data acquisition and data processing are used for acquiring and processing the acceleration, the angular velocity and the magnetic field intensity of an accelerometer, a gyroscope and a magnetometer in a sensor system so as to obtain the three-dimensional acceleration, the three-dimensional velocity and the three-dimensional attitude information of the human body;
the motion model parameters are obtained by extracting the acceleration, the speed and the posture size and direction and the semantic expression of the internal relation of the acceleration, the speed and the posture from the three-dimensional acceleration, the three-dimensional speed and the three-dimensional posture information obtained after the data acquisition and the data processing in the step (3);
synthesizing the new motion model parameters means synthesizing the magnitude and direction of the acceleration, velocity and attitude obtained in step (4), together with the semantic expression of their internal relations, into new motion model parameters; specifically, the new motion model parameters are produced by synthesis rules, including synthesis rule 1, the parameter linear superposition rule: the acceleration and velocity of the parameters of each sub-motion model are added as vectors, the attitude vectors are added and the modulus is taken, the semantic attributes are combined with the Boolean "or" operation, and the sub-motion models are used to synthesize a new motion model; synthesis rule 2, the parameter localized randomization rule: the attributes of some parameters are perturbed by a small random amount; and synthesis rule 3, the parameter multiplication rule: the acceleration and velocity are multiplied as vectors, the spatial vectors corresponding to the attitude are multiplied, and the semantic attributes are combined with the Boolean "and" operation;
screening the new motion model parameters means screening the new motion model parameters synthesized in step (5); specifically, they are screened by screening rules, which determine in advance the parameter attribute range of the desired model and search the synthesized new models for parameters within this predetermined range, including screening rule 1, the modulus-threshold minimum-variance matching rule: the acceleration, velocity and attitude attributes are subtracted from the corresponding preset attributes, the variance is computed, and the attribute with the smallest variance is selected as a candidate; and screening rule 2, the semantic fuzzy matching rule: fuzzy matching is performed on the semantic attributes and the semantic attribute closest to the preset one is taken as a candidate.
2. The method for autonomous evolution of human motion models according to claim 1, characterized in that the spatiotemporal registration of step (1) means to compensate the calibration errors, attitude errors, position errors and timing errors of accelerometers, gyroscopes and magnetometers into a unified spatiotemporal framework; the space-time frame comprises a time frame and a space frame, the time frame adopts Beidou time service, and the space frame adopts a CGCS2000 coordinate system.
3. The method for autonomously evolving human motion model according to claim 1, wherein the topological wearing in step (2) is to wear the sensor system on typical parts of the human body, including instep, ankle, calf, waist, wrist, and head.
4. The method for autonomously evolving a human motion model according to claim 1, wherein the data processing in step (3) is to process raw data from an accelerometer, a gyroscope and a magnetometer by using complementary filtering and kalman filtering methods, so as to obtain three-dimensional acceleration, three-dimensional velocity and three-dimensional attitude information of a human body; the raw data are acceleration, angular velocity and magnetic field strength acquired by the sensor system.
5. The method for autonomously evolving human motion model according to claim 1, wherein the motion model of step (4) includes walking, running, stopping, side-shifting, going up and down stairs; the motion model parameters in the step (4) refer to the size and direction of the acceleration, the speed and the posture and the semantic expression of the internal relation of the acceleration, the speed and the posture.
6. The method of claim 1, wherein the parameters of the motion model in step (4) are obtained by an ensemble learning method with variable weights of weak classifiers, which is a method of designing a plurality of weak classifiers hi (x), obtaining the weights wi (x) of the classifiers by training a training set, and combining the weak classifiers together according to equation (1) to realize an ensemble classifier, after obtaining a large amount of sample data of three-dimensional acceleration, three-dimensional velocity and three-dimensional posture,
H(x)=sign(∑Wi(x) hi(x)) (1)。
7. The method of autonomous evolution of a human motion model according to claim 1, characterized in that: the order of each execution of the three synthesis rules is random.
CN201910687480.6A 2019-07-29 2019-07-29 Autonomous evolution method of human motion model Active CN110501008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910687480.6A CN110501008B (en) 2019-07-29 2019-07-29 Autonomous evolution method of human motion model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910687480.6A CN110501008B (en) 2019-07-29 2019-07-29 Autonomous evolution method of human motion model

Publications (2)

Publication Number Publication Date
CN110501008A CN110501008A (en) 2019-11-26
CN110501008B true CN110501008B (en) 2021-03-26

Family

ID=68587634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910687480.6A Active CN110501008B (en) 2019-07-29 2019-07-29 Autonomous evolution method of human motion model

Country Status (1)

Country Link
CN (1) CN110501008B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101034441A (en) * 2007-03-29 2007-09-12 浙江大学 Human motion date recognizing method based on integrated Hidden Markov model leaning method
CN108197364A (en) * 2017-12-25 2018-06-22 浙江工业大学 A kind of polygonal color human motion synthetic method based on movement piece member splicing
CN108245172A (en) * 2018-01-10 2018-07-06 山东大学 It is a kind of not by the human posture recognition method of position constraint
CN108537101A (en) * 2018-01-05 2018-09-14 浙江大学 A kind of pedestrian's localization method based on state recognition

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101034441A (en) * 2007-03-29 2007-09-12 浙江大学 Human motion date recognizing method based on integrated Hidden Markov model leaning method
CN108197364A (en) * 2017-12-25 2018-06-22 浙江工业大学 A kind of polygonal color human motion synthetic method based on movement piece member splicing
CN108537101A (en) * 2018-01-05 2018-09-14 浙江大学 A kind of pedestrian's localization method based on state recognition
CN108245172A (en) * 2018-01-10 2018-07-06 山东大学 It is a kind of not by the human posture recognition method of position constraint

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Body Topology Recognition and Gait Detection; Ling-Feng Shi et al.; IEEE Transactions on Instrumentation and Measurement; 20190415; pp. 721-728 *
Learning Actionlet Ensemble for 3D Human Action Recognition; Jiang Wang et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 20131009; pp. 914-927 *
Recognition of Human Motion State Based on Machine Learning; Yuan Zheng et al.; 2018 11th International Symposium on Computational Intelligence and Design (ISCID); 20190425; pp. 180-184 *
Research on Human Action Recognition Based on Semi-supervised Co-training and Ensemble Learning; Jing Chenyong et al.; China Masters' Theses Full-text Database; 20180115; pp. 1-67 *
Research on a Limb Action Recognition *** Based on Multi-sensor Data Fusion; Zhang Shaofei; China Masters' Theses Full-text Database; 20160815; pp. 1-93 *

Also Published As

Publication number Publication date
CN110501008A (en) 2019-11-26

Similar Documents

Publication Publication Date Title
Qiu et al. Using distributed wearable sensors to measure and evaluate human lower limb motions
Herath et al. Ronin: Robust neural inertial navigation in the wild: Benchmark, evaluations, & new methods
Wang et al. Inertial sensor-based analysis of equestrian sports between beginner and professional riders under different horse gaits
Yan et al. Ronin: Robust neural inertial navigation in the wild: Benchmark, evaluations, and new methods
Yoon et al. Robust biomechanical model-based 3-D indoor localization and tracking method using UWB and IMU
Barshan et al. Recognizing daily and sports activities in two open source machine learning environments using body-worn sensor units
US9357948B2 (en) Method and system for determining the values of parameters representative of a movement of at least two limbs of an entity represented in the form of an articulated line
CN106705968A (en) Indoor inertial navigation algorithm based on posture recognition and step length model
WO2021258333A1 (en) Gait abnormality early identification and risk early-warning method and apparatus
Rao et al. Ctin: Robust contextual transformer network for inertial navigation
Wang et al. Swimming stroke phase segmentation based on wearable motion capture technique
Abid et al. Walking gait step length asymmetry induced by handheld device
Chen et al. Deep learning for inertial positioning: A survey
Alrazzak et al. A survey on human activity recognition using accelerometer sensor
Li et al. An indoor positioning error correction method of pedestrian multi-motions recognized by hybrid-orders fraction domain transformation
Lin RETRACTED ARTICLE: Research on film animation design based on inertial motion capture algorithm
Li et al. Lower limb model based inertial indoor pedestrian navigation system for walking and running
CN107907127A (en) A kind of step-size estimation method based on deep learning
CN110501008B (en) Autonomous evolution method of human motion model
Shi et al. Body topology recognition and gait detection algorithms with nine-axial IMMU
Fu et al. A survey on artificial intelligence for pedestrian navigation with wearable inertial sensors
Li et al. Study on horse-rider interaction based on body sensor network in competitive equitation
Niu et al. Pedestrian Dead Reckoning Based on Complex Motion Mode Recognition Using Hierarchical Classification
Steffan et al. Online stability estimation based on inertial sensor data for human and humanoid fall prevention
İnanç et al. Recognition of daily and sports activities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221227

Address after: Room 316, 3/F, Block B, Building A1, Industry-University-Research Base, No. 9 Haichuan Road, High-tech Zone, Jining City, Shandong Province, 272073

Patentee after: Shandong chuyun Communication Technology Co.,Ltd.

Address before: No. 2 Taibai South Road, Yanta District, Xi'an, Shaanxi Province, 710071

Patentee before: XIDIAN University
