CN111062412A - Novel intelligent identification method for indoor pedestrian movement speed by intelligent shoes - Google Patents

Novel intelligent identification method for indoor pedestrian movement speed by intelligent shoes Download PDF

Info

Publication number
CN111062412A
CN111062412A (application CN201911078098.1A)
Authority
CN
China
Prior art keywords
speed
data
model
inertial
dictionary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911078098.1A
Other languages
Chinese (zh)
Other versions
CN111062412B (en)
Inventor
蒋春煦
刘昱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201911078098.1A priority Critical patent/CN111062412B/en
Publication of CN111062412A publication Critical patent/CN111062412A/en
Application granted granted Critical
Publication of CN111062412B publication Critical patent/CN111062412B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an intelligent identification method for indoor pedestrian movement speed using a novel smart shoe, which comprises the following steps: first, extracting pedestrian foot inertial sensing data with an IMU (Inertial Measurement Unit); second, dividing the time-domain-continuous inertial data step by step with an acceleration peak division method; third, extracting features from each step's inertial data, feeding them into a dictionary learning algorithm, and performing model training to obtain a speed recognition model; fourth, performing single-step division and feature extraction on newly input data and identifying the speed with the recognition model; fifth, fusing each newly obtained group of inertial-data features and speeds into the existing model and updating its parameters; and sixth, transmitting the pedestrian's speed, step count, and related information through a communication module to a mobile phone app or laptop terminal, visualizing the indoor pedestrian movement speed. Considering the wearing convenience of the whole device, the invention integrates the IMU, the MCU, and the wireless communication module on the shoe, which is convenient and comfortable and can conveniently identify the indoor pedestrian movement speed.

Description

Novel intelligent identification method for indoor pedestrian movement speed by intelligent shoes
Technical Field
The invention relates to the field of indoor positioning and machine learning, in particular to a method for recognizing indoor pedestrian movement speed by using intelligent shoes.
Background
Machine Learning (ML) has emerged in recent years as a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, and algorithmic complexity theory. Machine learning mainly studies how a computer can simulate and learn human behaviors, acquire new knowledge, and improve its own performance by organizing and perfecting the existing knowledge structure. As the core of Artificial Intelligence (AI), machine learning is the fundamental approach to making computers intelligent, and it is applied throughout the various fields of artificial intelligence, such as data processing, analysis, and prediction.
With improving living conditions and the development of science and technology, people spend ever more time indoors. Whether for positioning in a commercial venue or rescue at a fire scene, real-time information such as people's position and speed must be known accurately. Because of the complexity of indoor environments and various kinds of electromagnetic interference, satellite positioning signals suffer attenuation and interference indoors and cannot meet indoor positioning requirements. A Pedestrian Dead Reckoning (PDR) system can well compensate for the inadequacy of satellite positioning in indoor environments. However, because of the accumulated drift error of a PDR system, a Zero-velocity UPdaTe (ZUPT) correction algorithm is required to eliminate the error, and the zero-velocity update algorithm needs real-time velocity information as its basis.
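The zero-velocity update idea described above can be sketched as follows; the stationarity test, the gravity constant, and the threshold below are illustrative assumptions, not details taken from this patent:

```python
import numpy as np

def zupt_correct(vel, acc_norm, g=9.8, eps=0.4):
    """Toy zero-velocity update (ZUPT): samples where the acceleration
    two-norm stays within eps of gravity are treated as stance phases
    (foot stationary relative to the ground), and the integrated velocity
    there is reset to zero, cancelling the accumulated drift of a PDR
    system. g and eps are illustrative values."""
    vel = vel.copy()
    stationary = np.abs(acc_norm - g) < eps
    vel[stationary] = 0.0
    return vel

acc_norm = np.array([9.8, 9.9, 14.0, 12.0, 9.7, 9.85])   # m/s^2
vel = np.array([0.01, 0.02, 0.8, 1.1, 0.05, 0.07])       # drifting integrated speed, m/s
print(zupt_correct(vel, acc_norm))
```

The reset values feed back into dead reckoning, which is why the patent stresses that the zero-velocity algorithm needs real-time speed information as its basis.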
In view of these indoor-positioning problems, the pedestrian's movement speed is acquired in real time while the pedestrian moves in an indoor environment, and the speed information is processed and stored. Different people walk or run differently, but for a particular person, features extracted from his or her foot inertial data reflect the state of motion and speed well. The identification method can be applied universally to different pedestrians; it is convenient to wear, simple to use, and fast to run, and it can acquire pedestrian movement speed information in real time, feed it back to the zero-velocity correction algorithm and the PDR system, and perform indoor pedestrian positioning.
Disclosure of Invention
Aiming at the problems and the defects in the existing method, the invention provides the intelligent identification method of the indoor pedestrian movement speed by the novel intelligent shoe.
The invention discloses an intelligent identification method of indoor pedestrian movement speed by using novel intelligent shoes, which specifically comprises the following steps:
firstly, extracting inertial sensing data of feet of pedestrians by using an Inertial Measurement Unit (IMU);
secondly, dividing the time-domain-continuous inertial data step by step with an acceleration peak division method: the peaks of ‖a‖₂, the two-norm of the x, y, z triaxial acceleration, are detected and used to divide the data step by step, where ‖a‖₂ is defined as shown in formula (1):

‖a‖₂ = √(ax² + ay² + az²)    (1)

uploading and storing the divided single-step data,
thirdly, extracting features from each step's inertial data, inputting the features into a dictionary learning algorithm, and performing model training to obtain a speed recognition model;
firstly, feature extraction is performed on the step-divided inertial data; addition-and-removal experiments are performed on various statistics derived from the inertial data, the performance of the speed recognition model is compared, and 33-dimensional features are finally selected;
the dictionary learning algorithm is used for acquiring more intrinsic characteristic representation so as to improve the accuracy of speed identification; use of
Figure BDA0002263116530000024
Represents a training sample, wherein Y1,...,YCTraining samples representing a total of class C, Yi(i ═ 1.., C) can be split into y1,...,yNThe total number of N training sample data is,
Figure BDA0002263116530000025
meaning a linear space of N x N dimensions, described by trainingThe dimensions of the sample matrix Y are trained. The method takes divided single-step inertial data as a sample unit, and is characterized in that refined 33-dimensional features are adopted, a label is a speed ground channel used for training a treadmill experiment of a speed recognition model, C is a speed category number, n is a feature dimension, and the purpose of dictionary learning is to learn a latent variable projection dictionary
Figure BDA0002263116530000031
And a projection coefficient matrix
Figure BDA0002263116530000032
Wherein K is the atomic weight of the dictionary,
Figure BDA0002263116530000033
the dimension describing the learning dictionary D is n × K, consisting of K n-dimensional vectors Di(i ═ 1.., K);
Figure BDA0002263116530000034
it is described that the projection coefficient matrix X has dimensions K × N, consisting of N K-dimensional vectors Xi(i ═ 1.., N);
the objective function of the dictionary learning algorithm used is shown in formula (2):

min over D, X, V:  ‖Y − DX‖_F² + α‖X − V‖_F² + γ·Tr(V L Vᵀ)    (2)
s.t. ‖d_i‖₂ = 1, i = 1, ..., K

because D, X, V, and L are solved by mutual iteration, a maximum iteration number T_max is set, and the minimum of formula (2) is sought before the maximum iteration number is reached;
wherein Y is the training sample matrix, D is the learning dictionary, X is the projection coefficient matrix, V is the coding coefficient matrix, L is the graph Laplacian matrix, Tr(·) is the matrix trace operation, and α and γ are regularization parameters; the s.t. constraint of formula (2) requires that the two-norm ‖d_i‖₂ of every d_i equal 1, with i ranging over 1, ..., K;
after the i-th iteration yields D_i and X_i, substituting them into formula (2) gives the next iterates D_(i+1) and X_(i+1); once the preset maximum iteration number T_max is reached, the final dictionary D* and projection coefficient matrix X* are obtained, and the two together serve as the speed classifier;
fourthly, performing single-step division and feature extraction on the newly input data, and identifying speed information according to a speed identification model;
fifthly, integrating a group of newly obtained characteristics and speed data with the existing model, updating model parameters, and improving the identification performance of the model on input speed data;
and sixthly, transmitting the speed, the step number and other information of the pedestrian to a mobile phone app or a notebook computer terminal through a communication module, and realizing the visualization of the indoor pedestrian movement speed.
Compared with the prior art, the invention considers the wearing convenience of the whole set of device, integrates the IMU, the MCU and the wireless communication module on the shoe, is convenient and comfortable, and can conveniently recognize the indoor pedestrian movement speed.
Drawings
FIG. 1 is a general flow chart of the intelligent identification method of indoor pedestrian movement speed by the novel intelligent shoe of the invention;
FIG. 2 is a basic schematic block diagram of an indoor pedestrian speed intelligent identification device;
FIG. 3 is an illustration of an embodiment of the smart shoe;
reference numerals: 1. wireless communication module; 2. IMU; 3. microprocessor (MCU); 4. inertial sensor (the device can be embedded inside the shoe by means of a zipper).
FIG. 4 is a result of acceleration peak single step data division;
fig. 5 shows the speed recognition result.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and examples.
As shown in fig. 1, the overall flow chart of the method of the present invention for intelligently identifying indoor pedestrian movement speed with the novel smart shoe is given. The method specifically comprises the following steps:
firstly, extracting inertial sensing data of feet of pedestrians by using an Inertial Measurement Unit (IMU);
secondly, dividing the time-domain-continuous inertial data step by step with an acceleration peak division method;
the peaks of ‖a‖₂, the two-norm of the x, y, z triaxial acceleration, are detected and used to divide the data step by step, where ‖a‖₂ is defined by formula (1):

‖a‖₂ = √(ax² + ay² + az²)    (1)

The division result is shown in fig. 4: the horizontal axis is the sampling point, the vertical axis is the two-norm of the triaxial acceleration computed by formula (1), the star marks indicate the identified acceleration two-norm peaks, and the step boundaries are marked by dotted lines; the divided single-step data are uploaded to Baidu Netdisk cloud storage;
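The peak-based single-step division built on formula (1) can be sketched as follows; the `find_peaks` thresholds are illustrative tuning values, not parameters given in the patent:

```python
import numpy as np
from scipy.signal import find_peaks

def segment_steps(acc_xyz, min_distance=30, min_height=12.0):
    """Divide continuous tri-axial accelerometer data into single steps.

    acc_xyz: (N, 3) array of x, y, z accelerations in m/s^2.
    Peaks of the acceleration two-norm (formula (1)) mark the step
    boundaries; min_distance (samples) and min_height (m/s^2) are
    illustrative tuning values.
    """
    norm = np.linalg.norm(acc_xyz, axis=1)  # formula (1): sqrt(ax^2+ay^2+az^2)
    peaks, _ = find_peaks(norm, distance=min_distance, height=min_height)
    # Each consecutive pair of detected peaks delimits one step window.
    steps = [acc_xyz[peaks[i]:peaks[i + 1]] for i in range(len(peaks) - 1)]
    return peaks, steps

# Synthetic walking-like signal: ~2 Hz vertical bounce sampled at 100 Hz.
t = np.arange(0.0, 5.0, 0.01)
acc = np.stack([0.5 * np.sin(4 * np.pi * t),
                0.3 * np.cos(4 * np.pi * t),
                9.8 + 5.0 * np.sin(4 * np.pi * t)], axis=1)
peaks, steps = segment_steps(acc)
print(len(peaks), "peaks,", len(steps), "step windows")
```

Each returned window corresponds to one of the single-step samples that the later feature-extraction and dictionary-learning stages consume.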
thirdly, extracting features from each step's inertial data, inputting the features into a dictionary learning algorithm, and performing model training to obtain a speed recognition model;
firstly, feature extraction is performed on the step-divided inertial data; through ablation experiments (which can be regarded as a controlled-variable method) and literature review, addition-and-removal experiments are performed on the various statistics derived from the inertial data, the performance of the speed recognition model is compared, and 33-dimensional features, shown in table 1, are finally selected.
the dictionary learning algorithm is used to obtain a more intrinsic feature representation and thereby improve the speed-recognition accuracy; let Y = [Y1, ..., YC] ∈ R^(n×N) denote the training samples, where Y1, ..., YC are the training samples of the C classes in total, and Y can be split into y1, ..., yN, a total of N training sample vectors; R^(n×N) denotes the n × N linear space and describes the dimensions of the training sample matrix Y. The method takes the divided single-step inertial data as the sample unit; the features are the refined 33-dimensional features, and the label is the ground-truth speed of the treadmill experiment used to train the speed recognition model; C is the number of speed classes and n is the feature dimension. The purpose of dictionary learning is to learn a latent-variable projection dictionary D ∈ R^(n×K) and a projection coefficient matrix X ∈ R^(K×N), where K is the number of dictionary atoms; the learning dictionary D has dimensions n × K and consists of K n-dimensional vectors d_i (i = 1, ..., K); the projection coefficient matrix X has dimensions K × N and consists of N K-dimensional vectors x_i (i = 1, ..., N);
the objective function of the dictionary learning algorithm used is shown in formula (2):

min over D, X, V:  ‖Y − DX‖_F² + α‖X − V‖_F² + γ·Tr(V L Vᵀ)    (2)
s.t. ‖d_i‖₂ = 1, i = 1, ..., K

because D, X, V, and L are solved by mutual iteration, a maximum iteration number T_max is set, and the minimum of formula (2) is sought before the maximum iteration number is reached, where Y is the training sample matrix, D is the learning dictionary, X is the projection coefficient matrix, V is the coding coefficient matrix, L is the graph Laplacian matrix, Tr(·) is the matrix trace operation, and α and γ are regularization parameters; the s.t. constraint of formula (2) requires that the two-norm ‖d_i‖₂ of every d_i equal 1, with i ranging over 1, ..., K.
After the i-th iteration yields D_i and X_i, substituting them into formula (2) gives the next iterates D_(i+1) and X_(i+1); once the preset maximum iteration number T_max is reached, the final dictionary D* and projection coefficient matrix X* are obtained, and the two together serve as the speed classifier;
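The patent names the variables and the unit-norm constraint of formula (2) but does not spell out the exact objective or the update rules, so the loop below is only a sketch of one plausible graph-regularized reading, min ‖Y − DX‖_F² + α‖X − V‖_F² + γ·Tr(V L Vᵀ) s.t. ‖d_i‖₂ = 1, with simple alternating least-squares updates; the identity Laplacian and all update formulas are assumptions:

```python
import numpy as np

def train_dictionary(Y, K, alpha=1.0, gamma=0.1, T_max=20, seed=0):
    """Alternating-minimization sketch for an assumed graph-regularized
    dictionary learning objective:
        min_{D,X,V} ||Y - DX||_F^2 + alpha*||X - V||_F^2
                    + gamma*Tr(V L V^T),  s.t. ||d_i||_2 = 1.
    Y: (n, N) feature matrix; K: number of dictionary atoms."""
    rng = np.random.default_rng(seed)
    n, N = Y.shape
    D = rng.standard_normal((n, K))
    D /= np.linalg.norm(D, axis=0, keepdims=True)       # enforce ||d_i||_2 = 1
    L = np.eye(N)        # placeholder graph Laplacian (no sample graph built)
    X = np.zeros((K, N))
    V = np.zeros((K, N))
    for _ in range(T_max):
        # X-step: minimize ||Y - DX||^2 + alpha*||X - V||^2 (closed form).
        X = np.linalg.solve(D.T @ D + alpha * np.eye(K),
                            D.T @ Y + alpha * V)
        # V-step: setting the gradient to zero gives V(alpha*I + gamma*L) = alpha*X.
        V = alpha * X @ np.linalg.inv(alpha * np.eye(N) + gamma * L)
        # D-step: least squares, then renormalize columns to unit norm
        # (a projection onto the constraint, not an exact minimizer).
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, X

# 33-dimensional features for 40 single-step samples (random stand-in data).
Y = np.random.default_rng(1).standard_normal((33, 40))
D, X = train_dictionary(Y, K=10)
print(D.shape, X.shape)
```

The final (D, X) pair plays the role of the speed classifier described above; a real implementation would build L from a neighborhood graph over the training samples rather than the identity used here.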
fourthly, performing single-step division and feature extraction on the newly input data, and identifying speed information according to a speed identification model;
fifthly, integrating a group of newly obtained characteristics and speed data with the existing model, and updating the model to achieve higher recognition rate and better robustness;
and sixthly, transmitting the speed, the step number and other information of the pedestrian to a mobile phone app or a notebook computer terminal through a communication module, and realizing the visualization of the indoor pedestrian movement speed.
Fig. 2 is a basic schematic block diagram of an indoor pedestrian speed intelligent recognition device.
The related algorithm is described in detail as follows:
1. Peak detection & single-step division. The method of the invention relies on the inertial data being accurately and correctly divided into steps, so the problem to be solved is making the step division accurate without affecting the remaining parts of the PDR system and the zero-velocity update algorithm. The invention selects the two-norm ‖a‖₂ of the triaxial acceleration data, computed by formula (1), as the basis for step division, because when this value reaches a peak, the foot is most likely stationary relative to the ground (that is, at maximum acceleration the sole is most likely close to the ground), so the single-step division is more accurate and does not affect the subsequent steps.
2. Feature extraction. Many statistical features could be selected, but not every feature suits the speed-recognition task. Through innovating on, deleting, and summarizing candidate features, 33-dimensional statistical features (shown in table 1) are finally refined as the feature expression of each group of data fed to the dictionary learning algorithm.
3. Feature representation (dictionary learning algorithm). This part contains the algorithm improvement: traditional machine-learning algorithms include the SVM (support vector machine), Naive Bayes, the k-nearest-neighbor algorithm, and the like; the invention improves the algorithm for training the recognition model and performing speed recognition by selecting dictionary learning. The dictionary learning algorithm obtains a more robust and more discriminative feature expression of the sample data, and it improves recognition accuracy while shortening the algorithm's running time. The method not only selects features that reflect speed characteristics but also removes redundant features, preventing a drop in the recognition rate and the occurrence of overfitting. An overview of the specific 33-dimensional features is shown in table 1.
TABLE 1
(Table 1, an overview of the 33-dimensional statistical features, appears as an image in the original publication and is not reproduced here.)
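Because Table 1 is available only as an image, the per-step statistics below are an illustrative stand-in for the kind of features it lists, not the actual 33-dimensional set:

```python
import numpy as np

def step_features(step_xyz):
    """Illustrative statistical features for one step's (M, 3)
    accelerometer window. The patent's actual 33-dimensional feature
    set appears only in Table 1 (an image), so the statistics chosen
    here are assumptions, not the patented list."""
    norm = np.linalg.norm(step_xyz, axis=1)        # formula (1) per sample
    feats = []
    for sig in (step_xyz[:, 0], step_xyz[:, 1], step_xyz[:, 2], norm):
        feats += [sig.mean(), sig.std(), sig.min(), sig.max(),
                  np.ptp(sig),                     # range (max - min)
                  np.sqrt(np.mean(sig ** 2))]      # root mean square
    feats.append(float(len(step_xyz)))             # step duration in samples
    return np.array(feats)

# One synthetic 55-sample step window (gravity on the z axis).
step = np.random.default_rng(0).standard_normal((55, 3)) + [0.0, 0.0, 9.8]
f = step_features(step)
print(f.shape)
```

Each divided step window yields one such feature vector, which is what the dictionary learning stage takes as a training sample.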
The first three steps are fused, and the step count and the recognized speed information are sent through the wireless communication module to the mobile phone app or laptop terminal, so that users can conveniently learn their motion information in the indoor environment in real time.
Fig. 3 is a diagram of an embodiment of the smart shoe. On the basis of an ordinary sports shoe, preset positions for the several devices such as the inertial sensor are defined and marked, so that during mass production only the specified data acquisition module, data processing and storage module, or communication module needs to be installed at the designated position, keeping costs low. The embodiment comprises a wireless communication module 1, an IMU 2, a microprocessor MCU 3, and an inertial sensor 4 to realize the intelligent identification method of the novel smart shoe for indoor pedestrian movement speed. The raw inertial data are collected by the foot inertial sensor 4, features are extracted, and after the recognition model is trained the wearable device recognizes speed in real time. The required inertial sensing unit is highly portable, can be bound to the shoe, has a wireless communication function, can detect the pedestrian's real-time speed, and then serves the indoor positioning system, making it convenient for people to obtain their position and state in an indoor environment in real time.
The present invention is not limited to the specific steps described above. The invention extends to any novel feature or any novel combination of features disclosed in this specification or to any novel combination of steps. In summary, this summary should not be construed to limit the present invention.

Claims (1)

1. The method for intelligently identifying the indoor pedestrian movement speed by the novel intelligent shoe is characterized by comprising the following steps:
firstly, extracting inertial sensing data of feet of pedestrians by using an Inertial Measurement Unit (IMU);
secondly, dividing the time-domain-continuous inertial data step by step with an acceleration peak division method: the peaks of ‖a‖₂, the two-norm of the x, y, z triaxial acceleration, are detected and used to divide the data step by step, where ‖a‖₂ is defined as shown in formula (1):

‖a‖₂ = √(ax² + ay² + az²)    (1)

uploading and storing the divided single-step data,
thirdly, extracting features from each step's inertial data, inputting the features into a dictionary learning algorithm, and performing model training to obtain a speed recognition model;
firstly, feature extraction is performed on the step-divided inertial data; addition-and-removal experiments are performed on various statistics derived from the inertial data, the performance of the speed recognition model is compared, and 33-dimensional features are finally selected;
the dictionary learning algorithm is used for acquiring more intrinsic characteristic representation so as to improve the accuracy of speed identification; use of
Figure FDA0002263116520000014
Represents a training sample, wherein Y1,...,YCTraining samples representing a total of class C, Yi(i ═ 1.., C) can be split into y1,...,yNThe total number of N training sample data is,
Figure FDA0002263116520000015
meaning a linear space of N x N dimensions, describing the dimensions of the training sample matrix Y. The method takes divided single-step inertial data as a sample unit, and is characterized in that refined 33-dimensional features are adopted, a label is a speed ground channel used for training a treadmill experiment of a speed recognition model, C is a speed category number, n is a feature dimension, and the purpose of dictionary learning is to learn a latent variable projection dictionary
Figure FDA0002263116520000016
And a projection coefficient matrix
Figure FDA0002263116520000017
Wherein K is the atomic weight of the dictionary,
Figure FDA0002263116520000018
the dimension describing the learning dictionary D is n × K, consisting of K n-dimensional vectors Di(i ═ 1.., K);
Figure FDA0002263116520000019
it is described that the projection coefficient matrix X has dimensions K × N, consisting of N K-dimensional vectors Xi(i ═ 1.., N);
the objective function of the dictionary learning algorithm used is shown in formula (2):

min over D, X, V:  ‖Y − DX‖_F² + α‖X − V‖_F² + γ·Tr(V L Vᵀ)    (2)
s.t. ‖d_i‖₂ = 1, i = 1, ..., K

because D, X, V, and L are solved by mutual iteration, a maximum iteration number T_max is set, and the minimum of formula (2) is sought before the maximum iteration number is reached;
wherein Y is the training sample matrix, D is the learning dictionary, X is the projection coefficient matrix, V is the coding coefficient matrix, L is the graph Laplacian matrix, Tr(·) is the matrix trace operation, and α and γ are regularization parameters; the s.t. constraint of formula (2) requires that the two-norm ‖d_i‖₂ of every d_i equal 1, with i ranging over 1, ..., K;
after the i-th iteration yields D_i and X_i, substituting them into formula (2) gives the next iterates D_(i+1) and X_(i+1); once the preset maximum iteration number T_max is reached, the final dictionary D* and projection coefficient matrix X* are obtained, and the two together serve as the speed classifier;
fourthly, performing single-step division and feature extraction on the newly input data, and identifying speed information according to a speed identification model;
fifthly, integrating a group of newly obtained characteristics and speed data with the existing model, updating model parameters, and improving the identification performance of the model on input speed data;
and sixthly, transmitting the speed, the step number and other information of the pedestrian to a mobile phone app or a notebook computer terminal through a communication module, and realizing the visualization of the indoor pedestrian movement speed.
CN201911078098.1A 2019-11-06 2019-11-06 Novel intelligent shoe intelligent recognition method for indoor pedestrian movement speed Active CN111062412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911078098.1A CN111062412B (en) 2019-11-06 2019-11-06 Novel intelligent shoe intelligent recognition method for indoor pedestrian movement speed

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911078098.1A CN111062412B (en) 2019-11-06 2019-11-06 Novel intelligent shoe intelligent recognition method for indoor pedestrian movement speed

Publications (2)

Publication Number Publication Date
CN111062412A true CN111062412A (en) 2020-04-24
CN111062412B CN111062412B (en) 2023-06-30

Family

ID=70297716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911078098.1A Active CN111062412B (en) 2019-11-06 2019-11-06 Novel intelligent shoe intelligent recognition method for indoor pedestrian movement speed

Country Status (1)

Country Link
CN (1) CN111062412B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112564560A (en) * 2020-12-09 2021-03-26 山东志盈医学科技有限公司 Method and device for controlling acceleration and deceleration of stepping motor of digital slice scanner
CN117288692A (en) * 2023-11-23 2023-12-26 四川轻化工大学 Method for detecting tannin content in brewing grains

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160235344A1 (en) * 2013-10-24 2016-08-18 Breathevision Ltd. Motion monitor
CN106326906A (en) * 2015-06-17 2017-01-11 姚丽娜 Activity identification method and device
CN106705968A (en) * 2016-12-09 2017-05-24 北京工业大学 Indoor inertial navigation algorithm based on posture recognition and step length model
CN106991355A (en) * 2015-09-10 2017-07-28 天津中科智能识别产业技术研究院有限公司 The face identification method of the analytical type dictionary learning model kept based on topology

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160235344A1 (en) * 2013-10-24 2016-08-18 Breathevision Ltd. Motion monitor
CN106326906A (en) * 2015-06-17 2017-01-11 姚丽娜 Activity identification method and device
CN106991355A (en) * 2015-09-10 2017-07-28 天津中科智能识别产业技术研究院有限公司 The face identification method of the analytical type dictionary learning model kept based on topology
CN106705968A (en) * 2016-12-09 2017-05-24 北京工业大学 Indoor inertial navigation algorithm based on posture recognition and step length model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LAURENT OUDRE ET AL.: "Template-Based step detection with Inertial Measurement Units" *
李照洋: "Research on Human Action Recognition Based on Deep Learning" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112564560A (en) * 2020-12-09 2021-03-26 山东志盈医学科技有限公司 Method and device for controlling acceleration and deceleration of stepping motor of digital slice scanner
CN112564560B (en) * 2020-12-09 2022-11-04 山东志盈医学科技有限公司 Method and device for controlling acceleration and deceleration of stepping motor of digital slice scanner
CN117288692A (en) * 2023-11-23 2023-12-26 四川轻化工大学 Method for detecting tannin content in brewing grains
CN117288692B (en) * 2023-11-23 2024-04-02 四川轻化工大学 Method for detecting tannin content in brewing grains

Also Published As

Publication number Publication date
CN111062412B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN110070074B (en) Method for constructing pedestrian detection model
Luo et al. Temporal convolutional networks for multiperson activity recognition using a 2-d lidar
CN111027487A (en) Behavior recognition system, method, medium, and apparatus based on multi-convolution kernel residual network
CN109276255B (en) Method and device for detecting tremor of limbs
US20200275895A1 (en) Methods and apparatus for unsupervised one-shot machine learning for classification of human gestures and estimation of applied forces
KR101779800B1 (en) System and method for evaluating multifaceted growth based on machine learning
CN109697469A (en) A kind of self study small sample Classifying Method in Remote Sensing Image based on consistency constraint
WO2010083562A1 (en) Activity detection
CN110674875A (en) Pedestrian motion mode identification method based on deep hybrid model
CN104298977B (en) A kind of low-rank representation Human bodys' response method constrained based on irrelevance
CN111199202B (en) Human body action recognition method and recognition device based on circulating attention network
CN111062412B (en) Novel intelligent shoe intelligent recognition method for indoor pedestrian movement speed
Wang et al. Human activity prediction using temporally-weighted generalized time warping
CN112597921B (en) Human behavior recognition method based on attention mechanism GRU deep learning
CN109976526A (en) A kind of sign Language Recognition Method based on surface myoelectric sensor and nine axle sensors
CN109934095A (en) A kind of remote sensing images Clean water withdraw method and system based on deep learning
CN104463916B (en) Eye movement fixation point measurement method based on random walk
CN108629295A (en) Corner terrestrial reference identification model training method, the recognition methods of corner terrestrial reference and device
CN103093237A (en) Face detecting method based on structural model
CN110664412A (en) Human activity recognition method facing wearable sensor
Wang et al. A2dio: Attention-driven deep inertial odometry for pedestrian localization based on 6d imu
CN117133057A (en) Physical exercise counting and illegal action distinguishing method based on human body gesture recognition
CN115187772A (en) Training method, device and equipment of target detection network and target detection method, device and equipment
Sideridis et al. Gesturekeeper: Gesture recognition for controlling devices in iot environments
CN115083011A (en) Sign language understanding visual gesture recognition method and system based on deep learning, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant