CN114440884A - Intelligent analysis method for human body posture for intelligent posture correction equipment - Google Patents

Intelligent analysis method for human body posture for intelligent posture correction equipment

Info

Publication number
CN114440884A
Authority
CN
China
Prior art keywords
data
lines
rows
acceleration
posture
Prior art date
Legal status
Pending
Application number
CN202210370888.2A
Other languages
Chinese (zh)
Inventor
杨晓峰
刘波
Current Assignee
Tianjin Fruit Technology Co ltd
Original Assignee
Tianjin Fruit Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Fruit Technology Co ltd filed Critical Tianjin Fruit Technology Co ltd
Priority to CN202210370888.2A priority Critical patent/CN114440884A/en
Publication of CN114440884A publication Critical patent/CN114440884A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/18: Stabilised platforms, e.g. by gyroscope
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C1/00: Measuring angles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to an intelligent analysis method of human body posture for intelligent posture correction equipment, comprising the following steps: S1, data acquisition and training: S101, collecting raw data; S102, manually calibrating data; S103, generating a data set; S104, training a random forest model; S105, deploying to an embedded system; S2, motion recognition algorithm of the posture correcting equipment: S201, data sampling; S202, extracting feature values; S203, traversing the random forest model; S204, extracting the recognition result; and S205, counting actions. The invention collects a large amount of motion data in advance, trains a random forest model, and deploys the trained model to an embedded system for operation. The method can recognize relatively complex motion types, responds quickly, and places low demands on device hardware: recognition runs on the embedded processor of the posture correction equipment several times per second without a noticeable increase in power consumption.

Description

Intelligent analysis method for human body posture for intelligent posture correction equipment
Technical Field
The invention relates to the technical field of posture correcting equipment, in particular to an intelligent analysis method for human body posture for intelligent posture correcting equipment.
Background
In daily life, many people develop poor postural habits, either for external reasons such as mismatched table and chair heights, or for personal reasons. When walking, they stoop slightly and drop the shoulders; when standing, they lean on the table or slouch against it; when sitting, they hunch the back and lean forward to reach the keyboard, or lie on the desk to sleep; when sleeping, they prefer to lie on one side with the shoulders rolled forward and the legs bent. Over time the shoulders bend forward to varying degrees and, in severe cases, the back arches into a hunchback, affecting normal life; in children, this can lead to abnormal development of the shoulders, compressing the chest cavity and affecting the normal development of the chest organs.
In order to correct these harmful body postures, more and more posture correcting devices have appeared. However, typical existing intelligent posture correcting equipment judges posture by acquiring angle data and checking whether it exceeds a boundary angle value. Its function is simple: it can only distinguish an upright state from a hunched state, cannot recognize more complex movements, and therefore cannot meet practical requirements.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an intelligent analysis method for human body posture of intelligent posture correction equipment.
In order to achieve the purpose, the invention adopts the following technical scheme:
an intelligent analysis method for human body posture for intelligent posture correction equipment comprises the following specific steps:
S1, data acquisition and training
S101, collecting raw data:
several people of different sexes and body types wear the posture correcting equipment and perform the actions, each action being repeated several times; the posture correcting equipment contains a three-axis acceleration sensor and a three-axis gyroscope sensor and is connected to a mobile phone via Bluetooth; the collected six-axis data are sent continuously to the mobile phone at a sampling rate of 50 samples per second; the action to be collected is selected on the mobile phone, data are then recorded for a period of time, and finally the data are uploaded to a database on the server;
S102, manually calibrating data:
the acquired data are manually calibrated to mark invalid data and the start and end of each valid interval;
S103, generating a data set:
connecting to the database, filtering by the actions to be used and by user id, and downloading the action data;
discarding the data marked as invalid and cutting the remaining data according to the marked start and end of each valid interval;
segmenting the data into windows of 128 samples (2.56 seconds each), with consecutive windows starting 64 samples (1.28 seconds) apart, so that adjacent segments overlap by 50%;
extracting 129 feature values from each segment of data;
recording all feature values together with the action id to obtain the data set;
S104, training a random forest model:
dividing the data set into a training set and a test set in proportion;
inputting the training set into scikit-learn and training a random forest model;
testing the model accuracy with the test set;
adjusting the training parameters and repeating the above steps until a model that meets the requirements is obtained;
S105, deploying to an embedded system:
serializing the trained random forest model into JSON format using the sklearn-json package;
reading the JSON model with Python, converting the model data into C form, and storing it in a three-dimensional array: the first dimension indexes the trees in the model; the second dimension indexes the nodes of a tree; the third dimension holds the node data, including the index of the left child node, the index of the right child node, the index of the feature value, the threshold of the feature value, and the index of the action type;
S2, motion recognition algorithm of the posture correcting equipment
S201, data sampling:
recording six-axis data at a sampling rate of 50 samples per second; every 16 samples (0.32 seconds), executing the motion recognition algorithm on the latest 128 samples (2.56 seconds);
S202, extracting feature values:
implementing in C the same extraction algorithm used when training the model, and extracting the 129 feature values;
S203, traversing the random forest model:
obtaining a motion recognition result from each tree as follows: starting at the root node of the tree, taking it as the current node; selecting the feature value indicated by the current node's feature index and comparing it with the node's threshold, moving to the left child node if the feature value is smaller than the threshold and to the right child node otherwise; repeating the previous step until the current node has no children, at which point the decision process of the tree ends and its result is the action type index stored in that node;
S204, extracting the recognition result:
selecting the result that occurs most often among the results of all trees as the recognition result for this window; dividing the number of trees voting for that action by the total number of trees to obtain the confidence of the recognition result;
S205, action counting:
when the confidence is higher than 80%, starting to count the current action; when the confidence is between 70% and 80%, continuing to count; when the confidence falls below 70%, stopping the count of the current action;
selecting one axis from the six-axis data according to the action type and counting by the number of peaks; applying median filtering to the 128 samples of that axis and then searching for peaks; because acceleration and gyroscope data differ in amplitude range and noise, setting different thresholds to filter out spurious peaks; recording the peak positions so that peaks already counted can be filtered out in the next count; resetting the peak record when the action changes.
The feature extraction used when generating the data set in step S103 and when extracting feature values in step S202 proceeds as follows:
P1, sample data:
the six-axis sensor outputs 3-axis acceleration and 3-axis angular velocity data;
sampling runs at 50 samples per second, and 128 samples are used for each calculation;
this yields raw sample data of size 6x128, where each axis corresponds to a row of the 2-dimensional array and each sample corresponds to a column;
P2, filtering:
applying median filtering to each of the 6 rows of raw sample data to reduce noise;
applying Butterworth low-pass filtering to each of the 6 rows of the median-filtered result to further reduce high-frequency noise;
the data size after filtering is unchanged, still 6x128;
P3, data processing:
using the filtered 6x128 data;
because of gravity, the acceleration data equal the gravitational acceleration when the body is still; when the body walks, jumps or performs other actions, they are the superposition of the acceleration produced by body motion and the gravitational acceleration;
the acceleration data are therefore decomposed into a gravity component and a body-motion component:
applying Butterworth low-pass filtering to each of the 3 acceleration rows to obtain the low-frequency gravity components;
applying Butterworth high-pass filtering to each of the 3 acceleration rows to obtain the high-frequency body-motion components;
the data now comprise 3 rows of acceleration, 3 rows of angular velocity, 3 rows of gravity and 3 rows of body motion, 12 rows in total, with a total size of 12x128;
to extract frequency-domain features, applying a fast Fourier transform to each of the 12 rows, transforming them from the time domain to the frequency domain and obtaining 12 rows of frequency-domain data;
the processed data comprise 12 rows of time-domain data and 12 rows of frequency-domain data, 24x128 in total;
P4, feature value extraction:
extracting feature values from the processed 24x128 data;
for a single row, 5 feature values can be extracted: mean, standard deviation, maximum, minimum and energy;
for a pair of related rows, a correlation coefficient can be extracted; combining the 3 rows of a group pairwise gives 3 correlation coefficients XY, XZ and YZ, used as 3 feature values;
in this way, 3x5+3=18 feature values can be extracted from each group of 3 rows, and at most 144 feature values from the 24 rows of data;
discarding the 15 single-row feature values of the 3 acceleration rows after the fast Fourier transform,
the following 129 feature values are extracted in total:
mean, standard deviation, maximum, minimum and energy of the 3 acceleration rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 acceleration rows;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 acceleration rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 gyroscope rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gyroscope rows;
mean, standard deviation, maximum, minimum and energy of the 3 gyroscope rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gyroscope rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 body-motion rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 body-motion rows;
mean, standard deviation, maximum, minimum and energy of the 3 body-motion rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 body-motion rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 gravity rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gravity rows;
mean, standard deviation, maximum, minimum and energy of the 3 gravity rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gravity rows after the fast Fourier transform.
In step S102, the manual calibration of data uses a PC host program developed with PyQt, which connects to the database so that the data can be processed manually.
In the training of the random forest model in step S104, the ratio of the training set to the test set is 4:1.
In the training of the random forest model in step S104, the required model accuracy is not less than 98% and the total number of nodes is below 5000.
In the training of the random forest model in step S104, the parameters used for training are: the number of trees is 50 and min_impurity_decrease is 0.003.
The invention has the beneficial effects that: a large amount of motion data is collected in advance, a random forest model is trained, and the trained model is deployed to an embedded system for operation. The method can recognize relatively complex motion types, responds quickly, and places low demands on device hardware: recognition runs on the embedded processor of the posture correction equipment several times per second without a noticeable increase in power consumption.
Drawings
FIG. 1 is a flow chart of the present invention;
the following detailed description will be made in conjunction with embodiments of the present invention with reference to the accompanying drawings.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
An intelligent analysis method for human body posture for intelligent posture correction equipment, as shown in FIG. 1, comprises the following specific steps:
S1, data acquisition and training
S101, collecting raw data:
several people of different sexes and body types wear the posture correcting equipment and perform the actions, each action being repeated several times; the posture correcting equipment contains a three-axis acceleration sensor and a three-axis gyroscope sensor and is connected to a mobile phone via Bluetooth; the collected six-axis data are sent continuously to the mobile phone at a sampling rate of 50 samples per second; the action to be collected is selected on the mobile phone, data are then recorded for a period of time, and finally the data are uploaded to a database on the server;
S102, manually calibrating data:
the acquired data are manually calibrated to mark invalid data and the start and end of each valid interval; during manual calibration, a PC host program developed with PyQt connects to the database so that the data can be processed manually;
PyQt is a GUI programming toolkit for the Python language;
S103, generating a data set:
connecting to the database, filtering by the actions to be used and by user id, and downloading the action data;
discarding the data marked as invalid and cutting the remaining data according to the marked start and end of each valid interval;
segmenting the data into windows of 128 samples (2.56 seconds each), with consecutive windows starting 64 samples (1.28 seconds) apart, so that adjacent segments overlap by 50% (a sketch of this windowing follows this step);
extracting 129 feature values from each segment of data;
recording all feature values together with the action id to obtain the data set;
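As an illustration of the segmentation described in step S103, the following Python sketch (function and variable names are illustrative, not part of the patent text) splits a recording into 128-sample windows whose starts are 64 samples apart, giving 50% overlap between adjacent segments:

```python
import numpy as np

def segment_windows(samples, window=128, step=64):
    """Split a (6, N) six-axis recording into overlapping (6, 128) windows.

    At 50 Hz each window covers 2.56 s; consecutive windows start 64 samples
    (1.28 s) apart, so adjacent windows overlap by 50%.
    """
    segments = []
    for start in range(0, samples.shape[1] - window + 1, step):
        segments.append(samples[:, start:start + window])
    return segments

# Example: 10 s of data at 50 Hz -> 500 columns -> 6 windows of 128 samples.
recording = np.zeros((6, 500))
print(len(segment_windows(recording)))  # 6
```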
S104, training a random forest model:
dividing the data set into a training set and a test set in proportion, with a training-to-test ratio of 4:1;
inputting the training set into scikit-learn and training a random forest model;
a random forest is a machine learning algorithm composed of multiple decision trees that are independent of one another; in a classification task each decision tree classifies the input and produces its own result, and the class that occurs most often among all the results is taken as the final result of the random forest;
scikit-learn is a Python-based machine learning toolkit;
testing the model accuracy with the test set;
adjusting the training parameters and repeating the above steps until a model that meets the requirements is obtained;
the requirements on the model are twofold: the accuracy must be high, and the model must not be too complex, because it will run in an embedded system with limited storage space and processing speed;
after repeated experiments, the finally adopted model reaches an accuracy of 98% with fewer than 5000 nodes in total; the training parameters are: number of trees 50, min_impurity_decrease 0.003; the min_impurity_decrease parameter limits the number of tree nodes: when the improvement contributed by a newly added node is smaller than this threshold, the node is not added (a training sketch follows this step);
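The training described in step S104 could be sketched with scikit-learn as follows; the placeholder data, variable names and the node-counting line are illustrative assumptions, while the parameters (50 trees, min_impurity_decrease of 0.003, a 4:1 split) follow the values stated above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: one row of 129 feature values per window; y: action id per window (placeholder data).
X = np.random.rand(1000, 129)
y = np.random.randint(0, 7, size=1000)

# 4:1 split between training set and test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=50, min_impurity_decrease=0.003, random_state=0)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
total_nodes = sum(est.tree_.node_count for est in model.estimators_)
print(f"accuracy={accuracy:.3f}, total nodes={total_nodes}")  # targets: >= 0.98 and < 5000
```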
S105, deploying to an embedded system:
serializing the trained random forest model into JSON format using the sklearn-json package;
sklearn-json is a Python toolkit for sharing and deploying scikit-learn models;
reading the JSON model with Python, converting the model data into C form, and storing it in a three-dimensional array: the first dimension indexes the trees in the model; the second dimension indexes the nodes of a tree; the third dimension holds the node data, including the index of the left child node, the index of the right child node, the index of the feature value, the threshold of the feature value, and the index of the action type (a conversion sketch follows this step);
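One way to flatten the trained forest into the node table described in step S105 is sketched below; it reads the fitted scikit-learn model directly instead of the sklearn-json file, and the helper name and leaf convention (child index -1 marks a leaf) are assumptions for illustration. The resulting nested list corresponds to the three-dimensional C array and can be emitted as C source by simple string formatting:

```python
import numpy as np

def forest_to_table(model):
    """Convert a fitted RandomForestClassifier into per-tree node tables.

    Each node is [left_child, right_child, feature_index, threshold, action_class];
    leaf nodes have left_child == right_child == -1 and carry the predicted class.
    """
    tables = []
    for est in model.estimators_:
        t = est.tree_
        nodes = []
        for i in range(t.node_count):
            action = int(np.argmax(t.value[i]))  # majority class stored at this node
            nodes.append([int(t.children_left[i]), int(t.children_right[i]),
                          int(t.feature[i]), float(t.threshold[i]), action])
        tables.append(nodes)
    return tables
```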
S2, motion recognition algorithm of the posture correcting equipment
S201, data sampling:
recording six-axis data at a sampling rate of 50 samples per second; every 16 samples (0.32 seconds), executing the motion recognition algorithm on the latest 128 samples (2.56 seconds), as sketched below;
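A sketch of this sampling scheme (class and method names are illustrative): samples arrive at 50 Hz, the latest 128 are kept, and every 16th sample triggers one run of the recognition algorithm:

```python
import numpy as np

class SampleBuffer:
    """Keeps the latest 128 six-axis samples and signals every 16th new sample."""

    def __init__(self, window=128, step=16):
        self.buf = np.zeros((6, window))
        self.step = step
        self.count = 0

    def push(self, sample):
        """sample: 6 values (ax, ay, az, gx, gy, gz); returns True when recognition should run."""
        self.buf = np.roll(self.buf, -1, axis=1)
        self.buf[:, -1] = sample
        self.count += 1
        return self.count % self.step == 0  # every 16 samples = every 0.32 s
```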
S202, extracting feature values:
implementing in C the same extraction algorithm used when training the model, and extracting the 129 feature values;
S203, traversing the random forest model:
obtaining a motion recognition result from each tree as follows: starting at the root node of the tree, taking it as the current node; selecting the feature value indicated by the current node's feature index and comparing it with the node's threshold, moving to the left child node if the feature value is smaller than the threshold and to the right child node otherwise; repeating the previous step until the current node has no children, at which point the decision process of the tree ends and its result is the action type index stored in that node;
S204, extracting the recognition result:
selecting the result that occurs most often among the results of all trees as the recognition result for this window; dividing the number of trees voting for that action by the total number of trees to obtain the confidence of the recognition result (a sketch of the traversal and voting follows);
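Steps S203 and S204 can be sketched in Python as below (the device firmware implements the same logic in C); 'tables' is the per-tree node table produced in step S105 and 'features' the 129 feature values from step S202, with -1 marking a leaf as in the conversion sketch above:

```python
from collections import Counter

def classify_window(tables, features):
    """Traverse every tree, then take the majority vote and its confidence."""
    votes = []
    for nodes in tables:
        i = 0                                  # start at the root node
        while nodes[i][0] != -1:               # -1 marks a leaf (no children)
            left, right, feat, thresh, _ = nodes[i]
            i = left if features[feat] < thresh else right
        votes.append(nodes[i][4])              # action class stored at the leaf
    action, hits = Counter(votes).most_common(1)[0]
    return action, hits / len(tables)          # recognition result and confidence
```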
S205, action counting:
when the confidence is higher than 80%, starting to count the current action; when the confidence is between 70% and 80%, continuing to count; when the confidence falls below 70%, stopping the count of the current action; because the confidence fluctuates, this buffer zone prevents counting from starting and stopping too frequently;
selecting one axis from the six-axis data according to the action type and counting by the number of peaks; applying median filtering to the 128 samples of that axis and then searching for peaks; because acceleration and gyroscope data differ in amplitude range and noise, setting different thresholds to filter out spurious peaks; recording the peak positions so that peaks already counted can be filtered out in the next count; resetting the peak record when the action changes.
Because the data of two consecutive counts overlap, peaks that were already counted are removed, and the number of remaining peaks is the increment added to the action count; the increments are accumulated to give the total count (a counting sketch follows the example below).
For example:
first calculation: walking confidence 80%, so counting starts; the vertical acceleration within the 2.56-second window has 10 peaks, indicating 10 steps;
0.32 seconds later, second calculation: walking confidence 75%, so counting continues; the vertical acceleration within 2.56 seconds again has 10 peaks, of which 7 repeat peaks from the previous window, so 3 steps are added, for a cumulative total of 13 steps;
third calculation: walking confidence 60%, so step counting stops.
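The counting of step S205, including the 80%/70% hysteresis and the removal of peaks already counted in the overlapping part of two consecutive windows, could be sketched as follows; the peak-height threshold, the median-filter kernel and all names are illustrative assumptions:

```python
from scipy.signal import medfilt, find_peaks

class ActionCounter:
    """Counts repetitions from the peaks of one chosen axis, with confidence hysteresis."""

    def __init__(self, peak_height):
        self.peak_height = peak_height   # action-specific threshold (assumed value)
        self.active = False
        self.total = 0
        self.seen = set()                # absolute sample indices of peaks already counted

    def update(self, confidence, axis_data, window_start):
        if not self.active and confidence > 0.80:
            self.active, self.total, self.seen = True, 0, set()   # start counting
        elif self.active and confidence < 0.70:
            self.active = False                                   # stop counting
        if not self.active:
            return self.total
        smoothed = medfilt(axis_data, kernel_size=5)
        peaks, _ = find_peaks(smoothed, height=self.peak_height)
        new = {window_start + int(p) for p in peaks} - self.seen  # drop peaks from the overlap
        self.seen |= new
        self.total += len(new)
        return self.total
```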
The feature extraction used when generating the data set in step S103 and when extracting feature values in step S202 proceeds as follows:
P1, sample data:
the six-axis sensor outputs 3-axis acceleration and 3-axis angular velocity data;
sampling runs at 50 samples per second, and 128 samples are used for each calculation;
this yields raw sample data of size 6x128, where each axis corresponds to a row of the 2-dimensional array and each sample corresponds to a column;
P2, filtering:
applying median filtering to each of the 6 rows of raw sample data to reduce noise;
applying Butterworth low-pass filtering to each of the 6 rows of the median-filtered result to further reduce high-frequency noise;
the data size after filtering is unchanged, still 6x128 (a filtering sketch follows this step);
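The filtering of step P2 could be sketched with SciPy as below; the filter order and 20 Hz cutoff are assumed values chosen for illustration, since the patent text does not specify them:

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

def denoise(raw):
    """Median filter followed by a Butterworth low-pass on each of the 6 rows (50 Hz data)."""
    b, a = butter(3, 20.0, btype="low", fs=50.0)   # order 3, 20 Hz cutoff (assumed values)
    out = np.empty_like(raw, dtype=float)
    for i in range(raw.shape[0]):
        out[i] = filtfilt(b, a, medfilt(raw[i].astype(float), kernel_size=3))
    return out                                      # still 6x128
```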
P3, data processing:
using the filtered 6x128 data;
because of gravity, the acceleration data equal the gravitational acceleration when the body is still; when the body walks, jumps or performs other actions, they are the superposition of the acceleration produced by body motion and the gravitational acceleration;
the acceleration data are therefore decomposed into a gravity component and a body-motion component:
applying Butterworth low-pass filtering to each of the 3 acceleration rows to obtain the low-frequency gravity components;
applying Butterworth high-pass filtering to each of the 3 acceleration rows to obtain the high-frequency body-motion components;
the data now comprise 3 rows of acceleration, 3 rows of angular velocity, 3 rows of gravity and 3 rows of body motion, 12 rows in total, with a total size of 12x128;
to extract frequency-domain features, applying a fast Fourier transform to each of the 12 rows, transforming them from the time domain to the frequency domain and obtaining 12 rows of frequency-domain data;
the processed data comprise 12 rows of time-domain data and 12 rows of frequency-domain data, 24x128 in total (a processing sketch follows this step);
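Step P3 could be sketched as below: gravity is taken as the low-pass component of the acceleration and body motion as its high-pass component, and every row is also transformed to the frequency domain. The 0.3 Hz cutoff and the use of the FFT magnitude are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def process(filtered):
    """filtered: (6, 128) output of P2; returns the (24, 128) array described in P3."""
    acc, gyro = filtered[:3], filtered[3:]
    bl, al = butter(3, 0.3, btype="low", fs=50.0)    # assumed 0.3 Hz gravity cutoff
    bh, ah = butter(3, 0.3, btype="high", fs=50.0)
    gravity = np.array([filtfilt(bl, al, row) for row in acc])   # low-frequency component
    body = np.array([filtfilt(bh, ah, row) for row in acc])      # high-frequency body motion
    time_rows = np.vstack([acc, gyro, gravity, body])            # 12 x 128 time-domain rows
    freq_rows = np.abs(np.fft.fft(time_rows, axis=1))            # 12 x 128 frequency-domain rows
    return np.vstack([time_rows, freq_rows])                     # 24 x 128
```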
P4, feature value extraction:
extracting feature values from the processed 24x128 data;
for a single row, 5 feature values can be extracted: mean, standard deviation, maximum, minimum and energy;
for a pair of related rows, a correlation coefficient can be extracted; combining the 3 rows of a group pairwise gives 3 correlation coefficients XY, XZ and YZ, used as 3 feature values;
in this way, 3x5+3=18 feature values can be extracted from each group of 3 rows, and at most 144 feature values from the 24 rows of data;
discarding the 15 single-row feature values of the 3 acceleration rows after the fast Fourier transform,
the following 129 feature values are extracted in total:
mean, standard deviation, maximum, minimum and energy of the 3 acceleration rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 acceleration rows;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 acceleration rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 gyroscope rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gyroscope rows;
mean, standard deviation, maximum, minimum and energy of the 3 gyroscope rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gyroscope rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 body-motion rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 body-motion rows;
mean, standard deviation, maximum, minimum and energy of the 3 body-motion rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 body-motion rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 gravity rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gravity rows;
mean, standard deviation, maximum, minimum and energy of the 3 gravity rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gravity rows after the fast Fourier transform.
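A feature-extraction sketch matching the list above: every group of 3 rows contributes 5 statistics per row plus 3 pairwise correlation coefficients, except that the per-row statistics of the acceleration rows after the FFT are skipped, giving exactly 129 values. Energy is taken here as the sum of squares, and the row ordering follows the P3 sketch above; both are assumptions for illustration:

```python
import numpy as np

def row_stats(row):
    """Mean, standard deviation, maximum, minimum and energy of one row."""
    return [row.mean(), row.std(), row.max(), row.min(), float(np.sum(row ** 2))]

def pair_correlations(group):
    """Correlation coefficients XY, XZ and YZ of a (3, 128) group."""
    c = np.corrcoef(group)
    return [c[0, 1], c[0, 2], c[1, 2]]

def extract_features(data):
    """data: (24, 128) output of P3; returns the 129 feature values."""
    groups = [("acc", data[0:3]), ("gyro", data[3:6]), ("gravity", data[6:9]),
              ("body", data[9:12]), ("acc_fft", data[12:15]), ("gyro_fft", data[15:18]),
              ("gravity_fft", data[18:21]), ("body_fft", data[21:24])]
    features = []
    for name, g in groups:
        if name != "acc_fft":               # per-row stats of the acceleration FFT are dropped
            for row in g:
                features += row_stats(row)
        features += pair_correlations(g)
    return features                          # 7*15 + 8*3 = 129 values
```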
The invention collects a large amount of motion data in advance, trains a random forest model, and deploys the trained model to an embedded system for operation. The method can recognize relatively complex motion types, responds quickly, and places low demands on device hardware: recognition runs on the embedded processor of the posture correction equipment several times per second without a noticeable increase in power consumption.
The invention is based on the six-axis sensor data (3-axis acceleration and 3-axis gyroscope) of the posture correction equipment and uses a machine learning algorithm so that the equipment can recognize various human motion states in real time. The actions that can currently be recognized are: standing still, lying prone on a desk, turning left and right, walking, running, jumping jacks and deep squats. This makes the original hunchback-warning function of the posture correcting equipment more intelligent, reduces misjudgements, and allows new functions to be added to the equipment, such as automatically recording the wearer's amount of exercise or using motion recognition for interactive games.
The invention has been described above with reference to the accompanying drawings. It is to be understood that the invention is not limited to the specific embodiments disclosed; various modifications, adaptations and uses of the invention are intended to be covered, and all such modifications and variations fall within the scope of the invention.

Claims (6)

1. An intelligent analysis method for human body posture for intelligent posture correction equipment is characterized by comprising the following specific steps:
S1, data acquisition and training
S101, collecting raw data:
several people of different sexes and body types wear the posture correcting equipment and perform the actions, each action being repeated several times; the posture correcting equipment contains a three-axis acceleration sensor and a three-axis gyroscope sensor and is connected to a mobile phone via Bluetooth; the collected six-axis data are sent continuously to the mobile phone at a sampling rate of 50 samples per second; the action to be collected is selected on the mobile phone, data are then recorded for a period of time, and finally the data are uploaded to a database on the server;
S102, manually calibrating data:
the acquired data are manually calibrated to mark invalid data and the start and end of each valid interval;
S103, generating a data set:
connecting to the database, filtering by the actions to be used and by user id, and downloading the action data;
discarding the data marked as invalid and cutting the remaining data according to the marked start and end of each valid interval;
segmenting the data into windows of 128 samples (2.56 seconds each), with consecutive windows starting 64 samples (1.28 seconds) apart, so that adjacent segments overlap by 50%;
extracting 129 feature values from each segment of data;
recording all feature values together with the action id to obtain the data set;
S104, training a random forest model:
dividing the data set into a training set and a test set in proportion;
inputting the training set into scikit-learn and training a random forest model;
testing the model accuracy with the test set;
adjusting the training parameters and repeating the above steps until a model that meets the requirements is obtained;
S105, deploying to an embedded system:
serializing the trained random forest model into JSON format using the sklearn-json package;
reading the JSON model with Python, converting the model data into C form, and storing it in a three-dimensional array: the first dimension indexes the trees in the model; the second dimension indexes the nodes of a tree; the third dimension holds the node data, including the index of the left child node, the index of the right child node, the index of the feature value, the threshold of the feature value, and the index of the action type;
S2, motion recognition algorithm of the posture correcting equipment
S201, data sampling:
recording six-axis data at a sampling rate of 50 samples per second; every 16 samples (0.32 seconds), executing the motion recognition algorithm on the latest 128 samples (2.56 seconds);
S202, extracting feature values:
implementing in C the same extraction algorithm used when training the model, and extracting the 129 feature values;
S203, traversing the random forest model:
obtaining a motion recognition result from each tree as follows: starting at the root node of the tree, taking it as the current node; selecting the feature value indicated by the current node's feature index and comparing it with the node's threshold, moving to the left child node if the feature value is smaller than the threshold and to the right child node otherwise; repeating the previous step until the current node has no children, at which point the decision process of the tree ends and its result is the action type index stored in that node;
S204, extracting the recognition result:
selecting the result that occurs most often among the results of all trees as the recognition result for this window; dividing the number of trees voting for that action by the total number of trees to obtain the confidence of the recognition result;
S205, action counting:
when the confidence is higher than 80%, starting to count the current action; when the confidence is between 70% and 80%, continuing to count; when the confidence falls below 70%, stopping the count of the current action;
selecting one axis from the six-axis data according to the action type and counting by the number of peaks; applying median filtering to the 128 samples of that axis and then searching for peaks; because acceleration and gyroscope data differ in amplitude range and noise, setting different thresholds to filter out spurious peaks; recording the peak positions so that peaks already counted can be filtered out in the next count; resetting the peak record when the action changes.
2. The intelligent analysis method for human body posture for intelligent posture correction equipment according to claim 1, wherein the feature extraction used when generating the data set in step S103 and when extracting feature values in step S202 proceeds as follows:
P1, sample data:
the six-axis sensor outputs 3-axis acceleration and 3-axis angular velocity data;
sampling runs at 50 samples per second, and 128 samples are used for each calculation;
this yields raw sample data of size 6x128, where each axis corresponds to a row of the 2-dimensional array and each sample corresponds to a column;
P2, filtering:
applying median filtering to each of the 6 rows of raw sample data to reduce noise;
applying Butterworth low-pass filtering to each of the 6 rows of the median-filtered result to further reduce high-frequency noise;
the data size after filtering is unchanged, still 6x128;
P3, data processing:
using the filtered 6x128 data;
because of gravity, the acceleration data equal the gravitational acceleration when the body is still; when the body walks, jumps or performs other actions, they are the superposition of the acceleration produced by body motion and the gravitational acceleration;
the acceleration data are therefore decomposed into a gravity component and a body-motion component:
applying Butterworth low-pass filtering to each of the 3 acceleration rows to obtain the low-frequency gravity components;
applying Butterworth high-pass filtering to each of the 3 acceleration rows to obtain the high-frequency body-motion components;
the data now comprise 3 rows of acceleration, 3 rows of angular velocity, 3 rows of gravity and 3 rows of body motion, 12 rows in total, with a total size of 12x128;
to extract frequency-domain features, applying a fast Fourier transform to each of the 12 rows, transforming them from the time domain to the frequency domain and obtaining 12 rows of frequency-domain data;
the processed data comprise 12 rows of time-domain data and 12 rows of frequency-domain data, 24x128 in total;
P4, feature value extraction:
extracting feature values from the processed 24x128 data;
for a single row, 5 feature values can be extracted: mean, standard deviation, maximum, minimum and energy;
for a pair of related rows, a correlation coefficient can be extracted; combining the 3 rows of a group pairwise gives 3 correlation coefficients XY, XZ and YZ, used as 3 feature values;
in this way, 3x5+3=18 feature values can be extracted from each group of 3 rows, and at most 144 feature values from the 24 rows of data;
discarding the 15 single-row feature values of the 3 acceleration rows after the fast Fourier transform,
the following 129 feature values are extracted in total:
mean, standard deviation, maximum, minimum and energy of the 3 acceleration rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 acceleration rows;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 acceleration rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 gyroscope rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gyroscope rows;
mean, standard deviation, maximum, minimum and energy of the 3 gyroscope rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gyroscope rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 body-motion rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 body-motion rows;
mean, standard deviation, maximum, minimum and energy of the 3 body-motion rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 body-motion rows after the fast Fourier transform;
mean, standard deviation, maximum, minimum and energy of the 3 gravity rows, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gravity rows;
mean, standard deviation, maximum, minimum and energy of the 3 gravity rows after the fast Fourier transform, 15 in total;
3 correlation coefficients XY, XZ and YZ between pairs of the 3 gravity rows after the fast Fourier transform.
3. The method according to claim 2, wherein the manual calibration of data in step S102 is performed with a PC host program developed with PyQt, which connects to the database so that the data are processed manually.
4. The intelligent analysis method for human body posture for intelligent posture correction equipment according to claim 3, wherein in the training of the random forest model in step S104, the ratio of the training set to the test set is 4:1.
5. The intelligent analysis method for human body posture for intelligent posture correction equipment according to claim 4, wherein in the training of the random forest model in step S104, the required model accuracy is not less than 98% and the total number of nodes is below 5000.
6. The intelligent analysis method for human body posture for intelligent posture correction equipment according to claim 5, wherein in the training of the random forest model in step S104, the parameters used for training are: the number of trees is 50 and min_impurity_decrease is 0.003.
CN202210370888.2A 2022-04-11 2022-04-11 Intelligent analysis method for human body posture for intelligent posture correction equipment Pending CN114440884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210370888.2A CN114440884A (en) 2022-04-11 2022-04-11 Intelligent analysis method for human body posture for intelligent posture correction equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210370888.2A CN114440884A (en) 2022-04-11 2022-04-11 Intelligent analysis method for human body posture for intelligent posture correction equipment

Publications (1)

Publication Number Publication Date
CN114440884A true CN114440884A (en) 2022-05-06

Family

ID=81359840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210370888.2A Pending CN114440884A (en) 2022-04-11 2022-04-11 Intelligent analysis method for human body posture for intelligent posture correction equipment

Country Status (1)

Country Link
CN (1) CN114440884A (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106491138A (en) * 2016-10-26 2017-03-15 歌尔科技有限公司 A kind of motion state detection method and device
CN106503667A (en) * 2016-10-26 2017-03-15 太原理工大学 A kind of based on WISP and the fall detection method of pattern recognition
CN106643722A (en) * 2016-10-28 2017-05-10 华南理工大学 Method for pet movement identification based on triaxial accelerometer
CN107016233A (en) * 2017-03-14 2017-08-04 中国科学院计算技术研究所 The association analysis method and system of motor behavior and cognitive ability
CN107212890A (en) * 2017-05-27 2017-09-29 中南大学 A kind of motion identification and fatigue detection method and system based on gait information
US20190095814A1 (en) * 2017-09-27 2019-03-28 International Business Machines Corporation Detecting complex user activities using ensemble machine learning over inertial sensors data
CN108008151A (en) * 2017-11-09 2018-05-08 惠州市德赛工业研究院有限公司 A kind of moving state identification method and system based on 3-axis acceleration sensor
CN108509924A (en) * 2018-03-29 2018-09-07 北京微播视界科技有限公司 The methods of marking and device of human body attitude
CN110163264A (en) * 2019-04-30 2019-08-23 杭州电子科技大学 A kind of walking mode recognition methods based on machine learning
CN110443226A (en) * 2019-08-16 2019-11-12 重庆大学 A kind of student's method for evaluating state and system based on gesture recognition
CN111288986A (en) * 2019-12-31 2020-06-16 中科彭州智慧产业创新中心有限公司 Motion recognition method and motion recognition device
CN112861624A (en) * 2021-01-05 2021-05-28 哈尔滨工业大学(威海) Human body posture detection method, system, storage medium, equipment and terminal
CN113095379A (en) * 2021-03-26 2021-07-09 厦门中翎易优创科技有限公司 Human motion state identification method based on wearable six-axis sensing data
CN113749644A (en) * 2021-08-03 2021-12-07 武汉纺织大学 Intelligent garment capable of monitoring lumbar movement of human body and automatically correcting posture

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115068938A (en) * 2022-06-14 2022-09-20 深圳十米网络科技有限公司 Motion sensing game method based on jumping motion

Similar Documents

Publication Publication Date Title
CN111035367B (en) Signal detection system for judging sleep apnea
CN109480783B (en) Apnea detection method and device and computing equipment
CN108109336B (en) Human body falling identification method based on acceleration sensor
CN105426814A (en) Old people stumbling detection method based on handset
CN105588577B (en) A kind of detection method and device of the abnormal step counting for sport monitoring device
CN105662375A (en) Method and device for non-contact detecting vital sign signals
Wang et al. Real time accelerometer-based gait recognition using adaptive windowed wavelet transforms
CN106228200A (en) A kind of action identification method not relying on action message collecting device
CN106210269A (en) A kind of human action identification system and method based on smart mobile phone
CN111700718B (en) Method and device for recognizing holding gesture, artificial limb and readable storage medium
Lee et al. A single tri-axial accelerometer-based real-time personal life log system capable of activity classification and exercise information generation
CN105868712A (en) Method for searching object image by combining potential vision and machine vision based on posterior probability model
CN108717548B (en) Behavior recognition model updating method and system for dynamic increase of sensors
CN114440884A (en) Intelligent analysis method for human body posture for intelligent posture correction equipment
CN109805935A (en) A kind of intelligent waistband based on artificial intelligence hierarchical layered motion recognition method
Chuang et al. A wearable activity sensor system and its physical activity classification scheme
CN106503667B (en) A kind of fall detection method based on WISP and pattern-recognition
CN109498001B (en) Sleep quality evaluation method and device
CN111401435A (en) Human body motion mode identification method based on motion bracelet
Monge-Álvarez et al. Effect of importance sampling on robust segmentation of audio-cough events in noisy environments
CN110956192A (en) Method and device for classifying non-reconstruction compressed sensing physiological data
CN107463689A (en) Generation method, moving state identification method and the terminal in motion characteristic data storehouse
Liao et al. The application of EMD in activity recognition based on a single triaxial accelerometer
US20090012921A1 (en) Method for identifying a person's posture
CN114034313B (en) Step counting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220506