CN109543762A - A kind of multiple features fusion gesture recognition system and method - Google Patents
A kind of multiple features fusion gesture recognition system and method
- Publication number: CN109543762A (application CN201811431810.7A)
- Authority: CN (China)
- Prior art keywords: node, force, sensing sensor, human body, foot
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
The present invention relates to a multiple features fusion gesture recognition system and method. The system includes a management terminal, a cloud server, a wireless network and human body nodes, where the human body nodes include a chest node, a foot node L and a foot node R. The foot node L includes a second single-chip microcontroller, a second 2.4G module, a second power module and a first force-sensing sensor group; the foot node R includes a third single-chip microcontroller, a second barometric sensor, a third 2.4G module, a third power module and a second force-sensing sensor group. The invention detects posture changes of the upper body and feet through the resultant acceleration, attitude angle and chest-foot height difference percentage, monitors changes of the human body's center of gravity in combination with plantar pressure features, and recognizes posture using parameters obtained by cloud computing training. It can effectively recognize the daily behavior postures of the body, allows queries at the terminal, and has broad application prospects.
Description
Technical field
The present invention relates to the technical field of gesture recognition, and in particular to a multiple features fusion gesture recognition system and method.
Background art
With the development of sensor technology and Internet of Things technology, gesture recognition is applied more and more widely. In the field of medical health, it is mainly used to detect abnormal behaviors such as falls as well as daily behaviors, reducing fall injuries to vulnerable groups such as the elderly and helping healthy people reduce or correct unhealthy habits such as prolonged sitting and prolonged standing. It is also applied in the VR game industry, where recognition of a player's posture greatly enhances the gaming experience.
Current gesture recognition technology relies mainly on cameras and wearable devices, with the following problems:
(1) Cameras work by acquiring and analyzing images, making it difficult to guarantee the user's privacy;
(2) Cameras are sensitive to light and must rely on infrared cameras in dark environments, at higher cost;
(3) Existing wearable devices rely mainly on acceleration sensors or force-sensing sensors alone; the features are relatively single, and the false recognition rate is high.
Summary of the invention
To overcome the shortcomings of existing gesture recognition systems, the present invention provides a multiple features fusion gesture recognition system and method, the object of which is to overcome problems of prior-art posture detection technology such as environmental restrictions, limited function and a high false recognition rate.
To achieve the goals above, the present invention has following constitute:
The multiple features fusion gesture recognition system includes a management terminal, a cloud server, a wireless network and human body nodes. The human body nodes include a chest node, a foot node L and a foot node R. The chest node includes a first single-chip microcontroller, a 9-axis sensor, a first barometric sensor, a Wi-Fi module, a first 2.4G module and a first power module. The foot node L includes a second single-chip microcontroller, a second 2.4G module, a second power module and a first force-sensing sensor group located in the left insole sandwich. The foot node R includes a third single-chip microcontroller, a second barometric sensor, a third 2.4G module, a third power module and a second force-sensing sensor group located in the right insole sandwich.
The Wi-Fi module communicates with the cloud server through the wireless network; the 9-axis sensor, the first barometric sensor, the Wi-Fi module and the first 2.4G module are connected with the first single-chip microcontroller, and the first power module supplies power to the chest node.
The first force-sensing sensor group and the second 2.4G module are connected with the second single-chip microcontroller, and the second power module supplies power to the foot node L.
The second barometric sensor, the third 2.4G module and the second force-sensing sensor group are connected with the third single-chip microcontroller, and the third power module supplies power to the foot node R.
Optionally, the foot node L is located in a cavity in the left shoe heel; above the foot node L is the left insole, and the first force-sensing sensor group is located in the left insole sandwich.
Optionally, the foot node R is located in a cavity in the right shoe heel; above the foot node R is the right insole, and the second force-sensing sensor group is located in the right insole sandwich.
Optionally, the first force-sensing sensor group and the second force-sensing sensor group each consist of 8 force-sensing sensors, whose voltage outputs are Li (i ∈ [1,8]) and Ri (i ∈ [1,8]) respectively. In the first force-sensing sensor group, force-sensing sensor L1 is located at the big-toe phalanx of the left foot; force-sensing sensors L2, L3 and L4 are located at the metatarsophalangeal joints of the left foot; force-sensing sensors L5 and L6 are located at the outside of the left foot; and force-sensing sensors L7 and L8 are located at the heel. In the second force-sensing sensor group, force-sensing sensor R1 is located at the big-toe phalanx of the right foot; force-sensing sensors R2, R3 and R4 are located at the metatarsophalangeal joints of the right foot; force-sensing sensors R5 and R6 are located at the outside of the right foot; and force-sensing sensors R7 and R8 are located at the heel.
An embodiment of the present invention also provides a multiple features fusion gesture recognition method, which includes the following steps:
(1) the management terminal notifies the human body nodes to acquire the user's posture parameters;
(2) the chest node in the human body nodes notifies the foot node L and the foot node R to acquire data; the foot node L acquires the voltage data of the first force-sensing sensor group, the foot node R acquires the voltage data of the second force-sensing sensor group and the data of the second barometric sensor, and the chest node acquires the three-axis angles and three-axis acceleration data of the 9-axis sensor and the data of the first barometric sensor;
(3) according to the data acquired by the human body nodes, calculate the inclination angle of the human body with the horizontal plane, the human body resultant acceleration, the chest-foot height difference percentage, and the unit-area stress of each point provided with a force-sensing sensor; judge the current state: if the current state is the training or updating state, send the calculated results and the labels of the human posture categories corresponding to the calculated results to the cloud server, and continue with step (4); if the current state is the recognition state, continue with step (5);
(4) the cloud server calculates the classification indexes and division values of the different human posture categories according to the calculated results of step (3) and the labels of the corresponding human posture categories;
(5) judge the human posture category according to the calculated results of step (3) and the classification indexes and division values of the different human posture categories.
Optionally, step (2) includes the following steps:
(2-1) the chest node sends a command to the foot node L and the foot node R through the first 2.4G module;
(2-2) after the foot node L receives the command of the chest node through the second 2.4G module, it sends the acquired voltage data L1–L8 of the first force-sensing sensor group to the chest node through the second 2.4G module;
(2-3) after the foot node R receives the command of the chest node through the third 2.4G module, it sends the acquired voltage data R1–R8 of the second force-sensing sensor group and the data P2 of the second barometric sensor to the chest node through the third 2.4G module;
(2-4) the chest node receives the data of the foot node L and the foot node R respectively, and at the same time acquires the three-axis angles (x, y, z) and three-axis acceleration data (ax, ay, az) of the 9-axis sensor and the data P1 of the first barometric sensor.
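The polling cycle of steps (2-1) to (2-4) can be sketched as a single acquisition function. This is a minimal illustration under stated assumptions: the callables `poll_foot_l`, `poll_foot_r`, `read_imu` and `read_baro` are hypothetical stand-ins for the 2.4G request/response exchange and the on-board sensor reads; they are not names from the patent.

```python
def collect_sample(poll_foot_l, poll_foot_r, read_imu, read_baro):
    """One acquisition cycle of the chest node (sketch).

    poll_foot_l / poll_foot_r model the 2.4G command/response exchange
    with the foot nodes; read_imu and read_baro model the chest node's
    own 9-axis sensor and first barometric sensor.
    """
    l_volts = poll_foot_l()          # voltages L1..L8 from foot node L
    r_volts, p2 = poll_foot_r()      # voltages R1..R8 and barometer P2 from foot node R
    angles, accel = read_imu()       # three-axis angles (x, y, z) and accelerations (ax, ay, az)
    p1 = read_baro()                 # chest barometer P1
    return {"L": l_volts, "R": r_volts, "P1": p1, "P2": p2,
            "angles": angles, "accel": accel}
```

In a real deployment each callable would wrap a radio or I2C transaction; here they only fix the shape of one merged sample.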
Optionally, step (3) includes the following steps:
(3-1) calculate the inclination angle BTA of the human body with the horizontal plane from the three-axis attitude angles of the 9-axis sensor;
(3-2) calculate the human body resultant acceleration ha according to the following formula:
ha = √(ax² + ay² + az²)
(3-3) calculate the chest-foot height difference percentage HP according to the following formula:
HP = 44330 × ((P2/P0)^(1/5.255) − (P1/P0)^(1/5.255)) / H0
where P0 is the standard atmospheric pressure and H0 is the user's height;
(3-4) calculate the unit-area stress LPai, RPai of each point in the first force-sensing sensor group and the second force-sensing sensor group according to the following formulas:
LPai = 0.2/(ln(Li) − 1.17) − 0.2
RPai = 0.2/(ln(Ri) − 1.17) − 0.2
(3-5) calculate the switching value LPaDi, RPaDi of each point of the first force-sensing sensor group and the second force-sensing sensor group according to the following formulas:
LPaDi = ε(LPai − ρ)
RPaDi = ε(RPai − ρ)
where ρ is the preset plantar pressure threshold and ε(·) is the unit step function;
(3-6) calculate the comprehensive switching outputs LPaDSUM, RPaDSUM of the first force-sensing sensor group and the second force-sensing sensor group according to the following formulas:
LPaDSUM = Σ LPaDi,  RPaDSUM = Σ RPaDi  (i = 1, …, 8)
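Steps (3-3) to (3-6) can be sketched directly from the formulas above. This is a hedged illustration: the function names and the standard-pressure constant are my own choices; ρ = 0.45 N/cm² is the value given in the embodiment, and the step function is assumed to output 1 when its argument is non-negative.

```python
import math

P0 = 101325.0  # standard atmospheric pressure in Pa (assumed units)
RHO = 0.45     # plantar pressure threshold in N/cm^2 (from the embodiment)

def height_diff_percentage(p1, p2, h0, p0=P0):
    """Chest-foot height difference as a fraction of user height H0:
    HP = 44330 * ((P2/P0)^(1/5.255) - (P1/P0)^(1/5.255)) / H0."""
    return 44330.0 * ((p2 / p0) ** (1 / 5.255) - (p1 / p0) ** (1 / 5.255)) / h0

def unit_area_stress(v):
    """Unit-area stress of one sensor point: Pa = 0.2/(ln(v) - 1.17) - 0.2."""
    return 0.2 / (math.log(v) - 1.17) - 0.2

def switching_values(voltages, rho=RHO):
    """Per-point switching values: step function of (stress - rho), i.e. 1 or 0."""
    return [1 if unit_area_stress(v) - rho >= 0 else 0 for v in voltages]

def switching_sum(voltages, rho=RHO):
    """Comprehensive switching output: sum of the 8 per-point switching values."""
    return sum(switching_values(voltages, rho))
```

Note that with equal chest and foot pressures the height difference is zero, and the switching sum ranges over 0–8 for an 8-sensor insole group.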
Optionally, step (4) includes the following steps:
(4-1) the cloud server calculates separately: the optimum division value HP1 of the chest-foot height difference percentage HP between walking and the sitting posture; the optimum division value HP2 of HP between the squatting and picking-up postures; the optimum division value θ1 of the inclination angle BTA between the squatting and sitting postures; and the optimum division value θ2 of BTA between the squatting and picking-up postures;
(4-2) once the human posture parameters are determined, the cloud server returns the calculated classification indexes and optimum division values to the chest node.
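The patent does not state how the cloud server derives an "optimum division value" between two labeled posture classes in step (4-1). One plausible sketch, offered purely as an assumption, is a one-dimensional decision stump: scan the midpoints between adjacent observed values and keep the threshold that misclassifies the fewest training samples.

```python
def optimum_division_value(class_a, class_b):
    """1-D threshold minimizing misclassifications between two labeled samples.

    class_a is assumed to lie below class_b on average (e.g. squatting HP
    values vs. sitting HP values). This stump search is an illustrative
    assumption; the patent does not specify the cloud-side algorithm.
    """
    candidates = sorted(set(class_a) | set(class_b))
    best_t, best_err = None, float("inf")
    for lo, hi in zip(candidates, candidates[1:]):
        t = (lo + hi) / 2.0  # midpoint between adjacent observed values
        err = sum(a > t for a in class_a) + sum(b <= t for b in class_b)
        if err < best_err:
            best_t, best_err = t, err
    return best_t
```

Run once per pair of posture classes and per feature (HP or BTA) to obtain HP1, HP2, θ1 and θ2.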
Optionally, step (5) includes the following steps:
(5-1) first detect the degree of human motion according to the human body resultant acceleration ha, and detect the inclination of the upper body and the plantar pressure in combination with the inclination angle BTA. If |ha| ≥ 15 m/s² or |ha| ≤ 5 m/s², and at the next moment HP < HP2, BTA < θ2, LPaDSUM < ω1 and RPaDSUM < ω1, it is judged as a fall; continue with (5-2). If 5 m/s² < |ha| < 15 m/s², proceed to (5-3);
(5-2) if |ax| < 5 m/s² and x ≤ 0°, it is a forward fall; if |ax| < 5 m/s² and x > 0°, a backward fall; if |ay| < 5 m/s² and y > 0°, a fall to the left; if |ay| < 5 m/s² and y ≤ 0°, a fall to the right;
(5-3) detect whether the human body has a descending behavior according to the chest-foot height difference percentage HP: if HP ≤ HP1, it is judged as sitting, squatting or picking up, and (5-4) follows; otherwise it is judged as walking or standing, and (5-6) follows;
(5-4) considering HP and BTA together: if BTA ≥ θ1 and HP2 ≤ HP ≤ HP1, it is judged as sitting; if θ2 ≤ BTA < θ1 and HP < HP2, it is judged as squatting; if BTA < θ2, HP ≤ HP1 and LPaDSUM ≥ ω1 or RPaDSUM ≥ ω1, it is judged as picking up, and (5-5) identifies the specific type of picking up;
(5-5) if LPaDSUM ≥ ω1 and RPaDSUM ≥ ω1, it is judged as picking up forward; if LPaDSUM ≥ ω1 and RPaDSUM = 0, picking up to the left; if LPaDSUM = 0 and RPaDSUM ≥ ω1, picking up to the right;
(5-6) calculate the cadence according to the plantar pressure change period: if the cadence is extremely low, it is judged as standing; if the cadence conforms to the human walking pattern, it is judged as walking.
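The rule cascade of steps (5-1) to (5-6) can be collected into a single decision function. The threshold names mirror the text (HP1, HP2, θ1, θ2, ω1 become `hp1`, `hp2`, `theta1`, `theta2`, `omega1`); the concrete threshold values used in the test are placeholders of my own, not values from the patent, and the fall-direction refinement of step (5-2) is omitted for brevity.

```python
def classify_posture(ha, hp, bta, lpad_sum, rpad_sum,
                     hp1, hp2, theta1, theta2, omega1):
    """Rule-based posture decision following steps (5-1)-(5-6) (sketch).

    Returns a label string. Accelerations in m/s^2, angles in degrees;
    hp is the chest-foot height difference percentage.
    """
    # (5-1) fall: extreme resultant acceleration plus a collapsed posture
    if (abs(ha) >= 15 or abs(ha) <= 5) and hp < hp2 and bta < theta2 \
            and lpad_sum < omega1 and rpad_sum < omega1:
        return "fall"
    # (5-3) descending behavior: sitting, squatting or picking up
    if hp <= hp1:
        # (5-4) separate the three descending postures by BTA and HP
        if bta >= theta1 and hp2 <= hp <= hp1:
            return "sit"
        if theta2 <= bta < theta1 and hp < hp2:
            return "squat"
        if bta < theta2 and (lpad_sum >= omega1 or rpad_sum >= omega1):
            # (5-5) refine the picking-up direction by which foot is loaded
            if lpad_sum >= omega1 and rpad_sum >= omega1:
                return "pick up forward"
            return "pick up to the left" if lpad_sum >= omega1 else "pick up to the right"
        return "unclassified"
    # (5-6) otherwise walking or standing; cadence analysis decides
    return "walk or stand"
```

A cadence estimator (step (5-6)) would then split the "walk or stand" label.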
Optionally, the method also includes the following steps:
(6) if the recognition result is a fall, or unhealthy behaviors such as prolonged standing or prolonged sitting are detected, remind family members or the user by short message or voice;
(7) the chest node uploads the recognized posture result to the cloud server, and the management terminal displays the data curves and the recognized posture according to the cloud server data.
The multiple features fusion gesture recognition system and method provided by the present invention have the following beneficial effects: using sensor technology and Internet of Things technology, a multiple features fusion gesture recognition system is designed; using pressure, height, angle and acceleration features, normal and abnormal human behavior postures are modeled and analyzed, and a multiple features fusion gesture recognition method is designed. The present invention can effectively recognize human posture, and realizes alarm reminders for abnormal postures and real-time display of daily posture data.
Description of the drawings
Fig. 1 is a schematic diagram of the multiple features fusion gesture recognition system of one embodiment of the invention;
Fig. 2 is a schematic diagram of the layout in the insoles of the first and second force-sensing sensor groups of one embodiment of the invention;
Fig. 3 is a flow chart of the multiple features fusion gesture recognition method of one embodiment of the invention.
Reference numerals in the figures: management terminal 100, cloud server 200, wireless network 300, human body node 400, chest node 410, foot node L 420, foot node R 430, first single-chip microcontroller 411, 9-axis sensor 412, first barometric sensor 413, Wi-Fi module 414, first 2.4G module 415, first power module 416, second single-chip microcontroller 421, second 2.4G module 422, second power module 423, first force-sensing sensor group 424, third single-chip microcontroller 431, second barometric sensor 432, third 2.4G module 433, third power module 434, second force-sensing sensor group 435.
Specific embodiments
The technical solution of the present invention is described in detail below with reference to Fig. 1 to Fig. 3.
As shown in Fig. 1, in order to solve the technical problems in the prior art, an embodiment of the invention provides a multiple features fusion gesture recognition system. The system includes a management terminal 100, a cloud server 200, a wireless network 300 and human body nodes 400. The human body nodes 400 include a chest node 410, a foot node L 420 and a foot node R 430. The chest node 410 includes a first single-chip microcontroller 411, a 9-axis sensor 412, a first barometric sensor 413, a Wi-Fi module 414, a first 2.4G module 415 and a first power module 416. The Wi-Fi module 414 communicates with the cloud server 200 through the wireless network 300; the 9-axis sensor 412, the first barometric sensor 413, the Wi-Fi module 414 and the first 2.4G module 415 are connected with the first single-chip microcontroller 411, and the first power module 416 supplies power to the chest node 410. The foot node L 420 includes a second single-chip microcontroller 421, a second 2.4G module 422, a second power module 423 and a first force-sensing sensor group 424 located in the left insole sandwich. The foot node L 420 is located in a cavity in the left shoe heel, above it is the left insole, and the first force-sensing sensor group 424 is located in the left insole sandwich; the first force-sensing sensor group 424 and the second 2.4G module 422 are connected with the second single-chip microcontroller 421, and the second power module 423 supplies power to the foot node L 420. The foot node R 430 includes a third single-chip microcontroller 431, a second barometric sensor 432, a third 2.4G module 433, a third power module 434 and a second force-sensing sensor group 435 located in the right insole sandwich. The foot node R 430 is located in a cavity in the right shoe heel, above it is the right insole, and the second force-sensing sensor group 435 is located in the right insole sandwich; the second barometric sensor 432, the third 2.4G module 433 and the second force-sensing sensor group 435 are connected with the third single-chip microcontroller 431, and the third power module 434 supplies power to the foot node R 430.
As shown in Fig. 2, which is a schematic diagram of the layout in the insoles of the first force-sensing sensor group 424 and the second force-sensing sensor group 435 of the invention, the two groups each consist of 8 force-sensing sensors, whose voltage outputs are Li (i ∈ [1,8]) and Ri (i ∈ [1,8]) respectively. Taking the first force-sensing sensor group 424 as an example, L1 is located at the big-toe phalanx, L2, L3 and L4 at the metatarsophalangeal joints, L5 and L6 at the outside of the foot, and L7 and L8 at the heel.
As shown in Fig. 3, an embodiment of the invention also provides a multiple features fusion gesture recognition method, including the following steps:
Step (1): the management terminal 100 notifies the human body nodes 400 to train or update the user posture parameters, labeling specific postures such as walking, standing and sitting; then step (2) is executed.
Step (2): data acquisition and communication of the human body nodes 400:
(2-1) the chest node 410 sends a command to the foot nodes L 420 and R 430 through the first 2.4G module 415;
(2-2) after the foot node L 420 receives the command of the chest node through the second 2.4G module 422, it sends the acquired voltage data L1–L8 of the first force-sensing sensor group 424 to the chest node 410 through the second 2.4G module 422;
(2-3) after the foot node R 430 receives the command of the chest node 410 through the third 2.4G module 433, it sends the acquired voltage data R1–R8 of the second force-sensing sensor group 435 and the data P2 of the second barometric sensor 432 to the chest node 410 through the third 2.4G module 433;
(2-4) the chest node 410 receives the data of the foot node L 420 and the foot node R 430 respectively, and at the same time acquires the three-axis angles (x, y, z) and three-axis acceleration data (ax, ay, az) of the 9-axis sensor 412 and the data P1 of the first barometric sensor 413.
Step (3): data preprocessing proceeds as follows, after which the processing results and raw data are uploaded to the cloud server 200. In the parameter training or updating state, step (4) is then executed; in the routine use state, step (5) is executed:
(3-1) calculate the inclination angle BTA of the human body with the horizontal plane from the three-axis attitude angles of the 9-axis sensor;
(3-2) calculate the human body resultant acceleration ha according to the following formula:
ha = √(ax² + ay² + az²)
(3-3) calculate the chest-foot height difference percentage HP according to the following formula:
HP = 44330 × ((P2/P0)^(1/5.255) − (P1/P0)^(1/5.255)) / H0
where P0 is the standard atmospheric pressure and H0 is the user's height.
(3-4) calculate the unit-area stress LPai, RPai of each point of the first force-sensing sensor group 424 and the second force-sensing sensor group 435 according to the following formulas:
LPai = 0.2/(ln(Li) − 1.17) − 0.2
RPai = 0.2/(ln(Ri) − 1.17) − 0.2
(3-5) calculate the switching value LPaDi, RPaDi of each point of the first force-sensing sensor group 424 and the second force-sensing sensor group 435 according to the following formulas:
LPaDi = ε(LPai − ρ)
RPaDi = ε(RPai − ρ)
where ρ is the plantar pressure threshold, taken as 0.45 N/cm², and ε(·) is the unit step function.
(3-6) calculate the comprehensive switching outputs LPaDSUM, RPaDSUM of the first force-sensing sensor group 424 and the second force-sensing sensor group 435 according to the following formulas:
LPaDSUM = Σ LPaDi,  RPaDSUM = Σ RPaDi  (i = 1, …, 8)
Step (4): processing by the cloud server 200:
(4-1) the cloud server 200 calculates separately: the optimum division value HP1 of HP between walking and sitting, the optimum division value HP2 of HP between squatting and picking up, the optimum division value θ1 of BTA between squatting and sitting, and the optimum division value θ2 of BTA between squatting and picking up;
(4-2) once the human posture parameters are determined, the cloud server 200 returns the parameters to the chest node 410.
Step (5): gesture recognition:
(5-1) first detect the degree of human motion according to ha, and detect the inclination of the upper body and the plantar pressure in combination with BTA. If |ha| ≥ 15 m/s² or |ha| ≤ 5 m/s², and at the next moment HP < HP2, BTA < θ2, LPaDSUM < ω1 and RPaDSUM < ω1, it is judged as a fall; continue with (5-2). If 5 m/s² < |ha| < 15 m/s², proceed to (5-3);
(5-2) if |ax| < 5 m/s² and x ≤ 0°, it is a forward fall; if |ax| < 5 m/s² and x > 0°, a backward fall; if |ay| < 5 m/s² and y > 0°, a fall to the left; if |ay| < 5 m/s² and y ≤ 0°, a fall to the right;
(5-3) detect whether the human body has a descending behavior according to HP: if HP ≤ HP1, it is judged as sitting, squatting or picking up, and (5-4) follows; otherwise it is judged as walking or standing, and (5-6) follows;
(5-4) considering HP and BTA together: if BTA ≥ θ1 and HP2 ≤ HP ≤ HP1, it is judged as sitting; if θ2 ≤ BTA < θ1 and HP < HP2, it is judged as squatting; if BTA < θ2, HP ≤ HP1 and LPaDSUM ≥ ω1 or RPaDSUM ≥ ω1, it is judged as picking up, and (5-5) identifies the specific type of picking up;
(5-5) if LPaDSUM ≥ ω1 and RPaDSUM ≥ ω1, it is judged as picking up forward; if LPaDSUM ≥ ω1 and RPaDSUM = 0, picking up to the left; if LPaDSUM = 0 and RPaDSUM ≥ ω1, picking up to the right;
(5-6) calculate the cadence according to the plantar pressure change period: if the cadence is extremely low, it is judged as standing; if the cadence conforms to the human walking pattern, it is judged as walking;
(5-7) after gesture recognition, steps (6) and (7) are carried out.
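Step (5-6) derives cadence from the plantar pressure change period. A minimal sketch, assuming the foot's switching sum from step (3-6) is sampled at a fixed rate and a step is counted on each rising edge from an unloaded foot (sum = 0) to a loaded foot (sum > 0); this edge-counting scheme is an assumption of mine, not specified in the patent.

```python
def cadence_steps_per_minute(pad_sums, sample_rate_hz):
    """Estimate cadence from a time series of switching sums (sketch).

    pad_sums: switching-sum samples (0..8) for one foot over time.
    A step is counted on each 0 -> positive transition (foot strike).
    """
    steps = sum(1 for prev, cur in zip(pad_sums, pad_sums[1:])
                if prev == 0 and cur > 0)
    duration_min = len(pad_sums) / sample_rate_hz / 60.0
    return steps / duration_min if duration_min > 0 else 0.0
```

A near-zero result would then map to "standing" and a physiologically plausible value to "walking".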
Step (6): alarm and reminder: if the recognition result is a fall, or unhealthy behaviors such as prolonged standing or sitting are detected, remind family members or the user.
Step (7): display by the management terminal 100: the chest node 410 uploads the recognized posture result to the cloud server 200, and the management terminal 100 displays the data curves, the current posture and other relevant information according to the cloud server 200 data.
Using the multiple features fusion gesture recognition system and method of the present invention, the daily behavior postures of the body can be effectively recognized, and history can be recorded and queried at the terminal, with an alarm reminder issued when a fall occurs or when unhealthy postures such as prolonged sitting or standing occur. The invention detects upper-body posture changes through the inclination angle, resultant acceleration and attitude angle, analyzes changes in the vertical distance between the upper body and the feet through the chest-foot height difference percentage, monitors changes of the human body's center of gravity in combination with plantar pressure features, and finally recognizes posture using the parameters obtained by cloud computing training. It improves accuracy while protecting the user's privacy, its application is not restricted by scene, and it has broad application prospects in the medical health and game industries.
In this description, the invention has been described with reference to specific embodiments. It is clear, however, that various modifications and alterations may still be made without departing from the spirit and scope of the invention. Therefore, the description and the drawings should be regarded as illustrative and not restrictive.
Claims (10)
1. A multiple features fusion gesture recognition system, characterized by including a management terminal (100), a cloud server (200), a wireless network (300) and human body nodes (400); wherein the human body nodes (400) include a chest node (410), a foot node L (420) and a foot node R (430); the chest node (410) includes a first single-chip microcontroller (411), a 9-axis sensor (412), a first barometric sensor (413), a Wi-Fi module (414), a first 2.4G module (415) and a first power module (416); the foot node L (420) includes a second single-chip microcontroller (421), a second 2.4G module (422), a second power module (423) and a first force-sensing sensor group (424) located in the left insole sandwich; the foot node R (430) includes a third single-chip microcontroller (431), a second barometric sensor (432), a third 2.4G module (433), a third power module (434) and a second force-sensing sensor group (435) located in the right insole sandwich;
the Wi-Fi module (414) communicates with the cloud server (200) through the wireless network (300); the 9-axis sensor (412), the first barometric sensor (413), the Wi-Fi module (414) and the first 2.4G module (415) are connected with the first single-chip microcontroller (411), and the first power module (416) supplies power to the chest node (410);
the first force-sensing sensor group (424) and the second 2.4G module (422) are connected with the second single-chip microcontroller (421), and the second power module (423) supplies power to the foot node L (420);
the second barometric sensor (432), the third 2.4G module (433) and the second force-sensing sensor group (435) are connected with the third single-chip microcontroller (431), and the third power module (434) supplies power to the foot node R (430).
2. The multiple features fusion gesture recognition system according to claim 1, characterized in that the foot node L (420) is located in a cavity in the left shoe heel, above the foot node L (420) is the left insole, and the first force-sensing sensor group (424) is located in the left insole sandwich.
3. The multiple features fusion gesture recognition system according to claim 1, characterized in that the foot node R (430) is located in a cavity in the right shoe heel, above the foot node R (430) is the right insole, and the second force-sensing sensor group (435) is located in the right insole sandwich.
4. The multiple features fusion gesture recognition system according to claim 1, characterized in that the first force-sensing sensor group (424) and the second force-sensing sensor group (435) each consist of 8 force-sensing sensors, whose voltage outputs are Li (i ∈ [1,8]) and Ri (i ∈ [1,8]) respectively; in the first force-sensing sensor group (424), force-sensing sensor L1 is located at the big-toe phalanx of the left foot, force-sensing sensors L2, L3 and L4 at the metatarsophalangeal joints of the left foot, force-sensing sensors L5 and L6 at the outside of the left foot, and force-sensing sensors L7 and L8 at the heel; in the second force-sensing sensor group (435), force-sensing sensor R1 is located at the big-toe phalanx of the right foot, force-sensing sensors R2, R3 and R4 at the metatarsophalangeal joints of the right foot, and force-sensing sensors R5 and R6 at the outside of the right foot.
5. A multiple features fusion gesture recognition method, characterized in that the method includes the following steps:
(1) the management terminal notifies the human body nodes to acquire the user's posture parameters;
(2) the chest node in the human body nodes notifies the foot node L and the foot node R to acquire data; the foot node L acquires the voltage data of the first force-sensing sensor group, the foot node R acquires the voltage data of the second force-sensing sensor group and the data of the second barometric sensor, and the chest node acquires the three-axis angles and three-axis acceleration data of the 9-axis sensor and the data of the first barometric sensor;
(3) according to the data acquired by the human body nodes, calculate the inclination angle of the human body with the horizontal plane, the human body resultant acceleration, the chest-foot height difference percentage and the unit-area stress of each point provided with a force-sensing sensor, and judge the current state: if the current state is the training or updating state, send the calculated results and the labels of the corresponding human posture categories to the cloud server and continue with step (4); if the current state is the recognition state, continue with step (5);
(4) the cloud server calculates the classification indexes and division values of the different human posture categories according to the calculated results of step (3) and the labels of the corresponding human posture categories;
(5) judge the human posture category according to the calculated results of step (3) and the classification indexes and division values of the different human posture categories.
6. The multiple features fusion gesture recognition method according to claim 5, characterized in that step (2) includes the following steps:
(2-1) the chest node sends a command to the foot node L and the foot node R through the first 2.4G module;
(2-2) after the foot node L receives the command of the chest node through the second 2.4G module, it sends the acquired voltage data L1–L8 of the first force-sensing sensor group to the chest node through the second 2.4G module;
(2-3) after the foot node R receives the command of the chest node through the third 2.4G module, it sends the acquired voltage data R1–R8 of the second force-sensing sensor group and the data P2 of the second barometric sensor to the chest node through the third 2.4G module;
(2-4) the chest node receives the data of the foot node L and the foot node R respectively, and at the same time acquires the three-axis angles (x, y, z) and three-axis acceleration data (ax, ay, az) of the 9-axis sensor and the data P1 of the first barometric sensor.
7. multiple features fusion gesture recognition method according to claim 6, which is characterized in that the step (3) includes such as
Lower step:
(3-1) calculate the tilt angle BTA between the human body and the horizontal plane according to the following formula:
(3-2) calculate the human-body resultant acceleration ha according to the following formula:
(3-3) calculates pereiopoda difference in height percentage HP according to the following formula:
HP=44330 ((P2P0)15.255-(P1P0)15.255)H0
In formula, P0For standard atmospheric pressure, H0For user's height;
(3-4) calculate the per-unit-area stress LPai, RPai of each point in the first and second force-sensing sensor groups according to the following formulas:
LPai = 0.2 × (ln(Li) − 1.17)^(−0.2)
RPai = 0.2 × (ln(Ri) − 1.17)^(−0.2)
(3-5) calculate the switching value LPaDi, RPaDi of each point of the first and second force-sensing sensor groups according to the following formulas:
LPaDi = ε(LPai − ρ)
RPaDi = ε(RPai − ρ)
where ε is the unit step function and ρ is a preset sole pressure threshold;
(3-6) calculate the combined switch outputs LPaDSUM, RPaDSUM of the first and second force-sensing sensor groups according to the following formulas:
LPaDSUM = Σi LPaDi
RPaDSUM = Σi RPaDi
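Steps (3-2) to (3-6) reduce to a handful of arithmetic features. In the sketch below, three points are assumptions rather than claim text: the un-extracted resultant-acceleration formula is taken to be the usual vector magnitude, the garbled exponent of step (3-4) is read as −0.2, and ε is taken as the unit step function. The constants 44330 and 5.255 are the standard hypsometric values.

```python
import math

def resultant_acceleration(ax, ay, az):
    # Step (3-2): assumed to be the usual vector magnitude; the patent's
    # formula image did not survive extraction.
    return math.sqrt(ax * ax + ay * ay + az * az)

def height_diff_percent(p1, p2, p0=101325.0, h0=1.70):
    # Step (3-3): hypsometric chest-foot height difference, normalised by
    # body height H0. p1 = chest barometer, p2 = foot barometer (Pa).
    k = 1.0 / 5.255
    return 44330.0 * ((p2 / p0) ** k - (p1 / p0) ** k) / h0

def unit_area_stress(v):
    # Step (3-4): FSR voltage -> per-unit-area stress; the exponent −0.2
    # is a reading of the garbled claim text, not a verified calibration.
    return 0.2 * (math.log(v) - 1.17) ** -0.2

def switch_values(stresses, rho):
    # Step (3-5): ε(x) unit step — 1 where stress reaches threshold ρ.
    return [1 if s >= rho else 0 for s in stresses]

def switch_sum(stresses, rho):
    # Step (3-6): combined switch output (LPaDSUM / RPaDSUM).
    return sum(switch_values(stresses, rho))
```

With the foot barometer at higher pressure than the chest barometer, `height_diff_percent` comes out positive, matching the sign convention of step (3-3).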
8. The multi-feature fusion gesture recognition method according to claim 7, characterized in that step (4) comprises the following steps:
(4-1) the cloud server calculates: the optimal divide value HP1 of the chest-foot height difference percentage HP between the walking and sitting postures, the optimal divide value HP2 of HP between the squatting and picking-up postures, the optimal divide value θ1 of the body-to-horizontal tilt angle BTA between the squatting and sitting postures, and the optimal divide value θ2 of BTA between the squatting and picking-up postures;
(4-2) once the human posture parameters are determined, the cloud server returns the calculated classification indices and optimal divide values to the chest node.
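The claim does not disclose how the cloud server optimises the divide values HP1, HP2, θ1 and θ2. One plausible minimal stand-in, sketched below purely for illustration, is the midpoint between the two labelled class means of each scalar feature; the sample numbers are made up.

```python
def divide_value(class_a, class_b):
    """Midpoint between the two class means of a scalar feature — a
    hypothetical stand-in for the patent's undisclosed optimisation."""
    mean_a = sum(class_a) / len(class_a)
    mean_b = sum(class_b) / len(class_b)
    return (mean_a + mean_b) / 2.0

# e.g. HP samples labelled "walking" vs "sitting" (invented values)
hp1 = divide_value([0.52, 0.55, 0.50], [0.30, 0.28, 0.35])
```

Any supervised split criterion (maximum-margin, minimum misclassification on the training labels sent in step (3)) could replace this midpoint rule without changing the rest of the pipeline.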
9. The multi-feature fusion gesture recognition method according to claim 8, characterized in that step (5) comprises the following steps:
(5-1) first detect the degree of body motion from the resultant acceleration ha, combined with the tilt angle BTA and the plantar pressure to detect how far the upper body leans:
if |ha| ≥ 15 m/s² or |ha| ≤ 5 m/s², and at the next moment HP < HP2, BTA < θ2, LPaDSUM < ω1 and RPaDSUM < ω1, a fall is judged and (5-2) continues; if 5 m/s² < |ha| < 15 m/s², go to (5-3);
(5-2) if |ax| < 5 m/s² and x ≤ 0°, the fall is forward; if |ax| < 5 m/s² and x > 0°, the fall is backward; if |ay| < 5 m/s² and y > 0°, the fall is to the left; if |ay| < 5 m/s² and y ≤ 0°, the fall is to the right;
(5-3) detect from the chest-foot height difference percentage HP whether the body has lowered: if HP ≤ HP1, judge sitting, squatting or picking up and continue with (5-4); otherwise judge walking or standing and go to (5-6);
(5-4) consider HP and the tilt angle BTA together: if BTA ≥ θ1 and HP2 ≤ HP ≤ HP1, judge sitting; if θ2 ≤ BTA < θ1 and HP < HP2, judge squatting; if BTA < θ2, HP ≤ HP1 and LPaDSUM ≥ ω1 or RPaDSUM ≥ ω1, judge picking up and continue with (5-5) to identify the specific pick-up type;
(5-5) if LPaDSUM ≥ ω1 and RPaDSUM ≥ ω1, judge picking up forward; if LPaDSUM ≥ ω1 and RPaDSUM = 0, judge picking up to the left; if LPaDSUM = 0 and RPaDSUM ≥ ω1, judge picking up to the right;
(5-6) calculate the cadence from the plantar pressure cycle: if the cadence is extremely low, judge standing; if the cadence matches the human walking pattern, judge walking.
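The threshold cascade of steps (5-1) to (5-6) is an ordinary decision tree. A sketch with the claim's threshold names follows; the fall-direction sub-step (5-2) and the cadence test of (5-6) are omitted for brevity, and all numeric thresholds in the usage example are invented, standing in for the trained divide values.

```python
def classify(ha, hp, bta, lsum, rsum,
             hp1, hp2, theta1, theta2, omega1):
    """Posture decision cascade of claim 9 (simplified sketch)."""
    # (5-1) fall: extreme resultant acceleration plus a collapsed posture
    if ((abs(ha) >= 15 or abs(ha) <= 5)
            and hp < hp2 and bta < theta2
            and lsum < omega1 and rsum < omega1):
        return "fall"
    # (5-3) has the body lowered?
    if hp <= hp1:
        # (5-4) sitting / squatting / picking up
        if bta >= theta1 and hp2 <= hp <= hp1:
            return "sitting"
        if theta2 <= bta < theta1 and hp < hp2:
            return "squatting"
        if bta < theta2 and (lsum >= omega1 or rsum >= omega1):
            # (5-5) pick-up direction from which sole is loaded
            if lsum >= omega1 and rsum >= omega1:
                return "picking up forward"
            return ("picking up to the left" if rsum == 0
                    else "picking up to the right")
    # (5-6) walking vs standing would be separated by cadence here
    return "walking/standing"

# Invented thresholds: HP1=0.5, HP2=0.3, θ1=60°, θ2=30°, ω1=4
th = dict(hp1=0.5, hp2=0.3, theta1=60.0, theta2=30.0, omega1=4)
```

The ordering matters: the fall test runs first because its acceleration signature can coexist with otherwise sitting-like HP and BTA values.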
10. The multi-feature fusion gesture recognition method according to claim 9, characterized in that the method further comprises the following steps:
(6) if the recognition result is a fall, or an unhealthy behaviour such as prolonged standing or sitting is detected, family members or the user are reminded by SMS or voice;
(7) the chest node uploads the recognized posture result to the cloud server, and the management terminal displays the data curves and the recognition results from the cloud-server data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811431810.7A CN109543762B (en) | 2018-11-28 | 2018-11-28 | Multi-feature fusion gesture recognition system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109543762A true CN109543762A (en) | 2019-03-29 |
CN109543762B CN109543762B (en) | 2021-04-06 |
Family
ID=65851912
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811431810.7A Active CN109543762B (en) | 2018-11-28 | 2018-11-28 | Multi-feature fusion gesture recognition system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109543762B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003236002A (en) * | 2002-02-20 | 2003-08-26 | Honda Motor Co Ltd | Method and apparatus for protecting body |
JP2006158431A (en) * | 2004-12-02 | 2006-06-22 | Kaoru Uchida | Fall prevention training auxiliary device |
CN103076619A (en) * | 2012-12-27 | 2013-05-01 | 山东大学 | System and method for performing indoor and outdoor 3D (Three-Dimensional) seamless positioning and gesture measuring on fire man |
CN106448057A (en) * | 2016-10-27 | 2017-02-22 | 浙江理工大学 | Multisensor fusion based fall detection system and method |
CN106887115A (en) * | 2017-01-20 | 2017-06-23 | 安徽大学 | A kind of Falls Among Old People monitoring device and fall risk appraisal procedure |
Non-Patent Citations (3)
Title |
---|
YUNKUN NING et al.: "Real-time Action Recognition and Fall Detection Based on Smartphone", 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society *
TU, Biqi: "Research on Fall Detection Algorithms for the Elderly Based on Multi-sensor Fusion", China Masters' Theses Full-text Database, Information Science and Technology *
ZHENG, Yu et al.: "Research Progress of Fall Detection ***", Chinese Journal of Medical Physics *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110427890A (en) * | 2019-08-05 | 2019-11-08 | 华侨大学 | More people's Attitude estimation methods based on depth cascade network and mass center differentiation coding |
CN110427890B (en) * | 2019-08-05 | 2021-05-11 | 华侨大学 | Multi-person attitude estimation method based on deep cascade network and centroid differentiation coding |
CN113686256A (en) * | 2021-08-19 | 2021-11-23 | 广州偶游网络科技有限公司 | Intelligent shoe and squatting action identification method |
CN113686256B (en) * | 2021-08-19 | 2024-05-31 | 广州市偶家科技有限公司 | Intelligent shoe and squatting action recognition method |
CN116250830A (en) * | 2023-02-22 | 2023-06-13 | 武汉易师宝信息技术有限公司 | Human body posture judging and identifying system, device and method |
Also Published As
Publication number | Publication date |
---|---|
CN109543762B (en) | 2021-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Vallabh et al. | Fall detection monitoring systems: a comprehensive review | |
US10319209B2 (en) | Method and system for motion analysis and fall prevention | |
US11047706B2 (en) | Pedometer with accelerometer and foot motion distinguishing method | |
CN104146712B (en) | Wearable plantar pressure detection device and plantar pressure detection and attitude prediction method | |
US20180177436A1 (en) | System and method for remote monitoring for elderly fall prediction, detection, and prevention | |
CN104361321B (en) | A kind of method for judging the elderly and falling down behavior and balance ability | |
CN109726672B (en) | Tumbling detection method based on human body skeleton sequence and convolutional neural network | |
EP3525673B1 (en) | Method and apparatus for determining a fall risk | |
WO2018223505A1 (en) | Gait-identifying wearable device | |
CN109171734A (en) | Human body behavioural analysis cloud management system based on Fusion | |
CN109543762A (en) | A kind of multiple features fusion gesture recognition system and method | |
WO2018217652A1 (en) | Systems and methods for markerless tracking of subjects | |
CN106923839A (en) | Exercise assist device, exercising support method and recording medium | |
CN110600125B (en) | Posture analysis assistant system based on artificial intelligence and transmission method | |
Majumder et al. | A multi-sensor approach for fall risk prediction and prevention in elderly | |
CN110706255A (en) | Fall detection method based on self-adaptive following | |
Zhao et al. | Recognition of human fall events based on single tri-axial gyroscope | |
CN112115827B (en) | Falling behavior identification method based on human body posture dynamic characteristics | |
KR20170043308A (en) | Method for identificating Person on the basis gait data | |
Jatesiktat et al. | An elderly fall detection using a wrist-worn accelerometer and barometer | |
Ren et al. | Chameleon: personalised and adaptive fall detection of elderly people in home-based environments | |
Liu et al. | Posture recognition algorithm for the elderly based on BP neural networks | |
CN106778497A (en) | A kind of intelligence endowment nurse method and system based on comprehensive detection | |
Ren et al. | Real-time energy-efficient fall detection based on SSR energy efficiency strategy | |
CN109730660B (en) | Infant wearing equipment and user side |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||