CN106022378A - Camera and pressure sensor based cervical spondylopathy identification method - Google Patents
- Publication number
- CN106022378A CN106022378A CN201610343035.4A CN201610343035A CN106022378A CN 106022378 A CN106022378 A CN 106022378A CN 201610343035 A CN201610343035 A CN 201610343035A CN 106022378 A CN106022378 A CN 106022378A
- Authority
- CN
- China
- Prior art keywords
- sitting posture
- user
- face
- current
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention discloses a camera- and pressure-sensor-based cervical spondylosis identification method. The method comprises: S1) using a camera and pressure sensors to collect test samples of a user in different sitting postures; S2) identifying the test samples with a face classifier and a sitting-posture classifier and obtaining the accuracy of each classifier; S3) adjusting the weights of the face classifier and the sitting-posture classifier; S4) collecting the user's current face image and current pressure data; S5) using the face classifier to estimate the probability of each sitting posture from the current face image and the sitting-posture classifier to estimate the probability of each sitting posture from the current pressure data, computing a combined probability for each posture, and taking the posture with the highest combined probability as the user's current sitting posture; and S6) separately accumulating, within a preset period, the time the user spends in the standard sitting posture and in non-standard sitting postures, and determining the user's risk of cervical spondylosis. The neck attitude is determined by combining the face image with body pressure, which improves determination accuracy.
Description
Technical field
The present invention relates to the technical fields of image recognition and pressure-distribution sensing, and specifically to a camera- and pressure-sensor-based cervical spondylosis identification method.
Background technology
Cervical spondylosis is a common and very harmful chronic disease that is difficult to cure. Its onset is a gradual process, progressing from mild to severe; if prevention and early diagnosis can be carried out through long-term monitoring, patients can be diagnosed promptly and treatment will not be delayed.
Camera- and pressure-sensor-based prevention and diagnosis of cervical spondylosis requires judging head movement from images. Image recognition is typically used to accurately locate salient features of the face and of the background, and head movement is judged from the relations (typically distance and angle) between the located facial features and the background features. However, head-movement judgment still faces the following technical bottlenecks:
(1) Because many targets must be located, and limited by the hardware level, localization is slow, so delays normally occur;
(2) the salient background feature used as a reference is usually selected by feature value, so it cannot be guaranteed to be a stationary object in the background; if the chosen reference moves during the judgment, the result will be erroneous;
(3) when the user's head turns away and no longer faces the camera, and then turns back again, the judgment becomes inaccurate or even impossible.
At present, multiple cameras can be used for omnidirectional recognition to solve the above problems, but this approach is hard to deploy and relatively costly.
Summary of the invention
To solve the technical problems mentioned in the background, the present invention provides an efficient and accurate camera- and pressure-sensor-based method for preventing and diagnosing cervical spondylosis.
To solve the above technical problems, the present invention adopts the following technical schemes:
One, an image-based motion-capture sitting-posture judgment method, comprising:
S1: the camera collects face images of the user in different sitting postures as training samples;
S2: a random forest is trained with the training samples and the sitting posture corresponding to each training sample, obtaining the face classifier;
S3: the camera collects the user's current face image; the face classifier obtains the facial-feature coordinates, i.e. the centre-point coordinates of the left eye, right eye and mouth, from the current face image, and the average of the facial-feature coordinates is computed and recorded as the current facial average coordinate;
S4: the user's current sitting posture is identified from the facial-feature coordinates in the current face image; this step further comprises:
4.1: the face classifier obtains the facial-feature coordinates, i.e. the facial standard coordinates, from the training samples of the standard sitting posture, and the average of the facial standard coordinates is computed and recorded as the standard facial average coordinate;
4.2: the x-coordinate of the standard facial average coordinate is subtracted from the x-coordinate of the current facial average coordinate, and the resulting difference is recorded as the first difference; the y-coordinate of the standard facial average coordinate is subtracted from the y-coordinate of the current facial average coordinate, and the resulting difference is recorded as the second difference;
4.3: the offset direction of the face in the current face image relative to the face in the standard-posture training samples is judged according to (1) the magnitudes of the first difference and the second difference, (2) the angle between the horizontal direction and the line connecting the current facial average coordinate with the standard facial average coordinate, and (3) the differences between the facial-feature spacings in the current face image and those in the standard-posture training samples, thereby identifying the user's current sitting posture;
S5: based on the current face image, the face classifier identifies the user's current sitting posture;
S6: the recognition results of steps S4 and S5 are compared. If they are identical, this common result is the user's current sitting posture, and the user is advised or alerted according to the current posture; steps S3 to S6 are then performed on the next face-image frame. If the results differ, steps S3 to S6 are performed directly on the next face-image frame.
The above different sitting postures include the standard sitting posture, head raised, head lowered, head tilted left, head tilted right, head far from the camera, and head near the camera.
Sub-step 4.3 is specifically as follows:
(a) If the first difference and the second difference are both greater than a, compute the tangent of the angle between the horizontal direction and the line connecting the current facial average coordinate with the standard facial average coordinate; if the tangent is greater than 1, the user's current sitting posture is judged to be head raised; otherwise, head tilted right.
(b) If the first difference is greater than a and the second difference is less than -a, compute the same tangent; if it is greater than 1, the current sitting posture is judged to be head lowered; otherwise, head tilted right.
(c) If the first difference is greater than a and the second difference lies within [-a, a], the current sitting posture is judged to be head tilted right.
(d) If the first difference is less than -a and the second difference is greater than a, compute the same tangent; if it is greater than 1, the current sitting posture is judged to be head raised; otherwise, head tilted left.
(e) If the first difference and the second difference are both less than -a, compute the same tangent; if it is greater than 1, the current sitting posture is judged to be head lowered; otherwise, head tilted left.
(f) If the first difference is less than -a and the second difference lies within [-a, a], the current sitting posture is judged to be head tilted left.
(g) If the first difference lies within [-a, a] and the second difference is greater than a, the current sitting posture is judged to be head raised.
(h) If the first difference lies within [-a, a] and the second difference is less than -a, the current sitting posture is judged to be head lowered.
(i) If the first difference and the second difference both lie within [-a, a], compute the left eye-right eye, left eye-mouth and right eye-mouth distances in the standard-posture training samples, recorded as standard distance 1, standard distance 2 and standard distance 3 respectively; compute the same three distances in the current face image, recorded as distance 1, distance 2 and distance 3; and compute the differences between distance 1 and standard distance 1, distance 2 and standard distance 2, and distance 3 and standard distance 3. If all three differences are greater than threshold b, the current sitting posture is judged to be head near the camera; if all three differences are less than -b, head far from the camera; otherwise, the current sitting posture is judged to be the standard sitting posture.
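The case analysis of sub-step 4.3 can be sketched as a single decision function. This is a minimal illustrative sketch, not the patent's implementation: the function name, the string labels and the handling of a zero first difference are assumptions, and the distance-based near/far test of case (i) is left to a separate step.

```python
import math

def classify_posture(first_diff, second_diff, a):
    # Tangent of the angle between the horizontal direction and the line
    # joining the current and standard facial average coordinates.
    tan = abs(second_diff) / abs(first_diff) if first_diff else math.inf
    if first_diff > a:                                   # face shifted right
        if second_diff > a:                              # case (a)
            return "head raised" if tan > 1 else "head tilted right"
        if second_diff < -a:                             # case (b)
            return "head lowered" if tan > 1 else "head tilted right"
        return "head tilted right"                       # case (c)
    if first_diff < -a:                                  # face shifted left
        if second_diff > a:                              # case (d)
            return "head raised" if tan > 1 else "head tilted left"
        if second_diff < -a:                             # case (e)
            return "head lowered" if tan > 1 else "head tilted left"
        return "head tilted left"                        # case (f)
    if second_diff > a:                                  # case (g)
        return "head raised"
    if second_diff < -a:                                 # case (h)
        return "head lowered"
    return "case (i)"  # standard posture or near/far: resolved by distances
```

For example, a large positive vertical offset with a small horizontal one falls through to case (g) and yields "head raised".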
Thresholds a and b are empirical values related to the face-width pixel value in the standard-posture training samples, obtained by adjustment over repeated tests.
Two, a sitting-posture judgment method based on pressure-distribution detection, comprising:
S1: pressure-sensor nodes are arranged on a seat cushion, the pressure value of each node is collected while the user adopts different sitting postures, and the node pressure values together with the corresponding sitting postures constitute training sample set D;
S2: with the node pressure values as features and the different sitting-posture types as classes, the Relief or ReliefF method is used to analyse the weight of each node, and the N nodes with the largest weights are selected as effective nodes, N taking a value in the range 15 to 20;
S3: the pressure values of the effective nodes are collected under the different sitting postures, the effective-node pressure values and the corresponding postures constitute training sample set D', and D' is used to train a random forest, obtaining the sitting-posture classifier;
S4: the effective nodes collect current pressure data, and based on the current pressure data the sitting-posture classifier identifies the user's current sitting posture.
In step S2, analysing the weight of each node with the Relief method further comprises:
2a.1: randomly selecting a sample R from training sample set D, finding the nearest sample H to R among the training samples of the same class as R, and finding the nearest sample M to R among the training samples of the class different from R;
2a.2: setting the initial weight of each pressure-sensor node to 0 and then processing the nodes one by one: on the current node, the distance between R and H is compared with the distance between R and M; if the R-H distance is smaller than the R-M distance, the weight of the current node is increased, the increment being the R-M distance on the current node; otherwise, the weight of the current node is decreased, the decrement being the R-H distance on the current node;
2a.3: judging whether the variance of the differences between the current weights of all sensor nodes and their previous weights is less than a predetermined threshold; if so, performing sub-step 2a.4; otherwise, re-executing sub-step 2a.1;
2a.4: taking the N pressure-sensor nodes with the largest weights as the effective nodes.
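The per-node Relief update of steps 2a.1 to 2a.2 can be sketched as follows. This is a minimal pure-Python illustration under stated assumptions: the function name is hypothetical, samples are compared with the Euclidean distance, and a fixed iteration count stands in for the variance-based convergence test of step 2a.3.

```python
import math
import random

def relief_weights(samples, labels, iters=100, seed=0):
    # samples: list of per-node pressure tuples; labels: binary posture classes.
    rnd = random.Random(seed)
    n_nodes = len(samples[0])
    w = [0.0] * n_nodes
    for _ in range(iters):
        i = rnd.randrange(len(samples))
        R = samples[i]
        hits = [s for j, s in enumerate(samples) if labels[j] == labels[i] and j != i]
        misses = [s for j, s in enumerate(samples) if labels[j] != labels[i]]
        H = min(hits, key=lambda s: math.dist(s, R))    # nearest same-class sample
        M = min(misses, key=lambda s: math.dist(s, R))  # nearest other-class sample
        for node in range(n_nodes):
            dh = abs(R[node] - H[node])                 # pressure gap to the hit
            dm = abs(R[node] - M[node])                 # pressure gap to the miss
            if dh < dm:
                w[node] += dm   # node separates the classes: reward it
            else:
                w[node] -= dh   # node does not help: penalise it
    return w
```

On a toy set where only the first node varies with the class, the first node accumulates a much larger weight than a constant node, which is exactly the behaviour used to pick effective nodes.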
In step S2, analysing the weight of each node with the ReliefF method further comprises:
2b.1: randomly selecting a sample R from training sample set D, finding the k nearest samples to R among the training samples of the same class as R, recorded as Set(H), and finding the k nearest samples to R among the training samples of the other classes, recorded as Setc(M);
2b.2: setting the initial weight of each pressure-sensor node to 0 and then processing the nodes one by one: on the current node, the distance between R and Set(H) is compared with the distance between R and Setc(M); if the R-Set(H) distance is smaller than the R-Setc(M) distance, the weight of the current node is increased, the increment being the sum of the distances on the current node between R and each sample Hi in Set(H); otherwise, the weight of the current node is decreased, the decrement being the weighted sum of the distances on the current node between R and each sample Mci in Setc(M), where the weight of the R-Mci distance is the ratio of the number of samples in Setc(M) belonging to the class of Mci to the total number of samples in Setc(M);
2b.3: judging whether the variance of the differences between the current weights of all sensor nodes and their previous weights is less than a predetermined threshold; if so, performing sub-step 2b.4; otherwise, re-executing sub-step 2b.1;
2b.4: taking the N pressure-sensor nodes with the largest weights as the effective nodes.
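The multi-class ReliefF variant of steps 2b.1 to 2b.2 can be sketched in the same style. Again a minimal illustration under assumptions: the function name is hypothetical, Euclidean distance ranks neighbours, a fixed iteration count replaces the convergence test of step 2b.3, and each miss distance is weighted by its class's share among the k misses, as the text above specifies.

```python
import math
import random

def relieff_weights(samples, labels, k=2, iters=100, seed=0):
    rnd = random.Random(seed)
    n_nodes = len(samples[0])
    w = [0.0] * n_nodes
    for _ in range(iters):
        i = rnd.randrange(len(samples))
        R, cls = samples[i], labels[i]
        order = sorted(range(len(samples)), key=lambda j: math.dist(samples[j], R))
        hit_idx = [j for j in order if labels[j] == cls and j != i][:k]   # Set(H)
        miss_idx = [j for j in order if labels[j] != cls][:k]            # Set_c(M)
        miss_cls = [labels[j] for j in miss_idx]
        for node in range(n_nodes):
            hit_term = sum(abs(R[node] - samples[j][node]) for j in hit_idx)
            # each miss distance weighted by its class's share among the k misses
            miss_term = sum(miss_cls.count(labels[j]) / len(miss_idx)
                            * abs(R[node] - samples[j][node]) for j in miss_idx)
            if hit_term < miss_term:
                w[node] += miss_term
            else:
                w[node] -= hit_term
    return w
```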
Three, the camera- and pressure-sensor-based cervical spondylosis identification method, comprising:
S1: the camera and the pressure sensors are used to collect, respectively, face images and effective-node pressure values of the user in different sitting postures; the face images, effective-node pressure values and corresponding sitting postures constitute the test samples;
S2: the face classifier obtained in step S2 of claim 1 and the sitting-posture classifier obtained in step S3 of claim 4 are respectively used to identify the test samples, and the percentages of test samples correctly identified by the face classifier and by the sitting-posture classifier, i.e. the accuracies of the two classifiers, are calculated;
S3: the weights are adjusted according to the accuracies of the face classifier and the sitting-posture classifier: if the accuracy of the face classifier is lower than that of the sitting-posture classifier, the face-classifier weight is reduced by a preset weight-adjustment value and the sitting-posture-classifier weight is raised; otherwise, the face-classifier weight is raised by the preset weight-adjustment value and the sitting-posture-classifier weight is reduced. The initial weights of the face classifier and the sitting-posture classifier are both set to 0.5, and the weight-adjustment value is set manually from experience;
S4: the camera collects the user's current face image, and the effective nodes collect the user's current pressure data;
S5: the face classifier estimates the probability of each sitting posture from the current face image, and the sitting-posture classifier estimates the probability of each sitting posture from the current pressure data; the combined probability of each posture is computed as ci = m·ai + n·bi, and the posture with the largest combined probability is the user's current sitting posture. Here m and n are the weights of the face classifier and the sitting-posture classifier respectively, and ai and bi are the probabilities with which the face classifier identifies the i-th posture class from the current face image and the sitting-posture classifier identifies it from the current pressure data;
S6: within a predetermined period, the durations for which the user holds the standard sitting posture and non-standard sitting postures are counted separately, the ratio of the non-standard-posture duration to the predetermined period is calculated, and when this ratio reaches a specified level the user is judged to be at risk of cervical spondylosis.
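The fusion and risk test of steps S5 and S6 can be sketched together. A minimal sketch under assumptions: the function name, the dictionary representation of per-posture probabilities and durations, the "standard" label, and the 0.5 default ratio are illustrative, not values from the patent.

```python
def fuse_and_flag(face_probs, seat_probs, m, n, durations, period, risk_ratio=0.5):
    # c_i = m*a_i + n*b_i for every posture i, then take the argmax.
    combined = {p: m * face_probs[p] + n * seat_probs[p] for p in face_probs}
    current = max(combined, key=combined.get)
    # share of the period (seconds) spent in any non-standard posture
    nonstandard = sum(t for p, t in durations.items() if p != "standard")
    at_risk = nonstandard / period >= risk_ratio
    return current, at_risk
```

With equal classifier weights m = n = 0.5, a posture that the pressure classifier strongly favours can override a weaker face-classifier vote, which is the point of the weighted combination.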
When the user is judged to be at risk of cervical spondylosis, whether that risk actually exists is further determined according to the user's cervical-vertebra performance level, specifically comprising:
(1) the user lowers the head as far as possible; the camera collects the user's face image, the depression angle is obtained from the face image, and the depression angle is compared against a predetermined normal value;
(2) the user raises the head as far as possible; the camera collects the user's face image, the elevation angle is obtained from the face image, and the elevation angle is compared against a set normal value;
(3) the user tilts the head as far as possible to each side in turn; the camera collects the user's face images, the tilt angle is obtained from each face image and compared against a set normal value, and whether the two tilt angles are equal is also judged;
(4) the user turns the head as far as possible to each side in turn; the camera collects the user's face images, and the user's head-turning trajectory is obtained from the face images;
(5) the user shrugs; the camera collects the user's face image, and the distance between the user's shoulder line and the edge of the chin is obtained from the face image;
(6) the user performs a shoulder-shrinking action; the camera collects the user's face image;
(7) the user bends the wrists and elbows inward as far as possible; the camera collects the user's face image, and the bending radian is obtained from the face image;
(8) the user performs a chest-expanding action; the camera collects the user's face image, and the largest circle swept by the arms during the chest expansion, with the shoulder as centre, is obtained as diameter and trajectory from the face image;
(9) the user shakes the head clockwise and then anticlockwise; the camera collects the user's face image, and the head-shaking frequency is obtained from the face image.
Four, a camera- and pressure-sensor-based cervical spondylosis identification system, comprising: a video monitoring apparatus, a data-transmission module, pressure-sensor nodes arranged in a seat cushion, a processing unit and a warning system. The video monitoring apparatus and the warning system are both connected to the processing unit, and the pressure-sensor nodes are connected to the processing unit through the data-transmission module.
The video monitoring apparatus collects a video stream of the user's face; the pressure-sensor nodes collect the user's pressure information; the processing unit identifies the user's current sitting posture from the data collected by the video monitoring apparatus and the pressure-sensor nodes and, from the ratio of the non-standard-posture duration to the predetermined period, determines whether the user is at risk of cervical spondylosis; the warning system issues a warning when the user is judged to be at risk.
Whereas existing approaches must identify many targets, the present invention only needs to identify the eyes and the mouth, which considerably reduces the computational load of the system; in practical tests an optimal detection frequency is used, so the system runs faster. In addition, for status monitoring and motion tracking when the user is not facing the screen, the present invention also trains with the random-forest method on the individual user's own samples, achieving accurate monitoring.
In the present invention, the face-image information collected by the camera and the body pressure collected by the pressure sensors are combined to determine the neck attitude, which improves judgment accuracy.
Brief description of the drawings
Fig. 1 is the system architecture diagram of the present invention;
Fig. 2 is the system operation flow chart of the present invention;
Fig. 3 is the weight map of ReliefF;
Fig. 4 is the node-deployment diagram.
Detailed description of the invention
The three technical schemes of the present invention are described in detail below.
One, the image-based motion-capture sitting-posture judgment method. The specific steps are as follows:
Step 1, collect training samples.
Images of the user in different sitting postures are collected by the camera as training samples; each image shows the user from the shoulders up. In this embodiment, seven kinds of sitting posture are collected: the standard sitting posture, head raised, head lowered, head tilted left, head tilted right, head far from the camera, and head near the camera. The sampling frequency is 24 frames per second, 1000 samples are collected for each posture, and sampling takes 8-10 minutes in total.
Step 2, train a random forest with the training samples collected in step 1 and the sitting posture corresponding to each training sample, obtaining the face classifier.
The random-forest method builds, in a random manner, a forest consisting of several decision trees with no correlation between them. Once the forest is built, whenever a sample (recorded as the current training sample) is input, every decision tree in the forest judges the sitting posture of the current training sample independently, and the posture chosen by the most trees is taken as the posture of the current training sample.
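The majority-vote rule just described can be sketched independently of how the trees are trained. A minimal illustration: `trees` is any list of callables mapping a sample to a posture label, standing in for trained decision trees; the function name is hypothetical.

```python
from collections import Counter

def forest_predict(trees, sample):
    # Each tree judges the sitting posture independently; the posture
    # receiving the most votes is taken as the forest's answer.
    votes = Counter(tree(sample) for tree in trees)
    return votes.most_common(1)[0][0]
```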
Step 3, use the camera to collect the user's current image in real time, and use the face classifier to obtain, in real time, the coordinates of the facial features in the current image; the facial features here are the left eye, right eye and mouth. The present invention represents the positions of the left eye, right eye and mouth by three circles centred on the centre points of the left eye, right eye and mouth respectively, and takes the circle-centre coordinates as the coordinates of the left eye, right eye and mouth; the facial-feature coordinates thus comprise the coordinates of the left eye, right eye and mouth.
Step 4, identify the user's current sitting posture from the facial-feature coordinates in the current image.
This step further comprises:
(1) the facial-feature coordinates in the training samples of the standard sitting posture are taken as the facial standard coordinates.
(2) the x-coordinates of the left eye, right eye and mouth in the current image are added and the sum is divided by 3, and the y-coordinates are likewise added and divided by 3; the resulting coordinate is recorded as the current facial average coordinate. The x-coordinates of the standard coordinates of the left eye, right eye and mouth are added and the sum is divided by 3, and the y-coordinates are likewise added and divided by 3; the resulting coordinate is recorded as the standard facial average coordinate.
(3) the x-coordinate of the standard facial average coordinate is subtracted from the x-coordinate of the current facial average coordinate, and the resulting difference is recorded as the first difference; the y-coordinate of the standard facial average coordinate is subtracted from the y-coordinate of the current facial average coordinate, and the resulting difference is recorded as the second difference.
The user's current sitting posture is then judged from the first difference and the second difference, specifically:
(a) If the first difference and the second difference are both greater than a, compute the tangent of the angle between the horizontal direction and the line connecting the current facial average coordinate with the standard facial average coordinate; if the tangent is greater than 1, the user's current sitting posture is judged to be head raised; otherwise, head tilted right.
(b) If the first difference is greater than a and the second difference is less than -a, compute the same tangent; if it is greater than 1, the current sitting posture is judged to be head lowered; otherwise, head tilted right.
(c) If the first difference is greater than a and the second difference lies within [-a, a], the current sitting posture is judged to be head tilted right.
(d) If the first difference is less than -a and the second difference is greater than a, compute the same tangent; if it is greater than 1, the current sitting posture is judged to be head raised; otherwise, head tilted left.
(e) If the first difference and the second difference are both less than -a, compute the same tangent; if it is greater than 1, the current sitting posture is judged to be head lowered; otherwise, head tilted left.
(f) If the first difference is less than -a and the second difference lies within [-a, a], the current sitting posture is judged to be head tilted left.
(g) If the first difference lies within [-a, a] and the second difference is greater than a, the current sitting posture is judged to be head raised.
(h) If the first difference lies within [-a, a] and the second difference is less than -a, the current sitting posture is judged to be head lowered.
(i) If the first difference and the second difference both lie within [-a, a], compute the left eye-right eye, left eye-mouth and right eye-mouth distances in the standard-posture training samples, recorded as standard distance 1, standard distance 2 and standard distance 3 respectively; compute the same three distances in the current image, recorded as distance 1, distance 2 and distance 3; and compute the differences between distance 1 and standard distance 1, distance 2 and standard distance 2, and distance 3 and standard distance 3. If all three differences are greater than threshold b, the current sitting posture is judged to be head near the camera; if all three differences are less than -b, head far from the camera; otherwise, the current sitting posture is judged to be the standard sitting posture.
Thresholds a and b are empirical values related to the face-width pixel value in the standard-posture training samples and are obtained by adjustment over repeated tests. In this embodiment, threshold a is set to 1/3 of the face-width pixel value, and threshold b to 1/4 of the face-width pixel value.
In the present invention, absolute coordinates are used. The coordinate system takes the upper-left corner of the image as the origin, with the positive x-axis pointing horizontally to the right and the positive y-axis pointing vertically downward; the coordinate unit is one pixel.
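Sub-steps (2) and (3), together with the coordinate convention just stated, can be sketched as a small helper. A minimal sketch: the function name and the tuple representation of the three feature centres are assumptions.

```python
def facial_differences(current, standard):
    # current/standard: three (x, y) pixel coordinates for the left eye,
    # right eye and mouth, in image coordinates (origin top-left, x right, y down).
    def average(points):
        xs, ys = zip(*points)
        return sum(xs) / 3, sum(ys) / 3
    cx, cy = average(current)    # current facial average coordinate
    sx, sy = average(standard)   # standard facial average coordinate
    return cx - sx, cy - sy      # first difference, second difference
```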
Step 5, based on the current image, use the face classifier to identify the user's current sitting posture.
Step 6, compare the recognition results of steps 4 and 5. If they are identical, this common recognition result is the user's current sitting posture; the user is advised or alerted according to the current posture, and steps 3-6 are then performed on the next image frame. If the results differ, the current image is skipped and steps 3-6 are performed on the next image frame.
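The cross-check of step 6 reduces to a one-line agreement rule. A trivial but illustrative sketch; the function name and the use of `None` as the "skip this frame" signal are assumptions.

```python
def cross_check(geometric_result, classifier_result):
    # Accept the posture only when the geometric rule (step 4) and the
    # face classifier (step 5) agree; otherwise skip the current frame.
    return geometric_result if geometric_result == classifier_result else None
```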
Two, the sitting-posture judgment method based on pressure-distribution detection. The specific steps are as follows:
Step 1, arrange pressure-sensor nodes on the seat cushion, collect the pressure value of each node while the user adopts the different sitting postures, and form training sample set D from the node pressure values and the corresponding postures.
Step 2, with the node pressure values as features and the different sitting-posture types as classes, use the Relief method or the ReliefF method to analyse the weight of each pressure-sensor node, and select the N nodes with the largest weights as effective nodes, N preferably being 15 to 20.
Relief method: for two-class posture classification, the specific steps of analysing the node weights with the Relief method are as follows:
2a.1, randomly select a sample R from training sample set D; find the nearest sample H to R among the training samples of the same class as R, and find the nearest sample M to R among the training samples of the class different from R;
2a.2, process the pressure-sensor nodes one by one: on the current node, compare the distance between R and H with the distance between R and M. If the R-H distance is smaller than the R-M distance, the current node is shown to be useful for discriminating the classes, and its weight is increased; otherwise its weight is decreased.
At each update, the weight increment is the R-M distance on the current node, and the weight decrement is the R-H distance on the current node.
The initial weight of each pressure-sensor node is set to 0. In this step, the distance between a sample and its nearest sample on the current node represents the pressure difference between them at that node.
2a.3 judges whether the difference of the present weight of all the sensors node and the variance of a upper weight is less than predetermined threshold value,
If less than predetermined threshold value, then it is assumed that each pressure transducer node weights will not occur acute variation, now restrains, performs son
Step 2a.4;Otherwise, sub-step 2a.1 is re-executed;
The top n pressure transducer node of 2a.4 weighting weight maximum, as effective node, only arranges pressure at effective node
Force transducer.
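The weight update of sub-steps 2a.1-2a.2 can be sketched in Python as follows; a fixed number of iterations stands in for the convergence test of 2a.3, and the distance measure and toy data are illustrative only.

```python
import random

def relief_weights(X, y, n_iter=20, seed=0):
    """Relief for two posture classes. X: list of per-node pressure vectors,
    y: posture labels. Returns one weight per pressure sensor node."""
    rng = random.Random(seed)
    n_nodes = len(X[0])
    w = [0.0] * n_nodes                    # initial weights are 0
    dist = lambda p, q: sum((u - v) ** 2 for u, v in zip(p, q))
    for _ in range(n_iter):
        i = rng.randrange(len(X))          # 2a.1: random sample R
        R = X[i]
        hits = [X[j] for j in range(len(X)) if y[j] == y[i] and j != i]
        misses = [X[j] for j in range(len(X)) if y[j] != y[i]]
        H = min(hits, key=lambda p: dist(p, R))    # nearest hit
        M = min(misses, key=lambda p: dist(p, R))  # nearest miss
        for k in range(n_nodes):           # 2a.2: per-node pressure differences
            d_h, d_m = abs(R[k] - H[k]), abs(R[k] - M[k])
            if d_h < d_m:
                w[k] += d_m                # node separates the classes
            else:
                w[k] -= d_h                # node does not separate them
    return w

# node 0 separates the two postures, node 1 carries a constant pressure
w = relief_weights([[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [1.1, 5.0]], [0, 0, 1, 1])
```

The N effective nodes are then the indices with the largest weights, e.g. `sorted(range(len(w)), key=lambda k: -w[k])[:N]`.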
The ReliefF method is used when there are more than two sitting posture classes. Analysing the node weights with the ReliefF method proceeds as follows:
2b.1 Randomly select a sample R from the training set D; find the k nearest-neighbour samples of R among the training samples of the same class as R, denoted Set(H), and the k nearest-neighbour samples of R among the training samples of different classes, denoted Setc(M).
2b.2 For each pressure sensor node in turn: on the current node, compare the distance between R and Set(H) with the distance between R and Setc(M). If the R-Set(H) distance is smaller, the node helps discriminate the classes and its weight is increased; otherwise its weight is decreased. At each update the weight increment is the sum, on the current node, of the distances between R and each sample Hi in Set(H); the decrement is the weighted sum, on the current node, of the distances between R and each sample Mci in Setc(M), where the weight of the R-Mci distance is the proportion of samples of Mci's class among all samples in Setc(M). The initial weight of every node is set to 0. In this step the distance between the sample and a neighbour sample on the current node represents their pressure difference at that node.
2b.3 Judge whether the variance of the differences between the current and previous weights of all sensor nodes is below a preset threshold. If so, the node weights no longer change sharply, the procedure has converged, and sub-step 2b.4 is performed; otherwise sub-step 2b.1 is repeated.
2b.4 Select the N pressure sensor nodes with the largest weights as effective nodes; pressure sensors are placed only at the effective nodes.
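A multi-class sketch in the same spirit is shown below. Note that it follows the classical ReliefF sign convention (hit distances decrease a node's weight, class-weighted miss distances increase it), which differs slightly from the wording above, and that the fixed iteration count again replaces the convergence test.

```python
import random
from collections import Counter

def relieff_weights(X, y, k=2, n_iter=30, seed=0):
    """ReliefF-style weights for multi-class posture data (classical update)."""
    rng = random.Random(seed)
    n_nodes = len(X[0])
    w = [0.0] * n_nodes
    dist = lambda p, q: sum((u - v) ** 2 for u, v in zip(p, q))
    for _ in range(n_iter):
        i = rng.randrange(len(X))
        R, cls = X[i], y[i]
        hits = sorted([X[j] for j in range(len(X)) if y[j] == cls and j != i],
                      key=lambda p: dist(p, R))[:k]          # Set(H)
        misses = sorted([(X[j], y[j]) for j in range(len(X)) if y[j] != cls],
                        key=lambda p: dist(p[0], R))[:k]     # Set_c(M)
        share = Counter(c for _, c in misses)  # class shares within Set_c(M)
        for f in range(n_nodes):
            w[f] -= sum(abs(R[f] - h[f]) for h in hits) / len(hits)
            w[f] += sum(share[c] / len(misses) * abs(R[f] - m[f])
                        for m, c in misses) / len(misses)
    return w

# node 0 separates three toy postures, node 1 carries a constant pressure
w = relieff_weights([[0.0, 7.0], [0.1, 7.0], [1.0, 7.0],
                     [1.1, 7.0], [2.0, 7.0], [2.1, 7.0]], [0, 0, 1, 1, 2, 2])
```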
Step 3: collect the pressure value of each effective node under different sitting postures; the effective-node pressure values and the corresponding postures form the training sample set D'. Train a random forest with D' to obtain a sitting posture classifier, which outputs the user's current sitting posture in real time by voting on the real-time pressure information.
Step 4: collect the current pressure data of the effective nodes and, based on these data, identify the user's current sitting posture with the sitting posture classifier.
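Steps 3-4 can be sketched with scikit-learn (an assumption; the patent names no library and the posture labels and pressure values below are illustrative). `predict` applies the vote of the decision trees, and `predict_proba` averages the per-tree class probabilities, a close analogue of the vote fractions used later.

```python
from sklearn.ensemble import RandomForestClassifier

# toy training set D': 4 effective nodes per sample, two illustrative postures
X_train = [
    [10, 80, 75, 12],   # pressure pattern labelled "standard"
    [12, 78, 77, 11],
    [60, 30, 28, 55],   # pressure pattern labelled "lean_forward"
    [58, 33, 25, 57],
]
y_train = ["standard", "standard", "lean_forward", "lean_forward"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

current = [[11, 79, 76, 12]]           # step 4: current effective-node pressures
posture = clf.predict(current)[0]      # vote of the decision trees
proba = clf.predict_proba(current)[0]  # per-posture probabilities
```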
III. The cervical spondylosis recognition method based on camera and pressure sensors comprises the following steps:
Step 1: adjust the weights of the face classifier and the sitting posture classifier according to their accuracy in recognizing sitting postures, specifically:
Collect data of the user in different sitting postures with the camera and the pressure sensors respectively; these are the test samples, each containing a face image, the pressure data of the effective nodes and the corresponding posture. In this embodiment the face images of the 7 sitting postures are labelled a1, a2, a3, a4, a5, a6, a7, and the pressure data of the 7 postures b1, b2, b3, b4, b5, b6, b7.
Recognize the test samples with the face classifier and the sitting posture classifier respectively, and compute the percentage of correctly recognized samples over the total number of test samples; this gives the accuracy of each classifier.
Adjust the weights according to the accuracies: if the face classifier is less accurate than the sitting posture classifier, decrease the face classifier weight by a preset adjustment value and increase the sitting posture classifier weight; otherwise, increase the face classifier weight by the preset adjustment value and decrease the sitting posture classifier weight. In the present invention the initial weight of each classifier is 0.5; the adjustment value is set manually, and in this embodiment it is 0.01.
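Under the embodiment's values (initial weights 0.5, adjustment value 0.01), one round of adjustment might be sketched as follows; the accuracy figures are toy values.

```python
def adjust_weights(acc_face, acc_posture, m=0.5, n=0.5, step=0.01):
    """Shift one step of weight toward the more accurate classifier."""
    if acc_face < acc_posture:
        m, n = m - step, n + step
    else:
        m, n = m + step, n - step
    return m, n

# face classifier 82% accurate, posture classifier 90% accurate (toy values)
m, n = adjust_weights(acc_face=0.82, acc_posture=0.90)
print(round(m, 2), round(n, 2))  # 0.49 0.51
```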
Step 2: judge the user's current sitting posture type with the adjusted weights.
The camera captures the user's current image and the effective nodes collect the user's current pressure data. The face classifier yields from the current image the probability ai of each sitting posture, and the sitting posture classifier yields from the current pressure data the probability bi of each posture. The combined probability of each posture is computed as ci = m·ai + n·bi, where m and n are the weights of the face classifier and the sitting posture classifier, ai is the probability of the i-th posture given by the face classifier from the current image, and bi is the probability of the i-th posture given by the sitting posture classifier from the current pressure data. The posture with the largest combined probability is the user's current sitting posture type. The probability of the i-th posture is the fraction of decision trees that vote for the i-th posture.
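The fusion of step 2 reduces to a weighted sum per posture; the probabilities and weights below are toy values.

```python
def fuse(a, b, m, n):
    """Combined probabilities c_i = m*a_i + n*b_i; returns the winning index."""
    c = [m * ai + n * bi for ai, bi in zip(a, b)]
    return max(range(len(c)), key=lambda i: c[i]), c

a = [0.6, 0.3, 0.1]   # face classifier: tree-vote fraction per posture
b = [0.2, 0.7, 0.1]   # posture classifier: tree-vote fraction per posture
best, c = fuse(a, b, m=0.45, n=0.55)
print(best)  # 1 — since 0.45*0.3 + 0.55*0.7 = 0.52 beats 0.38 and 0.10
```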
Step 3: within a predetermined period, record the time the user spends in the standard sitting posture and in non-standard postures respectively, and compute the proportion of each over the period. When the non-standard proportion reaches a specified level, the user is judged to be at risk of cervical spondylosis and is reminded.
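As a sketch, with an assumed risk threshold of 0.6 (the text only says "a specified level"):

```python
def at_risk(posture_log, period_s, threshold=0.6):
    """posture_log: list of (posture, duration_s) pairs covering the period.
    Returns True when the non-standard share reaches the threshold."""
    bad = sum(d for p, d in posture_log if p != "standard")
    return bad / period_s >= threshold

log = [("standard", 1200), ("bow", 900), ("head_left", 900)]
print(at_risk(log, period_s=3000))  # 1800/3000 = 0.6 -> True
```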
The working mode of the invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1-2, the system of the invention comprises a video monitoring device, a data transmission module, a seat cushion module, a processing unit, a warning system and a server. The video monitoring device is the computer's built-in camera or an external camera. The data transmission module uses the Bluetooth 4.0 protocol: the Bluetooth module connected to the processing unit is configured as master, the module connected to the cushion module as slave, and data are transmitted in transparent pass-through mode. The cushion module comprises the pressure sensors and an Arduino control module. The processing unit is a computer, on whose screen the warnings are displayed. The server is responsible for data backup and storage.
The video monitoring device acquires the video stream. The user makes the corresponding sitting postures in front of it as prompted and holds each for a certain time while the video monitoring device collects training samples; a random forest is trained with these samples to obtain the face classifier of the current user.
The video stream is split into images at one frame per second and fed to the processing unit, where the random forest model in the OpenCV library performs the image processing, mainly detecting the coordinates of the user's eyes and mouth in the image. The eye and mouth coordinates give a preliminary direction of head movement; comparison with thresholds a and b then yields the precise facial movement. Incorrect sitting postures trigger a warning.
The system's default coordinate system takes the upper-left corner of the image as origin, the positive x-axis horizontal to the right and the positive y-axis vertical downward, which guarantees that all point coordinates in the image are positive.
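Under this coordinate system, the preliminary offset judgment from the eye and mouth centres can be sketched as follows; the direction labels and sample points are illustrative, and the embodiment's threshold a = face width / 3 is used.

```python
def offset_direction(current_pts, standard_pts, face_width):
    """Compare the average of the eye/mouth centres in the current frame with
    the standard-posture average (origin top-left, x right, y down)."""
    a = face_width / 3.0                       # embodiment threshold a
    cx = sum(x for x, _ in current_pts) / len(current_pts)
    cy = sum(y for _, y in current_pts) / len(current_pts)
    sx = sum(x for x, _ in standard_pts) / len(standard_pts)
    sy = sum(y for _, y in standard_pts) / len(standard_pts)
    dx, dy = cx - sx, cy - sy                  # first and second differences
    horiz = "right" if dx > a else "left" if dx < -a else "center"
    vert = "down" if dy > a else "up" if dy < -a else "center"
    return horiz, vert

# face shifted right and down relative to the standard posture
print(offset_direction([(130, 150), (170, 150), (150, 190)],
                       [(90, 100), (130, 100), (110, 140)], face_width=90))
```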
Referring to Figs. 3-4, in the sitting posture judgment method based on pressure distribution detection, 121 pressure sensor nodes are first arranged in 11 rows and 11 columns; the pressure value of each node is collected under different sitting postures, and the node pressure values with the corresponding posture types form the training set D. The effective nodes among the 121 nodes are then chosen by the ReliefF method. In this embodiment the 20 nodes with the largest weights were selected as effective nodes by the ReliefF method, see Fig. 4.
A random forest makes its decision by building multiple decision trees in a random fashion: every tree classifies the input test data and the final class is decided by voting. In the voting mode, each decision tree produces one sitting posture label; the labels are tallied and the most frequent one is the final output. Random forests perform well on this data set, can handle high-dimensional data without explicit feature selection, train quickly and are easily parallelized. Feeding the pressure data of the sensor nodes to the trained random forest yields the user's sitting posture in real time.
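The voting mode itself is a simple majority tally over the per-tree labels:

```python
from collections import Counter

def forest_vote(tree_outputs):
    """Return the sitting posture label emitted by the most decision trees."""
    label, _ = Counter(tree_outputs).most_common(1)[0]
    return label

votes = ["bow", "standard", "bow", "bow", "head_left"]
print(forest_vote(votes))  # 'bow' (3 of 5 trees)
```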
When the preliminary judgment is that the user is at risk of cervical spondylosis, the system also provides cervical health exercises for the user to perform; whether the user is actually at risk is then further determined from how well the actions are performed:
1. Bowing the head. Key point: bow the head as far as possible. The camera captures images; the maximum bowing angle is recorded and compared with the preset normal value.
2. Raising the head. Key point: raise the head as far as possible. The camera captures images; the maximum raising angle is recorded and compared with the preset normal value.
3. Tilting the head. Key point: tilt the head as far as possible to each side in turn. The camera captures images; the two tilt angles are recorded, compared with the preset normal value, and checked for being roughly equal.
4. Turning the head. Key point: turn the head as far as possible to each side in turn; in a head turn the crown strains forward while the chin stretches towards the rear side. The camera captures images; during the turn the module shows the user a simulated rotation trajectory and records the user's actual head-turning trajectory.
5. Shrugging. Key point: one hand naturally holds the other, both shoulders are raised and the neck is drawn in. The camera captures images; the relative distance between the shoulder line and the chin edge is recorded.
6. Hunching the shoulders. Key point: both shoulders are pushed forward as far as possible.
7. Bending the wrists and elbows. Key point: the wrists and elbows bend inward as far as possible. The module records the bending arc.
8. Chest expansion. The camera captures images; during the expansion, the diameter and trajectory of the largest circle drawn by the hands with the shoulder as centre are recorded.
9. Shaking the head. Key point: shake the head clockwise, then counter-clockwise. The module records the shaking frequency and reminds the user to follow the set frequency, neither too fast nor too slow.
For each of the above actions, the camera is used to judge whether the action is performed in place.
Claims (9)
1. An image-based motion-capture sitting posture judgment method, characterized by comprising:
S1: capturing, with a camera, face images of the user in different sitting postures as training samples;
S2: training a random forest with the training samples and the sitting posture corresponding to each training sample, obtaining a face classifier;
S3: capturing the user's current face image with the camera; obtaining the facial-feature coordinates, i.e. the centre-point coordinates of the left eye, the right eye and the mouth, from the current face image with the face classifier; computing the average of the facial-feature coordinates, denoted the facial average coordinate;
S4: identifying the user's current sitting posture from the facial-feature coordinates in the current face image, this step further comprising:
4.1 obtaining the facial-feature coordinates, i.e. the facial standard coordinates, from the training samples of the standard sitting posture with the face classifier, and computing the average of the facial standard coordinates, denoted the standard facial average coordinate;
4.2 subtracting the x coordinate of the standard facial average coordinate from the x coordinate of the facial average coordinate, the difference being denoted the first difference; subtracting the y coordinate of the standard facial average coordinate from the y coordinate of the facial average coordinate, the difference being denoted the second difference;
4.3 judging, from (1) the magnitudes of the first difference and the second difference, (2) the angle between the horizontal direction and the line joining the facial average coordinate and the standard facial average coordinate, and (3) the magnitudes of the differences between each facial-feature spacing in the current face image and in the standard-posture training samples, the direction in which the face in the current face image is offset relative to the standard-posture training samples, thereby identifying the user's current sitting posture;
S5: identifying the user's current sitting posture from the current face image with the face classifier;
S6: comparing the recognition results of S4 and S5; if they are identical, this result is the user's current sitting posture, and the user is advised or warned according to it, after which steps S3-S6 are performed on the next face image; if they differ, steps S3-S6 are performed directly on the next face image.
2. The image-based motion-capture sitting posture judgment method of claim 1, characterized in that the different sitting postures in S1 comprise the standard sitting posture, head raised, head bowed, head tilted left, head tilted right, head far from the camera and head near the camera.
3. The image-based motion-capture sitting posture judgment method of claim 1, characterized in that sub-step 4.3 is specifically:
(a) if the first difference and the second difference are both greater than a, compute the tangent of the angle between the horizontal direction and the line joining the facial average coordinate and the standard facial average coordinate; if the tangent is greater than 1, the user's current sitting posture is judged to be head raised; otherwise, head tilted right;
(b) if the first difference is less than -a while the second difference is greater than a, compute the tangent of the angle between the horizontal direction and the line joining the facial average coordinate and the standard facial average coordinate; if the tangent is greater than 1, the current posture is judged to be head bowed; otherwise, head tilted right;
(c) if the first difference is greater than a while the second difference lies in the range [-a, a], the current posture is judged to be head tilted right;
(d) if the first difference is greater than a while the second difference is less than -a, compute the tangent of the angle between the horizontal direction and the line joining the facial average coordinate and the standard facial average coordinate; if the tangent is greater than 1, the current posture is judged to be head raised; otherwise, head tilted left;
(e) if the first difference and the second difference are both less than -a, compute the tangent of the angle between the horizontal direction and the line joining the facial average coordinate and the standard facial average coordinate; if the tangent is greater than 1, the current posture is judged to be head bowed; otherwise, head tilted left;
(f) if the first difference is less than -a while the second difference lies in the range [-a, a], the current posture is judged to be head tilted left;
(g) if the first difference lies in the range [-a, a] while the second difference is greater than a, the current posture is judged to be head raised;
(h) if the first difference lies in the range [-a, a] while the second difference is less than -a, the current posture is judged to be head bowed;
(i) if the first difference and the second difference both lie in the range [-a, a], compute the distances between the left eye and the right eye, the left eye and the mouth, and the right eye and the mouth in the standard-posture training samples, denoted standard distance 1, standard distance 2 and standard distance 3 respectively; compute the corresponding distances in the current face image, denoted distance 1, distance 2 and distance 3; compute the differences between standard distance 1 and distance 1, standard distance 2 and distance 2, and standard distance 3 and distance 3; if all the differences are greater than threshold b, the current posture is judged to be head near the camera; if all the differences are less than threshold b, the current posture is judged to be head far from the camera; otherwise, the current posture is judged to be the standard sitting posture;
thresholds a and b are empirical values related to the face width in pixels in the standard-posture training samples, adjusted and obtained through repeated tests.
4. A sitting posture judgment method based on pressure distribution detection, characterized by comprising:
S1: arranging pressure sensor nodes on a seat cushion and collecting the pressure value of each node under the user's different sitting postures; the node pressure values and the corresponding postures form a training sample set D;
S2: taking the node pressure values as features and the different sitting posture types as classes, analysing the weight of each node with the Relief or ReliefF method, and selecting the N nodes with the largest weights as effective nodes, N taking a value in the range 15-20;
S3: collecting the pressure value of each effective node under different sitting postures; the effective-node pressure values and the corresponding postures form a training sample set D'; training a random forest with D' to obtain a sitting posture classifier;
S4: collecting the current pressure data of the effective nodes and, based on these data, identifying the user's current sitting posture with the sitting posture classifier.
5. The sitting posture judgment method based on pressure distribution detection of claim 4, characterized in that analysing the node weights with the Relief method in step S2 further comprises:
2a.1 randomly selecting a sample R from the training sample set D; finding the nearest sample H of R among the training samples of the same class as R, and the nearest sample M of R among the training samples of a different class;
2a.2 setting the initial weight of each pressure sensor node to 0, then for each pressure sensor node in turn: on the current node, comparing the distance between R and H with the distance between R and M; if the R-H distance is smaller than the R-M distance, increasing the node's weight by the R-M distance on the current node; otherwise, decreasing the node's weight by the R-H distance on the current node;
2a.3 judging whether the variance of the differences between the current and previous weights of all sensor nodes is below a preset threshold; if so, performing sub-step 2a.4; otherwise, repeating from sub-step 2a.1;
2a.4 selecting the N pressure sensor nodes with the largest weights as effective nodes.
6. The sitting posture judgment method based on pressure distribution detection of claim 4, characterized in that analysing the node weights with the ReliefF method in step S2 further comprises:
2b.1 randomly selecting a sample R from the training sample set D; finding the k nearest-neighbour samples of R among the training samples of the same class as R, denoted Set(H), and the k nearest-neighbour samples of R among the training samples of different classes, denoted Setc(M);
2b.2 setting the initial weight of each pressure sensor node to 0, then for each pressure sensor node in turn: on the current node, comparing the distance between R and Set(H) with the distance between R and Setc(M); if the R-Set(H) distance is smaller than the R-Setc(M) distance, increasing the node's weight, the increment being the sum, on the current node, of the distances between R and each sample Hi in Set(H); otherwise, decreasing the node's weight, the decrement being the weighted sum, on the current node, of the distances between R and each sample Mci in Setc(M), where the weight of the R-Mci distance is the proportion of samples of Mci's class among all samples in Setc(M);
2b.3 judging whether the variance of the differences between the current and previous weights of all sensor nodes is below a preset threshold; if so, performing sub-step 2b.4; otherwise, repeating from sub-step 2b.1;
2b.4 selecting the N pressure sensor nodes with the largest weights as effective nodes.
7. A cervical spondylosis recognition method based on a camera and pressure sensors, characterized by comprising:
S1: collecting, with the camera and the pressure sensors respectively, the face images and effective-node pressure values of the user in different sitting postures; the face images, effective-node pressure values and corresponding postures constitute the test samples;
S2: recognizing the test samples with the face classifier obtained in step S2 of claim 1 and the sitting posture classifier obtained in step S3 of claim 4 respectively, and computing the percentage of correctly recognized samples over the total number of test samples, i.e. the accuracy of the face classifier and of the sitting posture classifier;
S3: adjusting the weights according to the accuracies of the two classifiers: if the face classifier is less accurate than the sitting posture classifier, decreasing the face classifier weight by a preset adjustment value and increasing the sitting posture classifier weight; otherwise, increasing the face classifier weight by the preset adjustment value and decreasing the sitting posture classifier weight; the initial weight of each classifier is set to 0.5, and the adjustment value is set manually from experience;
S4: capturing the user's current face image with the camera, and collecting the user's current pressure data with the effective nodes;
S5: recognizing the probability of each sitting posture from the current face image with the face classifier and from the current pressure data with the sitting posture classifier, and computing the combined probability of each posture ci = m·ai + n·bi; the posture with the largest combined probability is the user's current sitting posture; m and n are the weights of the face classifier and the sitting posture classifier, ai is the probability of the i-th posture given by the face classifier from the current face image, and bi is the probability of the i-th posture given by the sitting posture classifier from the current pressure data;
S6: within a predetermined period, recording the time the user spends in the standard sitting posture and in non-standard postures respectively, and computing the proportion of the non-standard-posture time over the period; when this proportion reaches a specified level, judging that the user is at risk of cervical spondylosis.
8. The cervical spondylosis recognition method based on a camera and pressure sensors of claim 7, characterized in that, when the user is judged to be at risk of cervical spondylosis, whether the user is actually at risk is further determined from how well the user performs cervical health actions, specifically including:
(1) the user bows the head as far as possible; the camera captures the user's face image; the bowing angle is obtained from the face image and compared with a predetermined normal value;
(2) the user raises the head as far as possible; the camera captures the user's face image; the raising angle is obtained from the face image and compared with the set normal value;
(3) the user tilts the head as far as possible to each side in turn; the camera captures the user's face images; the tilt angles are obtained from the face images and compared with the set normal value, and whether the two tilt angles are equal is also judged;
(4) the user turns the head as far as possible to each side in turn; the camera captures the user's face images; the user's head-turning trajectory is obtained from the face images;
(5) the user shrugs; the camera captures the user's face image; the distance between the user's shoulder line and the chin edge is obtained from the face image;
(6) the user hunches the shoulders forward; the camera captures the user's face image;
(7) the user bends the wrists and elbows inward as far as possible; the camera captures the user's face image; the bending arc is obtained from the face image;
(8) the user performs a chest-expansion action; the camera captures the user's face image; during the expansion, the diameter and trajectory of the largest circle drawn by the arms with the shoulder as centre are obtained from the face image;
(9) the user shakes the head clockwise and then counter-clockwise; the camera captures the user's face image; the head-shaking frequency is obtained from the face image.
9. A cervical spondylosis identification system based on a camera and pressure sensors, characterized by comprising: a video monitoring device, a data transmission module, pressure sensor nodes arranged on a seat cushion, a processing unit and a warning system; the video monitoring device and the warning system are both connected to the processing unit, and the pressure sensor nodes are connected to the processing unit through the data transmission module;
the video monitoring device is used to collect the video stream of the user's face; the pressure sensor nodes are used to collect the user's pressure information; the processing unit is used to identify the user's current sitting posture from the data collected by the video monitoring device and the pressure sensor nodes, and to judge, from the proportion of non-standard-posture time over a predetermined period, whether the user is at risk of cervical spondylosis; the warning system is used to issue a warning when the user is judged to be at risk of cervical spondylosis.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610343035.4A CN106022378B (en) | 2016-05-23 | 2016-05-23 | Sitting posture judgment method and based on camera and pressure sensor cervical spondylosis identifying system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106022378A true CN106022378A (en) | 2016-10-12 |
CN106022378B CN106022378B (en) | 2019-05-10 |
Family
ID=57096124
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610343035.4A Active CN106022378B (en) | 2016-05-23 | 2016-05-23 | Sitting posture judgment method and based on camera and pressure sensor cervical spondylosis identifying system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106022378B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106991399A (en) * | 2017-04-01 | 2017-07-28 | 浙江陀曼精密机械有限公司 | A kind of sitting posture image detection Compare System and its method |
CN107562352A (en) * | 2017-08-15 | 2018-01-09 | 苏州三星电子电脑有限公司 | Computer user's posture correcting method |
CN108741862A (en) * | 2018-05-22 | 2018-11-06 | 四川斐讯信息技术有限公司 | A kind of sitting posture method of adjustment and sitting posture adjust seat |
CN108881827A (en) * | 2018-06-22 | 2018-11-23 | 上海掌门科技有限公司 | Determine that user sits in meditation into the method, equipment and storage medium fixed time |
CN109685025A (en) * | 2018-12-27 | 2019-04-26 | 中科院合肥技术创新工程院 | Shoulder feature and sitting posture Activity recognition method |
CN110443147A (en) * | 2019-07-10 | 2019-11-12 | 广州市讯码通讯科技有限公司 | A kind of sitting posture recognition methods, system and storage medium |
CN110781741A (en) * | 2019-09-20 | 2020-02-11 | 中国地质大学(武汉) | Face recognition method based on Relief feature filtering method |
CN110968854A (en) * | 2018-09-29 | 2020-04-07 | 北京航空航天大学 | Sitting posture identity authentication method and device |
CN112220212A (en) * | 2020-12-16 | 2021-01-15 | 湖南视觉伟业智能科技有限公司 | Table/chair adjusting system and method based on face recognition |
CN113288122A (en) * | 2021-05-21 | 2021-08-24 | 河南理工大学 | Wearable sitting posture monitoring device and sitting posture monitoring method |
CN113361342A (en) * | 2021-05-20 | 2021-09-07 | 杭州麦淘淘科技有限公司 | Multi-modal human body sitting posture detection method and device |
CN116884083A (en) * | 2023-06-21 | 2023-10-13 | 圣奥科技股份有限公司 | Sitting posture detection method, medium and device based on human body key points |
CN116999222A (en) * | 2023-09-28 | 2023-11-07 | 杭州键嘉医疗科技股份有限公司 | Soft tissue tension measuring device and pressure calibration method thereof |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101697199A (en) * | 2009-08-11 | 2010-04-21 | 北京盈科成章科技有限公司 | Head and face posture detection method and assistive system using same for disabled users to operate a computer |
CN101916496A (en) * | 2010-08-11 | 2010-12-15 | 无锡中星微电子有限公司 | System and method for detecting driving posture of driver |
US20110158546A1 (en) * | 2009-12-25 | 2011-06-30 | Primax Electronics Ltd. | System and method for generating control instruction by using image pickup device to recognize user's posture |
CN102298692A (en) * | 2010-06-24 | 2011-12-28 | 北京中星微电子有限公司 | Method and device for detecting body postures |
CN102629305A (en) * | 2012-03-06 | 2012-08-08 | 上海大学 | Feature selection method for SNP (Single Nucleotide Polymorphism) data |
CN104239860A (en) * | 2014-09-10 | 2014-12-24 | 广东小天才科技有限公司 | Method and device for detecting and reminding about sitting posture during use of an intelligent terminal |
CN105139447A (en) * | 2015-08-07 | 2015-12-09 | 天津中科智能技术研究院有限公司 | Real-time sitting posture detection method based on dual cameras |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101697199A (en) * | 2009-08-11 | 2010-04-21 | 北京盈科成章科技有限公司 | Head and face posture detection method and assistive system using same for disabled users to operate a computer |
US20110158546A1 (en) * | 2009-12-25 | 2011-06-30 | Primax Electronics Ltd. | System and method for generating control instruction by using image pickup device to recognize user's posture |
CN102298692A (en) * | 2010-06-24 | 2011-12-28 | 北京中星微电子有限公司 | Method and device for detecting body postures |
CN101916496A (en) * | 2010-08-11 | 2010-12-15 | 无锡中星微电子有限公司 | System and method for detecting driving posture of driver |
CN102629305A (en) * | 2012-03-06 | 2012-08-08 | 上海大学 | Feature selection method for SNP (Single Nucleotide Polymorphism) data |
CN104239860A (en) * | 2014-09-10 | 2014-12-24 | 广东小天才科技有限公司 | Method and device for detecting and reminding about sitting posture during use of an intelligent terminal |
CN105139447A (en) * | 2015-08-07 | 2015-12-09 | 天津中科智能技术研究院有限公司 | Real-time sitting posture detection method based on dual cameras |
Non-Patent Citations (2)
Title |
---|
肖振华 (Xiao Zhenhua): "Occupant characteristic recognition based on body-pressure distribution detection and support vector machine classification", China Master's Theses Full-text Database (Information Science and Technology) *
贾若辰 (Jia Ruochen): "Research on human sitting posture feature extraction and recognition algorithms based on machine vision technology", China Master's Theses Full-text Database (Information Science and Technology) *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106991399A (en) * | 2017-04-01 | 2017-07-28 | 浙江陀曼精密机械有限公司 | Sitting posture image detection and comparison system and method |
CN106991399B (en) * | 2017-04-01 | 2020-07-31 | 浙江陀曼精密机械有限公司 | Sitting posture image detection and comparison system and method thereof |
CN107562352A (en) * | 2017-08-15 | 2018-01-09 | 苏州三星电子电脑有限公司 | Posture correction method for computer users |
CN108741862A (en) * | 2018-05-22 | 2018-11-06 | 四川斐讯信息技术有限公司 | Sitting posture adjustment method and sitting-posture-adjusting seat |
CN108881827A (en) * | 2018-06-22 | 2018-11-23 | 上海掌门科技有限公司 | Method, device and storage medium for determining that a meditating user has entered a settled state |
CN110968854A (en) * | 2018-09-29 | 2020-04-07 | 北京航空航天大学 | Sitting posture identity authentication method and device |
CN109685025A (en) * | 2018-12-27 | 2019-04-26 | 中科院合肥技术创新工程院 | Shoulder feature and sitting posture behavior recognition method |
CN110443147A (en) * | 2019-07-10 | 2019-11-12 | 广州市讯码通讯科技有限公司 | Sitting posture recognition method, system and storage medium |
CN110781741A (en) * | 2019-09-20 | 2020-02-11 | 中国地质大学(武汉) | Face recognition method based on Relief feature filtering |
CN112220212A (en) * | 2020-12-16 | 2021-01-15 | 湖南视觉伟业智能科技有限公司 | Table/chair adjusting system and method based on face recognition |
CN113361342A (en) * | 2021-05-20 | 2021-09-07 | 杭州麦淘淘科技有限公司 | Multi-modal human body sitting posture detection method and device |
CN113361342B (en) * | 2021-05-20 | 2022-09-20 | 杭州好学童科技有限公司 | Multi-modal human body sitting posture detection method and device |
CN113288122A (en) * | 2021-05-21 | 2021-08-24 | 河南理工大学 | Wearable sitting posture monitoring device and sitting posture monitoring method |
CN113288122B (en) * | 2021-05-21 | 2023-12-19 | 河南理工大学 | Wearable sitting posture monitoring device and sitting posture monitoring method |
CN116884083A (en) * | 2023-06-21 | 2023-10-13 | 圣奥科技股份有限公司 | Sitting posture detection method, medium and device based on human body key points |
CN116884083B (en) * | 2023-06-21 | 2024-05-28 | 圣奥科技股份有限公司 | Sitting posture detection method, medium and device based on human body key points |
CN116999222A (en) * | 2023-09-28 | 2023-11-07 | 杭州键嘉医疗科技股份有限公司 | Soft tissue tension measuring device and pressure calibration method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN106022378B (en) | 2019-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106022378A (en) | Camera and pressure sensor based cervical spondylopathy identification method | |
US11644896B2 (en) | Interactive motion-based eye tracking calibration | |
US11989340B2 (en) | Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location in at least one of a virtual and augmented reality system | |
CN109477951B (en) | System and method for identifying persons and/or identifying and quantifying pain, fatigue, mood and intent while preserving privacy | |
CN107169453B (en) | Sitting posture detection method based on depth sensor | |
CN107103309A (en) | Student sitting posture detection and correction system based on image recognition | |
CN111083922A (en) | Dental image analysis method for correction diagnosis and apparatus using the same | |
WO2016084499A1 (en) | Behavior classification system, behavior classification device, and behavior classification method | |
CN106923839A (en) | Exercise assistance device, exercise support method and recording medium | |
CN102298692B (en) | Human body posture detection method and device | |
CN112101124A (en) | Sitting posture detection method and device | |
JP2023549838A (en) | Method and system for detecting child sitting posture based on child face recognition | |
CN109044375A (en) | Control system and method for real-time tracking and detection of eye fatigue | |
CN107680351A (en) | Quick sitting posture correction reminder and eye protection method | |
Estrada et al. | Sitting posture recognition for computer users using smartphones and a web camera | |
CN110123280A (en) | Finger dexterity detection method based on intelligent mobile terminal operation behavior recognition | |
CN109758737A (en) | Posture correction device and method for underwater motion | |
CN116469174A (en) | Deep learning sitting posture measurement and detection method based on monocular camera | |
CN115063891B (en) | Human body abnormal sign data monitoring method | |
JP3686418B2 (en) | Measuring device and method | |
JP7439932B2 (en) | Information processing system, data storage device, data generation device, information processing method, data storage method, data generation method, recording medium, and database | |
WO2022024274A1 (en) | Image processing device, image processing method, and recording medium | |
CN108378822A (en) | Wearable device for detecting whether a user is asleep | |
CN107845112A (en) | Method and system for guiding a user's posture when using an electronic product | |
CN114255507A (en) | Student posture recognition analysis method based on computer vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||