Disclosure of Invention
The invention provides a sleep staging stage identification method based on human-body-monitored sleep data, which divides each stage of the human sleep cycle with a novel, simple algorithm and generates a complete sleep analysis report for evaluating sleep quality, thereby helping to address sleep-related clinical disorders.
In order to achieve the above object, the sleep staging stage identification method based on human body monitoring sleep data provided by the invention comprises the following steps:
step S1, performing sleep data training to obtain the convergence point parameters required for non-contact human sleep state data monitoring;
step S2, using the trained convergence point parameters to perform sleep stage identification and judgment, specifically comprising the following steps:
step S21, sleep state recognition: judging whether the human body is in a sleep state within unit time according to the real index data of the sleep stage of the human body and the sleep state initial judgment rule;
step S22, sleep staging stage identification: carrying out data processing on the index data result in the unit time, and judging the sleep staging stage: a deep sleep stage, a light sleep stage, a rapid eye movement stage, or a waking stage;
step S23, forming a complete sleep cycle time: splicing the time periods of the deep sleep stage, light sleep stage, rapid eye movement stage, and waking stage obtained in step S22 in time order to form a complete sleep time sequence.
Preferably, step S1 specifically includes the following steps:
step S11, extracting a training data set required by non-contact human sleep state data monitoring;
and step S12, training the sleep data according to the training data set, and solving the convergence point parameters required for non-contact human sleep state data monitoring.
Preferably, the training data set in step S11 is specifically a manually and actively labeled uploaded historical data set, and the data set of attribute values x of the convergence point parameters includes data sets of the detection distance between the device and the human body, the respiration rate, the heart rate, and the signal intensity.
Preferably, in step S11, historical known whole-night complete sleep data are obtained; these data are fully uploaded by the device and yield real sleep monitoring data, including the historical known whole-night complete sleep data and known sleep stage standard data, where the known sleep stage standard data include specific time information for the sleep onset time, wake-up time, deep sleep, light sleep, and rapid eye movement;
the step S12 specifically includes the following steps:
step S121, processing the sleep data according to the training data set of historical known whole-night complete sleep data, and solving the convergence point parameters required for non-contact human sleep state data monitoring;
and step S122, verifying the required convergence point parameters against the known sleep stage standard data to obtain a verification set.
Preferably, the step S121 of obtaining the convergence point parameters required by the model specifically includes the following steps:
step S1211, preprocessing data;
calculating the weight w of each attribute value x at each sleep time point in the training data set:
score = |x - max(x)| / std(x)
where std(x) is the standard deviation of the attribute value x, max(x) is the maximum of the attribute value x, score is the intermediate parameter used to obtain the weight, Σscore is the sum of the intermediate parameters over all attribute values x, and max(score) is the maximum of the intermediate parameters over all attribute values x;
calculating the clustering result convergence value Truth from the weights w of the attribute values x using the weighted median function WeightedMedian:
Truth = WeightedMedian(x, w)
Each attribute value x corresponds to a weight; the weights are accumulated from the starting point of the sorted attribute values until 1/2 of the total weight is reached, and the attribute value x at that point is selected as the convergence value Truth of the training data set;
step S1212, selecting the feature variable: calculating the standard deviation of the clustering result convergence point to determine the error fluctuation range interval of the convergence point.
Preferably, the step S22 specifically includes the following steps:
step S221, judging the sleep staging stage within an N-unit time period: sliding a window over periods of N times the unit time, and traversing each period to judge its sleep staging stage according to the sleep staging stage rules;
step S222, splicing the N-unit time period data and updating the sleep staging stage: according to the sleep state update rule, scoring the sleep staging within each N-unit time period and updating the sleep staging stage again.
Preferably, the unit time in step S21 is 1 s, and the N-unit period in step S221 is specifically N = 60, i.e., a 60 s period.
Preferably, the sleep state initial judgment rule in step S21 is specifically: if the detection distance between the device and the human body is less than (the distance convergence point + the distance convergence point standard deviation), the heart rate is greater than 0, the respiration rate is greater than 0, and the signal intensity is greater than -5, the human body is judged to be in a sleep state within the unit time; otherwise, it is judged to be in a non-sleep state within the unit time.
Preferably, the sleep staging rules in step S221 specifically include:
when the state is 'sleep' and the number of occurrences within the N-unit time period is greater than or equal to N, the sleep staging stage is judged to be 'deep sleep';
when the state is 'sleep' and the number of occurrences within the N-unit time period is less than N and greater than or equal to 2/3 N, the sleep staging stage is judged to be 'light sleep';
when the state is 'sleep' and the number of occurrences within the N-unit time period is less than 2/3 N and greater than or equal to 1/2 N, the sleep staging stage is judged to be 'rapid eye movement';
and when the state is 'sleep' and the number of occurrences within the N-unit time period is less than 1/2 N and greater than or equal to 0, the sleep staging stage is judged to be 'awake'.
Preferably, the sleep state update rule in step S222 specifically includes the following steps:
step S2221, defining a set X of the four attribute values (distance, respiration rate, heart rate, and signal intensity) corresponding to the deep sleep, light sleep, and rapid eye movement stages of the sleep staging;
step S2222, normalizing the four attribute values in the set X;
step S2223, summing the standard deviations of the four normalized attributes: score2 = std(detection distance between the device and the human body) + std(respiration rate) + std(heart rate) + std(signal intensity),
where std(detection distance between the device and the human body) is the distance standard deviation, std(respiration rate) is the respiration rate standard deviation, std(heart rate) is the heart rate standard deviation, std(signal intensity) is the signal intensity standard deviation, and score2 is the sum of the standard deviations of the four attributes;
step S2224, updating the sleep staging stage according to the standard deviation sum score2:
when in a sleep staging stage that is not 'awake' and score2 < t1, the updated sleep staging stage is 'deep sleep';
when in a sleep staging stage that is not 'awake' and t1 ≤ score2 < t2, the updated sleep staging stage is 'light sleep';
when in a sleep staging stage that is not 'awake' and score2 ≥ t2, the updated sleep staging stage is 'rapid eye movement';
where the thresholds satisfy t1 < t2, and t1 and t2 are empirical values obtained from historical training data.
The invention divides each stage of the human sleep cycle with a novel, simple algorithm, classifies the whole-night sleep data into the four states of awake, rapid eye movement, light sleep, and deep sleep, and generates a complete sleep analysis report for evaluating sleep quality, thereby helping to address sleep-related clinical disorders.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
To address the existing problems, the invention provides a sleep staging stage identification method based on human-body-monitored sleep data, as shown in Fig. 1, comprising the following steps:
step S1, performing sleep data training to obtain the convergence point parameters required for non-contact human sleep state data monitoring;
and step S2, using the trained convergence point parameters to perform sleep stage identification and judgment.
Step S1 specifically includes the following steps:
step S11, extracting the training data set required for non-contact human sleep state data monitoring; the training data set is specifically a manually and actively labeled uploaded historical data set, and the data set of attribute values x of the convergence point parameters includes data sets of the detection distance between the device and the human body, the respiration rate, the heart rate, and the signal intensity. For example, if the time of going to sleep on a given night and the time of getting up the next day are both known, the training data are the data uploaded by the device between point 'xx:xx:xx' and point 'xx:xx:xx' (which need manual extraction for training). The specific structure of the acquired data is as follows:
Historical known whole-night complete sleep data are obtained; these data are fully uploaded by the device and yield real sleep monitoring data, including the known whole-night complete sleep data and known sleep stage standard data, where the known sleep stage standard data include specific time information for the sleep onset time, wake-up time, deep sleep, light sleep, and rapid eye movement;
step S12, training the sleep data according to the training data set, and solving the convergence point parameters required for non-contact human sleep state data monitoring;
step S12 specifically includes the following steps:
step S121, processing the sleep data according to the training data set of historical known whole-night complete sleep data, and solving the convergence point parameters required for non-contact human sleep state data monitoring;
and step S122, verifying the required convergence point parameters against the known sleep stage standard data to obtain a verification set.
The step S121 of obtaining the convergence point parameters required by the model specifically includes the following steps:
step S1211, preprocessing data;
calculating the weight w of each attribute value x at each sleep time point in the training data set:
score = |x - max(x)| / std(x)
where std(x) is the standard deviation of the attribute value x, max(x) is the maximum of the attribute value x, score is the intermediate parameter used to obtain the weight, Σscore is the sum of the intermediate parameters over all attribute values x, and max(score) is the maximum of the intermediate parameters over all attribute values x;
calculating the clustering result convergence value Truth from the weights w of the attribute values x using the weighted median function WeightedMedian:
Truth = WeightedMedian(x, w)
Each attribute value x corresponds to a weight; the total weight is calculated first, then the weights are accumulated in order from the starting point until 1/2 of the total is reached, and the attribute value x at that point is selected as the convergence value Truth of the training data set;
For example, given the weight of each point in the data set, as shown in Fig. 2 and the table above: following the quicksort idea, a number is found that divides the sequence into a left segment and a right segment; according to the weight sums of the two segments, the left or right half of the sequence is recursively processed. Here the middle weight w is 0.328 and f(v) is 0.550, and that point is the so-called 'true value', i.e., the convergence point Truth is found to be 3.
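The weighted-median computation of step S1211 can be sketched as follows. This is a minimal illustration, not the patented implementation: the text only names score, Σscore, and max(score), so normalizing score by Σscore to obtain the weight w is an assumption of this sketch, and the sample data are hypothetical.

```python
import numpy as np

def attribute_weights(x):
    """Intermediate score per step S1211: score = |x - max(x)| / std(x).
    Normalizing by the sum to obtain w is an assumption of this sketch."""
    x = np.asarray(x, dtype=float)
    score = np.abs(x - x.max()) / x.std()
    return score / score.sum()

def weighted_median(x, w):
    """Sort the attribute values, accumulate their weights from the start
    until half of the total weight is reached; that value is Truth."""
    order = np.argsort(x)
    xs = np.asarray(x, dtype=float)[order]
    ws = np.asarray(w, dtype=float)[order]
    cum = np.cumsum(ws)
    return xs[np.searchsorted(cum, 0.5 * cum[-1])]
```

With hypothetical values [1, 2, 3, 4, 5] and weights concentrated on the middle value, the cumulative weight crosses one half of the total at 3, so Truth = 3, matching the worked example above.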
Step S1212, selecting the feature variable: calculating the standard deviation of the clustering result convergence point to determine the error fluctuation range interval of the convergence point.
The previous stage of constructing the algorithm model yields the parameters output by model training, namely the convergence point and the convergence point standard deviation. Because the data participating in model training are real sleep data, and the trained result is real index data of the human sleep stages, these parameters can serve as an important basis for judging the sleep state (sleeping or non-sleeping). Next, the initial sleep state is judged at a granularity of 1 s, using the following rule to decide whether the body is in a sleep state within each 1 s:
as shown in fig. 3, step S2 specifically includes the following steps:
step S21, sleep state recognition: judging whether the human body is in a sleep state within a unit time according to the real index data of the human sleep stages and the sleep state initial judgment rule; the unit time is 1 s;
The sleep state initial judgment rule checks whether the model convergence point condition is met, specifically: if the detection distance between the device and the human body is less than (the distance convergence point + the distance convergence point standard deviation), the heart rate is greater than 0, the respiration rate is greater than 0, and the signal intensity is greater than -5, the human body is judged to be in a sleep state within the unit time; otherwise, it is judged to be in a non-sleep state within the unit time.
Step S22, sleep staging stage identification: carrying out data processing on the index data result in the unit time, and judging the sleep staging stage: a deep sleep stage, a light sleep stage, a rapid eye movement stage, or a waking stage;
the step S22 specifically includes the following steps:
step S221, judging the sleep staging stage within an N-unit time period: sliding a window over periods of N times the unit time, and traversing each period to judge its sleep staging stage according to the sleep staging stage rules; the N-unit period is specifically a 60 s period. That is, the data results of step S21 are traversed from the 1st second with a sliding window of 60 s per period, and the sleep states are classified into deep sleep, light sleep, rapid eye movement, and awake; the windows are independent, and no data overlap between window intervals.
In this embodiment, a 60 s window slides per cycle, and the sleep staging stage is determined by the number of 'sleep' occurrences within the cycle.
The sleep staging stage rules are specifically:
when the state is 'sleep' and the number of occurrences within the N-unit time period is greater than or equal to N, the sleep staging stage is judged to be 'deep sleep';
when the state is 'sleep' and the number of occurrences within the N-unit time period is less than N and greater than or equal to 2/3 N, the sleep staging stage is judged to be 'light sleep';
when the state is 'sleep' and the number of occurrences within the N-unit time period is less than 2/3 N and greater than or equal to 1/2 N, the sleep staging stage is judged to be 'rapid eye movement';
and when the state is 'sleep' and the number of occurrences within the N-unit time period is less than 1/2 N and greater than or equal to 0, the sleep staging stage is judged to be 'awake'. In this embodiment:
| Rule | Sleep staging stage |
| The state is 'sleep' and the number of occurrences within 60 s is greater than or equal to 60 | Deep sleep |
| The state is 'sleep' and the number of occurrences within 60 s is less than 60 and greater than or equal to 40 | Light sleep |
| The state is 'sleep' and the number of occurrences within 60 s is less than 40 and greater than or equal to 30 | Rapid eye movement |
| The state is 'sleep' and the number of occurrences within 60 s is less than 30 and greater than or equal to 0 | Awake |
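The window classification of step S221 can be sketched as a count-and-threshold function. This is a minimal illustration using the N, 2/3 N, and 1/2 N thresholds of the staging rules (60, 40, 30 for N = 60); the state labels are illustrative.

```python
def stage_of_window(states, n=60):
    """Classify one non-overlapping n-second window (step S221):
    count the seconds judged 'sleep' and apply the N, 2/3 N, 1/2 N
    thresholds (60, 40, 30 for n = 60)."""
    c = sum(1 for s in states if s == "sleep")
    if c >= n:
        return "deep sleep"
    if c >= (2 * n) // 3:
        return "light sleep"
    if c >= n // 2:
        return "rapid eye movement"
    return "awake"
```

For example, a window with all 60 seconds judged 'sleep' is classified as deep sleep, one with 45 such seconds as light sleep, and one with only 10 as awake.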
Step S222, splicing the N-unit time period data and updating the sleep staging stage: according to the sleep state update rule, scoring the sleep staging within each N-unit time period and updating the sleep staging stage again.
Normalized weighted score analysis is performed on the preliminarily judged deep sleep, light sleep, and rapid eye movement states to update the sleep state more accurately. Each sleep stage obtained above is extracted, the sleep data score under that stage is calculated with the scoring function, and the sleep states (deep sleep, light sleep, rapid eye movement) are updated again according to the score.
In this embodiment, the 60 s cycle data are spliced to form the sleep stages, and each sleep stage's data are judged again for updating, for example: deep sleep is changed to light sleep, or light sleep is changed to rapid eye movement.
The sleep state update rule specifically includes the following steps:
step S2221, defining a set X of the four attribute values (distance, respiration rate, heart rate, and signal intensity) corresponding to the deep sleep, light sleep, and rapid eye movement stages of the sleep staging;
step S2222, normalizing the four attribute values in the set X;
step S2223, summing the standard deviations of the four normalized attributes: score2 = std(detection distance between the device and the human body) + std(respiration rate) + std(heart rate) + std(signal intensity),
where std(detection distance between the device and the human body) is the distance standard deviation, std(respiration rate) is the respiration rate standard deviation, std(heart rate) is the heart rate standard deviation, std(signal intensity) is the signal intensity standard deviation, and score2 is the sum of the standard deviations of the four attributes;
step S2224, updating the sleep staging stage according to the standard deviation sum score2:
when in a sleep staging stage that is not 'awake' and score2 < t1, the updated sleep staging stage is 'deep sleep';
when in a sleep staging stage that is not 'awake' and t1 ≤ score2 < t2, the updated sleep staging stage is 'light sleep';
when in a sleep staging stage that is not 'awake' and score2 ≥ t2, the updated sleep staging stage is 'rapid eye movement'; where the thresholds satisfy t1 < t2, and t1 and t2 are empirical values obtained from historical training data. In this embodiment:
| Rule | Sleep staging update |
| Sleep staging stage is not 'awake' and score2 < 0.15 | Deep sleep |
| Sleep staging stage is not 'awake' and score2 ≥ 0.15 and score2 < 0.2 | Light sleep |
| Sleep staging stage is not 'awake' and score2 ≥ 0.2 | Rapid eye movement |
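The score2 update of steps S2221 to S2224 can be sketched as follows. This is a minimal illustration under stated assumptions: the text does not specify the normalization, so min-max normalization to [0, 1] is assumed here, and the thresholds t1 = 0.15, t2 = 0.2 are the empirical values of this embodiment.

```python
import numpy as np

def update_stage(stage, distance, resp_rate, heart_rate, signal,
                 t1=0.15, t2=0.2):
    """Steps S2221 to S2224, a sketch: normalize each attribute series
    (min-max normalization assumed), sum the four standard deviations
    as score2, then update a non-'awake' stage via t1 < t2."""
    if stage == "awake":
        return stage  # only non-'awake' stages are re-scored

    def norm_std(a):
        a = np.asarray(a, dtype=float)
        span = a.max() - a.min()
        return float(np.std((a - a.min()) / span)) if span else 0.0

    score2 = sum(norm_std(a) for a in (distance, resp_rate, heart_rate, signal))
    if score2 < t1:
        return "deep sleep"
    return "light sleep" if score2 < t2 else "rapid eye movement"
```

Intuitively, very stable attributes (low score2) indicate deep sleep, while high variability across the four attributes pushes the stage toward rapid eye movement.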
Step S23, forming a complete sleep cycle time: splicing the time periods of the deep sleep stage, light sleep stage, rapid eye movement stage, and waking stage obtained in step S22 in time order to form a complete sleep time sequence.
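The splicing of step S23 can be sketched as merging consecutive window results into timed segments. The (stage, duration-in-seconds) representation is an illustrative choice, not mandated by the text.

```python
def splice_stages(window_stages, window_len=60):
    """Step S23, a sketch: splice per-window stages in time order,
    merging consecutive identical stages into (stage, seconds) segments."""
    segments = []
    for stage in window_stages:
        if segments and segments[-1][0] == stage:
            # extend the current segment by one window length
            segments[-1] = (stage, segments[-1][1] + window_len)
        else:
            segments.append((stage, window_len))
    return segments
```

For example, two consecutive deep sleep windows followed by a light sleep window splice into a 120 s deep sleep segment and a 60 s light sleep segment.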
The invention adopts a novel and simple algorithm to divide each stage of the human sleep cycle, divides the overnight sleep data into four states of waking, rapid eye movement, light sleep and deep sleep, and generates a complete sleep analysis report to evaluate the sleep quality, thereby helping to solve the problem of clinical diseases related to sleep.
The invention provides a sleep staging stage identification method based on human-body-monitored sleep data, offering a sleep data analysis method for sleep stage identification. The sleep staging stage identification algorithm first extracts historical known whole-night complete sleep data and trains on them to obtain the convergence point required by the model; it then repeatedly verifies the model with data of real known sleep stages (sleep report indices such as the sleep start and end times and the deep sleep, light sleep, rapid eye movement, and awake time periods) to obtain the most reasonable and accurate convergence point parameters; finally, the trained convergence point and model are used to process the 24-hour sleep monitoring data uploaded by the device and generate a complete sleep staging stage time sequence.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.