CN117158891A - System and method for sleep state tracking


Info

Publication number
CN117158891A
Authority
CN
China
Prior art keywords
sleep
examples
motion
state
channel
Legal status
Pending
Application number
CN202310642638.4A
Other languages
Chinese (zh)
Inventor
M·莫拉扎德
V·卡利达斯
N·E·巴盖尔扎德
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority claimed from US 18/309,386 (published as US 2023/0389862 A1)
Application filed by Apple Inc filed Critical Apple Inc
Publication of CN117158891A publication Critical patent/CN117158891A/en

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present disclosure relates to systems and methods for sleep state tracking. A wearable device including a motion tracking sensor may be used to track sleep. The data from the motion tracking sensor may be used to estimate/classify sleep states for multiple periods of time and/or to determine sleep intervals. In some examples, to improve performance, sleep state classification may be performed on data within a sleep tracking session. The start of a sleep tracking session may be defined by detecting a rest state and the end of a sleep tracking session may be defined by an active state. In some examples, to improve performance, the classified sleep states for multiple periods may be filtered and/or smoothed. In some examples, a signal quality check may be performed on data from the motion tracking sensor. In some examples, the classification of these sleep states and/or the display of the results of sleep tracking may be premised on one or more signal quality checks.

Description

System and method for sleep state tracking
Cross Reference to Related Applications
The present application claims the benefit of U.S. provisional application No. 63/365,840, filed June 3, 2022, and U.S. patent application No. 18/309,386, filed April 28, 2023, the contents of both of which are incorporated herein by reference in their entirety for all purposes.
Technical Field
The present disclosure relates generally to systems and methods for tracking sleep states, and more particularly, to tracking sleep states using wearable devices.
Background
Good sleep is considered critical to health. Abnormal sleep habits can cause a number of health disorders. Some sleep disorders may adversely affect the physical and psychological functions of the human body. Thus, providing information about sleep states to a user may be useful for improving sleep habits and wellbeing.
Disclosure of Invention
The present disclosure relates to systems and methods for tracking sleep using a wearable device. The wearable device may include one or more sensors including motion (and/or orientation) tracking sensors (e.g., accelerometers, gyroscopes, Inertial Measurement Units (IMUs), etc.), among other possible sensors. Data from the one or more sensors may be processed in the wearable device and/or by another device in communication with the one or more sensors of the wearable device to estimate/classify sleep states for multiple periods of time and/or to determine sleep state intervals (e.g., during a sleep tracking session). In some examples, to improve performance, sleep/awake classification may be performed on data from a sleep tracking session (e.g., classifying sleep states as awake or asleep). In some examples, to improve performance, sleep/wake classification may be performed on data from sleep tracking sessions to determine more detailed sleep states (e.g., wake, Rapid Eye Movement (REM) sleep, non-REM sleep stage one, non-REM sleep stage two, non-REM sleep stage three). The start of a sleep tracking session may be defined by detecting a rest state and the end of a sleep tracking session may be defined by an active state. In some examples, to improve performance, the classified sleep states for the plurality of time periods may be filtered and/or smoothed. In some examples, a signal quality check may be performed on data from one or more sensors. In some examples, the display of the results of sleep tracking may be premised on a signal quality check.
Drawings
Fig. 1A-1B illustrate an exemplary system that may be used to track sleep according to examples of the present disclosure.
Fig. 2A-2D illustrate example block diagrams and corresponding timing diagrams for sleep tracking according to examples of the present disclosure.
Fig. 3 illustrates an exemplary process for a rest/activity classifier according to an example of the present disclosure.
Fig. 4 illustrates an exemplary process for a sleep/awake classifier according to an example of the present disclosure.
Fig. 5 illustrates an exemplary block diagram of feature extraction for sleep/wake classification according to an example of the present disclosure.
Fig. 6 illustrates an exemplary process for a quality check classifier according to an example of the present disclosure.
Fig. 7A-7B illustrate block diagrams for smoothing/filtering and diagrams indicating in-bed detection, according to examples of the present disclosure.
Fig. 8 illustrates an exemplary process for a sleep state classifier according to an example of the present disclosure.
Detailed Description
In the following description of the examples, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific examples which may be practiced. It is to be understood that other examples may be utilized and structural changes may be made without departing from the scope of the disclosed examples.
The present disclosure relates to systems and methods for tracking sleep using a wearable device. The wearable device may include one or more sensors including motion (and/or orientation) tracking sensors (e.g., accelerometers, gyroscopes, Inertial Measurement Units (IMUs), etc.), among other possible sensors. Data from the one or more sensors may be processed in the wearable device and/or by another device in communication with the one or more sensors of the wearable device to estimate/classify sleep states for multiple periods of time and/or to determine sleep state intervals (e.g., during a sleep tracking session). In some examples, to improve performance, sleep/awake classification (e.g., classifying sleep states as awake or asleep) may be performed on data from the sleep tracking session. In some examples, to improve performance, sleep/wake classification may be performed on data from sleep tracking sessions to determine more detailed sleep states (e.g., wake, Rapid Eye Movement (REM) sleep, non-REM sleep stage one, non-REM sleep stage two, non-REM sleep stage three). A more detailed sleep state classification is generally referred to herein as a sleep state classification (performed by a sleep state classifier), but may be understood as a more detailed example of a sleep/awake classification. The start of a sleep tracking session may be defined by detecting a rest state and the end of a sleep tracking session may be defined by an active state. In some examples, to improve performance, the classified sleep states for the plurality of time periods may be filtered and/or smoothed. In some examples, a signal quality check may be performed on data from one or more sensors. In some examples, the display of the results of sleep tracking may be premised on a signal quality check.
Fig. 1A-1B illustrate an exemplary system that may be used to track sleep according to examples of the present disclosure. The system may include one or more sensors and processing circuitry to use data from the one or more sensors to estimate/classify sleep states for a plurality of periods. In some examples, the system may be implemented in a wearable device (e.g., wearable device 100). In some examples, the system may be implemented in more than one device (e.g., wearable device 100 and a second device in communication with wearable device 100).
Fig. 1A illustrates an exemplary wearable device 100 that may be attached to a user using straps 146 or other fasteners. The wearable device 100 may include one or more sensors for estimating/classifying sleep states for a plurality of periods and/or determining sleep intervals, and optionally may include a touch screen 128 to display the results of sleep tracking as described herein.
Fig. 1B illustrates an exemplary block diagram of an architecture of a wearable device 100 for tracking sleep according to an example of the present disclosure. As shown in fig. 1B, wearable device 100 may include one or more sensors. For example, the wearable device 100 may optionally include an optical sensor that includes one or more light emitters 102 (e.g., one or more Light Emitting Diodes (LEDs)) and one or more optical sensors 104 (e.g., one or more photodetectors/photodiodes). One or more light emitters may generate light in a range corresponding to Infrared (IR), green, amber, blue, and/or red light, among other possibilities. The optical sensor may be used to emit light into the skin 114 of the user and to detect the reflection of light reflected back from the skin. The optical sensor measurements obtained by the light sensor may be converted to a digital signal (e.g., a time domain photoplethysmography (PPG) signal) via an analog-to-digital converter (ADC) 105b for processing. In some examples, the optical sensor and processing of the optical signals by the one or more processors 108 may be used for various functions, including estimating physiological characteristics (e.g., heart rate, arterial blood oxygen saturation, etc.) or detecting contact with the user (e.g., on-wrist/off-wrist detection).
The one or more sensors may include motion tracking and/or orientation tracking sensors, such as accelerometers, gyroscopes, inertial Measurement Units (IMUs), and the like. For example, the wearable device 100 may include an accelerometer 106, which may be a multi-channel accelerometer (e.g., a 3-axis accelerometer). As described in more detail herein, motion tracking and/or orientation tracking sensors may be used to extract motion and respiratory features for estimating sleep states. The measurement of accelerometer 106 may be converted to a digital signal for processing via ADC 105 a.
The wearable device 100 may also optionally include other sensors including, but not limited to, photo-thermal sensors, magnetometers, barometers, compasses, proximity sensors, cameras, ambient light sensors, thermometers, global positioning system sensors, and various system sensors that may sense remaining battery life, power consumption, processor speed, CPU load, etc. Although various sensors are described, it should be understood that fewer, more, or different sensors may be used.
Data (e.g., motion data, optical data, etc.) acquired from one or more sensors may be stored in a memory of the wearable device 100. For example, the wearable device 100 may include a data buffer (or other volatile or non-volatile memory or storage) to temporarily (or permanently) store data from the sensor for processing by the processing circuitry. In some examples, volatile or non-volatile memory or storage may be used to store partially processed data (e.g., filtered data, downsampled data, extracted features, etc.) for subsequent processing, or to store fully processed data for storing sleep tracking results and/or to display or report sleep tracking results to a user.
The wearable device 100 may also include processing circuitry. The processing circuitry may include one or more processors 108. One or more of the processors may include a Digital Signal Processor (DSP) 109, a microprocessor, a Central Processing Unit (CPU), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), or the like. In some examples, wearable device 100 may include a host processor and a low power processor. The low power processor may be a continuously powered processor and the host processor may be powered up or powered down depending on the mode of operation. For example, the low power processor may sample the accelerometer 106 while the user is asleep (e.g., when the host processor may be powered down), while the host processor may perform some or all of the sleep/wake classification or sleep state classification at the end of the sleep tracking session (e.g., when the host processor may be powered up). The various processes and classifiers described in more detail herein may be implemented entirely in the low power processor, entirely in the host processor, or partially in both the low power processor and the host processor.
In some examples, some of the sensing and/or some of the processing may be performed by a peripheral device 118 in communication with the wearable device. The peripheral device 118 may be a smart phone, a media player, a tablet computer, a desktop computer, a laptop computer, a data server, a cloud storage service, or any other portable or non-portable electronic computing device (including a second wearable device). The peripheral may include one or more sensors (e.g., motion sensors, etc.) for providing input to one of the classifiers described herein and processing circuitry for performing some of the processing functions described herein. The wearable device 100 may also include a communication circuit 110 to communicatively couple to the peripheral device 118 via a wired or wireless communication link 124. For example, the communication circuitry 110 may include circuitry for one or more wireless communication protocols including cellular, bluetooth, wi-Fi, and the like.
In some examples, wearable device 100 may include a touch screen 128 for displaying sleep tracking results (e.g., displaying sleep intervals and/or total sleep time of a sleep tracking session, optionally with details of sleep times of different sleep state intervals) and/or for receiving input from a user. In some examples, the touch screen 128 may be replaced by a non-touch sensitive display, or the touch and/or display functions may be implemented in another device. In some examples, wearable device 100 may include microphone/speaker 122 for audio input/output functions, haptic circuitry to provide haptic feedback to the user, and/or other sensors and input/output devices. The wearable device 100 may also include an energy storage device (e.g., a battery) to provide power to the components of the wearable device 100.
The one or more processors 108 (also referred to herein as processing circuitry) may be connected to the program storage 111 and may be configured (programmed) to execute instructions stored in the program storage 111 (e.g., a non-transitory computer readable storage medium). For example, the processing circuitry may provide control and data signals to generate a display image, such as a display image of a User Interface (UI), on the touch screen 128, optionally including results for a sleep tracking session. The processing circuitry may also receive touch input from the touch screen 128. Touch input may be used by a computer program stored in program storage 111 to perform actions, which may include, but are not limited to: moving an object such as a cursor or pointer, scrolling or panning, adjusting control settings, opening a file or document, viewing a menu, making a selection, executing instructions, operating a peripheral device connected to the host device, answering a telephone call, placing a telephone call, terminating a telephone call, changing volume or audio settings, storing information related to telephone communications (such as addresses, frequently dialed numbers, missed calls), logging onto a computer or computer network, allowing authorized individuals to access restricted areas of a computer or computer network, loading a user profile associated with a user's preferred arrangement of the computer desktop, allowing access to web page content, launching a particular program, encrypting or decrypting messages, and the like. The processing circuitry may also perform additional functions that may not be relevant to touch processing and display. In some examples, the processing circuitry may perform some of the signal processing functions described herein (e.g., classification).
It is noted that one or more of the functions described herein, including sleep tracking (e.g., sleep/wake classification, sleep state classification), may be performed by firmware stored in memory or instructions stored in program storage 111 and executed by processing circuitry. The firmware may also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "non-transitory computer readable storage medium" may be any medium (excluding signals) that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, a portable computer diskette (magnetic), a Random Access Memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM) (magnetic), or a flash memory such as a compact flash card, a secure digital card, a Universal Serial Bus (USB) memory device, a memory stick, etc.
The firmware may also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "transmission medium" may be any medium that can communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Transmission media can include, but are not limited to, electronic, magnetic, optical, electromagnetic, or infrared wired or wireless propagation media.
It should be apparent that the architecture shown in fig. 1B is only one exemplary architecture, and that the wearable device may have more or fewer components than shown, or a different configuration of components. The various components shown in fig. 1B may be implemented in hardware, software, firmware, or any combination thereof (including one or more signal processing and/or application specific integrated circuits). In addition, the components shown in FIG. 1B may be included within a single device or may be distributed among multiple devices.
Fig. 2A-2D illustrate example block diagrams and corresponding timing diagrams for sleep tracking according to examples of the present disclosure. Fig. 2A-2B illustrate example block diagrams and corresponding timing diagrams for sleep tracking (e.g., sleep/wake classification) according to examples of this disclosure. Fig. 2A shows an exemplary block diagram 200 of processing circuitry for sleep tracking according to examples of the present disclosure. The processing circuitry may include a digital signal processor (e.g., corresponding to DSP 109 in fig. 1B) and/or one or more additional processors (e.g., corresponding to processor 108). In some examples, the processing circuitry may include a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), or other logic device. The processing circuitry may include a rest/activity classifier 205, a sleep/awake classifier 210, a quality check classifier 215, and a smoothing/filtering post-processor 220. Classification and/or filtering/smoothing may be implemented in hardware, software, firmware, or any combination thereof.
The rest/activity classifier 205 may optionally be included as part of sleep tracking to bound the data to be stored and/or processed for sleep/wake classification (potentially reducing storage and/or processing requirements and power consumption for a sleep tracking system). In particular, the rest/activity classifier 205 may be used to define a start time of a sleep tracking session (e.g., corresponding to an estimate/classification that the user is resting) and/or an end time of a sleep tracking session (e.g., corresponding to an estimate that the user is active and not resting or sleeping). The boundaries of the sleep tracking session reflect the assumption that the user is unlikely to sleep while active/not resting. In some examples, the rest/activity classifier 205 may be implemented as one or more classifiers (e.g., a separate rest classifier and a separate activity classifier). In some examples, the same classifier may be used, but with a different threshold for resting classification before the sleep tracking session begins than the threshold used for activity classification during the sleep tracking session.
A quality check classifier 215 may optionally be included for sleep tracking to estimate/classify the quality of sensor data (e.g., using one or more features extracted during sleep/wake classification). The quality of the sensor data may indicate that the wearable device is on the wrist during the sleep tracking session and may establish a confidence in the sleep/wake classification. A smoothing and filtering post-processor 220 may optionally be included to smooth/filter the sleep/wake classification.
Fig. 2B illustrates an example timing diagram 230 showing features and operation of processing circuitry for sleep tracking according to examples of this disclosure. At time T0, a rest classifier (e.g., a rest/activity classifier using a "rest" threshold parameter) may be triggered and may begin processing the input data according to process 300 to detect whether the user is at rest (e.g., in a rest state or an activity state). In some examples, the resting classification may begin in response to satisfaction of one or more first trigger criteria. The one or more first trigger criteria may include a first trigger criterion that is met at a predefined time or in response to user input. For example, the rest classifier may be triggered at a user-specified "bedtime" (or at a default bedtime, if sleep tracking features are enabled for the system without the user specifying a bedtime) or a predefined time (e.g., 120 minutes, 90 minutes, 60 minutes, 45 minutes, 30 minutes, etc.) prior to the user-specified bedtime (or the default bedtime). In some examples, the rest classifier may be triggered by a user request to perform a sleep tracking session (or an indication that the user is currently in bed or is planning to go to bed soon). In some examples, in addition to the first trigger criterion, the resting classifier may process the input only after an indication that the wearable device is worn by the user (or in the absence of an indication that the wearable device is off the wrist). For example, the one or more first trigger criteria may also include a second criterion that is met when the wearable device is detected (e.g., using an optical sensor or other sensor) on the wrist. The one or more first trigger criteria may also include a third criterion that is met when it is detected (e.g., via an inductive charger) that the wearable device is not being charged. Although three exemplary criteria are described, it should be understood that fewer, more, or different criteria may be used in some examples. In some examples, the resting classifier may process the data until the resting classifier indicates that the user is in a resting state (at T1). T1 may define the start of a session. In some examples, the rest classifier may process the data until a timeout occurs, at which point sleep tracking may be terminated.
In some examples, at time T2, an activity classifier (e.g., a rest/activity classifier using an "activity" threshold parameter) may be triggered and processing of the input data according to process 300 may begin to detect whether the user is active (e.g., in an active state or a rest state). In some examples, the activity classifier may begin in response to satisfaction of one or more second trigger criteria. The one or more second trigger criteria may include a first trigger criterion that is met at a predefined time or in response to user input. For example, the activity classifier may be triggered at a user-specified "wake time" (or default wake time) or a predefined time (e.g., 120 minutes, 90 minutes, 60 minutes, 45 minutes, 30 minutes, etc.) before the user-specified "wake time" (or default wake time). In some examples, the activity classifier may process the data until the activity classifier indicates that the user is in an active state. In some examples, after the activity state is indicated by the activity classifier, a notification may be presented to the user, and responsive user input (e.g., tapping a button on a touch screen of the wearable device) may confirm the activity state. In some examples, the active state (and its confirmation via user input, if implemented) may define the end of the session. As shown in fig. 2B, T3 may define the end of the session.
In some examples, the session may be terminated in other ways. For example, the session may terminate upon detecting (e.g., using an optical sensor or other sensor) that the wearable device is off the wrist, detecting that the wearable device is charging, a session timeout (e.g., after a threshold time after T1 or after a user-specified wake time), user input ending the session, detection of an activity state classification by the activity classifier after a user-specified wake time, or the like.
As shown in fig. 2B, a session may be defined by a start time T1 and an end time T3. The data collected in the period between T1 and T3 may be included in the sleep/awake classification window 235. Although fig. 2B defines a sleep/awake classification window 235 between T1 and T3, in some examples, the sleep/awake classification window 235 may begin earlier. In some examples, the sleep/awake classification window may begin at T0. In some examples, the sleep/awake classification window may begin some threshold period of time before T1. For example, the threshold time period may be the same as the first time period described below for thresholding at 335 in process 300.
The data in the sleep/awake classification window 235 may be processed by the sleep/awake classifier 210, as described in more detail with respect to process 400 and block diagram 500. In some examples, the sleep/awake classification by the sleep/awake classifier 210 may begin in response to the end of a session (or a threshold period of time after the session, or in response to a user request). In some examples, sleep/awake classification by the sleep/awake classifier 210 may only begin after confidence in the session is met, as determined by the quality check classifier 215. In some examples, sleep/awake classification by the sleep/awake classifier 210 may begin (e.g., at the end of a session) but may be stopped, if ongoing, when confidence in the session is not met as determined by the quality check classifier. In some examples, sleep/wake classifications that estimate a user's sleep state may be stored in memory and/or displayed to the user. For example, sleep/awake classifications that estimate the sleep state of a user may be displayed or stored as a series of sleep intervals (e.g., consecutive time periods classified as sleep states) represented by blocks 240A-240C as shown on the timeline in fig. 2B.
Although the rest classifier operates for a period of time (e.g., from T0 to T1) and the activity classifier operates for a period of time (e.g., starting at T2 until T3) as described above, in some examples the rest/activity classifier may operate for a longer duration. For example, the rest/activity classifier may run continuously (e.g., 24 hours a day, optionally only while the wearable device is on the wrist and/or not charging), or the rest/activity classifier may run continuously between the user-defined bedtime and wake time (or a threshold time before and/or after the user-defined bedtime/wake time), and multiple sleep/rest classification windows may be identified (instead of one window as shown in fig. 2B). Samples from each identified sleep/rest classification window may be processed in an attempt to identify sleep intervals, as described herein. In some examples, the operation of the rest/activity classifier may be periodic, intermittent, or responsive to one or more triggers rather than continuous.
In some examples, sleep/awake classifications that estimate a user's sleep state may be displayed and/or stored only when confidence in a session is met as indicated by the quality check classifier 215. The quality check by the quality check classifier 215 may begin in response to the end of the session. In some examples, the quality check classifier may estimate whether the motion data collected by the wearable device corresponds to the wearable device remaining on the wrist during the session (e.g., between on-wrist indications by the optical sensor). Using motion data may save power and reduce light emission while the user is sleeping, as compared to on-wrist detection using optical sensors during a sleep tracking session.
In some examples, the sleep/wake classification that estimates the user's sleep state may be smoothed or filtered by smoothing/filtering the post-processor 220 to remove indications of very short sleep durations that may be incorrect due to the presence of a quiet wake (e.g., wake periods with breathing and movement characteristics that indicate sleep, but before sleep begins). Smoothing and filtering by the smoothing/filtering post-processor 220 is described in more detail with respect to fig. 7A-7B. In some examples, smoothing/filtering may be performed on the output of the sleep/awake classifier 210 only after quality checks are met (e.g., to avoid filtering/smoothing when sleep/awake classification is not to be displayed and/or stored).
Fig. 3 illustrates an exemplary process for a rest/activity classifier according to an example of the present disclosure. The process 300 may be performed by processing circuitry comprising the processor 108 and/or the DSP 109. Once the rest/activity classification is triggered (e.g., according to meeting one or more first/second trigger criteria), process 300 may be performed in real-time (e.g., when enough data for processing is received). At 305, the rest/activity classifier may optionally filter the data input into the classifier. The data may include motion data from a tri-axial accelerometer (or other suitable motion and/or orientation sensor). In some examples, the filtering may be a low pass filter that filters out high frequency noise (e.g., noise outside of the frequency of expected user motion). In some examples, the motion data may also be downsampled at 310. For example, the accelerometer may capture motion data at a first sampling rate (e.g., 60Hz, 100Hz, 125Hz, 250Hz, etc.), and the motion data may be downsampled (e.g., multi-stage polyphase filter) to a lower rate (e.g., 4Hz, 8Hz, 10Hz, 30Hz, 50Hz, etc.). Downsampling the motion data may reduce the number of samples and thus reduce processing complexity. In some examples, the motion data may be processed without downsampling and/or without filtering.
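As an illustrative sketch of the optional filtering (305) and downsampling (310) described above, the following snippet low-pass filters a tri-axial accelerometer stream and resamples it with a polyphase filter. The 100 Hz input rate, 10 Hz output rate, and filter order are assumed example values from the ranges given, not parameters specified by the disclosure.

```python
import numpy as np
from math import gcd
from scipy import signal

def preprocess_motion(accel_xyz: np.ndarray, fs_in: int = 100, fs_out: int = 10) -> np.ndarray:
    """Low-pass filter and downsample raw (n_samples, 3) accelerometer data."""
    # Zero-phase low-pass filter to suppress noise above the expected motion band.
    sos = signal.butter(4, 0.8 * (fs_out / 2.0), btype="low", fs=fs_in, output="sos")
    filtered = signal.sosfiltfilt(sos, accel_xyz, axis=0)
    # Polyphase resampling to the lower rate (includes its own anti-alias filter).
    g = gcd(fs_in, fs_out)
    return signal.resample_poly(filtered, up=fs_out // g, down=fs_in // g, axis=0)
```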
At 315, the rest/activity classifier may extract one or more features from the motion data. In some examples, one or more features may be extracted for the samples in each "rest/activity classifier window" or, in the context of rest/activity classification, simply "window" (a period different from the longer-duration windows that may be used for sleep/awake classification or sleep state classification). For example, the motion data may be divided into N non-overlapping windows that include M samples of acceleration in each dimension (X, Y, Z) of the tri-axial accelerometer. In some examples, the duration of the window may be between 1-30 seconds. In some examples, the duration of the window may be between 1-10 seconds. In some examples, the window may be between 2-5 seconds.
In some examples, the one or more features may include a magnitude feature for each sample in the window and a variance feature for the samples in the window (320). The magnitude of each of the M samples in the window may be calculated using equation (1):

$$\mathrm{Mag} = \sqrt{X^2 + Y^2 + Z^2} \qquad (1)$$

where X, Y, and Z represent the x-axis, y-axis, and z-axis accelerometer measurements of the sample, respectively. The variance of the M magnitude values of the window may be calculated using equation (2):

$$\sigma^2 = \frac{1}{M}\sum_{i=1}^{M}\left(\mathrm{Mag}_i - \overline{\mathrm{Mag}}\right)^2 \qquad (2)$$

where $\sigma^2$ represents the variance of the window, $M$ represents the number of samples in the window, $\mathrm{Mag}_i$ represents the magnitude of the $i$-th sample, and $\overline{\mathrm{Mag}}$ represents the average magnitude of the window.
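A minimal sketch of equations (1) and (2) in code, assuming a window is an (M, 3) array of x/y/z samples (function and variable names are illustrative):

```python
import numpy as np

def magnitude_and_variance(window: np.ndarray) -> tuple[np.ndarray, float]:
    """window: (M, 3) array of x/y/z accelerometer samples for one window."""
    mag = np.sqrt((window ** 2).sum(axis=1))            # equation (1): per-sample magnitude
    variance = float(((mag - mag.mean()) ** 2).mean())  # equation (2): window variance
    return mag, variance
```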
At 325, an input to the classifier may be assembled. The rest/activity classifier input may be assembled from the features for N windows, and thus the input may correspond to a duration longer than that of the individual windows used to extract the magnitude and variance features described above (e.g., corresponding to a period of 30 seconds, 60 seconds, 90 seconds, 120 seconds, etc.). In some examples, the input may include $N \times (M+1)$ features. In some examples, the input may be compressed to reduce the number of features. For example, the features from multiple windows may be reduced by pooling (e.g., summing) the features of k consecutive windows, reducing the input to $\frac{N}{k} \times (M+1)$ features. In some examples, k may be between 2-10. In some examples, k may be between 3-8. A buffer may be used to store data corresponding to the longer duration (raw acceleration data and/or extracted magnitude and variance features) so that enough data is available as input to the rest/activity classifier.
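One plausible reading of the input assembly and pooling at 325 is sketched below: per-window feature vectors (M magnitudes plus the variance) are grouped into k consecutive windows and summed. The sum-pooling and the shapes are assumptions consistent with the description, not a confirmed implementation.

```python
import numpy as np

def assemble_classifier_input(features: np.ndarray, k: int = 4) -> np.ndarray:
    """features: (N, M + 1) array of per-window features; returns a flat input vector."""
    n, d = features.shape
    n_groups = n // k
    # Pool the features of k consecutive windows by summing within each group.
    pooled = features[: n_groups * k].reshape(n_groups, k, d).sum(axis=1)
    return pooled.ravel()  # (N / k) * (M + 1) features
```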
At 330, the classifier input may be processed with a Machine Learning (ML) model, such as logistic regression. It should be appreciated that logistic regression is only one example of an ML model, and that other models may be used, such as gradient boosted trees, random forests, neural networks, support vector machines, and the like. The output of the ML model may be a confidence value representing the probability that the user is at rest (between 0 and 1). In some examples, the ML model may output a confidence value for each time period corresponding to the duration of the window (e.g., using a sliding window over a data buffer). For example, a first input of N windows (e.g., windows 1-100) may be used to calculate a first confidence value, a second input of N windows (e.g., windows 2-101) may be used to calculate a second confidence value, and so on. Thus, the output of the ML model may be represented as an array of confidence values (one per window).
At 335, a threshold may be applied to the output of the ML model to detect a resting or active state, where the parameters for resting classification differ from the parameters for activity classification. For example, a rest state (e.g., for the rest classification beginning at T0 in fig. 2B) may be detected when the rest confidence value is greater than a first threshold confidence value for a first threshold number of windows in a given first period. For example, the rest state may be detected when the rest state confidence is greater than a first threshold confidence value (e.g., 85%, 90%, 95%, etc.) for most or all (e.g., 95%, 100%, etc.) of the first period (e.g., a duration of 3 minutes, 5 minutes, 10 minutes, etc.). An activity state (e.g., for the activity classification beginning at T2 in fig. 2B) may be detected when the rest confidence value is less than a second threshold confidence value for a second threshold number of windows in a given second period. For example, an active state may be detected when the rest state confidence is less than a second threshold confidence value (e.g., 70%, 75%, 80%, etc.) for a fraction (e.g., 10%, 15%, etc.) of the second period (e.g., a duration of 15 minutes, 20 minutes, 30 minutes, etc.). In some examples, the first threshold confidence value and the second threshold confidence value may be the same. In some examples, the first threshold confidence value and the second threshold confidence value may be different, such that a relatively high rest confidence may be required to enter the rest state (from the non-rest/active state) and a relatively low rest confidence may be required to enter the active state (from the non-active/rest state). In some examples, detecting the rest state may require the first threshold number of windows in the first period to be consecutive (e.g., a threshold number of consecutive minutes with a rest state confidence above the threshold), while detecting the activity state may not require the second threshold number of windows in the second period to be consecutive (e.g., a threshold number of consecutive or non-consecutive active minutes over a longer period of time).
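A sketch of the asymmetric thresholding at 335, assuming the ML model has produced a per-window array of rest confidence values; the thresholds and fractions are illustrative values from the ranges above.

```python
import numpy as np

def rest_detected(rest_conf: np.ndarray, thresh: float = 0.90) -> bool:
    # Rest state: confidence above a (relatively high) threshold for all
    # windows of the first period, which must be consecutive.
    return bool(np.all(rest_conf > thresh))

def activity_detected(rest_conf: np.ndarray, thresh: float = 0.75, frac: float = 0.10) -> bool:
    # Active state: confidence below a (relatively low) threshold for at least
    # `frac` of the windows in the second period; need not be consecutive.
    return bool(np.mean(rest_conf < thresh) >= frac)
```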
Fig. 4 illustrates an exemplary process for a sleep/awake classifier according to an example of the present disclosure. The process 400 may be performed by processing circuitry comprising the processor 108 and/or the DSP 109. In some examples, process 400 may be performed in part in real-time (e.g., as enough data for processing is received), in part in stages during a session, and/or in part at the end of a session. In some examples, process 400 may be performed entirely at the end of the session.
At 405, the sleep/awake classifier may optionally filter the data input into the classifier. The data may include motion data from a tri-axial accelerometer (or other suitable motion and/or orientation sensor). In some examples, the filtering may filter out high frequency noise (e.g., noise outside of the frequency of expected user motion/respiration). In some examples, the motion data may also be downsampled at 410. For example, the accelerometer may capture motion data at a first sampling rate (e.g., 60Hz, 100Hz, 125Hz, 250Hz, etc.), and the motion data may be downsampled (e.g., using a multi-stage polyphase filter) to a lower rate (e.g., 4Hz, 8Hz, 10Hz, 30Hz, 50Hz, etc.). In some examples, downsampling and low-pass filtering may be performed in real-time or in stages during a session to reduce the amount of data to be processed and/or stored. In some examples, the motion data may be processed without downsampling and/or without low-pass filtering.
At 415, the sleep/wake classifier may extract a plurality of features from the motion data. In some examples, the features may include one or more motion features (420), one or more time-domain respiration features (425), and one or more frequency-domain respiration features (430). The features may be calculated for each epoch of motion data. An epoch may represent a window of motion data samples for sleep/awake classification (e.g., a sleep/awake classifier window) having a duration greater than the duration of a window for rest/activity classification (e.g., a rest/activity classifier window). In some examples, an epoch may represent a window having the same duration as the window for rest/activity classification. In some examples, the duration of an epoch may be between 10-120 seconds. In some examples, the duration of an epoch may be between 30-90 seconds. In some examples, the epoch may be between 45-60 seconds. In some examples, feature extraction may be performed over overlapping epochs. For example, adjacent epochs may overlap by 5-60 seconds. In some examples, the overlap may be between 20-30 seconds. Feature extraction is described in more detail below with respect to fig. 5.
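A sketch of epoch segmentation under example assumptions (60 s epochs with 30 s overlap at a 10 Hz downsampled rate); the parameter values are illustrative choices from the ranges above.

```python
import numpy as np

def segment_epochs(x: np.ndarray, fs: int = 10, epoch_s: int = 60, overlap_s: int = 30) -> np.ndarray:
    """x: (n_samples, 3) motion data; returns (n_epochs, epoch_len, 3)."""
    epoch_len = epoch_s * fs
    hop = (epoch_s - overlap_s) * fs  # adjacent epochs overlap by overlap_s seconds
    starts = range(0, len(x) - epoch_len + 1, hop)
    return np.stack([x[s : s + epoch_len] for s in starts])
```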
At 435, inputs to the sleep/awake classifier may be assembled. The sleep/awake classifier inputs may be assembled from the features for N epochs and may correspond to a longer duration period (e.g., corresponding to 5 minutes, 10 minutes, etc.). In some examples, the input may include $N \times M$ features, where M features are extracted for each of the N epochs. In some examples, the N epochs include an epoch of interest (e.g., the epoch to which the output classification applies) and N-1 epochs before and/or after the epoch of interest. In some examples, (N-1)/2 epochs before the epoch of interest and (N-1)/2 epochs after the epoch of interest are used. In some examples, the N-1 epochs may be unevenly distributed on both sides of the epoch of interest (e.g., 75% before and 25% after the epoch of interest). In some examples, N-1 epochs before the epoch of interest are used. In some examples, the input may be compressed to reduce the number of features. For example, the features from multiple epochs may be reduced by pooling the features of k consecutive epochs, reducing the input to $\frac{N}{k} \times M$ features. A buffer may be used to store data corresponding to the longer duration period (raw and/or filtered/downsampled acceleration data and/or extracted features) such that sufficient data is available as input to the sleep/awake classifier.
At 440, the classifier input may be processed with an ML model (such as logistic regression). It should be appreciated that logistic regression is only one example of an ML model, and that other models may be used, such as gradient enhancement trees, random forests, neural networks, support vector machines, and the like. The output of the ML model may be a confidence value that represents the probability that the user is in a sleep state (between 0 and 1). In some examples, the ML model may output a confidence value for each time period corresponding to the duration of the epoch (e.g., using a sliding window on the data buffer). For example, a first input for N epochs (e.g., epochs 1-20) can be used to calculate a first confidence value, a second input for N epochs (e.g., epochs 2-21) can be used to calculate a second confidence value, and so on. Thus, the output of the ML model can be represented as an array of confidence values (per epoch).
At 445, a threshold may be applied to the output of the ML model to detect sleep or awake states. For example, a sleep state may be detected when the sleep confidence value is greater than a threshold confidence value, and an awake state may be detected when the sleep confidence value is less than the threshold confidence value. In some examples, the threshold may be set based on the machine learning model and training data to maximize Cohen's kappa. The output of the thresholding may be an array of sleep/awake state classifications (one per epoch). The array of sleep/awake state classifications may be displayed (optionally with some post-processing and subject to the quality check) as sleep intervals (e.g., a series of sleep and awake periods) as described herein.
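The sketch below illustrates 440 and 445 with scikit-learn: a logistic regression outputs per-epoch sleep confidence values, and a decision threshold is chosen on held-out data to maximize Cohen's kappa. The synthetic arrays stand in for real epoch features and labels; nothing here reproduces the disclosure's trained model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(600, 40)), rng.integers(0, 2, 600)  # stand-in features/labels
X_val, y_val = rng.normal(size=(200, 40)), rng.integers(0, 2, 200)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
conf = model.predict_proba(X_val)[:, 1]  # per-epoch P(sleep), between 0 and 1

# Choose the threshold that maximizes Cohen's kappa on validation data.
candidates = np.linspace(0.05, 0.95, 19)
threshold = candidates[np.argmax([cohen_kappa_score(y_val, conf > t) for t in candidates])]
sleep_states = conf > threshold  # per-epoch sleep (True) / awake (False) array
```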
Fig. 5 illustrates an exemplary block diagram of feature extraction for sleep/wake classification (or sleep state classification) according to an example of the present disclosure. Block diagram 500 illustrates input motion data 502 from a tri-axial accelerometer (e.g., a 3-axis motion sensor), which may be taken from the raw data buffer and/or from the output of ADC 105a. The input motion data may be downsampled and/or low-pass filtered in a downsampling and/or filtering block 504 (e.g., implemented in hardware or software). Features may be extracted from different streams of the motion data. One or more motion features may be extracted by motion feature extraction block 514 from the 3-axis stream of motion data further filtered by high pass filter 506 and/or from the 3-axis stream of motion data without further high pass filtering. One or more temporal respiratory features may be extracted by temporal respiratory feature extraction block 522 from a selected axis of the 3-axis stream of motion data further filtered using band pass filter 508. One or more frequency-domain respiratory features may be extracted by frequency-domain respiratory feature extraction block 524 from a selected axis of the 3-axis stream of motion data without further high pass filtering. The selection of one axis of the 3-axis stream may be performed using the 3-axis stream of motion data without further high pass filtering. In some examples, the high pass filter 506 may filter out some or all of the respiratory band (e.g., filtering out data below a threshold frequency, such as 0.5 Hz), and the band pass filter 508 may filter out some or all of the data outside of the respiratory band (e.g., passing data within a frequency range, such as between 0.1 Hz and 0.6 Hz).
The motion data may be divided into epochs for feature extraction by epoch segmentation block 510 (e.g., implemented in hardware or software). In some examples, the epoch splitting may be implemented using a sliding window of time duration for the epoch (e.g., accessing motion data from a data buffer corresponding to the time duration). Epoch partitioning may be performed on multiple streams of accelerometer data including a 3-axis high-pass filtered accelerometer stream (output by high-pass filter 506), a 3-axis band-pass filtered accelerometer stream (output by band-pass filter 508), and a 3-axis accelerometer stream without high-pass or band-pass filtering (output by downsampling and/or filtering block 504).
The one or more motion features extracted by motion feature extraction block 514 may include a "maximum variance" motion feature. The maximum variance may be calculated from the epoch-divided 3-axis accelerometer stream 511 (without high pass filtering or band pass filtering). For each channel of the epoch-divided 3-axis accelerometer stream 511, the variance of the single-axis magnitudes of the samples in the epoch may be calculated in a manner similar to that described above in equation (2). The largest of the three variance values for the 3 channels of the epoch-divided 3-axis accelerometer stream 511 (e.g., a first variance value for a first channel, a second variance value for a second channel, and a third variance value for a third channel) may represent the maximum variance feature. Additionally or alternatively, in some examples, the natural logarithm of the maximum variance feature may be used as the motion feature.
The one or more motion features extracted by motion feature extraction block 514 may include a "mean variance" motion feature. The motion magnitude (2-norm) of each sample in the epoch-divided, high-pass filtered 3-axis accelerometer stream 507 may be calculated in 2-norm magnitude block 512 (e.g., in a manner similar to that described in equation (1), applied to the epoch-divided, high-pass filtered 3-axis accelerometer stream 507). In some examples, the magnitude may be calculated for the high-pass filtered 3-axis accelerometer stream before epoch division (e.g., on a sample-by-sample basis). A variance may be calculated for the magnitudes of the samples in the epoch. The mean variance feature may be calculated as the mean of the calculated variances across all samples in the epoch. The mean variance feature may be associated with an awake state. Although described as a mean variance motion feature, additionally or alternatively, the one or more features extracted by motion feature extraction block 514 may include a median variance or a mode variance (e.g., taking the median or mode of the variances across all samples in the epoch).
The one or more motion features extracted by motion feature extraction block 514 may include a "motion count" motion feature. The motion count feature may be a determination of the number of motion samples in the epoch that have a motion magnitude above a threshold. The motion magnitude (2-norm) of each sample in the epoch-divided, high-pass filtered 3-axis accelerometer stream 507 may be calculated in 2-norm magnitude block 512 (e.g., in a manner similar to that described in equation (1), applied to the epoch-divided, high-pass filtered 3-axis accelerometer stream 507). In some examples, the magnitude may be calculated for the high-pass filtered 3-axis accelerometer stream before epoch division (e.g., on a sample-by-sample basis). The motion count feature may be determined by counting the number of samples, or the fraction/percentage of samples, in the epoch whose 2-norm motion value is above the threshold. The motion count feature may indicate the amount of motion above a certain noise threshold for the epoch.
The one or more motion features extracted by motion feature extraction block 514 may include a "motion integration" motion feature. The motion integration feature may sum the magnitudes of the samples in the epoch by integrating the magnitude scaled by a dx term (e.g., $\int \mathrm{Mag}\,dx$), where dx may be the sampling period (the inverse of the sampling rate after downsampling). The motion magnitude (2-norm) of each sample in the epoch-divided, high-pass filtered 3-axis accelerometer stream 507 may be calculated in 2-norm magnitude block 512 as described above. The motion integration feature may indicate an overall magnitude of motion for the epoch. The motion integration feature may be used to identify slower, sustained movements during an epoch, while the motion count feature may be used to identify faster movements (e.g., higher frequency movements/transients).
The one or more motion features extracted by motion feature extraction block 514 may include a "motion integration mean" motion feature. The motion integration mean feature may be the mean of the "motion integration" feature described above. The motion integration mean feature may indicate the average overall variability in the magnitude of motion for an epoch. The motion integration mean feature may be used to potentially identify short, high-motion segments, which may correspond to short awake bouts. Although described as a motion integration mean feature, additionally or alternatively, the one or more features extracted by motion feature extraction block 514 may include a motion integration median or a motion integration maximum.
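Hedged sketches of several of the motion features above, computed per epoch: `epoch_raw` stands for the unfiltered stream 511, `epoch_hp` for the high-pass filtered stream 507, and the noise threshold is an assumed placeholder.

```python
import numpy as np

def motion_features(epoch_raw: np.ndarray, epoch_hp: np.ndarray,
                    fs: float = 10.0, noise_thresh: float = 0.01) -> dict:
    """epoch_raw, epoch_hp: (L, 3) arrays for one epoch."""
    # "Maximum variance": largest per-channel variance of single-axis magnitudes.
    max_var = float(np.max(np.var(np.abs(epoch_raw), axis=0)))
    # 2-norm motion magnitude per sample of the high-pass filtered stream.
    mag = np.linalg.norm(epoch_hp, axis=1)
    return {
        "max_variance": max_var,
        "log_max_variance": float(np.log(max_var + 1e-12)),
        "motion_count": int(np.sum(mag > noise_thresh)),  # fast/transient movement
        "motion_integral": float(np.sum(mag) / fs),       # ∫ Mag · dx, slow sustained movement
    }
```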
The above-described motion features are examples of one or more motion features that may be extracted by motion feature extraction block 514. It should be appreciated that additional, fewer, and/or different motion features may be extracted for sleep/wake classification. In some examples, sleep/wake classification may use a "maximum variance" feature, a "motion count" feature, and a "motion integration" feature. In some examples, the sleep state classification described with reference to process 800 may further use a "mean variance" feature and a "motion integration mean" feature.
The one or more frequency-domain respiration features extracted by frequency-domain respiration feature extraction block 524 may include one or more measures of variability in the respiration signal derived from the motion sensor. In some examples, the one or more features may be calculated from one axis of the epoch-divided 3-axis accelerometer stream 511 (without high-pass filtering or band-pass filtering). One axis of the epoch-divided 3-axis accelerometer stream 511 may be selected by best axis estimation block 518 as the axis with the best respiratory signal for each epoch (e.g., based on signal-to-noise ratio (SNR)). To determine the best respiratory signal, a frequency domain representation may be calculated for each axis of the epoch-divided 3-axis accelerometer stream 511. For example, a Fourier transform may be calculated for each axis (e.g., using a Fast Fourier Transform (FFT) block 516) and/or a Power Spectral Density (PSD) may be calculated for each axis. In some examples, the mean may optionally be subtracted from the epoch-divided 3-axis accelerometer stream 511 before computing the frequency domain representation (e.g., detrending). The SNR for each axis of the 3-axis accelerometer stream 511 may be calculated based on the frequency domain representation. The "signal" of the SNR may be estimated by identifying the maximum peak in the frequency domain representation and calculating the spectral power (square of the absolute value of the FFT) within a frequency domain window around the maximum peak (e.g., within the range of the fundamental frequency). In some examples, a folded spectrum may be calculated by summing the power over one or more harmonics of the frequency domain window (e.g., optionally including some of the side lobe bands around the fundamental frequency), and the spectral power may be calculated based on the maximum peak in the folded spectrum (e.g., the dominant frequency across multiple harmonics), summing the power over multiple harmonics and the side lobe bands of the dominant frequency. In some examples, the "noise" of the SNR may be estimated by calculating the spectral power outside the frequency domain window around the maximum peak. The SNR may be calculated from the ratio of the signal to the noise defined above. The axis with the best respiratory signal for an epoch may be selected as the axis with the greatest SNR of the three axes for that epoch.
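A simplified sketch of the best-axis estimation: for each axis, the epoch is detrended, a spectrum is computed with an FFT, and the SNR is taken as the power near the largest peak versus the power elsewhere. The window half-width is an assumption, and the folded-spectrum/harmonic refinement described above is omitted for brevity.

```python
import numpy as np

def best_respiration_axis(epoch: np.ndarray, fs: float = 10.0, half_width_hz: float = 0.05) -> int:
    """epoch: (L, 3) unfiltered accelerometer data; returns the axis index with max SNR."""
    snrs = []
    freqs = np.fft.rfftfreq(epoch.shape[0], d=1.0 / fs)
    for ax in range(3):
        x = epoch[:, ax] - epoch[:, ax].mean()       # detrend by mean subtraction
        power = np.abs(np.fft.rfft(x)) ** 2          # spectral power (|FFT|^2)
        peak_freq = freqs[1 + np.argmax(power[1:])]  # largest peak, excluding the DC bin
        in_band = np.abs(freqs - peak_freq) <= half_width_hz
        signal_p, noise_p = power[in_band].sum(), power[~in_band].sum()
        snrs.append(signal_p / max(noise_p, 1e-12))
    return int(np.argmax(snrs))
```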
It should be appreciated that the above description of determining SNR is an example, and that SNR may be calculated in other ways and/or the axis with the best respiratory signal may be determined in other ways. For example, in some examples, the SNR may be calculated as the logarithm of the ratio of the "signal" to the total power of the spectrum (without calculating the noise). In some examples, rather than calculating the optimal axis, a Singular Spectrum Analysis (SSA), principal Component Analysis (PCA), or Rotation Angle (RA) may be used to extract the respiratory signal. However, the SNR approach described above may reduce processing complexity relative to SSA, PCA, and RA while providing desirable performance for sleep/awake classification.
In some examples, the frequency domain respiration features may include one or more "spectral power" respiration features for the selected best axis over one or more frequency ranges. Optionally after detrending, a Power Spectral Density (PSD) may be calculated from the epoch-divided 3-axis accelerometer stream 511 (e.g., using FFT block 516). The spectral power feature may be a relative spectral power calculated by the expression:

$$P_{\mathrm{rel}} = \frac{P_{\mathrm{band}}}{P_{\mathrm{total}}}$$

where the band power $P_{\mathrm{band}}$ may be calculated by integrating the PSD within the frequency limits of the band, and the total power $P_{\mathrm{total}}$ may be calculated by integrating the total PSD. In some examples, the extraction of the frequency domain respiration features may include calculating a first relative spectral power in a first frequency range (e.g., 0.01 Hz-0.04 Hz), a second relative spectral power in a second frequency range (e.g., 0.04 Hz-0.1 Hz), a third relative spectral power in a third frequency range (e.g., 0.1 Hz-0.4 Hz), and a fourth relative spectral power in a fourth frequency range (e.g., 0.4 Hz-0.9 Hz). The relative spectral power features may be useful for sleep/wake classification because heart rate and/or respiration rate may have different power modulation in these frequency bands for sleep states as compared to wake states.
In some examples, the frequency domain respiration features may include a "spectral entropy" respiration feature. The spectral entropy feature may be calculated from the selected best axis (optionally after detrending). For example, the PSD may be calculated from the FFT, and the spectral entropy may be calculated from the PSD. For example, spectral entropy may be calculated by normalizing the PSD (e.g., so that it sums to one), treating the normalized PSD as a Probability Density Function (PDF), and calculating the Shannon entropy. Spectral entropy may be useful for sleep/wake classification because the more regular breathing patterns associated with sleep may produce a sharper PSD and thus lower spectral entropy.
In some examples, the frequency domain respiration features may include a "respiration rate" respiration feature. The respiration rate feature may be calculated from the selected best axis (optionally after detrending). In some examples, the frequency domain representation of the best axis may be calculated using an FFT, and the frequency with the highest peak in the spectral output of the FFT may be identified as the respiration rate. Calculating the respiration rate in the frequency domain may provide a more robust measurement (e.g., less susceptible to noise) than in the time domain. In some examples, the respiration rate may be converted to a number of breaths per time period (e.g., per minute). The respiration rate may be used to identify sleep states because the respiration rate is known to vary across different stages of sleep.
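Non-limiting sketches of the "spectral entropy" and "respiration rate" features described in the two preceding paragraphs, assuming x is the selected best-axis epoch:

```python
import numpy as np

def spectral_entropy(x, fs):
    """Shannon entropy of the normalized PSD treated as a probability density.

    More regular (sleep-like) breathing concentrates power at few frequencies,
    giving a sharper PSD and lower entropy.
    """
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2   # PSD from the FFT
    pdf = psd / max(psd.sum(), 1e-12)              # normalize PSD to sum to 1
    pdf = pdf[pdf > 0]                             # avoid log(0)
    return float(-(pdf * np.log(pdf)).sum())       # Shannon entropy

def respiration_rate_bpm(x, fs):
    """Respiration rate from the highest FFT peak, in breaths per minute."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum[1:]) + 1] * 60.0)  # skip the DC bin
```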
The frequency domain respiration features described above are examples of one or more frequency domain respiration features that may be extracted by the frequency domain respiration feature extraction block 524. It should be appreciated that additional, fewer, and/or different frequency domain respiration characteristics may be extracted for sleep/wake classification. In some examples, sleep/wake classification may use a "spectral power" feature and a "spectral entropy" feature. In some examples, the sleep state classification described with reference to process 800 may also use a "breathing rate" feature.
The time-domain respiration feature extraction block 522 may extract one or more time-domain respiration features. Extracting the time-domain respiration features may be based on identifying peak and valley indices, and the time intervals between peaks and valleys, in the epoch-divided, band-pass filtered 3-axis accelerometer stream 509. Peaks and valleys may be associated with inspiration and expiration (where amplitude is associated with breathing intensity), and the time intervals between peaks and valleys may be associated with breath timing and duration. In some examples, these quantities may be extracted for each epoch, and the most stable of these quantities may be used for subsequent time-domain feature extraction, as described in more detail below.
In some examples, one or more time-domain respiration features may be calculated from one axis of the divided-period, band-pass filtered 3-axis accelerometer stream 509, where the one axis is selected according to the operation of the best axis estimation block 518. This selection is illustrated in fig. 5 by a multiplexer 520 that receives control signals from the optimal axis estimation block 518 to select one axis of the 3-axis accelerometer stream 509 for time-domain respiration feature extraction.
Because the time-domain respiration features are extracted from the motion data (e.g., a selected axis of the epoch-divided 3-axis accelerometer stream 509), the respiration signal may be susceptible to motion artifacts (e.g., motion that is unrelated to respiration). In some examples, the presence of motion artifacts may be estimated by motion artifact detection block 515 using the 3-axis output of band pass filter 508. The motion artifact detection block 515 may calculate the maximum absolute variance across the 3-axis band-pass filtered accelerometer stream in a manner similar to the maximum variance motion feature described above. However, rather than calculating one maximum variance per epoch as described for the maximum variance motion feature, the maximum absolute variance calculated by motion artifact detection block 515 may use a sliding window smaller than the epoch. In some examples, the duration of the sliding window may be between 1-10 seconds. In some examples, the duration of the sliding window may be between 2-5 seconds. In some examples, the sliding window may have the same duration as the rest/activity classifier window. After calculating the maximum absolute variance using the sliding window, the maximum absolute variances of the multiple windows may be thresholded. For example, motion artifact detection block 515 may output an array of binary values (a binary array), where a binary output value indicates motion artifact for a window when the maximum absolute variance is above a threshold (e.g., "1"), and indicates no motion artifact for the window when the maximum absolute variance is below the threshold (e.g., "0"). The output of motion artifact detection block 515 may be sampled at the same rate as the output of downsampling and/or filtering block 504 (although the maximum absolute variance is determined on a per-window basis, where each window includes multiple samples). In some examples, to mitigate the effects of filter transients, samples indicating motion artifact in the binary array may be "padded" such that a threshold number of samples (e.g., 2, 3, 5, 8, 10, etc.) on either side of a sample indicating motion artifact are also marked as indicating motion artifact (even though the maximum absolute variance for those samples may be below the threshold). The output of the motion artifact detection block 515 may be divided into epochs and passed as a motion artifact signature array 521 to the time-domain respiration feature extraction block 522 for time-domain respiration feature extraction. The motion artifact signature array 521 may mark portions of the selected axis of the epoch-divided 3-axis accelerometer stream 509 that should be excluded from the time-domain respiration feature extraction. For example, the motion artifact signature array 521 may be used as a per-sample mask to suppress artifacts and/or to clear respiratory locations and/or intervals during respiratory peak/valley detection. Although motion artifact detection is shown in fig. 5 as occurring prior to the epoch division, it should be appreciated that in some examples, generation of the motion artifact signature array 521 may be performed after the epoch division.
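An illustrative sketch of the motion artifact mask follows, assuming non-overlapping windows for simplicity (the text describes a sliding window) and placeholder threshold/padding values that the document does not specify:

```python
import numpy as np

def motion_artifact_mask(bp_xyz, fs, win_s=3.0, var_thresh=1e-4, pad=5):
    """Per-sample boolean mask, True where motion artifact is indicated.

    bp_xyz: (n_samples, 3) band-pass filtered accelerometer stream.
    win_s: window duration in seconds (the text suggests 2-5 s);
    var_thresh and pad are illustrative placeholder values.
    """
    n = bp_xyz.shape[0]
    win = max(int(win_s * fs), 1)
    mask = np.zeros(n, dtype=bool)
    for start in range(0, n, win):                 # non-overlapping for simplicity
        seg = bp_xyz[start:start + win]
        max_abs_var = np.var(seg, axis=0).max()    # max variance across 3 axes
        if max_abs_var > var_thresh:
            lo = max(start - pad, 0)               # pad to absorb filter transients
            hi = min(start + win + pad, n)
            mask[lo:hi] = True
    return mask
```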
As described herein, the time-domain respiration features may be based on peaks and valleys detected in the selected axis of the epoch-divided 3-axis accelerometer stream 509. Samples in the epoch that are not masked out by the motion artifact signature array 521 may be processed to identify peak and valley locations having amplitudes (absolute values) above a threshold. In some examples, the threshold may be determined on a per-epoch basis by calculating the standard deviation of the selected axis of the epoch-divided 3-axis accelerometer stream 509 and multiplying the standard deviation by a scaling parameter. In some examples, the scaling parameter may be 1. In some examples, the scaling parameter may be greater than 1 or less than 1.
After computing the peaks and valleys (filtered for motion artifacts), the inter-breath intervals (IBIs) may be computed by taking the time difference between adjacent peak timestamps (peak-to-peak intervals) and/or the time difference between adjacent valley timestamps (valley-to-valley intervals). The IBIs may be indexed for storage using an interval start timestamp (e.g., a peak start timestamp or a valley start timestamp).
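The following is a sketch of the peak/valley and IBI computation under the per-epoch threshold described above; zeroing masked samples is a simplification of the per-sample masking, and the artifact-overlap filtering described in the next paragraph is omitted:

```python
import numpy as np
from scipy.signal import find_peaks

def breath_peaks_and_ibis(x, fs, artifact_mask, scale=1.0):
    """Breath peaks/valleys and inter-breath intervals (IBIs) for one epoch.

    x: selected axis of the band-pass filtered epoch.
    artifact_mask: boolean array, True where samples are contaminated.
    Threshold = scale * standard deviation of the epoch, per the text.
    """
    threshold = scale * np.std(x)
    clean = np.where(artifact_mask, 0.0, x)            # suppress artifact samples
    peaks, _ = find_peaks(clean, height=threshold)     # candidate inspiration peaks
    valleys, _ = find_peaks(-clean, height=threshold)  # candidate expiration valleys
    ibi_peak = np.diff(peaks) / fs                     # peak-to-peak intervals (s)
    ibi_valley = np.diff(valleys) / fs                 # valley-to-valley intervals (s)
    return peaks, valleys, ibi_peak, ibi_valley
```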
The identified peaks and valleys, and the peak-to-peak and valley-to-valley intervals, may be filtered to remove portions contaminated by motion artifacts from the samples of the epoch (e.g., using the motion artifact signature array 521). For example, peaks that at least partially overlap samples contaminated with motion artifacts, valleys that at least partially overlap samples contaminated with motion artifacts, or IBIs overlapping motion artifacts may be filtered out (e.g., to ensure that both the start and end of each breath interval are free of motion artifacts). In this way, peaks or valleys detected at or near samples contaminated with motion artifacts, and/or respiratory intervals contaminated with motion artifacts, may be masked out.
For feature extraction, either the peaks (and peak-to-peak intervals) or the valleys (and valley-to-valley intervals) may be selected, based on which exhibits less variability. In some examples, variability may be determined based on the standard deviation or the median absolute deviation of the IBIs over each epoch. For example, the peaks (and peak-to-peak intervals) may be used if the variability of the peak-to-peak intervals is lower than the variability of the valley-to-valley intervals for the epoch, or the valleys (and valley-to-valley intervals) may be used if the variability of the valley-to-valley intervals is lower than the variability of the peak-to-peak intervals.
The one or more time-domain respiration features may include a "number of breaths" respiration feature indicating the number of breaths detected for an epoch, which may be determined by counting the number of peaks or valleys after the peak/valley and IBI detection and motion artifact filtering described above. The one or more time-domain respiration features may include a "respiratory amplitude variability" respiration feature, which may be calculated by computing the standard deviation of the amplitudes of the peaks (or valleys) and normalizing it by the mean of the amplitudes of the peaks (or valleys). In some examples, the one or more time-domain respiration features may include a "median respiratory amplitude" respiration feature for the epoch, which may be calculated as the median of the amplitudes of the peaks (or valleys). In some examples, the one or more time-domain respiration features may include a respiratory amplitude mean (e.g., the mean of the amplitudes of the peaks (or valleys)) and/or a respiratory amplitude mode (e.g., the mode of the amplitudes of the peaks (or valleys)).
The one or more time-domain respiration features may include one or more breath rate variability (inter-breath variability) features for the epoch. A first breath rate variability feature may be a "mean normalized median absolute deviation" respiration feature, calculated by taking the deviations between the instantaneous IBIs and the median IBI for the epoch and then normalizing by the mean IBI for that epoch. A second breath rate variability feature may be a "mean normalized range" respiration feature, calculated by taking the difference between the maximum IBI value and the minimum IBI value for the epoch and then normalizing by the mean IBI for that epoch. A third breath rate variability feature may be a "standard deviation" respiration feature, calculated by taking the standard deviation of the IBI values for the epoch. A fourth breath rate variability feature may be a "root mean square of successive differences" respiration feature, calculated by taking the root mean square of the differences between successive IBIs (peak-to-peak or valley-to-valley) for the epoch.
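A sketch of the four inter-breath variability features follows, with the exact normalizations assumed from the descriptions above:

```python
import numpy as np

def ibi_variability_features(ibi):
    """Four breath-rate variability features from one epoch's IBIs (seconds)."""
    mean_ibi = max(ibi.mean(), 1e-12)
    mad = np.median(np.abs(ibi - np.median(ibi)))  # median absolute deviation
    return {
        "mean_norm_mad": mad / mean_ibi,                      # mean-normalized MAD
        "mean_norm_range": (ibi.max() - ibi.min()) / mean_ibi,
        "std": float(ibi.std()),                              # standard deviation
        "rmssd": float(np.sqrt(np.mean(np.diff(ibi) ** 2))),  # RMS of successive diffs
    }
```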
In some examples, due to motion artifact filtering or because no breaths are detected in the epoch, there may not be enough data to calculate one or more of the time-domain respiration features (other than the breath count feature, which in this case is zero). For such epochs, the features may be assigned predetermined values (e.g., based on empirical data) corresponding to a relatively high likelihood of wakefulness. In some examples, the predetermined value may be a percentile (e.g., 75th percentile, 85th percentile, 95th percentile) of each feature in empirical data from awake persons.
The time-domain respiration features described above are examples of one or more time-domain respiration features that may be extracted by the time-domain respiration feature extraction block 522. It should be appreciated that additional, fewer, and/or different time-domain respiration features may be extracted for sleep/wake classification. In some examples, sleep/wake classification may use the "number of breaths" feature, the "respiratory amplitude variability" feature, the "mean normalized median absolute deviation" feature, the "mean normalized range" feature, and the "standard deviation" feature. In some examples, the sleep state classification described with reference to process 800 may also use the "root mean square of successive differences" feature and the "median respiratory amplitude" feature.
Extracted features from multiple epochs may be assembled 528 (e.g., as described at 435 in process 400). In some examples, assembling may include sum pooling. In some examples, assembling may include storing the extracted features (e.g., in a data buffer) for input into a machine learning model (e.g., a logistic regression classifier). The logistic regression performed by the sleep/awake classifier 530 can then process the assembled input to classify the epochs (e.g., as described at 440 in process 400).
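A sketch of sum pooling and the logistic regression step is shown below; the weights would come from offline training, and the pooling factor k is illustrative:

```python
import numpy as np

def sum_pool(features, k):
    """Sum-pool an (n_epochs, n_features) array over k consecutive epochs."""
    n = (features.shape[0] // k) * k               # drop any trailing remainder
    return features[:n].reshape(-1, k, features.shape[1]).sum(axis=1)

def sleep_probability(pooled, weights, bias):
    """Logistic regression: sigmoid of a linear score per pooled input."""
    z = pooled @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))                # probability of sleep (0..1)
```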
Referring back to fig. 2A-2B, a quality check classifier 215 may optionally be included to establish confidence in the sleep/wake classification. In particular, the quality check classifier 215 may evaluate one or more extracted features to provide confidence in the motion data (e.g., indicating that the wearable device was worn by the user during the sleep/awake classification window 235). In some examples, the quality check classifier may use a subset of the plurality of features used for sleep/wake classification. In some examples, the quality check classifier may use one or more extracted motion features, one or more time-domain respiration features, and one or more frequency-domain respiration features.
Fig. 6 illustrates an exemplary process for a quality check classifier according to examples of the present disclosure. Process 600 may be performed by processing circuitry comprising the processor 108 and/or the DSP 109. In some examples, process 600 may be performed at the end of the session before, after, or in parallel with the sleep/awake classification of process 400. In some examples, the subset of features may include the motion integration feature and the maximum variance motion feature. In some examples, the subset of features may include one (or more) of the spectral entropy feature and the relative spectral power features. In some examples, the subset of features may include the per-epoch breath count feature. Using a subset of the extracted features may reduce the size of the classifier input and thus the complexity of the quality check classifier. In addition, reusing features extracted for sleep/wake classification may avoid the need to extract additional features. In some examples, the same features extracted for sleep/wake classification may be used for the quality check classifier.
At 605, inputs to the quality check classifier may be assembled. The quality check classifier input may be assembled from the subset of extracted features for multiple epochs of the sleep/awake classification window. In some examples, the subset of extracted features for all epochs of the sleep/awake classification window may be used for quality check classification. In some examples, the input may be compressed to reduce the number of features. For example, features from multiple epochs may be reduced by sum pooling the features over k consecutive epochs, reducing the size of the input by a factor of k.
At 610, the classifier inputs can be processed with an ML model (such as logistic regression). It should be appreciated that logistic regression is only one example of an ML model, and that other models may be used, such as gradient boosted trees, random forests, neural networks, support vector machines, and the like. The output of the ML model may be a confidence value that represents the probability (between 0 and 1) that the motion data is of sufficient quality to pass the quality check (thereby expressing confidence in a sleep/awake classification based on the motion data). The quality check confidence value may correspond to the probability that the wearable device remained on the wrist during the sleep/awake classification window (e.g., was not removed and resting on a table or other surface during the sleep/awake classification window).
At 615, a threshold may be applied to the output of the ML model to detect a quality check result or state. For example, when the quality confidence value is greater than the threshold confidence value, the quality check may pass (pass state), and when the quality confidence value is less than the threshold, the quality check may fail (fail state). As described herein, failing the quality check may result in forgoing publication of the sleep tracking results to the user (and/or discarding the sleep tracking results), whereas passing the quality check may result in storage and/or publication of the sleep tracking results.
Referring again to fig. 2A-2B, in some examples, a smoothing and filtering post-processor 220 may optionally be included to smooth/filter the sleep/awake classification output. Fig. 7A-7B illustrate a block diagram 700 for smoothing/filtering and a diagram 720 illustrating in-bed detection according to examples of the present disclosure. In some examples, the first filter block 705 may filter the output of the sleep/awake classifier to remove very short sleep intervals (e.g., less than a threshold time such as 15 seconds, 30 seconds, 45 seconds, etc.) at any point in the session (e.g., across the entire sleep/awake classification window). These very short sleep intervals may be false positives (high frequency transients) and/or may represent sleep intervals that are not long enough for meaningful sleep/health benefits. These very short sleep intervals may also be difficult to present to the user because such less meaningful representations of sleep information disrupt the presentation of the longer, more meaningful sleep intervals in the sleep tracking results. Filtering out a very short sleep interval may include replacing the indication of the sleep state for that interval with an indication of the awake state.
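A sketch of the first filter block follows, with the epoch duration and minimum interval length as assumed parameters:

```python
import numpy as np

def drop_short_sleep_intervals(labels, epoch_s, min_s=30.0):
    """Relabel sleep runs shorter than min_s seconds as awake.

    labels: per-epoch array, 1 = sleep, 0 = awake (illustrative encoding).
    """
    labels = labels.copy()
    n, i = len(labels), 0
    while i < n:
        if labels[i] == 1:
            j = i
            while j < n and labels[j] == 1:    # find the end of the sleep run
                j += 1
            if (j - i) * epoch_s < min_s:      # too short to be meaningful
                labels[i:j] = 0                # replace with awake state
            i = j
        else:
            i += 1
    return labels
```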
In some examples, smoothing/filtering may include removing short sleep intervals in a portion of the session that may indicate rest rather than sleep. This portion of the session may refer to the time between the indication of the resting state (e.g., at T1 in fig. 2B) and the detection of the user "in bed" at some point during the sleep session (e.g., after T1 but before T2 in fig. 2B). For example, one or more of the features extracted above for sleep/wake classification may be used for in-bed detection by in-bed detection block 710. The in-bed detection block 710 may estimate the time (e.g., the epoch) at which the user transitions from an "out of bed" state to an "in bed" state during the session. The "out of bed" and "in bed" states may be defined as a function of movement rather than by actually detecting whether the user is in bed. In some examples, the one or more features may include the maximum variance motion feature extracted by motion feature extraction block 514. The maximum variance motion feature may be filtered, and a transition to the "in bed" state may be detected when the filtered feature falls below a threshold. In some examples, the threshold may be a user-specific threshold.
In some examples, a log10 scale of the maximum variance motion feature may be used for in-bed detection (e.g., by taking the base-10 logarithm of the maximum variance motion feature over the epochs of the session). For example, fig. 7B shows a graph 720 with an example of a signal 722 corresponding to the log10-scale maximum variance motion feature between the session start time and the session end time. In some examples, the log10-scale maximum variance motion feature may be used to determine a user-specific threshold. The user-specific threshold may be set to the maximum of a default threshold (e.g., applicable to most users as defined by empirical data) and a threshold percentile (e.g., 55th percentile, 60th percentile, 65th percentile, etc.) of the log10-scale maximum variance motion feature. In some examples, the default threshold may be used without determining or using a user-specific threshold.
The log10-scale maximum variance motion feature may be filtered with a sliding window median filter. The sliding window for in-bed detection may correspond to the duration of a plurality of epochs (e.g., 20, 50, 80, 100, 125, etc.). For filtering, the session may be zero-padded at both ends (a value of zero indicating a high level of activity on the log10 scale). Fig. 7B shows a signal 724 (shown in dashed lines) corresponding to the median filtered, log10-scale maximum variance motion feature.
The epoch at which the median filtered, log10-scale maximum variance motion feature falls below the threshold may be detected as the in-bed transition period. For example, fig. 7B shows a threshold 726, and the in-bed transition time is indicated at the epoch where the median filtered, log10-scale maximum variance motion feature crosses below threshold 726.
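A sketch of the in-bed transition detection follows, with illustrative filter length, default threshold, and percentile (the document gives ranges rather than fixed values):

```python
import numpy as np
from scipy.signal import medfilt

def in_bed_transition_epoch(max_var, kernel=101, default_thresh=-4.0, pct=60):
    """First epoch at which the smoothed log10 motion feature drops below threshold.

    max_var: per-epoch maximum-variance motion feature over the session.
    kernel: odd median-filter length in epochs; thresholds are illustrative.
    """
    log_feat = np.log10(max_var + 1e-12)              # log10-scale feature
    padded = np.concatenate([np.zeros(kernel), log_feat, np.zeros(kernel)])
    smooth = medfilt(padded, kernel)[kernel:-kernel]  # zero-padded median filter
    thresh = max(default_thresh, np.percentile(log_feat, pct))  # user-specific
    below = np.flatnonzero(smooth < thresh)
    return int(below[0]) if below.size else None      # None: no transition found
```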
In some examples, the second filter block 715 shown in fig. 7A may filter the output of the sleep/wake classifier to remove short sleep intervals corresponding to quiet wakefulness that may otherwise be interpreted as false positive sleep intervals. The second filter block 715 may filter out short sleep intervals during the period between the session start and the in-bed transition period indicated by in-bed detection block 710. In some examples, the second filter block 715 may identify the short sleep intervals by identifying sleep intervals that meet one or more interval criteria. The one or more interval criteria may include a first criterion that the sleep interval is less than a threshold duration (e.g., less than 5 minutes, less than 10 minutes, less than 20 minutes, etc.). The one or more interval criteria may include a second criterion that the sleep density is less than a threshold sleep density (10%, 20%, 30%, etc.) over a surrounding time window. The sleep density may be calculated by examining a sleep interval and the time window surrounding the sleep interval to determine the percentage of epochs in the window that indicate a sleep state. Sleep intervals meeting the one or more criteria may be removed (e.g., the sleep/awake classification for the interval may be changed from the sleep state to the awake state).
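The following sketch of the second filter block removes pre-bed sleep intervals that are both short and surrounded by low sleep density; all thresholds are illustrative:

```python
import numpy as np

def drop_quiet_wake_intervals(labels, epoch_s, in_bed_epoch,
                              max_len_s=600.0, min_density=0.3, context_s=1800.0):
    """Remove pre-bed short sleep intervals with low surrounding sleep density.

    labels: 1 = sleep, 0 = awake per epoch; in_bed_epoch from in-bed detection.
    Thresholds are placeholders; the text gives ranges, not fixed values.
    """
    labels = labels.copy()
    ctx = int(context_s / epoch_s)
    i = 0
    while i < in_bed_epoch:
        if labels[i] == 1:
            j = i
            while j < in_bed_epoch and labels[j] == 1:
                j += 1
            lo, hi = max(i - ctx, 0), min(j + ctx, len(labels))
            density = labels[lo:hi].mean()          # fraction of nearby sleep epochs
            if (j - i) * epoch_s < max_len_s and density < min_density:
                labels[i:j] = 0                     # likely quiet wakefulness
            i = j
        else:
            i += 1
    return labels
```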
After filtering the sleep/wake classifications using the first filter block 705 and/or the second filter block 715, the sleep/wake classifications may be represented as sleep intervals and stored in memory and/or presented to the user (e.g., displayed on a touch screen). In some examples, a sleep interval may be defined by the start time and end time of a set of consecutive epochs classified as sleep. In some examples, the sleep intervals may be displayed as a sequence or timeline. In some examples, the total sleep time from the sleep intervals may be summed and presented to the user as the total sleep time for the session, in addition to or instead of the sleep intervals.
Although the rest/activity classifier, sleep/awake classifier, and signal quality classifier described herein use only motion data from motion sensors (e.g., 3-axis accelerometers), it should be appreciated that in some examples these classifiers may include additional sensor inputs to improve some or all of the classifications and thereby the overall sleep/wake classification of the system. However, using only motion data may provide low power (and/or lightweight) classification without using additional sensors. In some examples, respiratory features may be extracted from other sensors (e.g., respiratory, heart rate, and heart rate variability features extracted from photoplethysmography (PPG) signals using optical sensors, or from electrocardiogram (ECG) signals). In some examples, sensor bars (e.g., including one or more sensors such as piezoelectric sensors and/or proximity sensors) on or in the bed may be used to detect respiratory signals and/or motion signals to extract features (to improve performance and/or confidence of rest/activity classification, sleep/wake classification, and/or quality check classification) and/or to detect in-bed conditions (e.g., for in-bed detection). In some examples, user input or the status of the wearable device or another device (e.g., wearable device 100 and peripheral device 118) may also be used as input. For example, user input that unlocks/locks or otherwise interacts with a touch screen or other input device of the wearable device, or of a mobile phone or tablet computing device in communication with the wearable device, may be used as an indicator that the user is not in a sleep state (e.g., is in an awake state and/or an active state). This information may be used to correct incorrect classifications (e.g., false positive sleep state classifications) and/or to forgo processing data to extract features and/or classify epochs when the context indicates an awake state.
As described herein, the processing of motion data for feature extraction may be performed in real time or at intervals during operation. In some examples, the rest/activity classifier may operate in real time or at intervals (e.g., during the operations from T0 to T1 and/or from T2 to T3 shown in fig. 2B). In some examples, the sleep/awake classifier, the quality check classifier, and the filtering/smoothing post-processing may be performed at the end of the session. In some examples, feature extraction for the sleep/wake classifier and/or the quality check classifier may be performed in real time or at intervals during the session, and the features may be assembled and/or processed by the logistic regression ML model circuitry at the end of the session (or at intervals during the session). It should be appreciated that logistic regression is only one example of an ML model, and that other models may be used, such as gradient boosted trees, random forests, neural networks, support vector machines, and the like.
As described herein, in some examples, sleep/awake classification may be improved by providing additional detail regarding sleep states. For example, rather than classifying an epoch as awake or asleep, the classification may provide sub-categories of sleep. For example, sleep may be classified as REM sleep, non-REM sleep stage one, non-REM sleep stage two, or non-REM sleep stage three. In some examples, one or more of the non-REM sleep stages may be combined (e.g., merged) to reduce the number of states and simplify display. In some such examples, the sleep states may include wake, REM sleep, and non-REM sleep. In some such examples, the sleep states may include wake, REM sleep, non-REM sleep stage one or two (e.g., combining sleep stages one and two), and non-REM sleep stage three. In some such examples, the sleep states may include wake, REM sleep, non-REM sleep stage two or three (e.g., combining sleep stages two and three), and non-REM sleep stage one. In some examples, the sleep tracking results may be displayed or reported to the user, as described herein. The additional detail about sleep states may provide more robust information for tracking sleep and assessing sleep quality.
Fig. 2C-2D illustrate example block diagrams and corresponding timing diagrams for sleep tracking (e.g., sleep state classification) according to examples of this disclosure. Fig. 2C illustrates an exemplary block diagram 250 of a processing circuit for sleep tracking according to an example of the present disclosure. The processing circuitry may include a digital signal processor (e.g., corresponding to DSP 109 in fig. 1B) and/or one or more additional processors (e.g., corresponding to processor 108). In some examples, the processing circuitry may include a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), or other logic device. The processing circuitry may include a rest/activity classifier 205, a first quality check classifier 260, a sleep state classifier 265, a smoothing/filtering post-processor 270, and a second quality check classifier 275. Classification and/or filtering/smoothing may be implemented in hardware, software, firmware, or any combination thereof.
The rest/activity classifier 205 in block 250 may be the same as or similar to the rest/activity classifier 205 described with reference to block 200, details of which are omitted for brevity. The rest/activity classifier 205 may be used to define a start time and an end time for a sleep tracking session.
A first quality check classifier 260 may optionally be included for sleep tracking to estimate/classify the quality of the sensor data (e.g., using one or more features extracted during the sleep session for sleep state classification). The quality of the sensor data may indicate that the wearable device was on the wrist during the sleep tracking session, and may establish confidence in the sleep state classification. In some examples, the quality check performed by the first quality check classifier 260 may correspond to process 600, details of which are not repeated for brevity. Additionally or alternatively, the quality check by the first quality check classifier 260 may determine whether the sleep session lasts for a threshold duration (e.g., 1 hour, 2 hours, 4 hours, etc.), because confidence in the sleep state classification is improved for sleep sessions longer than the threshold duration as compared to sleep sessions shorter than the threshold duration. In some examples, sleep classification by sleep state classifier 265 is performed when the criteria for the quality check of the first quality check classifier 260 are met (e.g., the device meets the on-wrist criterion and/or the sleep session meets the threshold duration criterion). In some examples, sleep classification by sleep state classifier 265 is not performed (e.g., thereby saving power) when the criteria for the quality check of the first quality check classifier 260 are not met (e.g., the device fails to meet the on-wrist criterion or the sleep session fails to meet the threshold duration criterion). It should be appreciated that in some examples, the results of the session are not displayed and/or stored when classification by sleep state classifier 265 is not performed. In some examples, the quality check for whether the device is on the wrist is performed only after the quality check for whether the sleep session duration meets or exceeds the threshold duration is satisfied.
A smoothing and filtering post-processor 270 may optionally be included to smooth/filter the sleep state classification. The smoothing and filtering post-processor 270 may be similar to the smoothing and filtering post-processor 220, but with some differences to account for differences between the outputs of the sleep state classifier 265 and the sleep classifier 210. For example, the smoothing and filtering post-processor 270 may also remove very short sleep intervals (e.g., to remove quiet wakefulness or other false positive sleep intervals) as described with reference to fig. 7A-7B. However, the smoothing and filtering post-processor 270 may additionally filter very short intervals of a first sleep state (e.g., REM sleep) that are immediately preceded and followed by intervals of a different sleep state (e.g., non-REM sleep stage one, two, or three). For example, similar to the description of the first filter block 705, the output of the sleep state classifier may be filtered to remove very short sleep intervals (e.g., less than a threshold time, such as 15 seconds, 30 seconds, 45 seconds, etc.) for a particular sleep state at any point in the session (e.g., across the entire classification window). These very short sleep state intervals may be false positives (high frequency transients) and/or may represent sleep state intervals that are not long enough to be meaningful for understanding sleep/health benefits. These very short sleep state intervals may also be difficult to present to the user because such less meaningful representations of sleep information disrupt the presentation of the longer, more meaningful sleep state intervals in the sleep tracking results. Filtering out a very short sleep state interval may include replacing the indication of the sleep state for that interval with an indication of the wake state or of a different sleep state (e.g., depending on the state before or after the corresponding very short sleep interval). In some examples, smoothing/filtering may be performed on the output of sleep state classifier 265 only after the second quality check by second quality check classifier 275 is satisfied (e.g., to avoid filtering/smoothing when the classified states will not be displayed and/or stored).
Fig. 2D illustrates an example timing diagram 290 showing features and operation of a processing circuit for sleep tracking, according to examples of this disclosure. The timelines (e.g., times T1-T3), the operation of the rest classifier 205A and the activity classifier 205B (e.g., the rest/activity classifier 205), the criteria for the start and termination of a sleep session, and the classification window 235/285 described with respect to FIG. 2B are the same or similar to those corresponding elements in FIG. 2D, the details of which are not repeated for the sake of brevity.
The data in the sleep state classification window 285 may be processed by the sleep state classifier 265 as described in more detail with respect to process 800 and block diagram 500. In some examples, sleep state classification by sleep state classifier 265 may begin in response to the end of a session (or a threshold period of time after a session or in response to a user request). In some examples, sleep state classification by sleep state classifier 265 may only begin after confidence in the session is met as determined by first quality check classifier 260 (e.g., by avoiding processing when the first quality check is not met). In some examples, sleep state classification by sleep state classifier 265 may begin (e.g., at the end of a session), but may cease if ongoing, if confidence in the session is not met as determined by first quality check classifier 260. In some examples, sleep state classifications that estimate a user's sleep state may be stored in memory and/or displayed to the user. For example, sleep state classifications that estimate the sleep state of a user may be displayed or stored as a series of sleep intervals (e.g., consecutive time periods classified as respective sleep states) as represented by blocks 280A-280F shown on the timeline in fig. 2D. In some examples, the sleep state is presented on a display (e.g., touch screen 128). In some examples, sleep states are presented on a timeline of different sleep states, represented as sleep state intervals at different heights. For example, blocks 280A, 280D, and 280F may correspond to a first sleep state (e.g., non-REM sleep stage one), blocks 280B and 280E may correspond to a second sleep state (e.g., non-REM sleep stage two/three), and block 280C may correspond to a third sleep state (e.g., REM sleep). It should be appreciated that the wake state interval may be represented by a gap in the timeline where no other sleep states are represented. Alternatively, the awake state interval may be represented by boxes at different heights. It should be appreciated that although three altitudes are shown in fig. 2D, more or fewer altitudes and sleep states may be represented in the data displayed to the user (e.g., depending on how much sleep state output is output by sleep state classifier 265).
In some examples, the sleep state classifications that estimate the user's sleep states may be displayed and/or stored only when confidence in the session is met, as indicated by the first quality check classifier 260 and the second quality check classifier 275. In some examples, when the quality checks do not establish confidence in the session with respect to the sleep state classification but do establish sufficient confidence with respect to the binary sleep/awake classification, the sleep/awake classification that estimates the user's sleep may be displayed and/or stored instead of the sleep state classification. In some examples, neither the sleep state classification nor the sleep/awake state classification is displayed and/or stored when, for example, the first quality check classifier 260 and the second quality check classifier 275 indicate that confidence in the session is not met.
In some examples, the quality check by the second quality check classifier 275 may include determining whether the classification output from the sleep state classifier 265 meets one or more criteria. In some examples, the quality check by the second quality check classifier 275 may determine whether the total sleep time of the sleep session lasts for a threshold duration (e.g., 1 hour, 2 hours, 3 hours, etc.), because confidence in the sleep state classification is improved for sleep sessions longer than the threshold duration as compared to sleep sessions shorter than the threshold duration. In some examples, the threshold duration of the second quality check classifier 275 may be shorter than the threshold duration of the first quality check classifier 260. Additionally or alternatively, the quality check by the second quality check classifier 275 may determine (e.g., based on empirical measurements from sleep studies) whether the distribution of sleep states in the classification corresponds to a physiologically observed distribution of sleep states. In some such examples, the quality check may include determining whether the proportion (e.g., percentage) of the total sleep time of the sleep session classified as a first sleep state (e.g., REM sleep) is less than a first threshold (e.g., 65%, 70%, etc.). In some such examples, the quality check may include determining whether the percentage of the total sleep time of the sleep session classified as a second sleep state (e.g., non-REM sleep stage one) is less than a second threshold (e.g., 65%, 70%, etc.). For example, the first threshold and the second threshold may be determined from empirical measurements from sleep studies. In some examples, the first threshold and the second threshold may be the same. In some examples, the first threshold and the second threshold may be different. Although the above description evaluates two sleep states against thresholds (e.g., the first threshold and the second threshold), it should be understood that in some examples fewer or more sleep states may be similarly evaluated against thresholds. In some examples, the sleep classifications by the sleep state classifier 265 may be stored and/or displayed when the criteria for the quality check of the second quality check classifier 275 are met (e.g., the total sleep time within the session meets the total sleep time criterion and/or the proportion of total sleep time within one or more sleep states meets the corresponding threshold). In some examples, when the quality check of the second quality check classifier 275 is not met (e.g., the total sleep time within the session fails to meet the total sleep time criterion or the proportion of total sleep time within one or more sleep states fails to meet the corresponding threshold), the sleep classifications by the sleep state classifier 265 are not stored and/or displayed, and optionally a sleep/awake binary classification is stored and/or displayed instead. When the sleep/awake (binary) classification is displayed, the data from sleep state classifier 265 may be combined (e.g., compressed) by merging the sleep intervals for all non-wake sleep states into a single sleep state. In some examples, the quality check for whether the proportion of total sleep time within one or more sleep states meets the corresponding threshold is performed only after the quality check for whether the total sleep time meets or exceeds the threshold duration is satisfied.
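An illustrative sketch of the second quality check follows, with placeholder state names and thresholds standing in for the empirically derived values:

```python
def second_quality_check(state_labels, epoch_s,
                         min_total_sleep_s=3600.0, max_state_fraction=0.7):
    """Decide whether to show staged sleep results or fall back to sleep/wake.

    state_labels: per-epoch states, e.g. "wake", "rem", "n1", "n2", "n3"
    (illustrative encoding). Thresholds are illustrative placeholders.
    """
    sleep = [s for s in state_labels if s != "wake"]
    total_sleep_s = len(sleep) * epoch_s
    if total_sleep_s < min_total_sleep_s:
        return "fallback"                          # total sleep time too short
    for state in ("rem", "n1"):                    # states checked in the text
        frac = sleep.count(state) / len(sleep)
        if frac >= max_state_fraction:             # physiologically implausible
            return "fallback"
    return "pass"
```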
Fig. 8 illustrates an exemplary process for a sleep state classifier according to an example of the present disclosure. Process 800 may be performed by processing circuitry comprising processor 108 and/or DSP 109. In some examples, process 800 may be performed in part in real-time (e.g., when enough data for processing is received), in part at a pace during a session, and/or in part at the end of a session. In some examples, process 800 may be performed entirely at the end of the session (e.g., after the quality check by first quality check classifier 260 is satisfied).
At 805, the sleep state classifier may optionally filter the data input into a classifier (e.g., sleep state classifier 265). In some examples, the motion data may also optionally be downsampled at 810. At 815, the sleep state classifier may extract a plurality of features from the motion data, optionally including one or more motion features (820), one or more time-domain respiration features (825), and one or more frequency-domain respiration features (830). Multiple features may be calculated for each epoch of motion data. The process 800 from 805 to 830 may be the same or similar to the description of the process 400 from 405 to 430, the details of which are not repeated here for the sake of brevity. In addition, the details of feature extraction described in more detail with respect to fig. 5 are not repeated herein for the sake of brevity. However, it should be appreciated that the sleep/wake classification of process 400 and the sleep state classification of process 800 may depend on different sets of extracted features. For example, the sleep state classification of process 800 may use some features not used for the sleep/wake classification of process 400 (or vice versa).
At 835, inputs to the sleep state classifier may be assembled. The sleep state classifier inputs may be assembled from the features for N epochs and may correspond to a longer duration (e.g., 5 minutes, 10 minutes, etc.). In some examples, sleep state classifier inputs may be assembled from the features for the N epochs of an entire sleep session. In some examples, the input may include N x M features, where M features are extracted for each of the N epochs. In some examples, the N epochs include an epoch of interest (e.g., the epoch to which the output classification applies) and N-1 epochs before and/or after the epoch of interest. In some examples, (N-1)/2 epochs before the epoch of interest and (N-1)/2 epochs after the epoch of interest are used. In some examples, the N-1 epochs may be unevenly distributed on both sides of the epoch of interest (e.g., 75% before and 25% after the epoch of interest). In some examples, N-1 epochs before the epoch of interest are used. In some examples, the input may be compressed to reduce the number of features. For example, features from multiple epochs may be reduced by sum pooling the features over k consecutive epochs, reducing the input to (N/k) x M features. Buffers may be used to store data corresponding to the longer duration (raw and/or filtered/downsampled acceleration data and/or extracted features) such that sufficient data is available as input to the sleep state classifier.
In some examples, the features may also be scaled at 840. For example, the extracted features may have different ranges (e.g., maximum and minimum values), among other characteristics. In some examples, scaling may transform the range of one or more features. In some examples, scaling may transform the range of each of the features to be the same (e.g., a common range). In some examples, scaling may include using a hyperbolic tangent function to map the range of values for a given feature to (-1, 1). In some examples, scaling may map the minimum and maximum values to the 1st and 95th percentile values, and outliers beyond the 95th percentile may fall outside the nominal range of values (e.g., greater than 1 or less than -1). In some examples, outliers may be handled more carefully by the machine learning model or may reduce confidence in the output of the machine learning model. It should be understood that a range scaled to values between -1 and 1 is a representative range, and other ranges may be used (and optionally different ranges may be used for different features). In addition, it should be appreciated that scaling may be achieved without using a hyperbolic tangent function. For example, scaling may be achieved using mean normalization or scaling to unit length, among other possibilities.
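A sketch of percentile-anchored hyperbolic tangent scaling follows; the exact mapping of percentiles onto the (-1, 1) range is an assumption, as the document does not specify it:

```python
import numpy as np

def tanh_scale(feature, lo_pct=1, hi_pct=95):
    """Squash a feature toward (-1, 1) with a hyperbolic tangent.

    Values at the lo/hi percentiles land near the ends of the useful range;
    outliers beyond them saturate instead of dominating the classifier input.
    """
    lo, hi = np.percentile(feature, [lo_pct, hi_pct])
    center = (hi + lo) / 2.0
    half_span = max((hi - lo) / 2.0, 1e-12)
    return np.tanh((feature - center) / half_span)
```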
At 845, the classifier inputs can be processed with an ML model, such as a Long Short-Term Memory (LSTM) artificial neural network. In some examples, the LSTM neural network may be implemented as a bi-directional LSTM (BiLSTM) neural network (also referred to herein as a BiLSTM machine learning model). The bi-directional LSTM neural network may process data both from the end of the session to the start of the session and from the start of the session to the end of the session. In some examples, the BiLSTM neural network includes one or more dense layers (also referred to as fully connected layers). In some examples, a first dense layer may be included to transform the classifier inputs prior to providing them to the one or more BiLSTM layers. In some examples, the first dense layer may increase the input dimension (e.g., the input dimension may increase from the M extracted features). In some examples, a second dense layer may be included to transform the output of the BiLSTM layer. In some examples, the second dense layer may reduce the dimension of the output (e.g., combine information into a smaller dimension). While a first dense layer is described before the BiLSTM layer and a second dense layer after the BiLSTM layer, it should be appreciated that multiple dense layers may be used to increase or decrease the dimension of the input to or output from the BiLSTM layer. In some examples, the second dense layer reduces the output of the BiLSTM layer to the same dimension as the assembled classifier input prior to the first dense layer. In some examples, a SoftMax layer is included to generate output probabilities from the output of the BiLSTM layer (e.g., after the one or more dense layers). In some examples, a third dense layer subsequent to the second dense layer further reduces the dimension of the output from the second dense layer to improve the predictions made by the SoftMax layer. It should be appreciated that LSTM and BiLSTM neural networks are merely examples of ML models, and that other models may be used, such as gradient boosted trees, convolutional neural networks, random forests, logistic regression, support vector machines, and the like.
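A PyTorch sketch of the dense -> BiLSTM -> dense -> SoftMax arrangement described above follows; the layer sizes and number of states are illustrative, as the document does not publish them:

```python
import torch
import torch.nn as nn

class SleepStageNet(nn.Module):
    """Dense -> BiLSTM -> dense -> softmax sketch of the classifier above."""

    def __init__(self, n_features=20, hidden=64, n_states=4):
        super().__init__()
        self.expand = nn.Linear(n_features, 2 * n_features)   # first dense layer
        self.bilstm = nn.LSTM(2 * n_features, hidden,
                              batch_first=True, bidirectional=True)
        self.reduce = nn.Linear(2 * hidden, n_features)       # second dense layer
        self.head = nn.Linear(n_features, n_states)           # third dense layer

    def forward(self, x):                     # x: (batch, n_epochs, n_features)
        h = torch.relu(self.expand(x))
        h, _ = self.bilstm(h)                 # processes the session in both directions
        h = torch.relu(self.reduce(h))
        return torch.softmax(self.head(h), dim=-1)  # per-epoch state probabilities

# Per-epoch hard states (argmax over probabilities), as described at 850:
# probs = SleepStageNet()(session_features)
# states = probs.argmax(dim=-1)
```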
In some examples, the output of the ML model may be a set of confidence values, each representing the probability (between 0 and 1) that the user is in a particular sleep state. In some examples, the ML model may output a confidence value for each time period corresponding to the duration of an epoch (e.g., using a sliding window over a data buffer) and for each supported sleep state (optionally excluding the awake state). For example, when the system supports five sleep states (e.g., wake, REM sleep, non-REM sleep stage one, non-REM sleep stage two, and non-REM sleep stage three), the output may include five probabilities for each epoch. As another example, when the system supports four sleep states (e.g., wake, REM sleep, non-REM sleep stage one, and non-REM sleep stage two/three), the output may include four probabilities for each epoch. The probabilities across the supported sleep states for each epoch may sum to 1. The output of the ML model may be represented as an array of confidence values for each of the supported sleep states and for each epoch of data (optionally calculated using a sliding window as described herein).
At 850, a maximum function may be applied to the output of the ML model to detect a highest probability sleep state for the epoch. For example, an awake state may be detected when the confidence value for the awake state is greatest, a REM sleep state may be detected when the confidence value for the REM sleep state is greatest, a non-REM sleep state stage one may be detected when the confidence value for the non-REM sleep state stage one is greatest, and so on. The output after maximization may be an array of sleep state classifications (per epoch). An array of sleep state classifications may be displayed (optionally with some post-processing and according to quality check) as sleep state intervals (e.g., a series of sleep states and wake state periods) as described herein.
As described above, aspects of the present technology include the collection and use of physiological information. The technology may involve the collection of personal data that relates to the user's health and/or that uniquely identifies or can otherwise be used to contact or locate a specific person. Such personal data may include demographic data, date of birth, location-based data, telephone numbers, email addresses, home addresses, and data or records related to the user's health or level of fitness (e.g., vital sign measurements, medication information, exercise information, etc.).
The present disclosure recognizes that personal data of a user (including physiological information, such as data generated and used by the present technology) may be used to benefit the user. For example, assessing a user's sleep condition (e.g., to determine the user's resting/active state and/or sleep/awake state) may allow the user to track or otherwise obtain insight regarding their health.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal data will adhere to established privacy policies and/or privacy practices. In particular, such entities should exercise and adhere to privacy policies and practices that are recognized as meeting or exceeding industry or government requirements for maintaining the privacy and security of personal information data. Such policies should be readily accessible to the user and should be updated as the collection and/or use of the data changes. Personal information from users should be collected for legal and reasonable use by entities and not shared or sold outside of these legal uses. Furthermore, such collection/sharing should require informed consent from the user. In addition, such entities should consider taking any necessary steps to defend and secure access to such personal information data and to ensure that others who have access to personal information data adhere to their privacy policies and procedures. In addition, such entities may subject themselves to third party evaluations to prove compliance with widely accepted privacy policies and practices. These policies and practices may be adjusted according to the geographic region and/or the particular type and nature of personal data being collected and used.
In spite of the foregoing, the present disclosure also contemplates embodiments in which a user selectively prevents collection, use, or access of personal data including physiological information. For example, the user may be able to disable hardware and/or software elements that collect physiological information. Additionally, the present disclosure contemplates that hardware and/or software elements may be provided to prevent or block access to collected personal data. In particular, the user may choose to remove, disable, or limit access to certain health-related applications that collect the user's personal health or fitness data.
Thus, in light of the foregoing, some examples of the disclosure relate to a method. The method may include: for each of a plurality of epochs, extracting a first plurality of features from first motion data from a multichannel motion sensor, and classifying a state for each of the plurality of epochs as one of a plurality of sleep states (e.g., sleep state or wake state, or sleep states) using the first plurality of features for the plurality of epochs. The first plurality of features may include: one or more first motion characteristics; one or more temporal respiration features extracted from a first channel of a first motion data stream derived from the first motion data, the first channel corresponding to a selected channel of the multi-channel motion sensor; and one or more frequency-domain respiration features extracted from a second channel of a second motion data stream derived from the first motion data, the second channel corresponding to the selected channel of the multi-channel motion sensor. Additionally or alternatively to one or more of the examples disclosed above, in some examples the multi-channel motion sensor may include a tri-axial accelerometer. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: the first motion data is filtered using a high pass filter. The one or more first motion features may be extracted from the first motion data after filtering using the high pass filter. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: the first motion data is filtered using a band pass filter to generate the first motion data stream. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: filtering the first motion data using a low pass filter; and downsampling the first motion data from a first sampling rate to a second sampling rate that is lower than the first sampling rate. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: for each period: converting the first motion data into a first frequency domain representation for a first channel of the multi-channel motion sensor, a second frequency domain representation for a second channel of the multi-channel motion sensor, and a third frequency domain representation for a third channel of the multi-channel motion sensor; and calculating a first signal-to-noise ratio using the first frequency domain representation, a second signal-to-noise ratio using the second frequency domain representation, and a third signal-to-noise ratio using the third frequency domain representation. The selected channel may correspond to a respective channel of the first channel, the second channel, or the third channel having a largest signal-to-noise ratio of the first signal-to-noise ratio, the second signal-to-noise ratio, and the third signal-to-noise ratio. 
Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: filtering the first motion data using a band pass filter to generate the first motion data stream; calculating a plurality of variances for each of a plurality of windows of the first motion data stream, the plurality of variances including a variance for each channel of the multi-channel motion sensor and a maximum variance of the plurality of variances; and excluding samples corresponding to a respective window of the plurality of windows from the first channel of the first motion data stream based on a determination that the maximum variance for the respective window exceeds a threshold. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the classification may be performed by a logistic regression machine learning model. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: extracting, for each window of the plurality of windows, a second plurality of features from second motion data from the multi-channel motion sensor; classifying the second plurality of features to estimate a plurality of rest state confidence levels, each of the plurality of rest state confidence levels corresponding to one of the plurality of windows; and measuring the first motion data from the multi-channel motion sensor based on a determination that the plurality of rest state confidence levels meet one or more first criteria. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more first criteria may include a criterion that is met when a threshold number of the plurality of rest state confidence levels corresponding to consecutive windows exceeds a confidence level threshold. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: extracting the second plurality of features from the second motion data based on one or more second criteria being met; and forgoing the extraction of the second plurality of features from the second motion data based on failing to meet the one or more second criteria. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more second criteria include: a first criterion that is met within a threshold period of time prior to a user-specified bedtime; a second criterion that is met when a device comprising the multi-channel motion sensor is not being charged; and/or a third criterion that is met when the device comprising the multi-channel motion sensor is detected in contact with a body part. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: sum pooling the second plurality of features over multiple windows of the plurality of windows.
Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: extracting, for each window of a second plurality of windows, a third plurality of features from third motion data from the multi-channel motion sensor; classifying the third plurality of features to estimate a second plurality of rest state confidences, each rest state confidence of the second plurality of rest state confidences corresponding to one of the second plurality of windows; and ceasing to measure the first motion data from the multi-channel motion sensor in accordance with a determination that the second plurality of rest state confidences meet one or more second criteria. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: classifying the first motion data as either qualified data or non-qualified data using a subset of the first plurality of features. The subset may include at least one of the one or more first motion features, at least one of the one or more temporal respiration features, and at least one of the one or more frequency-domain respiration features. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: in accordance with classifying the first motion data as qualified data, storing or displaying sleep intervals based on the classification for each of the plurality of epochs. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: using the classification for each of the plurality of epochs to identify one or more sleep intervals of consecutive epochs classified as sleep states; and reclassifying the consecutive epochs of a respective sleep interval from the sleep state to the awake state in accordance with the respective sleep interval of the one or more sleep intervals being shorter than a threshold number of consecutive epochs. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: using the first motion data to estimate a transition from a first motion state to a second motion state. The second motion state may correspond to reduced motion relative to the first motion state. Additionally or alternatively to one or more of the examples disclosed above, in some examples estimating the transition may comprise: calculating a logarithmic scale of a motion feature of the one or more first motion features extracted from the first motion data for each of the plurality of epochs; median filtering the logarithmic-scale motion feature across the plurality of epochs; and estimating the transition at the epoch at which the median-filtered logarithmic-scale motion feature falls below a threshold.
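The transition estimate lends itself to a short sketch. The sketch assumes one motion-activity value per epoch (for example, a mean absolute band-passed acceleration); the median-filter kernel and threshold are illustrative assumptions rather than values from this disclosure.

```python
import numpy as np
from scipy.signal import medfilt

def estimate_rest_transition(motion_feature, kernel=9, threshold=-4.0):
    """Estimate the epoch at which the wearer settles from an active
    state into a reduced-motion state.

    motion_feature: one non-negative motion value per epoch.
    Returns the first epoch index where the median-filtered log-scale
    feature falls below the threshold, or None if it never does.
    """
    log_feat = np.log(np.asarray(motion_feature) + 1e-9)  # logarithmic scale
    smoothed = medfilt(log_feat, kernel_size=kernel)      # median filter
    below = np.nonzero(smoothed < threshold)[0]
    return int(below[0]) if below.size else None
```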
Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: using the classification for each of the plurality of epochs to identify one or more sleep intervals of consecutive epochs classified as sleep states; and reclassifying the consecutive epochs of a respective sleep interval from the sleep state to the awake state in accordance with the respective sleep interval of the one or more sleep intervals prior to the estimated transition being shorter than a threshold number of consecutive epochs and having a sleep density less than a density threshold.
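One plausible reading of this density check is sketched below. Here "sleep density" is taken as the fraction of epochs classified asleep between the interval's start and the estimated transition; that definition, the minimum length, and the threshold are all assumptions of the sketch, not details stated in the disclosure.

```python
def reclassify_sparse_pre_transition(labels, transition_epoch,
                                     min_len=20, density_threshold=0.6):
    """Relabel short, low-density sleep intervals that occur before the
    estimated rest transition as awake.

    labels: per-epoch list of 'sleep'/'wake' labels (modified in place).
    """
    runs, start = [], None
    for i, lab in enumerate(labels + ['wake']):  # sentinel closes the last run
        if lab == 'sleep' and start is None:
            start = i
        elif lab != 'sleep' and start is not None:
            runs.append((start, i))
            start = None
    for s, e in runs:
        if e <= transition_epoch and (e - s) < min_len:
            span = labels[s:transition_epoch]
            density = span.count('sleep') / max(len(span), 1)
            if density < density_threshold:
                labels[s:e] = ['wake'] * (e - s)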
Some examples of the disclosure relate to non-transitory computer-readable storage media. The non-transitory computer readable storage medium may store instructions that, when executed by an electronic device comprising processing circuitry, may cause the processing circuitry to perform any of the methods described above. Some examples of the present disclosure relate to an electronic device including: a processing circuit; a memory; and one or more programs. The one or more programs may be stored in the memory and configured to be executed by the processing circuitry. The one or more programs may include instructions for performing any of the methods described above.
Some examples of the present disclosure relate to an electronic device. The electronic device may include: a motion sensor (e.g., a multi-channel motion sensor) and a processing circuit coupled to the motion sensor. The processing circuit may be programmed to: extract a first plurality of features from first motion data from the multi-channel motion sensor for each of a plurality of epochs, and classify a state for each of the plurality of epochs as one of a plurality of sleep states using the first plurality of features for the plurality of epochs. The first plurality of features may include: one or more first motion features; one or more temporal respiration features extracted from a first channel of a first motion data stream derived from the first motion data, the first channel corresponding to a selected channel of the multi-channel motion sensor; and one or more frequency-domain respiration features extracted from a second channel of a second motion data stream derived from the first motion data, the second channel corresponding to the selected channel of the multi-channel motion sensor. Additionally or alternatively to one or more of the examples disclosed above, in some examples the motion sensor comprises a tri-axial accelerometer. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: filter the first motion data using a high pass filter. The one or more first motion features may be extracted from the first motion data after filtering using the high pass filter. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: filter the first motion data using a band pass filter to generate the first motion data stream. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: filter the first motion data using a low pass filter; and downsample the first motion data from a first sampling rate to a second sampling rate that is lower than the first sampling rate. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to, for each epoch: convert the first motion data into a first frequency domain representation for a first channel of the multi-channel motion sensor, a second frequency domain representation for a second channel of the multi-channel motion sensor, and a third frequency domain representation for a third channel of the multi-channel motion sensor; and calculate a first signal-to-noise ratio using the first frequency domain representation, a second signal-to-noise ratio using the second frequency domain representation, and a third signal-to-noise ratio using the third frequency domain representation. The selected channel may correspond to the respective channel of the first channel, the second channel, or the third channel having the largest signal-to-noise ratio of the first signal-to-noise ratio, the second signal-to-noise ratio, and the third signal-to-noise ratio.
Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: filter the first motion data using a band pass filter to generate the first motion data stream; calculate a plurality of variances for each of a plurality of windows of the first motion data stream, the plurality of variances including a variance for each channel of the multi-channel motion sensor and a maximum variance of the plurality of variances; and exclude samples corresponding to a respective window of the plurality of windows from the first channel of the first motion data stream in accordance with a determination that the maximum variance for the respective window exceeds a threshold. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuitry may include machine learning circuitry. The classification may be performed by a logistic regression machine learning model. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: extract, for each window of a plurality of windows, a second plurality of features from second motion data from the multi-channel motion sensor; classify the second plurality of features to estimate a plurality of rest state confidences, each of the plurality of rest state confidences corresponding to one of the plurality of windows; and measure the first motion data from the multi-channel motion sensor in accordance with a determination that the plurality of rest state confidences meet one or more first criteria. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more first criteria may include a criterion that is met when a threshold number of the plurality of rest state confidences corresponding to consecutive windows exceed a confidence threshold. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: extract the second plurality of features from the second motion data in accordance with one or more second criteria being met; and forgo extraction of the second plurality of features from the second motion data in accordance with a failure to meet the one or more second criteria. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more second criteria may include: a first criterion that is met within a threshold period of time prior to a user-specified bedtime; a second criterion that is met when a device comprising the multi-channel motion sensor is not being charged; and/or a third criterion that is met when the device comprising the multi-channel motion sensor is detected to be in contact with a body part. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: sum-pool the second plurality of features across multiple windows of the plurality of windows.
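The consecutive-window gate for starting a session can be sketched as below; the run length and confidence threshold are illustrative assumptions, and the per-window confidences are taken as the output of whatever rest-state classifier is in use.

```python
from collections import deque

def should_start_session(confidences, run_length=5, conf_threshold=0.8):
    """confidences: iterable of per-window rest-state confidences in [0, 1].
    Returns True once run_length consecutive windows exceed the threshold."""
    recent = deque(maxlen=run_length)
    for c in confidences:
        recent.append(c > conf_threshold)
        if len(recent) == run_length and all(recent):
            return True
    return False
```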
Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: extract, for each window of a second plurality of windows, a third plurality of features from third motion data from the multi-channel motion sensor; classify the third plurality of features to estimate a second plurality of rest state confidences, each rest state confidence of the second plurality of rest state confidences corresponding to one of the second plurality of windows; and cease to measure the first motion data from the multi-channel motion sensor in accordance with a determination that the second plurality of rest state confidences meet one or more second criteria. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: classify the first motion data as either qualified data or non-qualified data using a subset of the first plurality of features. The subset may include at least one of the one or more first motion features, at least one of the one or more temporal respiration features, and at least one of the one or more frequency-domain respiration features. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: in accordance with classifying the first motion data as qualified data, store and/or display sleep intervals based on the classification for each of the plurality of epochs. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: use the classification for each of the plurality of epochs to identify one or more sleep intervals of consecutive epochs classified as sleep states; and reclassify the consecutive epochs of a respective sleep interval from the sleep state to the awake state in accordance with the respective sleep interval of the one or more sleep intervals being shorter than a threshold number of consecutive epochs. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: use the first motion data to estimate a transition from a first motion state to a second motion state. The second motion state may correspond to reduced motion relative to the first motion state. Additionally or alternatively to one or more of the examples disclosed above, in some examples estimating the transition may include: calculating a logarithmic scale of a motion feature of the one or more first motion features extracted from the first motion data for each of the plurality of epochs; median filtering the logarithmic-scale motion feature across the plurality of epochs; and estimating the transition at the epoch at which the median-filtered logarithmic-scale motion feature falls below a threshold.
Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: use the classification for each of the plurality of epochs to identify one or more sleep intervals of consecutive epochs classified as sleep states; and reclassify the consecutive epochs of a respective sleep interval from the sleep state to the awake state in accordance with the respective sleep interval of the one or more sleep intervals prior to the estimated transition being shorter than a threshold number of consecutive epochs and having a sleep density less than a density threshold.
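The short-interval cleanup shared by both of the above example families reduces to run-length smoothing of the per-epoch labels; the minimum run length below is an assumed value.

```python
def smooth_short_sleep_runs(labels, min_run=3):
    """Reclassify runs of consecutive 'sleep' epochs shorter than
    min_run back to 'wake'. Returns a new list."""
    out = list(labels)
    i = 0
    while i < len(out):
        if out[i] == 'sleep':
            j = i
            while j < len(out) and out[j] == 'sleep':
                j += 1               # j is one past the end of the run
            if j - i < min_run:
                out[i:j] = ['wake'] * (j - i)
            i = j
        else:
            i += 1
    return out
```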
Some examples of the present disclosure relate to an electronic device. The electronic device may include: a motion sensor (e.g., a multi-channel motion sensor) and a processing circuit coupled to the motion sensor. The processing circuit may be programmed to: extract a first plurality of features from first motion data from the multi-channel motion sensor for each of a plurality of epochs in a session, and, in accordance with a determination that one or more first criteria are met, classify a state for each of the plurality of epochs as one of a plurality of sleep states using the first plurality of features for the plurality of epochs. The first plurality of features may include: one or more first motion features; one or more temporal respiration features extracted from a first channel of a first motion data stream derived from the first motion data, the first channel corresponding to a selected channel of the multi-channel motion sensor; and one or more frequency-domain respiration features extracted from a second channel of a second motion data stream derived from the first motion data, the second channel corresponding to the selected channel of the multi-channel motion sensor. The plurality of sleep states may include a first sleep state corresponding to an awake state, a second sleep state corresponding to a rapid eye movement sleep state, and a third sleep state corresponding to one or more non-rapid eye movement sleep states. Additionally or alternatively to one or more of the examples disclosed above, in some examples the third sleep state may correspond to a first stage non-rapid eye movement sleep state. The plurality of sleep states may include a fourth sleep state corresponding to a second stage non-rapid eye movement sleep state and a third stage non-rapid eye movement sleep state. Additionally or alternatively to one or more of the examples disclosed above, in some examples the third sleep state may correspond to a first stage non-rapid eye movement sleep state. The plurality of sleep states may include a fourth sleep state corresponding to a second stage non-rapid eye movement sleep state, and the plurality of sleep states may include a fifth sleep state corresponding to a third stage non-rapid eye movement sleep state. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: forgo classifying the state for each of the plurality of epochs in accordance with a determination that the one or more first criteria are not met. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more first criteria may include a criterion that is met when the session is longer than a threshold duration. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more first criteria may include a criterion that is met when the electronic device including the multi-channel motion sensor is detected to be in contact with a body part during the session.
Additionally or alternatively to one or more of the examples disclosed above, in some examples, detecting that the electronic device including the multi-channel motion sensor is in contact with the body part during the session may be based on a subset of the first plurality of features, the subset including at least one of the one or more first motion features, at least one of the one or more temporal respiration features, and at least one of the one or more frequency-domain respiration features. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: in accordance with a determination that one or more second criteria are met, store or display sleep intervals based on the classification for each of the plurality of epochs. The sleep intervals may include a sleep interval corresponding to the first sleep state, a sleep interval corresponding to the second sleep state, and a sleep interval corresponding to the third sleep state. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more second criteria may include a criterion that is met when a total duration of the epochs classified as different from the first sleep state is greater than a threshold duration. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more second criteria may include a criterion that is met when a ratio of a total duration of the epochs classified as corresponding to the second sleep state to the total duration of the epochs classified as different from the first sleep state is less than a first threshold ratio. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more second criteria may include a criterion that is met when a ratio of a total duration of the epochs classified as corresponding to the third sleep state to the total duration of the epochs classified as different from the first sleep state is less than a second threshold ratio. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: in accordance with a determination that one or more third criteria are met, store or display sleep intervals based on the classification for each of the plurality of epochs. A sleep interval corresponding to the second sleep state and a sleep interval corresponding to the third sleep state may be combined. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more third criteria may include a criterion that is met when: a total duration of the epochs classified as different from the first sleep state is less than a threshold duration; a ratio of the total duration of the epochs classified as corresponding to the second sleep state to the total duration of the epochs classified as different from the first sleep state is greater than a first threshold ratio; or a ratio of the total duration of the epochs classified as corresponding to the third sleep state to the total duration of the epochs classified as different from the first sleep state is greater than a second threshold ratio.
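The store-or-display gating described above reduces to a few duration ratios. The sketch below evaluates an assumed version of the second criteria (report full staging) against the third criteria (merge the rapid and non-rapid eye movement intervals into a single asleep interval); the label names, epoch length, and thresholds are placeholders, and because the thresholds are shared between the two criteria sets, one of the two reporting modes is always chosen here whenever any asleep epoch exists.

```python
def staging_report_mode(epoch_labels, epoch_seconds=30.0,
                        min_asleep_s=3 * 3600.0,
                        rem_ratio_max=0.5, nrem1_ratio_max=0.7):
    """Decide how to report a session's results.

    epoch_labels: per-epoch stage labels, assumed to be drawn from
    {'wake', 'rem', 'nrem1', 'nrem2'} (names are illustrative).
    Returns 'staged' (show per-stage intervals), 'merged' (combine the
    sleep-stage intervals into one asleep interval), or 'withhold'.
    """
    asleep = [lab for lab in epoch_labels if lab != 'wake']
    if not asleep:
        return 'withhold'
    asleep_s = len(asleep) * epoch_seconds
    rem_ratio = asleep.count('rem') / len(asleep)
    nrem1_ratio = asleep.count('nrem1') / len(asleep)
    if (asleep_s > min_asleep_s and rem_ratio < rem_ratio_max
            and nrem1_ratio < nrem1_ratio_max):
        return 'staged'   # the assumed "second criteria" are met
    return 'merged'       # otherwise the assumed "third criteria" hold
```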
Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: forgo storing or displaying the sleep intervals based on the classification for each of the plurality of epochs in accordance with a determination that the one or more second criteria and the one or more third criteria are not met. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuitry may include machine learning circuitry. The classification may be performed by a bidirectional long short-term memory machine learning model. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: scale the first plurality of features to a common range of values for use by the bidirectional long short-term memory machine learning model. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the machine learning circuitry may be configured to output a probability of each of the plurality of sleep states for each of the plurality of epochs and to classify the state for each of the plurality of epochs using a maximum of the probabilities of the plurality of sleep states for that epoch. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: identify, using the classification for each of the plurality of epochs, a first sleep interval of consecutive epochs classified as a respective sleep state of the plurality of sleep states, the first sleep interval being preceded by a second sleep interval of consecutive epochs classified as a different respective sleep state and followed by a third sleep interval of consecutive epochs classified as the different respective sleep state; and reclassify the consecutive epochs of the first sleep interval from the respective sleep state to the different respective sleep state in accordance with the first sleep interval being shorter than a threshold number of consecutive epochs. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: use the first motion data to estimate a transition from a first motion state to a second motion state. The second motion state may correspond to reduced motion relative to the first motion state. Additionally or alternatively to one or more of the examples disclosed above, in some examples estimating the transition may include: calculating a logarithmic scale of a motion feature of the one or more first motion features extracted from the first motion data for each of the plurality of epochs; median filtering the logarithmic-scale motion feature across the plurality of epochs; and estimating the transition at the epoch at which the median-filtered logarithmic-scale motion feature falls below a threshold.
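A minimal PyTorch sketch of this stage classifier is shown below: features are min-max scaled to a common range, run through a bidirectional LSTM over the session's epochs, and the per-epoch stage is taken as the argmax of the output probabilities. The layer sizes, feature count, and number of stages are illustrative assumptions; this is not the disclosure's trained model.

```python
import torch
import torch.nn as nn

class SleepStager(nn.Module):
    def __init__(self, n_features=12, hidden=32, n_stages=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_stages)  # 2x: both directions

    def forward(self, x):            # x: (batch, epochs, features)
        out, _ = self.lstm(x)
        return self.head(out)        # per-epoch stage logits

def classify_session(model, features):
    """features: (epochs, n_features) tensor of per-epoch features."""
    mins = features.min(dim=0, keepdim=True).values
    maxs = features.max(dim=0, keepdim=True).values
    scaled = (features - mins) / (maxs - mins + 1e-9)  # common [0, 1] range
    with torch.no_grad():
        probs = model(scaled.unsqueeze(0)).softmax(dim=-1)
    return probs.argmax(dim=-1).squeeze(0)  # per-epoch stage index
```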
Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: use the classification for each of the plurality of epochs to identify one or more sleep intervals of consecutive epochs classified as the second sleep state or the third sleep state; and reclassify the consecutive epochs of a respective sleep interval from the second sleep state or the third sleep state to the first sleep state in accordance with the respective sleep interval of the one or more sleep intervals prior to the transition being shorter than a threshold number of consecutive epochs and having a sleep density less than a sleep density threshold. Additionally or alternatively to one or more of the examples disclosed above, in some examples the multi-channel motion sensor comprises a tri-axial accelerometer. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: filter the first motion data using a high pass filter. The one or more first motion features may be extracted from the first motion data after filtering using the high pass filter. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: filter the first motion data using a band pass filter to generate the first motion data stream. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: filter the first motion data using a low pass filter; and downsample the first motion data from a first sampling rate to a second sampling rate that is lower than the first sampling rate. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to, for each epoch: convert the first motion data into a first frequency domain representation for a first channel of the multi-channel motion sensor, a second frequency domain representation for a second channel of the multi-channel motion sensor, and a third frequency domain representation for a third channel of the multi-channel motion sensor. Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to, for each epoch: calculate a first signal-to-noise ratio using the first frequency domain representation, a second signal-to-noise ratio using the second frequency domain representation, and a third signal-to-noise ratio using the third frequency domain representation. The selected channel may correspond to the respective channel of the first channel, the second channel, or the third channel having the largest signal-to-noise ratio of the first signal-to-noise ratio, the second signal-to-noise ratio, and the third signal-to-noise ratio.
Additionally or alternatively to one or more of the examples disclosed above, in some examples the processing circuit may be further programmed to: filter the first motion data using a band pass filter to generate the first motion data stream; calculate a plurality of variances for each of a plurality of windows of the first motion data stream, the plurality of variances including a variance for each channel of the multi-channel motion sensor and a maximum variance of the plurality of variances; and exclude samples corresponding to a respective window of the plurality of windows from the first channel of the first motion data stream in accordance with a determination that the maximum variance for the respective window exceeds a threshold.
Some examples of the disclosure relate to a method. The method may include: for each of a plurality of epochs in a session, extracting a first plurality of features from first motion data from a multi-channel motion sensor. The first plurality of features may include: one or more first motion features; one or more temporal respiration features extracted from a first channel of a first motion data stream derived from the first motion data, the first channel corresponding to a selected channel of the multi-channel motion sensor; and one or more frequency-domain respiration features extracted from a second channel of a second motion data stream derived from the first motion data, the second channel corresponding to the selected channel of the multi-channel motion sensor. The method may include: in accordance with a determination that one or more first criteria are met, classifying a state for each of the plurality of epochs as one of a plurality of sleep states using the first plurality of features for the plurality of epochs. The plurality of sleep states may include a first sleep state corresponding to an awake state, a second sleep state corresponding to a rapid eye movement sleep state, and a third sleep state corresponding to one or more non-rapid eye movement sleep states. Additionally or alternatively to one or more of the examples disclosed above, in some examples the third sleep state may correspond to a first stage non-rapid eye movement sleep state. The plurality of sleep states may include a fourth sleep state corresponding to a second stage non-rapid eye movement sleep state and a third stage non-rapid eye movement sleep state. Additionally or alternatively to one or more of the examples disclosed above, in some examples the third sleep state may correspond to a first stage non-rapid eye movement sleep state. The plurality of sleep states may include a fourth sleep state corresponding to a second stage non-rapid eye movement sleep state, and the plurality of sleep states may include a fifth sleep state corresponding to a third stage non-rapid eye movement sleep state. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: forgoing the classification of the state for each of the plurality of epochs in accordance with a determination that the one or more first criteria are not met. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more first criteria may include a criterion that is met when the session is longer than a threshold duration. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more first criteria may include a criterion that is met when the electronic device including the multi-channel motion sensor is detected to be in contact with a body part during the session. Additionally or alternatively to one or more of the examples disclosed above, in some examples, detecting that the electronic device including the multi-channel motion sensor is in contact with the body part during the session may be based on a subset of the first plurality of features, the subset including at least one of the one or more first motion features, at least one of the one or more temporal respiration features, and at least one of the one or more frequency-domain respiration features.
Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: in accordance with a determination that one or more second criteria are met, storing or displaying sleep intervals based on the classification for each of the plurality of epochs. The sleep intervals may include a sleep interval corresponding to the first sleep state, a sleep interval corresponding to the second sleep state, and a sleep interval corresponding to the third sleep state. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more second criteria may include a criterion that is met when a total duration of the epochs classified as different from the first sleep state is greater than a threshold duration. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more second criteria may include a criterion that is met when a ratio of a total duration of the epochs classified as corresponding to the second sleep state to the total duration of the epochs classified as different from the first sleep state is less than a first threshold ratio. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more second criteria may include a criterion that is met when a ratio of a total duration of the epochs classified as corresponding to the third sleep state to the total duration of the epochs classified as different from the first sleep state is less than a second threshold ratio. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: in accordance with a determination that one or more third criteria are met, storing or displaying sleep intervals based on the classification for each of the plurality of epochs. A sleep interval corresponding to the second sleep state and a sleep interval corresponding to the third sleep state may be combined. Additionally or alternatively to one or more of the examples disclosed above, in some examples the one or more third criteria may include a criterion that is met when: a total duration of the epochs classified as different from the first sleep state is less than a threshold duration; a ratio of the total duration of the epochs classified as corresponding to the second sleep state to the total duration of the epochs classified as different from the first sleep state is greater than a first threshold ratio; or a ratio of the total duration of the epochs classified as corresponding to the third sleep state to the total duration of the epochs classified as different from the first sleep state is greater than a second threshold ratio. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: forgoing storing or displaying the sleep intervals based on the classification for each of the plurality of epochs in accordance with a determination that the one or more second criteria and the one or more third criteria are not met. Additionally or alternatively to one or more of the examples disclosed above, in some examples, the classification may be performed by a bidirectional long short-term memory machine learning model.
Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: scaling the first plurality of features to a common range of values for use by the bidirectional long short-term memory machine learning model. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: estimating a probability of each of the plurality of sleep states for each of the plurality of epochs, and classifying the state for each of the plurality of epochs using a maximum of the probabilities of the plurality of sleep states for that epoch. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: identifying, using the classification for each of the plurality of epochs, a first sleep interval of consecutive epochs classified as a respective sleep state of the plurality of sleep states, the first sleep interval being preceded by a second sleep interval of consecutive epochs classified as a different respective sleep state and followed by a third sleep interval of consecutive epochs classified as the different respective sleep state; and reclassifying the consecutive epochs of the first sleep interval from the respective sleep state to the different respective sleep state in accordance with the first sleep interval being shorter than a threshold number of consecutive epochs. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: using the first motion data to estimate a transition from a first motion state to a second motion state. The second motion state may correspond to reduced motion relative to the first motion state. Additionally or alternatively to one or more of the examples disclosed above, in some examples estimating the transition may include: calculating a logarithmic scale of a motion feature of the one or more first motion features extracted from the first motion data for each of the plurality of epochs; median filtering the logarithmic-scale motion feature across the plurality of epochs; and estimating the transition at the epoch at which the median-filtered logarithmic-scale motion feature falls below a threshold. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: using the classification for each of the plurality of epochs to identify one or more sleep intervals of consecutive epochs classified as the second sleep state or the third sleep state; and reclassifying the consecutive epochs of a respective sleep interval from the second sleep state or the third sleep state to the first sleep state in accordance with the respective sleep interval of the one or more sleep intervals prior to the transition being shorter than a threshold number of consecutive epochs and having a sleep density less than a sleep density threshold. Additionally or alternatively to one or more of the examples disclosed above, in some examples the multi-channel motion sensor comprises a tri-axial accelerometer. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: filtering the first motion data using a high pass filter.
The one or more first motion features may be extracted from the first motion data after filtering using the high pass filter. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: filtering the first motion data using a band pass filter to generate the first motion data stream. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: filtering the first motion data using a low pass filter; and downsampling the first motion data from a first sampling rate to a second sampling rate that is lower than the first sampling rate. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise, for each epoch: converting the first motion data into a first frequency domain representation for a first channel of the multi-channel motion sensor, a second frequency domain representation for a second channel of the multi-channel motion sensor, and a third frequency domain representation for a third channel of the multi-channel motion sensor. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise, for each epoch: calculating a first signal-to-noise ratio using the first frequency domain representation, a second signal-to-noise ratio using the second frequency domain representation, and a third signal-to-noise ratio using the third frequency domain representation. The selected channel may correspond to the respective channel of the first channel, the second channel, or the third channel having the largest signal-to-noise ratio of the first signal-to-noise ratio, the second signal-to-noise ratio, and the third signal-to-noise ratio. Additionally or alternatively to one or more of the examples disclosed above, in some examples the method may further comprise: filtering the first motion data using a band pass filter to generate the first motion data stream; calculating a plurality of variances for each of a plurality of windows of the first motion data stream, the plurality of variances including a variance for each channel of the multi-channel motion sensor and a maximum variance of the plurality of variances; and excluding samples corresponding to a respective window of the plurality of windows from the first channel of the first motion data stream in accordance with a determination that the maximum variance for the respective window exceeds a threshold.
Some examples of the disclosure relate to non-transitory computer-readable storage media. The non-transitory computer readable storage medium may store instructions that, when executed by an electronic device comprising processing circuitry, may cause the processing circuitry to perform any of the methods described above. Some examples of the present disclosure relate to an electronic device including: a processing circuit; a memory; and one or more programs. The one or more programs may be stored in the memory and configured to be executed by the processing circuitry. The one or more programs may include instructions for performing any of the methods described above.
Although examples of the present disclosure have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It is to be understood that such variations and modifications are to be considered included within the scope of the examples of the present disclosure as defined by the appended claims.

Claims (20)

1. A method, the method comprising:
extracting a first plurality of features from first motion data from a multi-channel motion sensor for each of a plurality of epochs in a session, wherein the first plurality of features comprises:
one or more first motion features;
one or more temporal respiration features extracted from a first channel of a first motion data stream derived from the first motion data, the first channel corresponding to a selected channel of the multi-channel motion sensor; and
one or more frequency-domain respiration features extracted from a second channel of a second motion data stream derived from the first motion data, the second channel corresponding to a selected channel of the multi-channel motion sensor; and
in accordance with a determination that one or more first criteria are met, classifying a state for each of the plurality of epochs, using the first plurality of features for the plurality of epochs, as one of a plurality of sleep states including a first sleep state corresponding to an awake state, a second sleep state corresponding to a rapid eye movement sleep state, and a third sleep state corresponding to one or more non-rapid eye movement sleep states.
2. The method of claim 1, wherein the third sleep state corresponds to a first stage non-rapid eye movement sleep state, and wherein the plurality of sleep states includes a fourth sleep state corresponding to a second stage non-rapid eye movement sleep state and a third stage non-rapid eye movement sleep state.
3. The method of claim 1, wherein the third sleep state corresponds to a first stage non-rapid eye movement sleep state, wherein the plurality of sleep states includes a fourth sleep state corresponding to a second stage non-rapid eye movement sleep state, and wherein the plurality of sleep states includes a fifth sleep state corresponding to a third stage non-rapid eye movement sleep state.
4. The method of claim 1, further comprising:
in accordance with a determination that the one or more first criteria are not met, forgoing the classification of the state for each of the plurality of epochs.
5. The method of claim 4, wherein the one or more first criteria comprise a criterion that is met when the session is longer than a threshold duration.
6. The method of claim 4, wherein the one or more first criteria include a criterion that is met when an electronic device comprising the multi-channel motion sensor is detected as being in contact with a body part during the session.
7. The method of claim 1, further comprising:
in accordance with a determination that one or more second criteria are met, storing or displaying sleep intervals based on the classification of each of the plurality of epochs, wherein the sleep intervals include a sleep interval corresponding to the first sleep state, a sleep interval corresponding to the second sleep state, and a sleep interval corresponding to the third sleep state.
8. The method of claim 7, wherein the one or more second criteria include a criterion that is met when a total duration of the epochs classified as different from the first sleep state is greater than a threshold duration.
9. The method of claim 7, further comprising:
in accordance with a determination that one or more third criteria are met, storing or displaying sleep intervals based on the classification for each of the plurality of epochs, wherein a sleep interval corresponding to the second sleep state and a sleep interval corresponding to the third sleep state are combined.
10. The method of claim 1, wherein the classifying is performed by a bidirectional long short-term memory machine learning model.
11. The method of claim 10, further comprising:
scaling the first plurality of features to a common range of values for use by the bidirectional long short-term memory machine learning model.
12. The method of claim 10, further comprising:
estimating a probability of each of the plurality of sleep states for each of the plurality of epochs, and classifying the state for each of the plurality of epochs using a maximum of the probabilities of the plurality of sleep states for that epoch.
13. The method of claim 1, further comprising:
identifying a first sleep interval of consecutive epochs classified as a respective one of the plurality of sleep states using the classification of each epoch of the plurality of epochs, the first sleep interval being preceded by a second sleep interval of consecutive epochs classified as a different respective sleep state, and the first sleep interval being followed by a third sleep interval of consecutive epochs classified as the different respective sleep state; and
reclassifying the consecutive epochs of the first sleep interval from the respective sleep state to the different respective sleep state in accordance with the first sleep interval being shorter than a threshold number of consecutive epochs.
14. The method of claim 1, wherein the multi-channel motion sensor comprises a tri-axial accelerometer.
15. The method of claim 1, further comprising:
filtering the first motion data using a high pass filter, wherein the one or more first motion features are extracted from the first motion data after the filtering using the high pass filter.
16. The method of claim 1, further comprising:
filtering the first motion data using a band pass filter to generate the first motion data stream.
17. The method of claim 1, further comprising:
filtering the first motion data using a low pass filter; and
downsampling the first motion data from a first sampling rate to a second sampling rate that is lower than the first sampling rate.
18. The method of claim 1, further comprising:
for each epoch:
converting the first motion data into a first frequency domain representation for a first channel of the multi-channel motion sensor, a second frequency domain representation for a second channel of the multi-channel motion sensor, and a third frequency domain representation for a third channel of the multi-channel motion sensor; and
calculating a first signal-to-noise ratio using the first frequency domain representation, a second signal-to-noise ratio using the second frequency domain representation, and a third signal-to-noise ratio using the third frequency domain representation;
wherein the selected channel corresponds to the respective channel of the first, second, or third channels having the largest signal-to-noise ratio of the first, second, and third signal-to-noise ratios.
19. A non-transitory computer-readable storage medium storing instructions which, when executed by an electronic device comprising processing circuitry, cause the processing circuitry to perform the method of any one of claims 1 to 18.
20. An electronic device, the electronic device comprising:
a multichannel motion sensor; and
processing circuitry coupled to the multi-channel motion sensor, the processing circuitry programmed to perform the method of any one of claims 1 to 18.
CN202310642638.4A 2022-06-03 2023-06-01 System and method for sleep state tracking Pending CN117158891A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/365,840 2022-06-03
US18/309,386 US20230389862A1 (en) 2022-06-03 2023-04-28 Systems and methods for sleep state tracking
US18/309,386 2023-04-28

Publications (1)

Publication Number Publication Date
CN117158891A true CN117158891A (en) 2023-12-05

Family

ID=88934329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310642638.4A Pending CN117158891A (en) 2022-06-03 2023-06-01 System and method for sleep state tracking

Country Status (1)

Country Link
CN (1) CN117158891A (en)

Similar Documents

Publication Publication Date Title
US11678838B2 (en) Automated detection of breathing disturbances
US10321871B2 (en) Determining sleep stages and sleep events using sensor data
KR102313552B1 (en) Apparatus and method for sleep monitoring
Al-Mardini et al. Classifying obstructive sleep apnea using smartphones
WO2018049852A1 (en) Sleep evaluation method, apparatus and system
EP2696754B1 (en) Stress-measuring device and method
US10194834B2 (en) Detection of sleep apnea using respiratory signals
Ni et al. Automated recognition of hypertension through overnight continuous HRV monitoring
CN108201435A (en) Sleep stage determines method, relevant device and computer-readable medium
WO2017067010A1 (en) Sleep evaluation display method and apparatus, and evaluation device
WO2021208656A1 (en) Sleep risk prediction method and apparatus, and terminal device
JP6813837B2 (en) Activity rhythm judgment method and activity rhythm judgment device
Altini et al. Cardiorespiratory fitness estimation using wearable sensors: Laboratory and free-living analysis of context-specific submaximal heart rates
Ahanathapillai et al. Assistive technology to monitor activity, health and wellbeing in old age: The wrist wearable unit in the USEFIL project
KR102588694B1 (en) Method of Determining Respiration Rate and Method and Apparatus for Determining Respiration State
Zhang et al. Sleep/wake classification via remote PPG signals
EP4285818A1 (en) Systems and methods for sleep state tracking
CN117158891A (en) System and method for sleep state tracking
US11064906B2 (en) Method and apparatus for determining respiration state based on plurality of biological indicators calculated using bio-signals
Guul et al. Portable prescreening system for sleep apnea
US11937938B1 (en) Methods for assessing sleep conditions
Lampier et al. A Deep Learning Approach for Estimating SpO 2 Using a Smartphone Camera
CN115054248B (en) Emotion monitoring method and emotion monitoring device
Wongtaweesup et al. Using Consumer-Graded Wearable Devices for Sleep Apnea Pre-Diagnosis: A Survey and Recommendations
Ghandeharioun Online Obstructive Sleep Apnea Detection Based on Hybrid Machine Learning And Classifier Combination For Home-based Applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination