CN111194468A - Continuously monitoring user health with a mobile device - Google Patents


Info

Publication number
CN111194468A
CN111194468A (Application CN201880065407.9A)
Authority
CN
China
Prior art keywords
data
health indicator
user
machine learning
health
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880065407.9A
Other languages
Chinese (zh)
Inventor
A. V. Valys
F. L. Petersen
C. D. C. Galloway
David E. Albert
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AliveCor Inc
Original Assignee
AliveCor Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AliveCor Inc filed Critical AliveCor Inc
Publication of CN111194468A

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Abstract

Disclosed herein are apparatuses, systems, methods, and platforms for continuously monitoring a health condition (e.g., a cardiac health condition) of a user. The present invention describes systems, methods, devices, software, and platforms for continuously monitoring health indicator data (such as, but not limited to, PPG signals, heart rate, or blood pressure) from a user of a user device in conjunction with (temporally) corresponding data relating to factors that may affect the health indicator ("other factors") to determine whether the user is in normal health by, for example and without limitation, comparison to: i) a group of individuals affected by similar other factors; or ii) the user himself or herself when affected by similar other factors.

Description

Continuously monitoring user health with a mobile device
Background
Indicators of the physiological health of an individual ("health indicators"), such as, but not limited to, heart rate variability, blood pressure, and the ECG (electrocardiogram), may be measured or calculated at any discrete point or points in time from data collected for that purpose. In many cases, the value of a health indicator at a particular time, or its changes over time, provides information about the health condition of the individual. For example, a low or high heart rate or blood pressure, or an ECG that clearly exhibits myocardial ischemia, may indicate a need for immediate intervention. It is noted, however, that readings, a series of readings, or changes in readings over time of these indicators may provide information that is not recognizable to a user or even to a health professional.
For example, arrhythmias may be persistent or intermittent. A persistent arrhythmia can be most clearly diagnosed from an individual's electrocardiogram. Since a persistent arrhythmia is always present, ECG analysis can be applied at any time to diagnose it. An ECG may also be used to diagnose intermittent arrhythmias. However, because intermittent arrhythmias may be asymptomatic and are, by definition, intermittent, diagnosis presents the challenge of applying the diagnostic technique while the individual is actually experiencing the arrhythmia. The actual diagnosis of an intermittent arrhythmia is therefore very difficult. This difficulty is particularly associated with asymptomatic arrhythmias (accounting for nearly 40% of arrhythmias in the United States). Boriani G. and Pettorelli D., Atrial Fibrillation Burden and Atrial Fibrillation Type: Clinical Significance and Impact on the Risk of Stroke and Decision Making for Long-term Anticoagulation, Vascul Pharmacol. 83:26-35 (Aug. 2016), at page 26.
There are sensors and mobile electronic technologies that allow for frequent or continuous monitoring and recording of health indicators. However, the capabilities of these sensor platforms often exceed the ability of traditional medical science to interpret the data they generate. The physiological significance of a health indicator parameter such as heart rate is often well defined only in certain medical contexts: for example, heart rate is traditionally evaluated as a single scalar value, out of context from other data/information that may affect the health indicator. A resting heart rate in the range of 60-100 beats per minute (BPM) can be considered normal. Users may manually measure their resting heart rate, generally once or twice a day.
A mobile sensor platform (e.g., a mobile blood pressure cuff, a mobile heart rate monitor, or a mobile ECG device) may be capable of continuously monitoring a health indicator (e.g., heart rate), for example producing a measurement every second or every 5 seconds, while also acquiring other data related to the user, such as, but not limited to: activity level, body position, and environmental parameters such as air temperature, air pressure, location, etc. Over a 24 hour period, this may result in thousands of independent health indicator measurements. There is relatively little data or medical consensus as to what a "normal" sequence of thousands of measurements looks like, as opposed to one or two measurements per day.
Current devices for continuously measuring a user's/patient's health indicators range from bulky, invasive, and inconvenient devices to simple wearable or handheld mobile devices. Currently, these devices do not provide the ability to effectively utilize the data to continuously monitor the health of an individual. Instead, the user or a health professional is relied upon to evaluate these health indicators, in view of other factors that may affect them, in order to determine the health status of the user.
Drawings
Certain features described herein are set forth with particularity in the appended claims. A better understanding of the features and advantages of the disclosed embodiments will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles described herein are utilized, and the accompanying drawings of which:
FIGS. 1A-1B depict convolutional neural networks that may be used in accordance with some embodiments as described herein;
FIGS. 2A-2B depict a recurrent neural network that may be used in accordance with some embodiments as described herein;
FIG. 3 depicts an alternative recurrent neural network that may be used in accordance with some embodiments as described herein;
FIGS. 4A-4C depict hypothetical data plots to illustrate applications of some embodiments as described herein;
FIGS. 5A-5E depict alternative recurrent neural networks and hypothetical plots used to describe some of these embodiments, in accordance with some embodiments as described herein;
FIG. 6 depicts an unfolded recurrent neural network, in accordance with some embodiments as described herein;
FIGS. 7A-7B depict systems and apparatus according to some embodiments as described herein;
FIG. 8 depicts a method according to some embodiments as described herein;
FIGS. 9A-9B depict methods according to some embodiments as described herein and hypothetical plots of heart rate versus time to represent one or more embodiments;
FIG. 10 depicts a method according to some embodiments as described herein;
FIG. 11 depicts a hypothetical data plot to represent an application of some embodiments as described herein; and
FIG. 12 depicts systems and devices according to some embodiments as described herein.
Detailed Description
The large data volume, the complexity of the interactions between health indicators and other factors, and limited clinical guidance may limit the effectiveness of any monitoring system that attempts to detect abnormalities in continuous and/or streaming sensor data through specific rules based on traditional medical practice. Embodiments described herein include apparatuses, systems, methods, and platforms that may utilize predictive machine learning models to detect anomalies in a time series of health indicator data, alone or in combination with other factor (as defined herein) data, in an unsupervised manner.
Atrial fibrillation (AF or AFib) occurs in 1-2% of the general population, and the presence of AF increases the risk of morbidity and of adverse outcomes such as stroke and heart failure. Boriani G. and Pettorelli D., Atrial Fibrillation Burden and Atrial Fibrillation Type: Clinical Significance and Impact on the Risk of Stroke and Decision Making for Long-term Anticoagulation, Vascul Pharmacol. 83:26-35 (Aug. 2016), at page 26. In many people (estimated at up to 40% of patients with AF), AFib can be asymptomatic, and these asymptomatic patients have risk profiles for stroke and heart failure similar to those of symptomatic patients. Id. However, symptomatic patients may take positive measures (such as taking blood thinners or other medications) to reduce the risk of negative outcomes. Asymptomatic AF (so-called silent AF or SAF), as well as the duration of time a patient is in AF, can be detected using a cardiac implantable electronic device (CIED). Id. From this information, the time these patients spend in AF, or AF burden, can be determined. Id. AF burdens of more than 5-6 minutes, and in particular more than 1 hour, are associated with a significantly increased risk of stroke and other negative health consequences. Id. Thus, the ability to measure AF burden in asymptomatic patients may enable early intervention and may reduce the risk of negative health consequences associated with AF. Id. The detection of SAF is challenging and typically requires some form of continuous monitoring. Currently, continuous monitoring for AF requires bulky, sometimes invasive, and expensive devices, and such monitoring requires a high level of supervision and review by medical professionals.
Many devices that continuously acquire data to provide measurements or calculations of health indicator data, such as, but not limited to, the Apple Watch, other wearable devices, smartphones, and tablet computers, fall into the category of wearable devices and/or mobile devices. Other devices include permanent or semi-permanent devices (e.g., Holter monitors) worn on or implanted in the user/patient, while still other devices are larger devices within a hospital that can be moved on a cart. However, the measured data is rarely processed beyond periodic viewing of the data on a display or the establishment of simple data thresholds. Observation of the data (even by trained medical professionals) may often appear normal, a major exception being the case where the user has acute symptoms that are easily recognized. It is difficult, and nearly impossible, for a medical professional to continuously monitor health indicators in order to observe data anomalies and/or trends that may indicate a more serious condition.
As used herein, a platform includes one or more custom software applications (or "applications") configured to interact with each other locally or over a distributed network, including the cloud and the Internet. An application of the platform as described herein is configured to collect and analyze user data and may include one or more software models. In some embodiments, the platform includes one or more hardware components (e.g., one or more sensing devices or microprocessors). In some embodiments, the platform is configured to operate with one or more devices and/or one or more systems. That is, in some embodiments, an apparatus as described herein is configured to run applications of the platform using a built-in processor, and in some embodiments, the platform is utilized by a system that includes one or more computing devices that interact with or run one or more applications of the platform.
The present invention describes systems, methods, devices, software, and platforms for continuously monitoring user data related to one or more health indicators (such as, but not limited to, PPG signals, heart rate, or blood pressure) from a user device, in conjunction with (temporally) corresponding data related to factors that may affect the health indicators (referred to herein as "other factors"), to determine whether the user is in normal health by, for example and without limitation, comparison to: i) a group of individuals affected by similar other factors; or ii) the user himself or herself when affected by similar other factors. In some embodiments, the measured health indicator data is input into a trained machine learning model, alone or in combination with other factor data, wherein the machine learning model determines a probability that the user's measured health indicator is considered to be within a healthy range, and the user is notified if the measured health indicator is considered not to be within a healthy range. A determination that the user is not within a healthy range indicates an increased likelihood that the user may be experiencing a health event, such as a potentially symptomatic or asymptomatic arrhythmia, that requires high fidelity information to confirm a diagnosis. The notification may take the form of, for example, a request for the user to obtain an ECG. Other high fidelity measurements (blood pressure, pulse oximetry, etc.) may be requested, the ECG being just one example. The high fidelity measurement (the ECG in this embodiment) may be evaluated by an algorithm and/or a medical professional to make a notification or diagnosis (collectively referred to herein as a "diagnosis," recognizing that only a physician can make a diagnosis). In the ECG example, the diagnosis may be AFib or any of a number of other well known conditions diagnosed using an ECG.
In further embodiments, the diagnosis is used to label low fidelity data sequences (e.g., heart rate or PPG), which may include other factor data sequences. The low fidelity data sequences labeled with the high fidelity diagnosis are then used to train a high fidelity machine learning model. In these further embodiments, the high fidelity machine learning model may be trained by unsupervised learning, and may be updated from time to time with new training examples. In some embodiments, a user's measured low fidelity health indicator data sequence, and optionally a (temporally) corresponding data sequence of other factors, are input into the trained high fidelity machine learning model to determine the probability and/or prediction that the user is experiencing, or has experienced, the diagnosed condition for which the high fidelity machine learning model was trained. Such probabilities may include the probability of when an event starts and when an event ends. For example, some embodiments may calculate the user's atrial fibrillation (AF) burden, or the amount of time the user experiences AF over time. Previously, AF burden could only be determined using cumbersome and expensive electrocardiography or implanted continuous ECG monitoring devices. Accordingly, some embodiments described herein may continuously monitor the health condition of the user and inform the user of changes in health condition by continuously monitoring health indicator data (such as, but not limited to, PPG data, blood pressure data, heart rate data, etc.) obtained from a device worn by the user, alone or in combination with corresponding data for other factors. As used herein, "other factors" include any factor that may affect the health indicator and/or may affect the data representing the health indicator (e.g., PPG data). These other factors may include various factors such as, but not limited to: air temperature, altitude, exercise level, weight, gender, diet, standing, sitting, falling, lying down, weather, and BMI, among others. In some embodiments, mathematical or empirical models other than machine learning models may be used to determine when to notify a user to obtain a high fidelity measurement, which may then be analyzed and used to train a high fidelity machine learning model as described herein.
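For illustration only, and not as part of the claimed subject matter, the following Python sketch shows one way the AF burden mentioned above could be computed once AF episodes have been predicted from low fidelity data. The interval-based representation of episodes and the function name are assumptions made for this example.

```python
from datetime import datetime, timedelta

def af_burden(episodes, monitoring_start, monitoring_end):
    """Compute AF burden: total time in AF and the fraction of the monitored
    period spent in AF, from predicted AF episode intervals.

    episodes: list of (start, end) datetime pairs predicted by a high fidelity
              machine learning model from low fidelity data (hypothetical input).
    """
    monitored = (monitoring_end - monitoring_start).total_seconds()
    in_af = sum(
        (min(end, monitoring_end) - max(start, monitoring_start)).total_seconds()
        for start, end in episodes
        if end > monitoring_start and start < monitoring_end
    )
    return timedelta(seconds=in_af), in_af / monitored

# Hypothetical example: two predicted AF episodes over a 24-hour monitoring period.
day_start = datetime(2018, 1, 1, 0, 0)
day_end = day_start + timedelta(hours=24)
episodes = [
    (day_start + timedelta(hours=3), day_start + timedelta(hours=3, minutes=42)),
    (day_start + timedelta(hours=20), day_start + timedelta(hours=21, minutes=15)),
]
total_af, fraction = af_burden(episodes, day_start, day_end)
print(total_af, round(fraction, 4))  # 1:57:00 in AF, roughly 8% of the day
```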
Some embodiments described herein may detect anomalies for a user in an unsupervised manner by: receiving a primary time series of health indicator data; optionally receiving a secondary time series of one or more other factor data corresponding in time to the primary time series of health indicator data, where the secondary series may come from a sensor or from an external data source (e.g., via a network connection, computer API, etc.); providing the primary and secondary time series to a preprocessor, which may perform operations on the data such as filtering, caching, averaging, time aligning, buffering, upsampling, and downsampling; providing the time series data to a machine learning model that is trained and/or configured to predict the next value of the primary time series at a future time using values of the primary time series and the secondary time series; comparing the predicted primary time series value generated by the machine learning model at a particular time t with the measured value of the primary time series at time t; and alerting or prompting the user to take an action if the difference between the predicted and measured values exceeds a threshold or criterion.
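A minimal Python sketch of this detection loop may clarify the data flow. The bounded-history "preprocessing," the model interface, and the stand-in model are assumptions for illustration only, not the specific implementation described here.

```python
from collections import deque

def run_monitor(model, primary_stream, secondary_stream, threshold, notify):
    """Unsupervised anomaly detection over streaming health indicator data.

    model            -- object with predict_next(primary_history, secondary_history)
                        returning the expected next primary value (assumed interface)
    primary_stream   -- iterable of measured health indicator values (e.g., heart rate)
    secondary_stream -- iterable of time-aligned other-factor values (e.g., step counts)
    notify           -- callback invoked when a measurement deviates from the prediction
    """
    primary_history = deque(maxlen=300)     # simple preprocessing: keep a bounded history
    secondary_history = deque(maxlen=300)
    for measured, other in zip(primary_stream, secondary_stream):
        if primary_history:
            predicted = model.predict_next(list(primary_history), list(secondary_history))
            if abs(measured - predicted) > threshold:
                notify(measured, predicted)
        primary_history.append(measured)    # the measurement joins the history for the next step
        secondary_history.append(other)


class LastValueModel:
    """Stand-in 'model' for the sketch: predicts the next value as the last one seen."""
    def predict_next(self, primary_history, secondary_history):
        return primary_history[-1]


if __name__ == "__main__":
    heart_rates = [62, 63, 61, 64, 110, 63]   # hypothetical primary series
    steps = [0, 0, 2, 0, 0, 0]                # hypothetical secondary series
    run_monitor(LastValueModel(), heart_rates, steps, threshold=20,
                notify=lambda m, p: print(f"alert: measured {m}, expected about {p}"))
```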
Thus, some embodiments described herein detect when the behavior of a primary sequence of physiological data observed with respect to the passage of time and/or in response to a secondary sequence of observed data differs from what would be expected given the training examples used to train the model. The system may be used as an anomaly detector in the case of training examples collected from normal individuals or from data previously classified as normal for a particular user. If the data is only obtained from a particular user without any other classification, the system may be used as a change detector for detecting changes in the health indicator data being measured by the primary sequence relative to the time at which the training data was captured.
Described herein are software platforms, systems, apparatuses, and methods for generating a trained machine learning model and using the model to predict or determine the probability that the measured health indicator data (primary sequence) of a user affected by other factors (secondary sequences) is outside the normal bounds of a healthy population affected by similar other factors (i.e., a global model) or outside the normal bounds of that particular user affected by similar other factors (i.e., a personalized model), wherein such notification is provided to the user. In some embodiments, the user may be prompted to obtain additional measured high fidelity data that may be used to label previously acquired low fidelity user health indicator data in order to generate a different, trained high fidelity machine learning model with the ability to predict or diagnose anomalies or events using only low fidelity health indicator data, where such anomalies are typically identified or diagnosed only with high fidelity data.
Some embodiments described herein may include inputting health indicator data of a user, and optionally (temporally) corresponding data for other factors, into a trained machine learning model, wherein the trained machine learning model predicts the health indicator data, or a probability distribution of the health indicator data, of the user at a future time step. In some embodiments, the prediction is compared to the measured health indicator data of the user at the predicted time step, wherein if the absolute value of the difference exceeds a threshold, the user is notified that his or her health indicator data is outside of a normal range. In some embodiments, the notification may include an indication of a diagnosis or of something to do, such as, but not limited to, obtaining additional measurements or contacting a health professional. In some embodiments, the machine learning model is trained using health indicator data from healthy people and (temporally) corresponding data for other factors. It should be appreciated that the other factors in the training examples used to train the machine learning model may not be an average over the population; rather, the data for each of the other factors corresponds in time to the set of health indicator data for the individual in the training example.
Some embodiments are described as receiving discrete data points over time, predicting discrete data points at a future time from an input, and then determining whether a loss between the discrete measurement input at the future time and the predicted value at the future time exceeds a threshold. Those skilled in the art will readily appreciate that the input data and output predictions may take forms other than discrete data points or scalars. For example, and without limitation, a health indicator data sequence (also referred to herein as a primary sequence) and other data sequences (also referred to herein as secondary sequences) may be divided into time segments. Those skilled in the art will recognize that the manner in which data is segmented is a matter of design choice and may take many different forms.
Some embodiments segment the health indicator data sequence (also referred to herein as the primary sequence) and the other data sequences (also referred to herein as secondary sequences) into two segments: the past, representing all data before a particular time t; and the future, representing all data at or after time t. These embodiments input the health indicator data sequence for the past time segment and all other data sequences for the past time segment into a machine learning model configured to predict the most likely future segment (or distribution of likely future segments) of the health indicator data. Optionally, these embodiments input the health indicator data sequence for the past time segment, all other data sequences for the past time segment, and the other data sequences for the future segment into a machine learning model configured to predict the most likely future segment (or distribution of likely future segments) of the health indicator data. The predicted future segment of health indicator data is compared to the measured health indicator data of the user over the future segment to determine a loss and whether the loss exceeds a threshold, in which case some action is taken. Actions may include, for example, but are not limited to: notifying the user to obtain additional data (e.g., ECG or blood pressure); notifying the user to contact a health professional; or automatically triggering the acquisition of additional data. Automatically acquiring additional data may include, for example and without limitation, ECG acquisition via sensors operatively coupled (wired or wireless) to a computing device worn by the user, or blood pressure acquisition via a mobile cuff surrounding the wrist or other suitable body part of the user and coupled to the computing device worn by the user. A data segment may include a single data point, a number of data points over a period of time, or an average of those data points over the period, where the average may be a mean, a median, or a mode. In some embodiments, the segments may overlap in time.
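As a rough illustration of this segment-based variant, the following Python fragment splits the primary and secondary sequences at a time t, summarizes segments as means, and returns an action when the loss exceeds a threshold. The model interface, the stand-in model, and the choice of the mean as the segment summary are assumptions made for this sketch.

```python
import statistics

def segment_check(model, primary, secondary, split_index, threshold):
    """Split sequences into past (< t) and future (>= t) segments, ask the model
    for the most likely future primary segment, and compare it to the measured
    future segment.  `model.predict_future` is an assumed interface."""
    past_primary, future_primary = primary[:split_index], primary[split_index:]
    past_secondary, future_secondary = secondary[:split_index], secondary[split_index:]

    predicted_future = model.predict_future(past_primary, past_secondary, future_secondary)

    # A segment may be summarized by a single number; here we use the mean.
    loss = abs(statistics.mean(future_primary) - statistics.mean(predicted_future))
    if loss > threshold:
        return "acquire additional data (e.g., ECG or blood pressure)"
    return "no action"


class MeanModel:
    """Stand-in model: predicts the future primary segment as the mean of the past."""
    def predict_future(self, past_primary, past_secondary, future_secondary):
        return [statistics.mean(past_primary)] * len(future_secondary)

action = segment_check(MeanModel(), [60, 62, 61, 95, 99], [0, 0, 1, 0, 0],
                       split_index=3, threshold=15)
```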
These embodiments detect when the behavior or measure of the health indicator data sequence observed with respect to the passage of time, as influenced (in time) by the corresponding other factor data sequence, differs from the behavior or measure expected according to the training examples collected under similar other factors. If training examples are collected from healthy individuals under similar other factors, or from data previously classified as healthy for a particular user under similar other factors, then these embodiments function as anomaly detectors from healthy people or particular users, respectively. If the training examples were only obtained from a particular user without any other classification, then these embodiments serve as a change detector for detecting changes in the health indicator at the time of measurement for the particular user relative to the time at which the training examples were collected.
Some embodiments described herein utilize machine learning to continuously monitor a person's health indicator under the influence of one or more other factors, and evaluate whether the person is healthy based on a population classified as healthy under the influence of similar other factors. As will be readily appreciated by those skilled in the art, a variety of different machine learning algorithms or models (including, but not limited to, Bayesian, Markov, Gaussian process, clustering, generative, kernel, and neural network algorithms) may be used without departing from the scope described herein. As understood by those skilled in the art, a typical neural network employs one or more layers with, for example, nonlinear activation functions to predict an output for a received input, and may include one or more hidden layers in addition to the input and output layers. The output of each hidden layer of some of these networks is used as the input to the next layer in the network. Examples of neural networks include, for example, but are not limited to, generative neural networks, convolutional neural networks, and recurrent neural networks.
Some embodiments of the health monitoring system monitor heart rate and activity data of an individual as low fidelity data (e.g., heart rate or PPG data) and detect conditions (e.g., AFib) that are typically detected using high fidelity data (e.g., ECG data). For example, the heart rate of an individual may be provided by a sensor continuously or at discrete intervals (such as every five seconds). The heart rate may be determined based on PPG, pulse oximetry, or other sensors. In some embodiments, the activity data may be generated as a number of steps taken, an amount of movement sensed, or other data points indicative of the activity level. The low fidelity (e.g., heart rate) data and the activity data may then be input into the machine learning system to determine a prediction of a high fidelity outcome. For example, the machine learning system may use the low fidelity data to predict arrhythmias or other indications of the heart health of the user. In some embodiments, the machine learning system may use segments of the input data to determine the prediction. For example, activity level data and heart rate data for one hour may be input into the machine learning system. The system may then use this data to generate predictions of conditions such as atrial fibrillation. Various embodiments of the invention are discussed in more detail below.
Referring to fig. 1A, a trained convolutional neural network (CNN) 100, an example of a feed forward network, takes input data 102 (e.g., a picture of a ship) into convolutional layers (also known as hidden layers) 103, applying a series of trained weights or filters 104 to the input data in each convolutional layer 103. The output of the first convolutional layer is an activation map (not shown), which is the input to the second convolutional layer, to which trained weights or filters (not shown) are applied, where the outputs of subsequent convolutional layers result in activation maps representing increasingly complex features of the input data of the first layer. After each convolutional layer, a nonlinear layer (not shown) is applied to introduce nonlinearity, where the nonlinear layer may use tanh, sigmoid, or ReLU. In some cases, a pooling layer (not shown), also referred to as a downsampling layer, may be applied after the nonlinear layer, where a filter and a stride of substantially the same length are applied to the input, outputting the maximum value of each sub-region over which the filter performs the convolution operation. Other options for pooling are average pooling and L2-norm pooling. The pooling layer reduces the spatial dimension of the input volume, thereby reducing computational cost and controlling overfitting. The last layer of the network is a fully connected layer that takes the output of the last convolutional layer and outputs an n-dimensional output vector representing the quantities to be predicted (e.g., probabilities of image classification: 20% car, 75% ship, 5% bus, and 0% bicycle), i.e., resulting in a predicted output 106 (O), which may be, for example, a picture of a ship. The output may also be a scalar-valued data point, such as a stock price, that the network is predicting. As described more fully below, the trained weights 104 may be different for each convolutional layer 103. To achieve such a real-world prediction/detection (e.g., that it is a ship), the neural network needs to be trained on known data inputs or training examples, resulting in the trained CNN 100. To train CNN 100, many different training examples (e.g., many pictures of ships) are input into the model. Those skilled in the art of neural networks will fully appreciate that the above description provides a somewhat simplified view of CNNs to provide context for the present discussion, and it will be fully understood that applying CNNs alone or in combination with other neural networks is equally applicable and within the scope of some embodiments described herein.
Fig. 1B presents a CNN 108 being trained. In FIG. 1B, the convolutional layers 103 are shown as individual hidden convolutional layers 105, 105' up to convolutional layer 105n-1, and the last, nth layer is a fully connected layer. It should be understood that the last layer may be more than one fully connected layer. A training example 111 is input into the convolutional layers 103, a nonlinear activation function (not shown) is applied, and the weights 110, 110' to 110n are applied successively, where the output of any hidden layer is the input of the next layer, and so on, until the last, nth fully connected layer 105n produces an output 114. The output or prediction 114 is compared to the training example 111 (e.g., a picture of a ship), resulting in a difference or loss 116 between the output or prediction 114 and the training example 111. If the loss 116 is less than some preset loss (e.g., the output or prediction 114 correctly predicts that the object is a ship), then the CNN has converged and is considered trained. If the CNN has not converged, then the weights 110 and 110' to 110n are updated, using a back propagation technique, according to how close the prediction is to the known input. Those skilled in the art will appreciate that methods other than back propagation may be used to adjust the weights. A second training example (e.g., a picture of a different ship) is then entered and the process is repeated with the updated weights, the weights are updated again, and so on until the nth training example (e.g., the nth picture of the nth ship) has been entered. This process is repeated iteratively through the same n training examples until the convolutional neural network (CNN) is trained, or converges to the correct output for the known inputs. Once CNN 108 is trained, the weights 110, 110' through 110n (i.e., the weights 104 as depicted in fig. 1A) are fixed and used in the trained CNN 100. As explained, there are different weights for each convolutional layer 103 and each fully connected layer. The trained CNN 100 or model is then fed with image data to determine or predict what it was trained to predict/recognize (e.g., a ship), as described above. Any trained model (CNN, RNN, etc.) may be further trained with additional training examples or with prediction data output by the model, which then serve as training examples; that is, the weights may be further modified. The machine learning model may be trained "off-line," e.g., on a computing platform separate from the platform on which the trained model is used/executed, and then transferred to that platform. Optionally, embodiments described herein may periodically or continuously update the machine learning model based on newly acquired training data. This update training may be done on a separate computing platform that passes the updated trained model over a network connection to the platform that uses/executes the retrained model, or the training/retraining/updating process may be done, as new data is acquired, on the platform that uses/executes the retrained model itself. Those skilled in the art will appreciate that a CNN applies to data in a fixed array (e.g., pictures, characters, words, etc.) or to a time series of data. For example, a CNN may be used to model serialized health indicator data and other factor data.
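The iterative train-until-convergence procedure described above can be sketched in a few lines. The following is a generic supervised training loop in Python/PyTorch under assumed shapes, loss function, and hyperparameters; it is illustrative only and is not the specific training of FIG. 1B.

```python
import torch
from torch import nn

def train(model, examples, labels, epochs=10, lr=1e-3, target_loss=0.05):
    """Generic training loop: forward pass, loss, backpropagation, weight update,
    repeated over the same n training examples until the loss converges."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                      # assumed classification loss
    for epoch in range(epochs):
        total = 0.0
        for x, y in zip(examples, labels):               # x: input tensor, y: class index
            optimizer.zero_grad()
            prediction = model(x.unsqueeze(0))           # output of the final layer
            loss = loss_fn(prediction, y.unsqueeze(0))   # difference vs. the known label
            loss.backward()                              # backpropagation
            optimizer.step()                             # update the weights
            total += loss.item()
        if total / len(examples) < target_loss:          # converged: consider the model trained
            break
    return model
```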
Some embodiments utilize a feed forward CNN with a skip connection and a gaussian mixture model output to determine a probability distribution of a predicted health indicator (e.g., heart rate, PPG, or arrhythmia).
Some embodiments may utilize other types and configurations of neural networks. The number of convolutional layers and the number of fully connected layers may be increased or decreased. In general, the optimal number and proportion of convolutional layers relative to fully-connected layers can be set experimentally by determining which configuration provides the best performance for a given data set. The number of convolutional layers can be reduced to 0, leaving a fully connected network. The number of convolution filters and the width of each filter may also be increased or decreased.
The output of the neural network may be a single scalar value corresponding to an exact prediction of the primary time series. Alternatively, the output of the neural network may be a logistic regression in which each class corresponds to a particular range or class of primary time series values. Any number of alternative outputs may be used, as would be readily understood by one skilled in the art.
In some embodiments, the use of a Gaussian mixture model output is intended to constrain the network to learn a well-formed probability distribution and to improve generalization from limited training data. In some embodiments, the use of multiple components in the Gaussian mixture model is intended to allow the model to learn a multi-modal probability distribution. Machine learning models that combine or aggregate the results of different neural networks may also be used, where the results may be combined.
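One common way to realize such a Gaussian mixture model output is to have the network emit a mean, a variance, and a mixture weight for each component, and to train with the negative log-likelihood of the observed value. The Python/PyTorch sketch below is an assumed formulation for illustration (10 components, matching the later example), not necessarily the exact loss used here.

```python
import math
import torch
import torch.nn.functional as F

def gmm_negative_log_likelihood(params, target):
    """params: shape (batch, 3*k) holding, per component, a mean, a log-variance,
    and an unnormalized mixture weight (logit); target: shape (batch,).
    Returns the mean negative log-likelihood of the targets under the mixture."""
    k = params.shape[-1] // 3
    means, log_vars, logits = params.split(k, dim=-1)
    log_weights = F.log_softmax(logits, dim=-1)                 # normalized log mixture weights
    log_norm = -0.5 * (((target.unsqueeze(-1) - means) ** 2) / log_vars.exp()
                       + log_vars
                       + math.log(2 * math.pi))                 # per-component Gaussian log-density
    return -torch.logsumexp(log_weights + log_norm, dim=-1).mean()

# Example with a batch of 2 and 10 mixture components (30 network outputs per example).
loss = gmm_negative_log_likelihood(torch.randn(2, 30), torch.tensor([72.0, 65.0]))
```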
Machine learning models with an updatable memory or state, carried over from previous predictions and applied to subsequent predictions, are another approach for modeling serialized data. In particular, some embodiments described herein utilize a recurrent neural network. Referring to the example of fig. 2A, a diagram of a trained recurrent neural network (RNN) 200 is shown. The trained RNN 200 has an updatable state (S) 202 and trained weights (W) 204. Input data 206 is input into the state 202, where the weights (W) 204 are applied, and a prediction (P) is output. In contrast to a linear feed forward neural network (e.g., CNN 100), the state 202 is updated based on the input data, thereby serving as a memory of previous states that is used in turn for the next prediction with the next data. Updating the state gives the RNN its loop, or recurrent, character. For better presentation, fig. 2B shows the trained RNN 200 unfolded, and its applicability to serialized data. When unfolded, the RNN looks like a CNN, but in the unfolded RNN each apparently similar layer is in fact the same single layer with an updated state, with the same weights applied in each iteration of the loop. Those skilled in the art will appreciate that a single layer may itself have sub-layers, but for clarity of explanation a single layer is described herein. Input data at time t (I_t) 208 is input to the state at time t (S_t) 210, and the trained weights 204 are applied within the neuron at time t (C_t) 212. The output of C_t 212 is a prediction for time step t+1, P*_{t+1} 214, and an updated state S_{t+1} 216. Similarly, at C_{t+1} 220, I_{t+1} 218 is input to S_{t+1} 216, the same trained weights 204 are applied, and the output of C_{t+1} 220 is P*_{t+2} 222. As described above, S_{t+1} is obtained by updating S_t, so S_{t+1} carries a memory of the data from the previous time step S_t. For example, but not by way of limitation, the memory may include previous health indicator data from one or more previous time steps, or previous other factor data. The process continues for n steps, where I_{t+n} 224 is input to S_{t+n} 226 and the same weights 204 are applied. The output of neuron C_{t+n} is the prediction P*_{t+n+1}. Notably, the state is updated from the previous time step, providing the RNN with the benefit of memory of previous states. For some embodiments, this property makes RNNs a suitable option for predicting serialized data. Nonetheless, and as noted above, there are other machine learning techniques suitable for making such predictions on serialized data, including CNNs.
Like the CNN, the RNN may take a string of data as input and output a predicted string of data. A simple way to explain this aspect of RNNs is with an example of natural language prediction. Take the following phrase as an example: the sky is blue. The word string (i.e., the data) has a context. Thus, as the state is updated from one iteration to the next, the data string provides the context needed to predict "blue." As just described, the RNN has a memory component that aids in the prediction of serialized data. However, the memory in the updated state of the RNN may be limited in how far back it can reach, similar to short-term memory. When it is desired to predict serialized data with a longer look-back (similar to long-term memory), this can be achieved using a refinement of the RNN just described. A sentence in which the word to be predicted is ambiguous given only the immediately preceding or surrounding words is again a simple example: Mary speaks fluent French. From the immediately preceding words it is clear only that some language is the correct prediction, but which language? The correct prediction may depend on context found in words separated by more than a single word string. The Long Short-Term Memory (LSTM) network is a special kind of RNN that is able to learn these longer-term dependencies.
As mentioned above, RNNs have a relatively simple repeating structure, e.g., comprising a single layer with a nonlinear activation function (e.g., tanh or sigmoid). Similarly, an LSTM has a chain-like structure, but has (for example) four neural network layers instead of one. These additional neural network layers give the LSTM the ability to delete information from, or add information to, the state (S) by using structures called gates. Id. Fig. 3 shows a neuron 300 of an LSTM RNN. Line 302 represents the neuron state (S) and may be thought of as an information highway; it is relatively easy for information to flow along the neuron state unchanged. Id. Gates 304, 306, and 308 determine how much information is allowed to pass through the state, or along the information highway. Gate 304 first determines how much information is to be deleted from the neuron state S_t, in the so-called forget gate layer. Id. Next, gates 306 and 306' determine which information is to be added to the neuron state, and gates 308 and 308' determine what information from the neuron state is to be output as the prediction P*_{t+1}. The information highway, or neuron state, is now the updated neuron state S_{t+1}, for use in the next neuron. The LSTM gives the RNN a more durable, or longer term, memory. Compared to simpler RNN structures, the LSTM provides an additional advantage for RNN-based machine learning models: the output prediction can take into account a longer spatial or temporal context of the serialized input data.
In some embodiments utilizing RNNs, at each time step, the primary and secondary time series may not be provided to the RNN as a vector. Instead, only the current values of the primary and secondary time series, and the future values of the secondary time series or the aggregation function within the prediction interval are provided to the RNN. In this way, the RNN uses the persistent state vector to retain information about previous values for use in making predictions.
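A persistent-state arrangement like the one just described might look like the following Python/PyTorch sketch, where only the current primary and secondary values are fed in at each step and the cell's hidden state carries the memory. The cell size, the single linear output head, and the example values are assumptions for illustration only.

```python
import torch
from torch import nn

class StatefulPredictor(nn.Module):
    """Feeds only the current primary/secondary values to an LSTM cell at each
    time step; the persistent (h, c) state retains information about past values."""
    def __init__(self, n_inputs=3, hidden=32):
        super().__init__()
        self.cell = nn.LSTMCell(n_inputs, hidden)
        self.head = nn.Linear(hidden, 1)   # predicted primary value for the next step
        self.state = None                  # persistent state carried across calls

    def step(self, current_values):
        x = current_values.unsqueeze(0)    # shape (1, n_inputs)
        self.state = self.cell(x, self.state)
        h, _ = self.state
        return self.head(h).squeeze()

# Hypothetical use: PPG-derived heart rate plus two other-factor values per time step.
model = StatefulPredictor()
for hr, steps, temp in [(62.0, 0.0, 21.0), (63.0, 2.0, 21.0), (64.0, 5.0, 21.5)]:
    predicted_next_hr = model.step(torch.tensor([hr, steps, temp]))
```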
Machine learning is well suited to continuously monitoring one or more criteria to identify anomalies or trends of varying size in the input data as compared to the training examples used to train the model. Thus, some embodiments described herein input the health indicator data of the user, and optionally other factor data, into a trained machine learning model that predicts what the health indicator data of a healthy person would look like at the next time step, and compare the prediction to the measured health indicator data of the user at that future time step. If the absolute value of the difference (e.g., the loss described below) exceeds a threshold, the user is notified that his or her health indicator data is not within a normal or healthy range. The threshold is a number set by the designer and, in some embodiments, may be altered by the user to allow the user to adjust the notification sensitivity. The machine learning models of these embodiments may be trained on health indicator data from healthy people, alone or in combination with (temporally) corresponding other factor data, or may be trained on other training examples to meet the design requirements of the model.
The data from the health indicator (e.g., heart rate data) is serialized data, more particularly time-serialized data. The heart rate may be measured in a number of different ways, for example, but not limited to, measuring an electrical signal from the chest band or derived from the PPG signal. Some embodiments obtain a heart rate derived from a device, where each data point (e.g., heart rate) is generated at approximately equal intervals (e.g., 5 seconds). However, in some cases and in other embodiments, the derived heart rate is not provided in roughly equal time steps, for example because the data needed for the derivation is not reliable (e.g., because the device is moving or because the PPG signal is unreliable due to light contamination). The same is true for secondary data sequences obtained from motion sensors or other sensors used to collect data for other factors.
The raw signal/data (electrical signal from ECG, chest strap or PPG signal) is itself a time series of data that may be used according to some embodiments. For clarity, and not by way of limitation, the present specification uses PPG to refer to data representing a health indicator. Those skilled in the art will readily appreciate that the form of health metric data, raw data, waveforms, or numbers derived from raw data or waveforms may be used in accordance with some embodiments described herein.
Machine learning models that may be used with embodiments described herein include, for example, but are not limited to, Bayesian, Markov, Gaussian process, clustering, generative, kernel, and neural network algorithms. Some embodiments utilize a machine learning model based on a trained neural network, other embodiments utilize a recurrent neural network, and additional embodiments use an LSTM RNN. For clarity, and not by way of limitation, some embodiments of the present specification will be described using a recurrent neural network.
Figs. 4A to 4C show hypothetical plots of PPG (fig. 4A), steps taken (fig. 4B), and air temperature (fig. 4C) versus time. PPG is an example of health indicator data, while steps, activity level, and air temperature are examples of other factor data for other factors that may affect the health indicator data. Those skilled in the art will appreciate that other data may be obtained from any of a number of known sources including, but not limited to, accelerometer data, GPS data, weight scales, user inputs, etc., and may include, but is not limited to, temperature, activity (running, walking, sitting, cycling, falling, climbing stairs, taking a step, etc.), BMI, weight, height, age, etc. The first dashed line extending vertically across all three plots represents the time t at which user data is obtained for input into the trained machine learning model (discussed below). The hatched plot line in fig. 4A represents predicted or likely output data 402, and the solid line 404 in fig. 4A represents measured data. Fig. 4B is a hypothetical plot of the number of steps taken by the user at various times, and fig. 4C is a hypothetical plot of the air temperature at various times.
Figs. 5A-5B depict schematic diagrams of a trained recurrent neural network 500 receiving the input data depicted in figs. 4A-4C (i.e., PPG (P), steps (R), and air temperature (T)). Again, these input data (P, R, and T) are merely examples of health indicator data and other factor data. It will also be appreciated that data for more than one health indicator may be input and predicted, and that more or fewer than two other factor data may be used, the selection depending on what the model is designed for. Those skilled in the art will also appreciate that other factor data is collected so as to correspond in time to the collection or measurement of the health indicator data. In some cases (e.g., weight), the other factor data will remain relatively constant over a period of time.
Fig. 5A depicts the trained neural network 500 as a loop. P, T, and R are input into the RNN 500 state 502, with the weights W applied, and the RNN 500 outputs a predicted PPG 504 (P*). In step 506, the difference P - P* (ΔP*) is calculated, and at step 508 it is judged whether |ΔP*| is greater than a threshold. If so, step 510 notifies/alerts the user that his/her health indicator is outside of the bounds/thresholds predicted as normal or for healthy people. The warning/notification/action may be, for example, but not limited to, a suggestion to consult a doctor, a simple notification such as tactile feedback, a request to take additional measurements such as an ECG, a simple annotation without any advice, or any combination thereof. If |ΔP*| is less than or equal to the threshold, then step 512 does nothing. In both steps 510 and 512, the process is repeated at the next time step with new user data. In the present embodiment, the state is updated after the prediction data is output, and the prediction data may be used in updating the state.
In another embodiment (not shown), the primary sequence of heart rate data (e.g., derived from the PPG signal) and the secondary sequence of other factor data are provided to a trained machine learning model, which may be an RNN, a CNN, other machine learning model, or a combination of these models. In this embodiment, the machine learning model is configured to receive as input:
A. a vector (V_H) of length 300 containing the last 300 health indicator samples (e.g., heart rate in beats per minute) up to and including time t;
B. at least one vector (V_O) of length 300 containing the most recent other factor data (e.g., number of steps) at the approximate time of each of the samples in V_H;
C. a vector (V_TD) of length 300, in which the entry with index i, V_TD(i), contains the time difference between the timestamp of health indicator sample V_H(i) and the timestamp of V_H(i-1); and
D. a scalar prediction-interval other factor rate, O_rate, representing the average other factor rate (e.g., step rate) measured over the period from t to t + τ, where τ is the future prediction interval and may be, for example and without limitation, 2.5 minutes (a sketch of assembling these inputs follows this list).
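For illustration only, the Python/NumPy sketch below shows one way the four inputs V_H, V_O, V_TD, and O_rate might be assembled from timestamped samples. The nearest-sample lookup, the rate definition, and the handling of short histories are assumptions made for this example, not the claimed preprocessing.

```python
import numpy as np

def build_inputs(hr_times, hr_values, step_times, step_values, t, tau=150.0, n=300):
    """Assemble V_H, V_O, V_TD, and O_rate from timestamped samples.

    hr_times/hr_values     -- timestamps (s) and values of the health indicator
    step_times/step_values -- timestamps (s) and values of the other factor
    t                      -- current time; tau -- future prediction interval (s)
    """
    mask = hr_times <= t
    v_h_times = hr_times[mask][-n:]
    v_h = hr_values[mask][-n:]                              # last n health indicator samples

    # Most recent other-factor value at (approximately) each health indicator sample time.
    idx = np.searchsorted(step_times, v_h_times, side="right") - 1
    v_o = step_values[np.clip(idx, 0, len(step_values) - 1)]

    # Time difference between consecutive health indicator samples.
    v_td = np.diff(v_h_times, prepend=v_h_times[0])

    # Average other-factor rate (e.g., steps per second) over [t, t + tau].
    future = (step_times > t) & (step_times <= t + tau)
    o_rate = step_values[future].sum() / tau

    return v_h, v_o, v_td, o_rate
```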
The output of this embodiment may be, for example, a probability distribution characterizing the predicted heart rate measured over the time period from t to t + τ. In some embodiments, the machine learning model is trained using training examples that include a continuous time series of health indicator data and a series of other factor data. In an alternative embodiment, the notification system assigns a timestamp of t + τ/2 to each predicted health indicator (e.g., heart rate) distribution, thereby centering the predicted distribution within the prediction interval (τ). In this embodiment, the notification logic then considers all samples within a sliding window (W) of length W_L = 2τ, in this example 5 minutes, and calculates three parameters:
1. the mean of all measured health indicator series data within the time window, μ_meas;
2. the mean of all model predictions of the health indicator having timestamps falling within the time window, μ_pred; and
3. the mean of the root mean square (standard deviation) of each predicted health indicator distribution within the time window, σ_pred.
In one embodiment, if μ_meas > μ_pred + ψ·σ_pred or μ_meas < μ_pred - ψ·σ_pred (where ψ is a threshold), a notification is generated.
In this embodiment, a warning is generated when the measured health indicator within a particular window W is more than a certain multiple of the standard deviation away from the average of the predicted health indicator values. The window W may be applied in a sliding manner over the sequences of measured and predicted health indicator values, where each window overlaps the previous window in time by an amount (e.g., 0.5 minutes) specified by the designer.
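Under the notation introduced above (μ_meas, μ_pred, σ_pred, threshold ψ), the sliding-window notification check might be sketched in Python as follows. The window bookkeeping and the array-based interface are assumptions for illustration only.

```python
import numpy as np

def window_alert(measured, predicted_means, predicted_stds, psi):
    """Evaluate the notification rule for one window W: compare the mean of the
    measured samples against the mean of the predictions, scaled by the mean
    predicted standard deviation."""
    mu_meas = np.mean(measured)            # mean of measured health indicator samples
    mu_pred = np.mean(predicted_means)     # mean of the predicted distributions' means
    sigma_pred = np.mean(predicted_stds)   # mean RMS (std) of the predicted distributions
    return mu_meas > mu_pred + psi * sigma_pred or mu_meas < mu_pred - psi * sigma_pred

def sliding_windows(timestamps, window_len=300.0, step=30.0):
    """Yield boolean masks (over a NumPy array of timestamps, in seconds) for
    overlapping windows of length window_len, advanced by `step` seconds."""
    start = timestamps[0]
    while start + window_len <= timestamps[-1]:
        yield (timestamps >= start) & (timestamps < start + window_len)
        start += step
```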
The notification may take any number of different forms. For example, but not limited to, the user may be notified to acquire an ECG and/or a blood pressure measurement, a computing system (e.g., a wearable computing system) may be directed to automatically acquire the ECG or blood pressure, the user may be notified to see a doctor, or the user may simply be notified that the health indicator data is abnormal.
In the present embodiment, the model input V_TD is chosen to allow the model to make use of information contained in the variable spacing between the health indicator data in V_H, where the variable spacing may result from the algorithm that derives the health indicator data from less consistent raw data. For example, heart rate samples are generated by the Apple Watch algorithm only when the raw PPG data is reliable enough to output a reliable heart rate value, which results in irregular time gaps between heart rate samples. In a similar manner, the present embodiment uses other factor data (V_O) having the same length as the other vectors to handle the different and irregular sampling rates between the primary sequence (health indicator) and the secondary sequence (other factors). In this embodiment, the secondary sequence is remapped, or interpolated, to the same points in time as the primary time series.
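The remapping/interpolation of the secondary sequence onto the primary sample times can be done, for example, with ordinary linear interpolation; the short Python/NumPy sketch below uses hypothetical sample times and is only one of several possible resampling choices.

```python
import numpy as np

# Hypothetical irregularly sampled sequences (timestamps in seconds).
hr_times = np.array([0.0, 5.0, 11.0, 18.0, 22.0])    # primary: heart rate sample times
step_times = np.array([0.0, 10.0, 20.0])             # secondary: step-count sample times
step_values = np.array([0.0, 12.0, 30.0])

# Remap the secondary sequence onto the primary sample times by linear interpolation.
steps_at_hr_times = np.interp(hr_times, step_times, step_values)
```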
Further, in some embodiments, the configuration of the data in the secondary time series that is provided as input to the machine learning model for the future prediction interval (e.g., after t) may be modified. In some embodiments, the single scalar value containing the average other factor data rate over the prediction interval may be replaced with multiple scalar values (e.g., one scalar value per secondary time series). Alternatively, a vector of values within the prediction interval may be used. In addition, the prediction interval itself may be adjusted. For example, a shorter prediction interval may provide a faster response to changes and improved detection of events that are shorter in their fundamental time scale, but may also be more sensitive to interference from noise sources (e.g., motion artifacts).
Similarly, the output prediction of the machine learning model itself need not be a scalar. For example, some embodiments may generate a time series of predictions for multiple times within the interval between t and t + τ, and the warning logic may compare each of these predictions to the measured value within the same time interval.
In this previous embodiment, the machine learning model itself may comprise, for example, a 7-layer feed forward neural network. The first 3 layers may be convolutional layers containing 32 kernels, each kernel having a width of 24 and a stride of 2. The first layer may take as input an array with three channels, V_H, V_O, and V_TD. The last 4 layers may be fully connected layers, all but the last utilizing a hyperbolic tangent activation function. The output of the third layer may be flattened into an array for input into the first fully connected layer. The last layer outputs 30 values, parameterizing a Gaussian mixture model of 10 mixtures (with three parameters, mean, variance, and weight, for each mixture). The network uses a skip connection between the first fully connected layer and the third fully connected layer, such that the output of layer 6 is summed with the output of layer 4 to produce the input of layer 7. Standard batch normalization may be used on all layers except the last, with a decay of 0.97. The skip connection and batch normalization can improve the ability to propagate gradients through the network.
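Read literally, that architecture might be sketched as below in Python/PyTorch. Details the text does not specify, such as the widths of the fully connected layers, where the tanh nonlinearity appears in the convolutional stack, and the mapping of the 0.97 decay onto a batch-norm momentum, are assumptions; this is an illustrative sketch, not the claimed network.

```python
import torch
from torch import nn

class HeartRateGMMNet(nn.Module):
    """Sketch of the 7-layer feed forward network: 3 conv layers (32 kernels,
    width 24, stride 2), 4 fully connected layers with tanh (except the last),
    a skip connection summing the outputs of layers 4 and 6 to form the input
    of layer 7, and 30 outputs parameterizing a 10-component Gaussian mixture."""
    def __init__(self, seq_len=300, hidden=128):
        super().__init__()
        def conv(in_ch):
            return nn.Sequential(nn.Conv1d(in_ch, 32, kernel_size=24, stride=2),
                                 nn.BatchNorm1d(32, momentum=0.03),  # ~"decay 0.97" (assumed mapping)
                                 nn.Tanh())
        self.conv1, self.conv2, self.conv3 = conv(3), conv(32), conv(32)
        with torch.no_grad():   # infer the flattened size for the chosen seq_len
            n_flat = self.conv3(self.conv2(self.conv1(torch.zeros(1, 3, seq_len)))).numel()
        def fc(n_in, n_out):
            return nn.Sequential(nn.Linear(n_in, n_out), nn.BatchNorm1d(n_out), nn.Tanh())
        self.fc4, self.fc5, self.fc6 = fc(n_flat, hidden), fc(hidden, hidden), fc(hidden, hidden)
        self.fc7 = nn.Linear(hidden, 30)    # 10 mixtures x (mean, variance, weight)

    def forward(self, v_h, v_o, v_td):
        x = torch.stack([v_h, v_o, v_td], dim=1)     # (batch, 3 channels, seq_len)
        x = self.conv3(self.conv2(self.conv1(x))).flatten(1)
        out4 = self.fc4(x)
        out6 = self.fc6(self.fc5(out4))
        return self.fc7(out4 + out6)                 # skip: layer 4 + layer 6 -> layer 7 input
```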
The selection of the machine learning model may affect the performance of the system. Machine learning model configuration can be divided into two kinds of considerations. The first is the internal architecture of the model, i.e., the choice of model type (convolutional neural network, recurrent neural network, random forest, generalized nonlinear regression, etc.) and the parameters characterizing the implementation of the model (typically the number of parameters, and/or the number of layers, the number of decision trees, etc.). The second is the external architecture of the model: the arrangement of the data being fed into the model and the specific parameters of the problem the model is asked to solve. The external architecture may be characterized, in part, by the dimensions and type of data provided as input to the model, the time range spanned by the data, and any pre- or post-processing of the data.
In general, the choice of external architecture is a balance between, on the one hand, increasing the number of parameters and the amount of information provided as input, which can increase the predictive power of the machine learning model (given the available storage and computational power to train and evaluate larger models), and, on the other hand, the availability of a sufficient amount of data to prevent overfitting.
Many variations of the external architecture of the model discussed in some embodiments are possible. The number of input vectors can be modified as well as the absolute length (number of elements) and the time span covered. The input vectors need not be the same length or cover the same time span. The data need not be sampled equally in time, for example, but is not limited to, a 6 hour heart rate data history may be provided in which data less than one hour prior to t is sampled at a rate of 1Hz, data more than 1 hour prior to t but less than 2 hours prior to t is sampled at a rate of 0.5Hz, and data earlier than 2 hours is sampled at a rate of 0.1Hz, where t is a reference time.
Fig. 5B shows the unfolded trained RNN 500. Input data 513 (P_t, R_t, and T_t) are input to the state at time t (S_t) 514, and the trained weights 516 are applied. The output of neuron (C_t) 518 is the prediction at time t+1, P*_{t+1} 520, and the updated state S_{t+1} 522. Similarly, at C_{t+1} 524, input data (P_{t+1}, R_{t+1}, and T_{t+1}) 513' are input to S_{t+1} 522 and the trained weights 516 are applied, and the output of C_{t+1} 524 is P*_{t+2} 523. As described above, S_{t+1} is obtained by updating S_t, so S_{t+1} carries information from neuron (C_t) 518 at the previous time step. The process continues for n steps, where input data (P_n, R_n, and T_n) 513' are input to S_n 530 and the trained weights 516 are applied. The output of neuron C_n is the prediction P*_{n+1} 532.
Notably, the trained RNN always applies the same weights but, more importantly, updates the state according to the previous time step, providing the RNN with the benefit of memory from previous time steps. Those skilled in the art will appreciate that the chronological order in which the health indicator data is input may vary and still produce the desired results. For example, measured health indicator data from a previous time step (e.g., P_{t-1}) and other factor data from the current time step (e.g., R_t and T_t) can be input to the state at the current time step (S_t), where the model predicts the health indicator at the current time step, P*_t. The predicted health indicator P*_t is then compared, as described above, with the measured health indicator data at the current time step to determine whether the user's health indicator is normal or within a healthy range.
Fig. 5C shows an alternative embodiment of a trained RNN used to determine whether the user's serialized health indicator data (PPG in our example) is within the band or threshold of a healthy person. The input data in this embodiment is the linear combination I_t = α·P*_t + (1 - α)·P_t, where P*_t is the predicted health indicator value at time t and P_t is the measured health indicator value at time t. In this embodiment, α is a function of the loss (L), which may be nonlinear, and ranges from 0 to 1; the loss and α are discussed in more detail below. When α approaches 0, the measured data P_t is effectively what is input into the network, and when α approaches 1, the predicted data P*_t is input into the network to make the prediction at the next time step. Other factor data at time t (O_t) may also optionally be input.
I_t and O_t are inputs to state S_t, where, in some embodiments, state S_t outputs a probability distribution β(P*) over the predicted health indicator data at time step t+1, P*_{t+1}. In some embodiments, the probability distribution function is sampled to select the predicted health indicator value at t+1, P*_{t+1}. As will be appreciated by those skilled in the art, β(P*) may be sampled in different ways depending on the goals of the network designer, including taking the mean of the distribution, its maximum, or a random sample drawn according to the distribution. The measured data at time t+1 is then used to evaluate β_{t+1}, which provides the probability that state S_{t+1} assigned to the measured data.
To illustrate this concept, Fig. 5D shows a hypothetical probability distribution over a hypothetical range of health indicator data at time t+1. This function is sampled, for example at its maximum probability of 0.95, to determine the predicted health indicator at time t+1, P*_{t+1}. The measured or actual health indicator data P_{t+1} is also used to evaluate the probability distribution (β_{t+1}) and determine the probability that the model would have assigned to the actually measured data. In the present example, β_{t+1}(P_{t+1}) is 0.85.
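The following is a hedged numerical sketch of these two operations, assuming NumPy and a simple Gaussian mixture standing in for the network's output distribution; the mixture parameters, the candidate grid, and the measured value are hypothetical.

```python
# Illustrative only: maximum-probability sampling of a predicted-value
# distribution, and evaluation of that distribution at the measured value.
import numpy as np

def mixture_pdf(x, means, variances, weights):
    """Evaluate a 1-D Gaussian mixture density at x."""
    x = np.asarray(x, dtype=float)
    comps = weights / np.sqrt(2 * np.pi * variances) * \
        np.exp(-(x[..., None] - means) ** 2 / (2 * variances))
    return comps.sum(axis=-1)

means = np.array([62.0, 75.0, 90.0])       # hypothetical mixture parameters
variances = np.array([4.0, 9.0, 25.0])
weights = np.array([0.7, 0.2, 0.1])

# "Maximum" sampling: pick the candidate value with the highest density.
candidates = np.linspace(40.0, 120.0, 801)
p_hat = candidates[np.argmax(mixture_pdf(candidates, means, variances, weights))]

# Evaluate the distribution at the measured value (the beta of the measurement).
p_measured = 66.0
beta_at_measurement = mixture_pdf(p_measured, means, variances, weights)
print(p_hat, beta_at_measurement)
```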
A loss may be defined to help determine whether to inform a user that his or her health condition is not within the normal range predicted by the trained machine learning model. The loss is selected to model how close the predicted data is to the actual or measured data. Those skilled in the art will appreciate that there are many ways to define the loss. In other embodiments described herein, for example, the absolute value (|ΔP*|) of the difference between the predicted data and the actual data may be used. In some embodiments, the loss (L) may be L = -ln[β(P)], where β(P) is the probability distribution output by the network evaluated at the measured health indicator value P. L is a measure of how close the predicted data is to the measured or actual data. β(P) is in the range of 0 to 1, where 1 means that the predicted value and the measured value are the same. Thus, a low loss indicates that the predicted value is likely to be the same as or close to the measured value; in this context, a low loss means that the measurement data appears to come from a healthy/normal person. In some embodiments, a threshold value for L is set, e.g., L > 5, above which the user is notified that the health indicator data is outside of the range considered healthy. Other embodiments may take an average of the losses over a certain period of time and compare the average to a threshold. In some embodiments, the threshold itself may be a function of a statistical calculation of the predicted values or an average of the predicted values. In some embodiments, the user may be notified that the health indicator is not within the healthy range using a formula of the following form:
|⟨P_range⟩ - ⟨P*_range⟩| > f(σ̃_range)

where:

⟨P_range⟩ is determined by averaging the measured health indicator data over a time range;

⟨P*_range⟩ is determined by averaging the predicted health indicator data over the same time range;

σ̃_range is the median of the sequence of standard deviations obtained from the network over the same time range; and

f(σ̃_range) is a function of that median standard deviation and may be used as the threshold.
Averaging methods that may be used include, for example, but are not limited to, the mean, the arithmetic mean, the median, and the mode. In some embodiments, outliers are removed so as not to skew the calculated values.
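As a hedged sketch of one such notification rule, the snippet below computes a per-step loss L = -ln(β(P)), averages it over a window, and compares the average to a designer-chosen threshold; the β values and the threshold of 5 are illustrative only.

```python
import numpy as np

def losses_from_beta(beta_values, eps=1e-12):
    """Per-step loss L = -ln(beta(P)); beta near 1 implies low loss (healthy-looking)."""
    return -np.log(np.clip(beta_values, eps, 1.0))

def should_notify(beta_values, threshold=5.0):
    """Notify when the mean loss over the window exceeds the threshold."""
    return float(np.mean(losses_from_beta(beta_values))) > threshold

window_beta = np.array([0.85, 0.9, 0.02, 0.01, 0.6])  # hypothetical evaluations
print(should_notify(window_beta))   # True or False depending on the window's data
```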
Referring back to the input data of the embodiment depicted in Fig. 5C, I_t = α·P*_t + (1 - α)·P_t, α is a function of the loss (L). For example, α(L) may be a linear function or a nonlinear function, or may be linear within some range of L and nonlinear within a separate range of L. In one example, as shown in Fig. 5E, the function α(L) is linear for L between 0 and 3, quadratic for L between 3 and 13, and equal to 1 for L greater than 13. For the present embodiment, when L is between 0 and 3 (i.e., when the predicted health indicator data and the measured health indicator data approximately match), α approaches zero and the input data I_{t+1} is approximated by the measured data P_{t+1}. When L is large (e.g., greater than 13), α(L) is 1, which results in the input data being the predicted health indicator at time t+1, P*_{t+1}. When L is between 3 and 13, α(L) varies quadratically, and the relative contributions of the predicted health indicator data and the measured health indicator data to the input data (I_t) also vary. This is just one example of self-sampling, in which some combination of prediction data and measurement data is used as input to the trained network. Those skilled in the art will appreciate that other examples may be used.
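A minimal numerical sketch of such a self-sampling scheme follows. The breakpoints (linear for 0-3, quadratic for 3-13, saturating at 1) follow the Fig. 5E example, but the coefficients below are hypothetical choices made only to keep α(L) continuous and increasing from 0 to 1.

```python
def alpha(loss):
    if loss <= 3.0:
        return 0.1 * (loss / 3.0)                      # linear segment (assumed slope)
    if loss <= 13.0:
        return 0.1 + 0.9 * ((loss - 3.0) / 10.0) ** 2  # quadratic segment
    return 1.0                                          # saturates at 1

def self_sampled_input(p_measured, p_predicted, loss):
    """I_t = alpha * predicted + (1 - alpha) * measured."""
    a = alpha(loss)
    return a * p_predicted + (1.0 - a) * p_measured

# Low loss: the input is dominated by the measured value; high loss: by the prediction.
print(self_sampled_input(72.0, 75.0, loss=1.0))   # close to 72
print(self_sampled_input(72.0, 75.0, loss=20.0))  # exactly 75
```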
The machine learning model in embodiments uses a trained machine learning model. In some embodiments, the machine learning model uses a recurrent neural network, which requires a trained RNN. By way of example and not limitation, Fig. 6 depicts an RNN unrolled to illustrate the training of the RNN, in accordance with some embodiments. Neuron 602 has an initial state S_0 604 and a weight matrix W 606. The step rate data R_0, temperature data T_0, and initial PPG data P_0 at time step 0 are input to state S_0, the weights W are applied, and neuron 602 outputs the prediction at the first time step, P*_1; the PPG (P_1) obtained at time step 1 is used to calculate ΔP*_1. Neuron 602 also outputs the updated state at time step 1 (S_1) 608, and state 608 (S_1) is input into neuron 610. The step rate data R_1, temperature data T_1, and PPG data P_1 at time step 1 are input to S_1, the weights 606 W are applied, and neuron 610 outputs the prediction at time step 2, P*_2; the PPG (P_2) obtained at time step 2 is used to calculate ΔP*_2. Neuron 610 also outputs the updated state at time step 2 (S_2) 612, and state 612 (S_2) is input into neuron 614. The step rate data R_2, temperature data T_2, and PPG data P_2 at time step 2 are input to S_2, the weights 606 W are applied, and neuron 614 outputs the predicted PPG at time step 3, P*_3; the PPG (P_3) obtained at time step 3 is used to calculate ΔP*_3. This process continues until the state 616 at time step n is reached and ΔP*_n is calculated. Similar to the training of convolutional neural networks, ΔP* is used in back propagation to adjust the weight matrix. However, unlike convolutional networks, the same weight matrix is applied in each iteration of the recurrent neural network; during training, the weight matrix is modified only in the back propagation. Many training examples with health indicator data and corresponding other factor data are iteratively input into the RNN 600 until the training converges. As previously discussed, LSTM RNNs may be used in some embodiments, where the state of such networks provides longer-term context for the analysis of the input data, which may provide better predictions when longer-term correlations are informative. As mentioned, and as those skilled in the art will readily appreciate, other machine learning models fall within the scope of the embodiments described herein and may include, for example, but not limited to, CNNs or other feed-forward networks.
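The following is a hedged PyTorch sketch of training a recurrent predictor in the spirit of Fig. 6: at each step the network sees (P_t, R_t, T_t) and is trained so that its output matches P_{t+1}. The GRU cell, layer sizes, plain MSE loss, and synthetic data are illustrative assumptions, not the exact training procedure described above.

```python
import torch
import torch.nn as nn

class HealthRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # point prediction of P at the next step

    def forward(self, x):                  # x: (batch, steps, 3) = (P, R, T)
        out, _ = self.rnn(x)               # the same weights are applied at every step
        return self.head(out).squeeze(-1)  # (batch, steps) predictions

model = HealthRNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical training example: one sequence of (P, R, T) triples.
seq = torch.randn(1, 120, 3)
inputs, targets = seq[:, :-1, :], seq[:, 1:, 0]   # predict P at t+1 from step t

for _ in range(5):                         # in practice, many examples and epochs
    optim.zero_grad()
    pred = model(inputs)
    loss = loss_fn(pred, targets)
    loss.backward()                        # weights are updated only in backprop
    optim.step()
```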
FIG. 7A depicts a system 700 for predicting whether a user's measured health indicator is within or outside of a threshold that is normal for a healthy person under similar other factors. The system 700 has a machine learning model 702 and a health detector 704. For example, but not limited to, embodiments of machine learning model 702 include a trained machine learning model, a trained RNN, a CNN, or another feed-forward network. The trained RNN, other network, or combination of networks may be trained with training examples from a healthy population from which health indicator data and (temporally) corresponding other factor data are collected. Alternatively, the trained RNN, other networks, or a combination of networks may be trained with training examples from a particular user, yielding a personalized trained machine learning model. Those skilled in the art will appreciate that training examples from different populations may generally be selected based on the use or design of the trained network and system. Those skilled in the art will also readily appreciate that the health indicator data in this and other embodiments may be one or more health indicators. The model may be trained, and the user's health predicted, using, for example, but not limited to, one or more of PPG data, heart rate data, blood pressure data, body temperature data, and blood oxygen concentration data, among others. The health detector 704 uses the predictions 708 from the machine learning model 702 and the input data 710 to determine whether the loss, or another metric determined by comparing the prediction output with the measurement data, exceeds a threshold considered normal and is therefore indicative of an unhealthy condition. The system 700 then outputs a notification of the user's health condition. The notification may take many forms, as discussed herein. The input generator 706 continuously obtains data from a user wearing or in contact with a sensor (not shown), where the data represents one or more health indicators of the user. The corresponding (in time) other factor data may be collected by another sensor or obtained by other means described herein or apparent to those skilled in the art.
The input generator 706 may also collect data used to determine/calculate other factor data. For example, but not limited to, the input generator may include a smart watch, wearable, or mobile device (e.g., an Apple® smart watch, smart phone, tablet computer, or laptop computer), a combination of a smart watch and a mobile device, a surgical implant device with the ability to send data to a mobile device or other portable computing device, or a device on a cart in a medical care facility. Preferably, the user input generator 706 has sensors (e.g., PPG sensors, electrode sensors, etc.) to measure data related to one or more health indicators. The smart watch, tablet computer, mobile phone, or laptop of some embodiments may carry the sensors, or the sensors may be placed remotely (surgically embedded, or in contact with the body remote from the mobile device or some separate device), where in all of these cases the mobile device communicates with the sensor to collect health indicator data. In some embodiments, system 700 may be provided on a mobile device alone, in combination with other mobile devices, or in combination with other computing systems via communications over a network over which the devices may communicate. For example, but not limited to, the system 700 may be a smart watch or wearable device having a machine learning model 702 and a health detector 704, where the machine learning model 702 and the health detector 704 are located on the device (e.g., in a memory of the watch or in firmware on the watch). The watch may have a user input generator 706 and communicate with other computing devices (e.g., mobile phone, tablet computer, laptop or desktop computer, etc.) via direct communication, wireless communication (e.g., WiFi, cellular, Bluetooth, etc.), or over a network (e.g., internet, intranet, extranet, etc.), or a combination thereof, where the trained machine learning model 702 and health detector 704 may be located on those other computing devices. Those skilled in the art will appreciate that any number of configurations of system 700 may be utilized without exceeding the scope of the embodiments described herein.
Referring to Fig. 7B, a smart watch 712 is depicted, according to an embodiment. Smart watch 712 includes a watch 714 that contains all of the circuitry and microprocessors (not shown) known to those skilled in the art. Watch 714 also includes a display 716, on which health indicator data 718 (in this example, heart rate data) for the user may be displayed. A predicted health indicator band 720 for a normal or healthy population may also be displayed on the display 716. In Fig. 7B, the user's measured heart rate data does not exceed the predicted healthy band, so in this particular example, no notification would be made. Watch 714 may also include a watchband 722 and a high fidelity sensor 724 (e.g., an ECG sensor). Alternatively, watchband 722 can be an expandable cuff for measuring blood pressure. A low fidelity sensor 726 (shown shaded) is provided on the back of the watch 714 to collect user health indicator data, such as PPG data, which may be used to derive, for example, heart rate data or other data, such as blood pressure. Alternatively, as will be understood by those skilled in the art, in some embodiments a fitness bracelet (such as a Fitbit or Polar band, etc.) may be used, where the fitness bracelet has similar processing capabilities and other factor measuring devices (e.g., a PPG sensor and an accelerometer).
Fig. 8 depicts an embodiment of a method 800 for continuously monitoring the health condition of a user. Step 802 receives user input data, which may include data for one or more health indicators (also referred to as the primary data sequence) and (temporally) corresponding data for other factors (also referred to as secondary data sequences). Step 804 inputs the user data into a trained machine learning model, which may include trained RNNs, CNNs, other feed-forward networks as described herein, or other neural networks known to those skilled in the art. In some embodiments, the health indicator input data may be one of, or a combination (e.g., a linear combination) of, predicted health indicator data and measured health indicator data, as described in some embodiments herein. Step 806 outputs data of one or more predicted health indicators at a time step, where the output may include, for example and without limitation, a single predicted value or a probability distribution over predicted values. Step 808 determines a loss based on the predicted health indicator, where the loss may be, for example and without limitation, a simple difference between the predicted health indicator and the measured health indicator, or some other suitably selected loss function (e.g., the negative logarithm of a probability distribution evaluated at the value of the measured health indicator). Step 810 determines whether the loss exceeds a threshold beyond which the data is considered abnormal or unhealthy, where the threshold may be, for example and without limitation, a simple number selected by the designer, or a more complex function of some parameter related to the prediction. If the loss is greater than the threshold, step 812 notifies the user that his or her health indicator is outside the range deemed normal or healthy. As described herein, the notification may take many forms. In some embodiments, this information may be made visible to the user. For example, but not limited to, information may be displayed on a user interface, such as a graph showing (i) the distribution of measured health indicator data (e.g., heart rate) and other factor data (e.g., step count) as a function of time, and (ii) the predicted health indicator data (e.g., predicted heart rate values) generated by the machine learning model. In this way, the user can visually compare the measured data points with the predicted data points and determine, by visual observation, whether, for example, his heart rate falls within the range expected by the machine learning model.
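By way of a hedged sketch only, the loop below mirrors the flow of steps 802-812 (measure, predict, compute loss, compare to a threshold, notify). The FakeSensor and FakeModel classes, the |ΔP*| loss, the threshold, and the loop length are hypothetical stand-ins for a real sensor and trained model.

```python
import random

class FakeSensor:
    """Stand-in for a PPG/heart-rate sensor plus other-factor inputs."""
    def read(self):
        return {"heart_rate": random.gauss(70, 3), "steps_per_s": random.random()}

class FakeModel:
    """Stand-in predictor: treats the previous heart rate as the 'normal' prediction."""
    def predict(self, sample, state=None):
        return sample["heart_rate"], state
    def loss(self, predicted, measured):
        return abs(predicted - measured)       # the |delta P*| loss from the text

def monitor(sensor, model, threshold=10.0, steps=20):
    prev = sensor.read()
    for _ in range(steps):                     # a real device would loop continuously
        predicted, _ = model.predict(prev)                        # steps 804/806
        current = sensor.read()
        loss = model.loss(predicted, current["heart_rate"])       # step 808
        if loss > threshold:                                      # step 810
            print(f"Notification: indicator outside healthy range (loss={loss:.1f})")  # step 812
        prev = current

monitor(FakeSensor(), FakeModel())   # may or may not print, depending on the random data
```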
Some embodiments described herein have referred to using a threshold to determine whether to notify a user. In one or more of these embodiments, the user may change the threshold to adjust or fine-tune the system or method to more closely match the user's knowledge of his or her own health. For example, if the physiological metric used is blood pressure and the user has high blood pressure, embodiments may frequently alert/notify the user that their health metric is outside of the normal or healthy range according to a model trained on healthy people. Thus, certain embodiments allow the user to increase the threshold so that the user is not notified as frequently that his or her health indicator data is beyond the range considered normal or healthy.
Some embodiments prefer to use raw data for the health indicator. If the raw data is processed to derive a particular measurement (e.g., heart rate), the derived data may be used according to an embodiment. In some cases, the provider of the health monitoring device has no control over the raw data; instead, the received data is processed data in the form of a calculated health indicator (e.g., heart rate or blood pressure). As will be appreciated by those skilled in the art, the form of the data used to train the machine learning model should match the form of the data collected from the user and input into the trained model, otherwise the predictions may prove erroneous. For example, the Apple Watch provides heart rate measurement data at unequal time steps, but does not provide raw PPG data. In this example, the user wears an Apple Watch, which outputs heart rate data at unequal time steps according to Apple's PPG processing algorithm, and the model is trained on this data. If Apple then changes the algorithm that provides the heart rate data, a model trained with data from the previous algorithm may become obsolete for use with data produced by the new algorithm. To address this potential problem, some embodiments resample irregularly spaced data (heart rate, blood pressure, or ECG data, etc.) onto a regularly spaced grid and resample the data collected to train the model according to the same regularly spaced grid. If Apple or another data provider changes its algorithm, the model need only be retrained with newly collected training examples, and need not be reconstructed to account for the algorithm change.
In a further embodiment, the trained machine learning model may be trained by user data, resulting in a personalized trained machine learning model. Such a trained personalized machine learning model may be used in place of or in combination with the machine learning model trained by a healthy population as described herein. If the personalized trained machine learning model itself is used, the user's data is input into the machine learning model, which will output a prediction of the individual health indicator in the next time step that is normal for the user, which is then compared with the actual/measured data from the next time step in a manner consistent with the embodiments described herein to determine if the user's health indicator differs from the health indicator predicted to be normal for the user by a certain threshold. Additionally, such personalized machine learning models may be used in combination with machine learning models trained with training examples from healthy people to generate predictions and related notifications regarding both health indicators predicted to be normal for the individual user and health indicators predicted to be normal for the healthy people.
Fig. 9A depicts a method 900 according to another embodiment, and Fig. 9B shows a hypothetical plot 902 of heart rate as a function of time for explanatory purposes (and not by way of limitation). Step 904 (Fig. 9A) receives user heart rate data (or other health indicator data) and, optionally, corresponding (in time) other factor data, and inputs the data into a personalized trained machine learning model. In some embodiments, the personalized trained model is trained on the user's individual health indicator data and, optionally, (temporally) corresponding other data, as described herein. Thus, in step 906, the personalized trained machine learning model predicts the heart rate data that is normal for the individual user, subject to other factors, and step 908 compares the user's health indicator data to the health indicator data predicted to be normal for that particular user to identify anomalies or abnormalities in the user's health indicator data. As discussed in this specification, some embodiments receive the health indicator data of the user from a wearable device on the user's body (e.g., an Apple Watch, smart watch, or other wearable device, etc.) or from a sensor on the user (e.g., a band, PPG sensor, etc.).
A loss may be defined to help determine whether to notify the user in step 908 that the user's measurement data is anomalous relative to the data predicted to be normal for that particular user. The loss is selected to model how close the prediction is to the actual or measured data. Those skilled in the art will appreciate that there are many ways to define the loss. As in other embodiments described herein, and equally applicable here, the absolute value |ΔP*| of the difference between the predicted and measured values may be used, for example. In some embodiments, the loss (L) may be L = -ln[β(P)], where β(P) is the probability distribution output by the model evaluated at the measured value. L is generally a measure of how close the predicted data is to the measured data. β(P) (a probability distribution in this example) is in the range of 0 to 1, where 1 means that the prediction data and the measurement data are the same. Thus, in some embodiments, a low loss indicates that the predicted data is likely to be the same as or close to the measured data. In some embodiments, a threshold value for L is set, e.g., L > 5, above which the particular user is notified that an abnormal condition exists based on the predictions for that user. Such notification may take a variety of forms, as described elsewhere herein. Further, as described elsewhere herein, other embodiments may take an average of the losses over a certain period of time and compare the average to a threshold. In some embodiments, the threshold itself may be a function of a statistical calculation of the predicted data or an average of the predicted data, as described in more detail elsewhere herein. The loss has been described in detail elsewhere herein and, for the sake of brevity, will not be discussed further here. Those skilled in the art will appreciate that the input and prediction data may be scalar values, or segments of data over a period of time. For example, but not limited to, a system designer may be interested in 5-minute segments of data: all health indicator data prior to time t and all other factor data through t + 5 minutes may be input, the health indicator data for the t to t + 5 minute segment predicted, and the loss determined between the measured health indicator data for that segment and the predicted health indicator data for the same segment.
Step 908 determines whether an anomaly exists. As discussed, this may be done by determining whether the loss exceeds a threshold. As previously mentioned, the threshold is set at the discretion of the designer and based on the purpose of the system being designed. In some embodiments, the threshold may be modified by the user, but preferably it is not modified in this embodiment. If there is no anomaly, the process repeats at step 904. If an anomaly exists, step 910 notifies or alerts the user to obtain a high fidelity measurement, such as, but not limited to, an ECG or blood pressure measurement. In step 912, the high fidelity data is analyzed by an algorithm, a health professional, or both, and described as normal or abnormal; if abnormal, a diagnosis may be assigned based on the obtained high fidelity measurement, such as atrial fibrillation (AFib), tachycardia, bradycardia, or high/low blood pressure. For clarity, it should be noted that the notification to record high fidelity data is equally applicable and possible in other embodiments, as well as in the specific embodiments described above that use a general model. In some embodiments, the high fidelity measurement may be obtained directly by the user using a mobile monitoring system (such as an ECG or blood pressure system, etc.), which in some embodiments may be associated with the wearable device. Optionally, the notifying step 910 causes the high fidelity measurement to be acquired automatically. For example, the wearable device may communicate with the sensor (either by hardwire or via wireless communication) and obtain ECG data, or it may communicate with a blood pressure cuff system (e.g., a wrist or arm cuff of the wearable device) to automatically obtain a blood pressure measurement, or it may communicate with an implanted device such as a pacemaker or ECG electrode. For example, AliveCor, Inc. provides a system for remotely obtaining an ECG that includes (but is not limited to) one or more sensors in contact with a user in two or more locations, wherein the sensors collect electrocardiographic data sent to a mobile computing device, either wired or wirelessly, and an app generates from the data an ECG strip that can be analyzed by an algorithm, a medical professional, or both. Alternatively, the sensor may be a blood pressure monitor, where blood pressure data is sent to the mobile computing device, either wired or wirelessly. The wearable device itself may be a blood pressure system with a cuff capable of measuring health indicator data and, optionally, with an ECG sensor similar to the one described above. The mobile computing device may be, for example and without limitation, a tablet computer (e.g., an iPad), a smartphone, a wearable device (e.g., an Apple Watch), or a device in a medical care facility (which may be mounted on a cart). In some embodiments, the mobile computing device may be a laptop computer or a computer that communicates with some other mobile device. Those skilled in the art will appreciate that a wearable device or smart watch is also considered a mobile computing device in terms of the capabilities provided in the context of the embodiments described herein. In the case of a wearable device, the sensor may be placed on a band of the wearable device, where the sensor may send data to the computing device/wearable device wirelessly or through a wire, or the band may also be a blood pressure monitoring cuff, or both, as previously described. In the case of a mobile phone, the sensor may be a pad attached to or remote from the phone, where the pad senses the cardiac electrical signal and communicates the data wirelessly or by hardwire to the wearable device or other mobile computing device. A more detailed description of some of these systems is provided in one or more of U.S. Patent Nos. 9,420,956, 9,572,499, 9,351,654, 9,247,911, 9,254,095, and 8,509,882 and one or more of U.S. Patent Application Publication Nos. 2015/0018660, 2015/0297134, and 2015/0320328, all of which are incorporated herein for all purposes. Step 912 analyzes the high fidelity data and provides a description or diagnosis, as previously described.
In step 914, a diagnosis or classification of the high fidelity measurement is received by the computing system, which in some embodiments may be the mobile or wearable computing system used to collect the user's heart rate data (or other health indicator data), and in step 916 the low fidelity health indicator data sequence (in this example, heart rate data) is labeled with the diagnosis. In step 918, a high fidelity machine learning model is trained using the labeled low fidelity data sequences of the user; optionally, other factor data sequences are also provided to train the model. In some embodiments, the trained high fidelity machine learning model is capable of receiving a sequence of measured low fidelity health indicator data (e.g., heart rate data or PPG data) and, optionally, other factor data, and giving a probability that the user is experiencing an event that is typically diagnosed or detected using high fidelity data, or predicting, diagnosing, or detecting when the user experiences such an event. The trained high fidelity machine learning model can do this because it has been trained using user health indicator data (and optionally other factor data) labeled with diagnoses derived from high fidelity data. Thus, the trained model is able to predict when a user is having an event (e.g., AFib, hypertension, etc.) associated with one or more labels based solely on a measured low fidelity health indicator input data sequence (e.g., heart rate or PPG data) and, optionally, other factor data. Those skilled in the art will appreciate that the training of the high fidelity model may be performed on the user's mobile device, remote from the user's mobile device, on both, or in a distributed network. For example, but not limited to, the user's health indicator data may be stored in a cloud system, and the data may be labeled in the cloud using the diagnosis from step 914. Those skilled in the art will readily appreciate that any number of methods and ways to store, label, and access this information may be used. Alternatively, a globally trained high fidelity model may be used, which would be trained with labeled training examples from a population experiencing the conditions that are typically diagnosed or detected with high fidelity measurements. These global training examples would provide low fidelity data sequences (e.g., heart rate) labeled with conditions diagnosed using high fidelity measurements (e.g., AFib identified by medical professionals or algorithms from ECGs).
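The snippet below is a hedged sketch of this labeling-and-training idea, assuming NumPy and scikit-learn: low fidelity windows (summarized here as heart rate and activity features) are tagged with the diagnosis from a temporally matching high fidelity reading, then used to fit a classifier. The feature layout, window length, event times, and the logistic-regression model are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_windows(window_starts, ecg_events, window_len=3600.0):
    """Label a window 1 if a diagnosed ECG event falls inside it, else 0."""
    labels = []
    for start in window_starts:
        inside = any(start <= t < start + window_len
                     for t, dx in ecg_events if dx == "afib")
        labels.append(1 if inside else 0)
    return np.array(labels)

# Hypothetical features: one row per window, e.g. [mean HR, HR variability, mean steps].
features = np.array([[72, 3.1, 0.8], [110, 14.2, 0.1], [68, 2.5, 1.1], [104, 12.9, 0.2]])
window_starts = np.array([0.0, 3600.0, 7200.0, 10800.0])
ecg_events = [(4000.0, "afib"), (11000.0, "afib")]    # times of diagnosed ECG strips

y = label_windows(window_starts, ecg_events)
clf = LogisticRegression().fit(features, y)
print(clf.predict_proba(features)[:, 1])              # per-window AF probability
```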
Referring now to Fig. 9B, plot 902 shows a schematic of heart rate plotted as a function of time. Abnormalities 920 relative to the user's normal heart rate data occur at times t_1, t_2, t_3, t_4, t_5, t_6, t_7, and t_8. As described above, normal means that the predicted data for that particular user is within a threshold of the measured data, and anomalies lie outside that threshold. Upon an abnormality relative to normal, some embodiments prompt the user to obtain a more definitive or high fidelity reading, such as, but not limited to, the ECG readings identified as ECG_1, ECG_2, ECG_3, ECG_4, ECG_5, ECG_6, ECG_7, and ECG_8. As described above, the high fidelity reading may be obtained automatically, the user may obtain the high fidelity reading, or the high fidelity reading may be something other than an ECG, such as blood pressure. The high fidelity readings are analyzed by an algorithm, a health professional, or both, to identify the high fidelity data as normal/abnormal, and further to identify/diagnose the abnormality (e.g., but not limited to, AFib). This information is used to label the health indicator data (e.g., heart rate or PPG data) at the anomalies 920 in the user's serialized data.
The difference between high fidelity and low fidelity data is that high fidelity data or measurements can typically be used to make a judgment, test, or diagnosis, whereas low fidelity data may not readily support such a judgment, test, or diagnosis. For example, an ECG scan may be used to identify, detect, or diagnose an arrhythmia, while heart rate or PPG data generally does not provide this capability. As will be appreciated by those skilled in the art, the descriptions herein with respect to machine learning algorithms (e.g., Bayesian, Markov, Gaussian process, clustering, generative model, kernel, and neural network algorithms) apply equally to all embodiments described herein.
In some cases, despite these problems, the user remains asymptomatic, and even when symptoms are present, it may be impractical to obtain the high fidelity measurements necessary to make a diagnosis or perform a test. For example, but not limited to, arrhythmias, particularly AF, may produce no symptoms, and even when symptoms do exist it is very difficult to record an ECG at that moment and very difficult to continuously monitor the user without expensive, bulky, and sometimes invasive monitoring devices. As discussed elsewhere herein, it is important to know when a user experiences AF, because AF can be a cause of stroke, among other serious conditions. Similarly, and as discussed elsewhere, the AF burden can be of similar importance. Some embodiments allow for continuous monitoring for arrhythmias (e.g., AF) or other serious conditions using only continuous monitoring of low fidelity health indicator data, such as heart rate or PPG, and optionally other factor data.
Fig. 10 depicts a method 1000 in accordance with some embodiments of the health monitoring systems and methods. Step 1002 receives measured or actual user low fidelity health indicator data (e.g., heart rate or PPG data from a sensor on the wearable device) and, optionally, corresponding (in time) other factor data that may affect the health indicator data as described herein. As discussed elsewhere herein, the low fidelity health indicator data may be measured by a mobile computing device, such as a smart watch, other wearable device, or a tablet computer. In step 1004, the user's low fidelity health indicator data (and optionally other factor data) is input into a trained high fidelity machine learning model, which in step 1006 outputs a predictive identification or diagnosis for the user based on the measured low fidelity health indicator data (and optionally the (temporally) corresponding other factor data). Step 1008 asks whether the identification or diagnosis is normal, and if so, the process restarts. If the identification or diagnosis is not normal, step 1010 notifies the user of the problem or detection. Alternatively, a system, method, or platform may be provided to notify any combination of the user, family, friends, medical care professionals, or emergency (911) services, among others. Which of these persons is notified may depend on the identification, detection, or diagnosis. In the case of an identification, detection, or diagnosis of a life-threatening situation, certain persons may be contacted or notified who would not be notified if the diagnosis were not life-threatening. Additionally, in some embodiments, the measured health indicator data sequence is input into the trained high fidelity machine learning model and the amount of time that the user is experiencing an abnormal event is calculated (e.g., the difference between the predicted start and stop of the abnormal event), thereby allowing a better understanding of the user's abnormal-event burden. In particular, understanding AF burden can be very important in preventing stroke and other serious conditions. Thus, some embodiments allow for continuous monitoring of anomalous events with mobile computing devices, wearable computing devices, or other portable devices capable of acquiring only low fidelity health indicator data and, optionally, other factor data.
FIG. 11 depicts example data 1100 that is analyzed based on low fidelity data to generate high fidelity output predictions or detections, according to some embodiments described herein. Although described with reference to the detection of atrial fibrillation, other predictions of high fidelity diagnoses based on low fidelity measurements may generate similar data. The first graph 1110 shows heart rate calculations for a user over time. The heart rate may be determined based on PPG data or other heart rate sensors. The second graph 1120 shows activity data of the user during the same time period. For example, activity data may be determined based on step count or other measures of user movement. The third graph 1130 shows the classifier output from the machine learning model and the threshold level at which notifications are generated. The machine learning model may generate predictions based on the input of the low fidelity measurements. For example, the data in the first graph 1110 and the second graph 1120 can be analyzed by a machine learning system as further described above. The result of the machine learning system analysis may be provided as the atrial fibrillation probability shown in graph 1130. When the probability exceeds a threshold (shown in this case as a confidence above 0.6), the health monitoring system may trigger a notification or other alert for the user, a physician, or another person associated with the user.
In some embodiments, the data in graphs 1110 and 1120 may be provided to the machine learning system as continuous measurements. For example, heart rate and activity level may be measured every 5 seconds to provide accurate measurements. A time segment containing the multiple measurements may then be input into the machine learning model. For example, the previous hour of data may be used as input to the machine learning model. In some embodiments, a shorter or longer period of time may be used instead of one hour. As shown in Fig. 11, the output graph 1130 provides an indication of the time periods during which the user is experiencing an abnormal health event. For example, the health monitoring system may use the time periods in which the prediction is above a certain confidence level to determine atrial fibrillation. This value may then be used to determine the atrial fibrillation burden of the user during the measurement period.
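As a hedged sketch of that last step, the snippet below turns a classifier probability trace like the one in graph 1130 into a burden estimate: the fraction of the monitoring period during which the predicted probability stays above the notification threshold. The probability values, 5-minute sample spacing, and 0.6 threshold are illustrative assumptions.

```python
import numpy as np

def af_burden(probabilities, sample_interval_s, threshold=0.6):
    """Return (seconds above threshold, fraction of the monitoring period)."""
    above = np.asarray(probabilities) > threshold
    seconds = above.sum() * sample_interval_s
    return seconds, seconds / (len(probabilities) * sample_interval_s)

probs = np.array([0.1, 0.2, 0.7, 0.9, 0.8, 0.4, 0.3, 0.65, 0.7, 0.2])
seconds, fraction = af_burden(probs, sample_interval_s=300)   # one value per 5 minutes
print(seconds, round(fraction, 2))   # 1500 seconds above threshold, 0.5 of the period
```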
In some embodiments, the machine learning model used to generate the prediction output in graph 1130 may be trained based on labeled user data. For example, labeled user data may be provided based on high fidelity data (such as ECG readings, etc.) acquired over a period of time during which low fidelity data (e.g., PPG, heart rate, etc.) and other data (e.g., activity level or step count, etc.) are also available. In some embodiments, the machine learning model is designed to determine whether atrial fibrillation may have been present during a previous time period. For example, the machine learning model may take one hour of low fidelity data as input and provide the likelihood that an event occurred. Thus, the training data may include multiple hours of recorded data from a population of individuals. Where a condition is diagnosed based on the high fidelity data, the data may be labeled with the times of the health events. Thus, if there is a health-event label time based on high fidelity data, the labels indicate that low fidelity data from any one-hour window that includes the event, when input into the untrained machine learning model, should yield a prediction of a health event. The untrained machine learning model may then be updated based on comparing its predictions to the labels. After repeating multiple iterations and determining that the machine learning model has converged, the health monitoring system may use the machine learning model to monitor the user's atrial fibrillation based on low fidelity data. In various embodiments, low fidelity data may be used to detect conditions other than atrial fibrillation.
Fig. 12 shows a schematic representation of a machine in the example form of a computer system 1200, where within the computer system 1200 a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a Personal Computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, hub, access point, network access control device, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one embodiment, computer system 1200 may represent a server, mobile computing device, wearable device, or the like configured to perform health monitoring as described herein.
The exemplary computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., Read Only Memory (ROM), flash memory, Dynamic Random Access Memory (DRAM)), a static memory 1206 (e.g., flash memory, Static Random Access Memory (SRAM), etc.), and a data storage device 1218, which communicate with each other via a bus 1230. Any of the signals provided over the various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each bus may optionally be one or more single signal lines, and each single signal line may optionally be a bus.
The processing device 1202 represents one or more general-purpose processing devices such as a microprocessor or central processing unit or the like. More specifically, the processing device may be a Complex Instruction Set Computing (CISC) microprocessor, Reduced Instruction Set Computing (RISC) microprocessor, Very Long Instruction Word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device 1202 may also be one or more special-purpose processing devices such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), or a network processor. Processing device 1202 is configured to execute processing logic 1226, which processing logic 1226 may be one example of a health monitor 1250 and related system for performing the operations and steps discussed herein.
The data storage device 1218 may include a machine-readable storage medium 1228 on which is stored one or more sets of instructions 1222 (e.g., software) embodying any one or more of the methodologies of functionality described herein, including instructions to cause the processing device 1202 to perform the health monitor 1250 and associated processes described herein. During execution of instructions 1222 by computer system 1200, instructions 1222 may also reside, completely or at least partially, within main memory 1204 or within processing device 1202; the main memory 1204 and the processing device 1202 also constitute machine-readable storage media. The instructions 1222 may also be transmitted or received over a network 1220 via the network interface device 1208.
The machine-readable storage medium 1228 may also be used to store instructions to perform methods for monitoring the health of a user, as described herein. While the machine-readable storage medium 1228 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) that store the one or more sets of instructions. A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage media (e.g., floppy diskettes), optical storage media (e.g., CD-ROMs), magneto-optical storage media, read-only memories (ROMs), Random Access Memories (RAMs), erasable programmable memories (e.g., EPROMs and EEPROMs), flash memory, or other type of media suitable for storing electronic instructions.
The above description sets forth numerous specific details, such as examples of specific systems, components, and methods, etc., in order to provide a thorough understanding of various embodiments of the present invention. It will be apparent, however, to one skilled in the art that at least some embodiments of the present invention may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present invention. Accordingly, the specific details set forth are merely exemplary. Specific embodiments may vary from these exemplary details and still be contemplated to be within the scope of the present invention.
In addition, some embodiments may be practiced in distributed computing environments where machine-readable media are stored on or executed by more than one computer system. Additionally, information transferred between computer systems may be pulled or pushed across the communication medium connecting the computer systems.
Embodiments of the claimed subject matter include, but are not limited to, various operations described herein. These operations may be performed by hardware components, software, firmware, or a combination thereof.
Although the operations of the methods herein are shown and described in a particular order, the order of the operations of the methods may be changed such that certain operations may be performed in the reverse order, or such that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of different operations may be performed in an intermittent or alternating manner.
The above description of illustrated implementations of the invention, including what is described in the abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. The word "example" or "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise, or clear from context, "X comprises A or B" is intended to mean any of the natural inclusive permutations. That is, if X includes A, X includes B, or X includes both A and B, then "X includes A or B" is satisfied in any of the foregoing cases. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Furthermore, use of the terms "embodiment" or "one embodiment" or "an implementation" or "one implementation" throughout this specification is not intended to imply the same embodiment or implementation unless described as such. Furthermore, the terms "first," "second," "third," "fourth," and the like as used herein are labels to distinguish between different elements and do not necessarily have an ordinal meaning according to their numerical designation.
It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims. The claims may cover embodiments in hardware, software, or a combination thereof.
In addition to the embodiments described above, the present invention also includes, but is not limited to, the following example implementations.
Some example implementations provide a method of monitoring the cardiac health of a user. The method may include: receiving measured health indicator data and other factor data for a user at a first time; inputting, by a processing device, the health indicator data and the other factor data into a machine learning model, wherein the machine learning model generates predicted health indicator data at a next time step; receiving data of the user at the next time step; determining, by the processing device, a loss at the next time step, wherein the loss is a measure between the predicted health indicator data at the next time step and the measured health indicator data of the user at the next time step; determining that the loss exceeds a threshold; and outputting a notification to the user in response to determining that the loss exceeds the threshold.
In some example implementations of the method of any example implementation, the trained machine learning model is a trained generative neural network. In some example implementations of the method of any example implementation, the trained machine learning model is a feed-forward network. In some example implementations of the method of any example implementation, the trained machine learning model is an RNN. In some example implementations of the method of any example implementation, the trained machine learning model is a CNN.
In some example implementations of the method of any example implementation, the trained machine learning model is trained by training examples from one or more of: healthy people, people with heart disease, and users.
In some example implementations of the method of any example implementation, the loss at the next time step is an absolute value of a difference between the predicted health indicator data at the next time step and the measured health indicator of the user at the next time step.
In some example implementations of the method of any example implementation, the predicted health indicator data is a probability distribution, and wherein the predicted health indicator data at the next time step is sampled according to the probability distribution.
In some example implementations of the method of any example implementation, the predicted health indicator data at the next time step is sampled according to a sampling technique selected from the group consisting of: selecting the predicted health indicator data of maximum probability; and randomly sampling the predicted health indicator data according to the probability distribution.
In some example implementations of the method of any example implementation, the predicted health indicator data is a probability distribution (β), and wherein the loss is determined based on a negative logarithm of the probability distribution at a next time step evaluated with a measured health indicator of the user at the next time step.
In some example implementations of the method of any example implementation, the method further comprises: averaging the predicted health indicator data over the time period of the time step; averaging the measured health indicator data of the user over the time period of the time step; and determining a loss based on an absolute value of a difference between the predicted health indicator data and the measured health indicator data.
In some example implementations of the method of any example implementation, the measured health indicator data comprises PPG data. In some example implementations of the method of any example implementation, the measured health indicator data includes heart rate data.
In some example implementations of the method of any example implementation, the method further comprises resampling the irregularly-spaced heart rate data onto a regularly-spaced grid, wherein the heart rate data is sampled according to the regularly-spaced grid.
In some example implementations of the method of any example implementation, the measured health indicator data is one or more health indicator data selected from the group consisting of: PPG data, heart rate data, pulse oximeter data, ECG data, and blood pressure data.
Some example implementations provide an apparatus comprising a mobile computing device, the mobile computing device comprising: a processing device; a display; a health indicator data sensor; and a memory having instructions stored thereon that, when executed by the processing device, cause the processing device to: receive measured health indicator data from the health indicator data sensor at a first time and other factor data at the first time; input the health indicator data and other factor data into a trained machine learning model, wherein the trained machine learning model generates predicted health indicator data at a next time step; receive measured health indicator data and other factor data at the next time step; determine a loss at the next time step, wherein the loss is a measure between the predicted health indicator data at the next time step and the measured health indicator data at the next time step; and output a notification if the loss at the next time step exceeds a threshold.
In some example implementations of any example apparatus, the trained machine learning model includes a trained generative neural network. In some example implementations of any example apparatus, the trained machine learning model includes a feed-forward network. In some example implementations of any example apparatus, the trained machine learning model is an RNN. In some example implementations of any example apparatus, the trained machine learning model is a CNN.
In some example implementations of any example apparatus, the trained machine learning model is trained by a training example from one of the group consisting of: healthy people, people with heart disease, and users.
In some example implementations of any example apparatus, the predicted health indicator data is a point prediction of a health indicator of the user at a next time step, and wherein the loss is an absolute value of a difference between the predicted health indicator data at the next time step and the measured health indicator data at the next time step.
In some example implementations of any example apparatus, the predicted health indicator data is sampled according to a probability distribution generated by a machine learning model.
In some example implementations of any of the example apparatus, the predicted health indicator data is sampled according to a sampling technique selected from the group consisting of: a maximum probability; and randomly sampling according to the probability distribution.
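The two listed sampling techniques could look like the following sketch, assuming for illustration that the model's probability distribution is discretized over a handful of candidate heart-rate values:

    import numpy as np

    def sample_prediction(candidate_values, probabilities, technique="max"):
        # Select a predicted health indicator value from the model's distribution.
        if technique == "max":
            return float(candidate_values[int(np.argmax(probabilities))])
        return float(np.random.choice(candidate_values, p=probabilities))  # random draw

    values = np.array([60.0, 70.0, 80.0, 90.0])   # candidate heart rates (bpm)
    probabilities = np.array([0.1, 0.6, 0.2, 0.1])
    most_likely = sample_prediction(values, probabilities, "max")      # 70.0
    random_draw = sample_prediction(values, probabilities, "random")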
In some example implementations of any example apparatus, the predicted health indicator data is a probability distribution (β), and wherein the loss is determined based on a negative logarithm of β evaluated with the measured health indicator of the user at a next time step.
In some example implementations of any of the example apparatuses, the processing device is further to define a function α ranging from 0 to 1, where I_t comprises a linear combination, as a function of α, of the user's measured health indicator data and the predicted health indicator data.
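Read this way, the input at a given time step would be a simple convex combination of measured and predicted values; a minimal sketch under that assumption (α = 1 uses only measured data, α = 0 only predicted data):

    def mix_input(alpha, measured, predicted):
        # I_t = alpha * measured + (1 - alpha) * predicted, with 0 <= alpha <= 1.
        if not 0.0 <= alpha <= 1.0:
            raise ValueError("alpha must be between 0 and 1")
        return alpha * measured + (1.0 - alpha) * predicted

    i_t = mix_input(0.75, measured=72.0, predicted=76.0)   # 73.0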
In some example implementations of any of the example apparatus, the processing device is further to perform self-sampling of the probability distribution.
In some example implementations of any of the example apparatuses, the processing device is further to: averaging the predicted health indicator data sampled according to the probability distribution within the time period of the time step using an averaging method; averaging the measured health indicator data of the user over the time period of the time step using an averaging method; and defining the loss as the absolute value of the difference between the averaged predicted health indicator data and the averaged measured health indicator data.
In some example implementations of any of the example apparatuses, the averaging method includes one or more methods selected from the group consisting of: calculating the mean, calculating the arithmetic mean, calculating the median, and calculating the mode.
In some example implementations of any example device, the measured health indicator data includes PPG data from a PPG signal. In some example implementations of any example apparatus, the measured health indicator data is heart rate data. In some example implementations of any example apparatus, the heart rate data is collected by resampling irregularly spaced heart rate data onto a regularly spaced grid and sampling the heart rate data according to the regularly spaced grid. In some example implementations of any example apparatus, the measured health indicator data is one or more health indicator data selected from the group consisting of: PPG data, heart rate data, pulse oximeter data, ECG data, and blood pressure data.
In some example implementations of any of the example apparatuses, the mobile device is selected from the group consisting of: a smart watch; a fitness bracelet; a tablet computer; and a laptop computer.
In some example implementations of any of the example apparatuses, the mobile device further includes a user high fidelity sensor, wherein the notification requests that the user obtain high fidelity measurement data, and wherein the processing device is further to: receiving an analysis of the high fidelity measurement data; tagging measured health indicator data of the user with the analysis to generate tagged user health indicator data; and using the labeled user health indicator data as a training example to train the trained personalized high fidelity machine learning model.
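By way of illustration, the labeling step might be organized as below; the label values and the idea that the high fidelity measurement is an ECG reading are assumptions for the example, the disclosure only requiring that an analysis of the high fidelity measurement data tags the user's measured health indicator data:

    def build_training_example(low_fidelity_window, high_fidelity_analysis):
        # Pair a window of low-fidelity health indicator data (e.g., heart rate)
        # with the label derived from the high fidelity measurement analysis.
        return {"features": list(low_fidelity_window),
                "label": high_fidelity_analysis}   # e.g., "normal" or "afib"

    # Example: a heart-rate window tagged by the analysis of an ECG recording.
    example = build_training_example([72.0, 75.0, 74.0, 110.0, 112.0], "afib")
    training_examples = [example]   # accumulated examples personalize the model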
In some example implementations of any example apparatus, the trained machine learning model is stored in a memory. In some example implementations of any example apparatus, the trained machine learning model is stored in a remote storage, wherein the remote storage is separate from the computing device, and wherein the mobile computing device is a wearable computing device. In some example implementations of any example apparatus, the trained personalized high fidelity machine learning model is stored in a memory. In some example implementations of any example apparatus, the trained personalized high fidelity machine learning model is stored in a remote memory, wherein the remote memory is separate from the computing device, and wherein the mobile computing device is a wearable computing device.
In some example implementations of any of the example apparatuses, the processing device is further to predict that the user is experiencing atrial fibrillation and to determine an atrial fibrillation burden of the user.
Some example implementations provide a method of monitoring cardiac health of a user. The method can comprise: receiving measured low-fidelity user health indicator data and other factor data at a first time; inputting data comprising the user health indicator data and the other factor data at the first time into a trained personalized high fidelity machine learning model, wherein the trained personalized high fidelity machine learning model predicts whether the user's health indicator data is normal or abnormal; and, in response to the prediction being abnormal, sending a notification that the user's health is abnormal.
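An illustrative sketch of that flow, with a simple rule standing in for the trained personalized high fidelity machine learning model (the rule, names, and message text are placeholders, not the disclosed implementation):

    def monitor_with_personalized_model(classify_fn, health_data, other_factors,
                                        notify_fn):
        # Classify the low-fidelity data as normal or abnormal; notify on abnormal.
        prediction = classify_fn(health_data, other_factors)
        if prediction == "abnormal":
            notify_fn("Health indicator data appears abnormal; consider taking an ECG.")
        return prediction

    # Placeholder classifier: flag a resting heart rate above 100 bpm as abnormal.
    def classify(heart_rate_bpm, activity):
        return "abnormal" if activity == "resting" and heart_rate_bpm > 100 else "normal"

    result = monitor_with_personalized_model(classify, 118.0, "resting", print)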
In some example implementations of the method of any example implementation, the trained personalized high fidelity machine learning model is trained with measured low-fidelity user health indicator data that is labeled based on an analysis of high fidelity measurement data.
In some example implementations of the method of any example implementation, the analysis of the high fidelity measurement data is based on user-specific high fidelity measurement data.
In some example implementations of the method of any example implementation, the personalized high fidelity machine learning model outputs a probability distribution, wherein the predictions are sampled according to the probability distribution.
In some example implementations of the method of any example implementation, the prediction is sampled according to a sampling technique selected from the group consisting of: prediction of the maximum probability; and sampling the prediction according to the probability distribution.
In some example implementations of the method of any example implementation, the average prediction is determined by averaging predictions over a time period of the time step using an averaging method, and wherein the average prediction is used to determine whether the health indicator data of the user is normal or abnormal.
In some example implementations of the method of any example implementation, the averaging method includes one or more methods selected from the group consisting of: calculating the mean, calculating the arithmetic mean, calculating the median, and calculating the mode.
In some example implementations of the method of any example implementation, the trained personalized high fidelity machine learning model is stored in a memory of the user wearable device. In some example implementations of the method of any example implementation, the measured health indicator data and the other factor data are time slices of data over a period of time.
In some example implementations of the method of any example implementation, the trained personalized high fidelity machine learning model is stored in a remote memory, wherein the remote memory is located remotely from the user wearable computing device.
In some example implementations, a health monitoring device may include a mobile computing apparatus including: a microprocessor; a display; a user health indicator data sensor; and a memory having stored thereon instructions that, when executed by the microprocessor, cause the microprocessor to: receiving measured low-fidelity health indicator data and other factor data at a first time, wherein the measured health indicator data is obtained by the user health indicator data sensor; inputting data comprising the health indicator data and other factor data at the first time into a trained high fidelity machine learning model, wherein the trained high fidelity machine learning model predicts whether the health indicator data of the user is normal or abnormal; and, in response to the prediction being abnormal, sending a notification to at least the user that the user's health is abnormal.
In some example implementations of the health monitoring device of any example implementation, the trained high fidelity machine learning model is a trained high fidelity generative neural network. In some example implementations of the health monitoring device of any example implementation, the trained high fidelity machine learning model is a trained recurrent neural network (RNN). In some example implementations of the health monitoring device of any example implementation, the trained high fidelity machine learning model is a trained feedforward neural network. In some example implementations of the health monitoring device of any example implementation, the trained high fidelity machine learning model is a convolutional neural network (CNN).
In some example implementations of the health monitoring device of any example implementation, the trained high fidelity machine learning model is trained with measured user health indicator data labeled based on user-specific high fidelity measurement data.
In some example implementations of the health monitoring device of any example implementation, the trained high fidelity machine learning model is trained with low fidelity health indicator data labeled based on high fidelity measurement data, wherein the low fidelity health indicator data and the high fidelity measurement data are from a population of subjects.
In some example implementations of the health monitoring device of any example implementation, the high fidelity machine learning model outputs a probability distribution from which the predictions are sampled.
In some example implementations of the health monitoring device of any example implementation, the prediction is sampled according to a sampling technique selected from the group consisting of: prediction of the maximum probability; and randomly sampling the prediction according to the probability distribution.
In some example implementations of the health monitoring device of any example implementation, the average prediction is determined by averaging the predictions over a time period of the time step using an averaging method, and wherein the average prediction is used to determine whether the health indicator data of the user is normal or abnormal.
In some example implementations of the health monitoring device of any example implementation, the measured health indicator data and the other factor data are time slices of data over a period of time.
In some example implementations of the health monitoring device of any example implementation, the averaging method includes one or more methods selected from the group consisting of: calculating the mean, calculating the arithmetic mean, calculating the median, and calculating the mode.
In some example implementations of the health monitoring device of any example implementation, the trained personalized high fidelity machine learning model is stored in the memory. In some example implementations of the health monitoring device of any example implementation, the trained personalized high fidelity machine learning model is stored in a remote memory, wherein the remote memory is located at a location remote from the wearable computing device. In some example implementations of the health monitoring apparatus of any example implementation, the mobile device is selected from the group consisting of: a smart watch; a fitness bracelet; a tablet computer; and a laptop computer.

Claims (15)

1. A method of monitoring heart health of a user, the method comprising:
receiving measured health indicator data and other factor data for a user at a first time;
inputting, by a processing device, health indicator data and other factor data into a machine learning model, wherein the machine learning model generates predicted health indicator data at a next time step;
receiving user data at the next time step;
determining, by the processing device, a loss at the next time step, wherein the loss is a measure between the predicted health indicator data at the next time step and the measured health indicator data of the user at the next time step;
determining that the loss exceeds a threshold; and
outputting a notification to a user in response to determining that the loss exceeds a threshold.
2. The method of claim 1, wherein the trained machine learning model comprises a trained generative neural network, a feed-forward network, a recurrent neural network, or a convolutional neural network.
3. The method of claim 1, wherein the trained machine learning model is trained on training examples from one or more of: healthy people, people with heart disease, and users.
4. The method of claim 1, wherein the loss at the next time step is an absolute value of a difference between the predicted health indicator data at the next time step and the measured health indicator of the user at the next time step.
5. The method of claim 1, wherein the predicted health indicator data is a probability distribution, and wherein the predicted health indicator data at the next time step is sampled according to the probability distribution.
6. The method of claim 1, wherein the predicted health indicator data is a probability distribution β, and wherein the loss is determined based on a negative logarithm of the probability distribution at the next time step evaluated with the user's measured health indicator at the next time step.
7. The method of claim 6, further comprising self-sampling of the probability distribution.
8. The method of claim 1, further comprising:
averaging the predicted health indicator data over the time period of the previous time step;
averaging the measured health indicator data of the user over the time period of the previous time step; and
determining the loss based on an absolute value of a difference between the predicted health indicator data and the measured health indicator data.
9. The method of claim 1, wherein the measured health indicator data comprises photoplethysmography (PPG) data, heart rate data, or heart rate variability data.
10. An apparatus, comprising:
a processing device;
a health indicator data sensor operatively coupled to the processing device; and
a memory having instructions stored thereon that, when executed by the processing device, cause the processing device to:
receiving measured health indicator data from the health indicator data sensor at a first time and other factor data at the first time;
inputting the health indicator data and other factor data into a trained machine learning model, and wherein the trained machine learning model generates predicted health indicator data at a next time step;
receiving measured health indicator data and other factor data at the next time step;
determining a loss at the next time step, wherein the loss is a measure between the predicted health indicator data at the next time step and the measured health indicator data at the next time step; and
outputting a notification if the loss at the next time step exceeds a threshold.
11. The apparatus of claim 10, wherein the trained machine learning model comprises a trained generative neural network, a feed-forward network, a recurrent neural network, or a convolutional neural network.
12. The apparatus of claim 10, wherein the trained machine learning model is trained on a training example from one of the group consisting of: healthy people, people with heart disease, and users.
13. The apparatus of claim 10, wherein the predicted health indicator data is sampled according to a probability distribution generated by the machine learning model.
14. The apparatus of claim 10, wherein the predicted health indicator data is a probability distribution (β), and wherein the loss is determined based on a negative logarithm of β evaluated with the user's measured health indicator at the next time step.
15. The apparatus of claim 10, wherein the processing device is further to:
predicting that the user is experiencing atrial fibrillation; and
determining an atrial fibrillation burden of the user.
CN201880065407.9A 2017-10-06 2018-10-05 Continuously monitoring user health with a mobile device Pending CN111194468A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201762569309P 2017-10-06 2017-10-06
US62/569,309 2017-10-06
US201762589477P 2017-11-21 2017-11-21
US62/589,477 2017-11-21
PCT/US2018/054714 WO2019071201A1 (en) 2017-10-06 2018-10-05 Continuous monitoring of a user's health with a mobile device

Publications (1)

Publication Number Publication Date
CN111194468A true CN111194468A (en) 2020-05-22

Family

ID=64270939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880065407.9A Pending CN111194468A (en) 2017-10-06 2018-10-05 Continuously monitoring user health with a mobile device

Country Status (4)

Country Link
EP (1) EP3692546A1 (en)
JP (1) JP2020536623A (en)
CN (1) CN111194468A (en)
WO (1) WO2019071201A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022206615A1 (en) * 2021-03-30 2022-10-06 华为技术有限公司 Electronic device for giving atrial fibrillation early warning on basis of different atrial fibrillation stages, and system

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016070128A1 (en) 2014-10-31 2016-05-06 Irhythm Technologies, Inc. Wireless physiological monitoring device and systems
US11653871B2 (en) 2018-03-07 2023-05-23 Technion Research & Development Foundation Limited Atrial fibrillation prediction using heart rate variability
CN110415821B (en) * 2019-07-02 2023-02-24 山东大学 Health knowledge recommendation system based on human physiological data and operation method thereof
CN110507313B (en) * 2019-08-30 2023-06-06 武汉中旗生物医疗电子有限公司 Intracavitary electrocardiosignal reconstruction method and device
CN111276247B (en) * 2020-01-16 2023-12-19 超越科技股份有限公司 Flight parameter data health assessment method and equipment based on big data processing
US11083371B1 (en) * 2020-02-12 2021-08-10 Irhythm Technologies, Inc. Methods and systems for processing data via an executable file on a monitor to reduce the dimensionality of the data and encrypting the data being transmitted over the wireless network
CN111540471B (en) * 2020-05-12 2024-01-26 西安交通大学医学院第一附属医院 Health state tracking and early warning method and system based on user health data
CN111588384B (en) * 2020-05-27 2023-08-22 京东方科技集团股份有限公司 Method, device and equipment for obtaining blood glucose detection result
US11350864B2 (en) 2020-08-06 2022-06-07 Irhythm Technologies, Inc. Adhesive physiological monitoring device
US11337632B2 (en) 2020-08-06 2022-05-24 Irhythm Technologies, Inc. Electrical components for physiological monitoring device
CN112716504B (en) * 2020-12-22 2023-12-15 沈阳东软智能医疗科技研究院有限公司 Electrocardiogram data processing method and device, storage medium and electronic equipment
WO2022231000A1 (en) * 2021-04-30 2022-11-03 富士フイルム株式会社 Information processing device, information processing method, and information processing program
WO2022231001A1 (en) * 2021-04-30 2022-11-03 富士フイルム株式会社 Information processing device, information processing method, and information processing program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102908130A (en) * 2005-11-29 2013-02-06 风险获利有限公司 Residual-based monitoring of human health
CN106446533A (en) * 2016-09-12 2017-02-22 北京和信康科技有限公司 Processing system of human body health data and method thereof
JP2017080154A (en) * 2015-10-29 2017-05-18 日本電信電話株式会社 Sleep stage estimation device, method, and program
WO2017117798A1 (en) * 2016-01-08 2017-07-13 Heartisans Limited Wearable device for assessing the likelihood of the onset of cardiac arrest and method thereof
CN107408144A (en) * 2014-11-14 2017-11-28 Zoll医疗公司 Medical precursor event estimation

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8235912B2 (en) * 2009-03-18 2012-08-07 Acarix A/S Segmenting a cardiac acoustic signal
US8509882B2 (en) 2010-06-08 2013-08-13 Alivecor, Inc. Heart monitoring system usable with a smartphone or computer
US9351654B2 (en) 2010-06-08 2016-05-31 Alivecor, Inc. Two electrode apparatus and methods for twelve lead ECG
WO2014074913A1 (en) 2012-11-08 2014-05-15 Alivecor, Inc. Electrocardiogram signal detection
US9247911B2 (en) 2013-07-10 2016-02-02 Alivecor, Inc. Devices and methods for real-time denoising of electrocardiograms
US20150018660A1 (en) 2013-07-11 2015-01-15 Alivecor, Inc. Apparatus for Coupling to Computing Devices and Measuring Physiological Data
US9420956B2 (en) 2013-12-12 2016-08-23 Alivecor, Inc. Methods and systems for arrhythmia tracking and scoring
JP2017513626A (en) * 2014-04-21 2017-06-01 アライヴコア・インコーポレーテッド Method and system for cardiac monitoring using mobile devices and accessories
WO2015171764A1 (en) 2014-05-06 2015-11-12 Alivecor, Inc. Blood pressure monitor
US20160206287A1 (en) * 2015-01-14 2016-07-21 Yoram Palti Wearable Doppler Ultrasound Based Cardiac Monitoring
US9839363B2 (en) * 2015-05-13 2017-12-12 Alivecor, Inc. Discordance monitoring

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102908130A (en) * 2005-11-29 2013-02-06 风险获利有限公司 Residual-based monitoring of human health
CN107408144A (en) * 2014-11-14 2017-11-28 Zoll医疗公司 Medical precursor event estimation
JP2017080154A (en) * 2015-10-29 2017-05-18 日本電信電話株式会社 Sleep stage estimation device, method, and program
WO2017117798A1 (en) * 2016-01-08 2017-07-13 Heartisans Limited Wearable device for assessing the likelihood of the onset of cardiac arrest and method thereof
CN107405087A (en) * 2016-01-08 2017-11-28 心匠有限公司 Wearable device and method thereof for assessing the likelihood of the onset of cardiac arrest
CN106446533A (en) * 2016-09-12 2017-02-22 北京和信康科技有限公司 Processing system of human body health data and method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张江石等 (ZHANG Jiangshi et al.): 《行为安全管理中的数学模型及应用》 [Mathematical Models and Applications in Behavioral Safety Management], vol. 1, 29 February 2016, 煤炭工业出版社 (Coal Industry Press), pages: 55 *

Also Published As

Publication number Publication date
WO2019071201A1 (en) 2019-04-11
JP2020536623A (en) 2020-12-17
EP3692546A1 (en) 2020-08-12

Similar Documents

Publication Publication Date Title
US10561321B2 (en) Continuous monitoring of a user's health with a mobile device
US11877830B2 (en) Machine learning health analysis with a mobile device
US20190076031A1 (en) Continuous monitoring of a user's health with a mobile device
CN111194468A (en) Continuously monitoring user health with a mobile device
JP5669787B2 (en) Residue-based management of human health
US20210298648A1 (en) Calibration of a noninvasive physiological characteristic sensor based on data collected from a continuous analyte sensor
US20220254492A1 (en) System and method for automated detection of clinical outcome measures
CN113168908A (en) Continuous monitoring of user health using mobile devices
CN113164057A (en) Machine learning health analysis with mobile devices
US20230088974A1 (en) Method, device, and computer program for predicting occurrence of patient shock using artificial intelligence
US20240099593A1 (en) Machine learning health analysis with a mobile device
JP2024513618A (en) Methods and systems for personalized prediction of infections and sepsis
Vijayan et al. Implementing Pattern Recognition and Matching techniques to automatically detect standardized functional tests from wearable technology
WO2021127566A1 (en) Devices and methods for measuring physiological parameters
Luu et al. Accurate Step Count With Generalizable Deep Learning on Accelerometer Data
Jacob et al. Heart diseases classification using 1D CNN
Doan A NOVEL LOW-COST SYSTEM FOR REMOTE HEALTH MONITORING USING SMARTWATCHES
Praba et al. HARNet: automatic recognition of human activity from mobile health data using CNN and transfer learning of LSTM with SVM
CN115553739A (en) Health management method, device and system based on personal health big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination