WO2022046939A1 - Deep learning based sleep apnea syndrome portable diagnostic system and method - Google Patents


Info

Publication number: WO2022046939A1
Authority: WIPO (PCT)
Application number: PCT/US2021/047605
Other languages: French (fr)
Prior art keywords: data, sleep apnea, biological data, processor, diagnosis
Inventors: Ho Sung Kim, Soonhyun YOOK
Original assignee: University of Southern California
Application filed by: University of Southern California
Publication of: WO2022046939A1


Classifications

    • G16H 50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • A61B 5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/03 Detecting, measuring or recording fluid pressure within the body other than blood pressure, e.g. cerebral pressure
    • A61B 5/1455 Measuring characteristics of blood in vivo using optical sensors, e.g. spectral photometrical oximeters
    • A61B 5/28 Bioelectric electrodes specially adapted for electrocardiography [ECG]
    • A61B 5/4818 Sleep apnoea
    • A61B 5/7267 Classification of physiological signals or data involving training the classification device
    • G16H 40/63 ICT specially adapted for the operation of medical equipment or devices for local operation
    • G16H 40/67 ICT specially adapted for the operation of medical equipment or devices for remote operation
    • G16H 50/70 ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G16H 80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Definitions

  • This disclosure generally relates to a deep learning based portable diagnostic system and method for sleep apnea syndrome, and more particularly, to measuring biological signals, converting them to scalograms, and generating sleep apnea related diagnoses from those scalograms.
  • SA: sleep apnea
  • OSAS: obstructive sleep apnea syndrome
  • Episodes are typically accompanied by oxygen and oxyhemoglobin desaturation and terminated by brief microarousals that result in sleep fragmentation and diminished amounts of deep stage sleep (slow wave sleep and REM sleep). Patients often present with excessive daytime sleepiness due to low quality sleep, as well as intermittent hypoxic brain damage due to oxyhemoglobin desaturation. If left untreated, SA can cause health problems such as high blood pressure, stroke, heart failure, irregular heartbeats, heart attacks, diabetes, depression, vascular dementia, worsening of attention deficit/hyperactivity disorder (ADHD), headaches, etc. SA-related heart disease in particular can further lead to sudden death because of low blood oxygen. Therefore, it is important to diagnose SA accurately and early.
  • ADHD: attention deficit/hyperactivity disorder
  • a typical SA diagnosis technique requires an overnight stay at a sleep laboratory with attending systems and specialized staff. Thus, patients have to set aside time to travel to the sleep laboratory and spend the night there attempting to sleep in a new setting under artificial conditions, under observation, and with health monitors connected to their body. Clinical experts take observations and use recorded data to generate a diagnosis. While some individuals are desperate enough to go through this process, many find existing SA diagnosis techniques burdensome, uncomfortable, expensive, and time consuming to the extent that people are often deterred from being diagnosed. Therefore, diagnosis and treatment are often delayed or never occur, potentially resulting in significant, even life threatening, health impacts for the undiagnosed person suffering from SA. Excessive daytime sleepiness leads to significant impairments in quality of life, cognitive performance, and social functioning.
  • OSAS is a risk factor for cardiovascular disease, particularly hypertension, but also coronary artery disease, congestive cardiac failure, stroke, and vascular dementia.
  • a system for detection of sleep apnea in a subject may comprise: a first sensor; and a second sensor, wherein the first and second sensors are configured to measure a first type of biological data and a second type of biological data corresponding to sleep apnea disorder of the subject, wherein the first sensor and second sensor are wearable devices.
  • the system may further comprise a processor configured to receive the first type of biological data and the second type of biological data, wherein the processor comprises a scalogram converter for generating a scalogram from the first type of biological data and from the second type of biological data, and wherein the scalogram converter is configured to provide the scalogram to a classification unit comprising a diagnostic model built based on deep learning of a training dataset, wherein the classification unit is configured to generate a sleep apnea diagnosis based on the scalogram.
  • a method for detection of sleep apnea in a subject may comprise: testing a group of people by measuring, for each person in the group of people, first test biological data and second test biological data, each corresponding to sleep apnea disorder of the person, wherein the diagnosis for each person in the group of people is already known; generating, with a scalogram converter, a scalogram for each person tested in the group of people, the scalogram based on the first test biological data and the second test biological data and the known diagnosis for each person; training a diagnosis model, based on the scalogram and known diagnosis for each person; generating, by a first sensor, first biological data corresponding to sleep apnea disorder of the subject; generating, by a second sensor, second biological data corresponding to sleep apnea disorder of the subject; receiving, by a processor in communication with the first sensor, the first biological data; receiving, by a processor in communication with the second sensor, the second biological data.
  • an article of manufacture may include a non-transitory, tangible computer readable storage medium having instructions stored thereon that, in response to execution by a processor, cause the processor to perform operations to detect sleep apnea in a subject, the operations comprising: receiving, by the processor in communication with at least two sensors, biological data corresponding to sleep apnea disorder of the subject, wherein the biological data comprises at least pulse oximeter data (SpO2) and electrocardiogram data (ECG); converting, by a scalogram converter of the processor, the biological data to a scalogram; determining, by a classification unit of the processor, a sleep apnea diagnosis based on the scalogram and a diagnostic model that was built based on deep learning of the biological data and a corresponding diagnosis in a training dataset; and outputting, by an output device, the sleep apnea diagnosis.
  • SpO2: pulse oximeter data
  • ECG: electrocardiogram data
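The receive-convert-classify-output operations described above can be sketched at a high level. The function bodies below are illustrative placeholders only (the real converter and classifier are the CWT and deep learning components described later in the disclosure), and all names are hypothetical:

```python
import numpy as np

def to_scalogram(signal: np.ndarray) -> np.ndarray:
    """Placeholder scalogram converter: returns a 2D time-frequency image.
    Stands in for the CWT-based converter described later."""
    n_freqs = 16
    # Naive per-"frequency" scaling of the raw signal as a stand-in image.
    return np.abs(np.outer(np.linspace(1, 2, n_freqs), signal))

def classify(scalogram: np.ndarray) -> str:
    """Placeholder classification unit: thresholds mean image intensity.
    Stands in for the trained deep learning model."""
    return "sleep apnea" if scalogram.mean() > 1.0 else "normal"

def diagnose(spo2: np.ndarray, ecg: np.ndarray) -> str:
    """Receive SpO2 and ECG data, convert to a 2-channel scalogram,
    classify, and return the diagnosis for output."""
    image = np.stack([to_scalogram(spo2), to_scalogram(ecg)])
    return classify(image)
```

The structure mirrors the claimed pipeline: sensors feed the processor, the scalogram converter produces a multi-channel image, and the classification unit turns that image into a diagnosis.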
  • Figure 1 is an exemplary high-level overview of a wearable mobile sleep apnea diagnosis system, in accordance with various embodiments.
  • Figure 2 is a flow diagram illustrating an exemplary method for implementing a wearable mobile sleep apnea diagnosis, in accordance with various embodiments.
  • PSG: polysomnography
  • sleep apnea identification based on PSG is expensive, time consuming, and labor intensive. Also, patients with sleep apnea are often reluctant to get diagnosed as this procedure requires them to have a full night sleep in a sleep clinic. Therefore, diagnosis and treatment are often delayed, and as a consequence, a chance to prevent SA-led brain and heart disease can be missed.
  • a wearable mobile sleep apnea diagnosis system 100 may comprise sensors 110 and a device 120.
  • the sensors 110 may be wearable sensors.
  • the device 120 may comprise a processor 130.
  • the system may further comprise a cloud server 140, a remote device 150 associated with a medical professional 160.
  • the system 100 is configured such that an individual at home can put on wearable portable sensors to obtain a sleep apnea diagnosis on their mobile device in the comfort of their own home. This can significantly improve the accessibility and reduce the impediments to obtaining a sleep apnea diagnosis while keeping the diagnosis precision.
  • the sensors 110 may comprise a first sensor 111.
  • First sensor 111 may comprise a pulse oximeter sensor.
  • a pulse oximeter sensor may be configured to measure the blood oxygen saturation (SpO2) of a subject.
  • the pulse oximeter sensor may, for example, be configured to attach to the finger or toe of the subject.
  • the pulse oximeter sensor may, for example, be a wearable and portable sensor that can be worn by the subject.
  • the pulse oximeter sensor can be, for example, an FL310 Pulse Oximeter by Facelake or an O2Ring continuous ring oxygen monitor by Wellue.
  • any suitable pulse oximeter sensor may be used.
  • the sensors 110 may comprise a second sensor 112.
  • Second sensor 112 may comprise an electrocardiogram sensor.
  • An electrocardiogram sensor may be configured to measure the cardiac electrical potential waveforms (voltages produced during the contraction of the heart) of a subject.
  • the electrocardiogram sensor may, for example, be configured to attach to the chest, arms and/or legs of the subject.
  • the electrocardiogram sensor may, for example, be a wearable and portable sensor that can be worn by the subject.
  • the electrocardiogram sensor may be an ECG Recorder with AI Analysis by Wellue, or a Hipee 24-hour Smart Dynamic ECG Monitor high-precision electrocardiographic recorder.
  • any suitable electrocardiogram sensor may be used.
  • the sensors 110 may comprise a third sensor 113.
  • Third sensor 113 may comprise a heart rate sensor.
  • a heart rate sensor may be configured to measure the pulse of a subject.
  • the heart rate sensor may, for example, be configured to attach to the wrist or finger of the subject.
  • the heart rate sensor may, for example, be a wearable and portable sensor that can be worn by the subject.
  • the heart rate sensor may measure pulse waves, based on changes in the volume of a blood vessel that occur when the heart pumps blood.
  • the heart rate sensor may detect the heartbeat using an optical sensor and a green light-emitting diode.
  • the heart rate sensor may be combined with the SpO2 oximeter sensor.
  • any suitable heart rate sensor may be used.
  • the sensors 110 may comprise a fourth sensor 114.
  • Fourth sensor 114 may comprise a nasal pressure sensor.
  • a nasal pressure sensor may be configured to measure and track the airflow / breathing of a subject.
  • the nasal pressure sensor may, for example, be configured to be attached to the face of the subject over the nose.
  • the nasal pressure sensor may, for example, be a wearable and portable sensor that can be worn by the subject.
  • the nasal pressure sensor comprises thermal sensors and/or ultrasonic sensors to measure the breathing of a subject.
  • the nasal pressure sensor may be a Sleep Breathing Monitor with App for iPhone & Android, by Mickcara.
  • any suitable nasal pressure sensor may be used.
  • the first sensor 111 and third sensor 113 may be located in the same device.
  • the first sensor 111 and third sensor 113 may both be located in a device that is attached to the finger of the subject.
  • each sensor may comprise an analog to digital converter configured to convert the measured biological condition to measured biological data.
  • each sensor may provide analog measurements to one or more separate analog-to- digital converters configured to make a similar conversion.
  • the first sensor may be configured to generate first biological data, such as SpO2 data.
  • the second sensor may be configured to generate second biological data, such as ECG data.
  • the third sensor may be configured to generate third biological data, such as heart rate data.
  • the fourth sensor may be configured to generate fourth biological data, such as nasal pressure data.
  • any suitable number and type of sensors may be used to generate different and/or additional types of biological data, in accordance with the disclosure herein.
  • the first through fourth types of biological data are used to diagnose sleep apnea disorder of the subject.
  • the sensors 110 may be configured to communicate the biological data to the device 120.
  • the sensors 110 may be configured to transmit the biological data wirelessly to the device 120.
  • the biological data may be communicated via Bluetooth communication to the device 120.
  • any wireless communication technique may be used.
  • the sensors 110 may be configured to transmit the biological data to the device 120 via a connected cable.
  • some of the sensors may communicate wirelessly and other sensors may communicate via a wire with the device 120.
  • the device 120 may comprise a mobile device.
  • the device 120 is a local device, located convenient to the subject (such as at the subject’s home, or wherever the subject would be sleeping regularly).
  • the mobile device is a smartphone.
  • the mobile device is a tablet, a laptop, a desktop computer, a smart watch, and/or the like.
  • device 120 itself may be a wearable device, such as a smart watch or other health electronic component.
  • one or more of the first through fourth sensors may be part of device 120 and may communicate the biological data internally to the processor 130.
  • the device 120 may be configured to receive the biological data.
  • the device 120 may comprise a wireless receiver/transceiver for receiving the biological data.
  • the device 120 may be configured to receive the biological data via a wired connection.
  • device 120 may be configured to receive two or more of the first, second, third, and fourth biological data from sensors 110.
  • the first biological data and second biological data are selected from among at least two of: pulse oximeter data (SpO2); heart rate data; electrocardiogram data (ECG); and nasal pressure data.
  • the first biological data is pulse oximeter data
  • the second biological data is electrocardiogram data.
  • device 120 further comprises a processor 130.
  • Processor 130 may be configured to process the biological data.
  • Processor 130 may comprise a scalogram converter 131 and a classification unit 132.
  • scalogram converter 131 is configured to generate a scalogram from the biological data.
  • the first biological data is pulse oximeter data (SpO2) and the second biological data is electrocardiogram data (ECG), and the scalogram is formed based on each of the pulse oximeter data and electrocardiogram data.
  • the multi-channel scalograms may be formed based on any two or more of the first, second, third, and fourth biological data.
  • the system is configured to detect sleep apnea in a subject.
  • the processor 130 is configured to generate a sleep apnea diagnosis from a test subject’s scalogram using the classification unit 132.
  • system 100 is configured to automatically and accurately detect sleep apnea events using information obtained from portable and inexpensive devices which can be used at a subject’s home.
  • the device 120 is further configured to transmit the biological data and/or the diagnosis to a remote device 150.
  • the information may be transmitted to remote device 150 wirelessly and/or via wired communication channels.
  • the information may be transmitted via cloud server 140 where the information may be temporarily or permanently stored, for example.
  • the information may be transmitted to remote device 150 via any suitable communication method.
  • the remote device 150 is located at a distance from the subject and from device 120.
  • remote device 150 may be located at a physician’s office or hospital 160.
  • remote device 150 may be located anywhere that is not proximate device 120.
  • Remote device 150 may be another mobile device, such as a doctor’s smartphone, laptop, desktop, tablet, or the like.
  • the remote device 150 may be associated with any diagnosis confirmation / data management system.
  • a medical professional, e.g., a doctor or nurse practitioner
  • CPAP: Continuous Positive Airway Pressure therapy
  • the remote device 150 stores the official patient ‘chart’ with the data and/or diagnosis.
  • the device 120 is configured to send the biological data to a remote device, and the remote device is configured to generate the scalogram based on the biological data (as that process is described herein) and to make a determination whether the subject has sleep apnea at the remote device.
  • the device 120 may need less storage or processing power.
  • the results of the sleep apnea determination can then be conveyed from the remote device to the patient in any suitable way, such as by transmission back to the mobile device, by a phone call from the doctor, by mail, by e-mail, or the like.
  • the sensors may provide the biological data to a processor that is separate from the mobile device 120, and the conversion and classification may be performed outside of mobile device 120.
  • the processor may provide the results and/or data from the conversion and classification to device 120 for display and/or for subsequent communication with remote device 150.
  • a method 200 for detection of sleep apnea in a subject is disclosed.
  • a diagnostic model is constructed by deep learning of a large dataset collected at a sleep clinic.
  • method 200 may comprise obtaining a training dataset from a group of people and a training process.
  • the training process may comprise measuring, for each person in the group of people, biological data (210).
  • the training data may comprise measuring at least first test biological data and second test biological data, each corresponding to sleep apnea diagnosis of one of the people in the group of people.
  • each test subject has a known diagnosis.
  • the sensors measure data during a test period.
  • the test period may be any suitable length, but in one example embodiment the test period is a 6 hour sleep test period.
  • the test system uses the same number and types of biological data from the same types of sensors as will be used in the home wearable portable diagnosis kits.
  • the training process may use sensors, similar to sensors 110 of system 100.
  • the method 200 further comprises converting the measured biological data to scalogram (220).
  • the conversion of biological data to a scalogram may be performed by a scalogram converter similar to scalogram converter 131.
  • the biological data, e.g., ECG, heart rate, oxygen saturation, snoring air pressure, are 1D (one-dimensional) temporal data.
  • CNNs, novel deep learning approaches from the field of image processing and computer vision, are very effective in learning information from 2D (or 3D) data.
  • scalograms of the biological data are generated as 4-channel 2D images that describe the absolute value of the continuous wavelet transform (CWT), with time on the x-axis and frequency on the y-axis.
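A minimal sketch of such a CWT scalogram, implemented directly with a Morlet wavelet in NumPy rather than a signal-processing library. The scale values, the choice of wavelet, and the sampling details are illustrative assumptions, not values specified in the source:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x ** 2) / np.sqrt(scale)

def cwt_scalogram(signal, scales, fs=1.0):
    """Absolute value of a continuous wavelet transform:
    rows = scales (i.e., frequency bands), columns = time samples."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs  # wavelet support centred on zero
    image = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        kernel = morlet(t, s)
        # Correlate the signal with the wavelet at this scale.
        image[i] = np.abs(np.convolve(signal, np.conj(kernel[::-1]), mode="same"))
    return image

# A multi-channel scalogram stacks one such image per biological signal,
# e.g. np.stack([cwt_scalogram(spo2, scales), cwt_scalogram(ecg, scales)]).
```

In practice a library implementation (e.g. PyWavelets' `pywt.cwt`) would typically be used instead of this hand-rolled version.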
  • each channel of the scalogram corresponds to the biological data from one of the two or more sensors.
  • the method 200 further comprises partitioning the whole sleep scalogram into 10-second patches, and inputting the patches to a deep learning algorithm while teaching the algorithm the label (normal, sleep apnea or hypopnea) of each patch (230).
  • training the deep learning algorithm optimizes its parameters until model training completes with the least diagnosis error on the training dataset relative to a human expert's labeling.
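The patch-partitioning step above might look like the following sketch, where `fs` (scalogram columns per second) and the discarding of a trailing partial patch are assumptions not specified in the source:

```python
import numpy as np

def partition_patches(scalogram, fs, patch_seconds=10):
    """Split a whole-sleep scalogram (freq x time) into fixed-width
    10-second patches suitable for input to a deep learning model."""
    width = int(patch_seconds * fs)          # columns per patch
    n_patches = scalogram.shape[1] // width  # drop any trailing remainder
    return [scalogram[:, i * width:(i + 1) * width] for i in range(n_patches)]
```

During training, each patch would be paired with its expert-assigned label (normal, sleep apnea, or hypopnea).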
  • the diagnostic model may be based on the training dataset and training process for the group of people and the related first test biological data and second test biological data.
  • the method comprises training the deep learning algorithm based on a previously measured biological dataset and corresponding previously determined sleep apnea diagnoses.
  • the diagnostic model is trained through use of neural networks.
  • the diagnostic model may be implemented using a deep learning approach including various Convolutional Neural Networks (CNN).
  • CNN Convolutional Neural Networks
  • the Xception architecture, a CNN with depthwise separable convolution operations, may be used as the foundation network, while other CNN architectures with different parameter settings can also be used.
  • This CNN architecture with high order context feature extraction capabilities has demonstrated better classification performance compared to other CNN models. That said, other suitable CNN architectures may be used.
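The efficiency of the depthwise separable convolutions used by Xception can be illustrated by comparing parameter counts against a standard convolution; the layer sizes below are arbitrary examples, not values from the source:

```python
def conv_params(k, c_in, c_out):
    """Parameter count of a standard k x k convolution layer (no bias)."""
    return k * k * c_in * c_out

def sep_conv_params(k, c_in, c_out):
    """Depthwise separable: one k x k depthwise filter per input channel,
    followed by a 1 x 1 pointwise convolution across channels."""
    return k * k * c_in + c_in * c_out

# e.g. a 3x3 layer mapping 64 channels to 128 channels:
standard = conv_params(3, 64, 128)       # 73,728 parameters
separable = sep_conv_params(3, 64, 128)  # 576 + 8,192 = 8,768 parameters
```

This roughly 8x reduction per layer is one reason such architectures suit deployment-constrained settings.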
  • the scalograms are based on Continuous Wavelet Transforms (CWT).
  • the CWT may comprise Wavelet parameters.
  • the diagnosis model can learn the pattern of wavelet parameters, for example by applying a scalogram patch to the model and comparing the output to the known diagnosis for the person in the group of test subjects.
  • the Xception CNN model may automatically output the probability of each SA event (i.e., likelihood of SA, 0-100%). The probability of each SA event can then be binarized as normal sleep or apnea using, for example, a cut-off value of 50%.
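The binarization step with the example 50% cutoff is a one-liner; the label strings here are illustrative:

```python
def binarize(prob_sa, cutoff=0.5):
    """Map a per-event SA probability (0.0 to 1.0) to a binary label
    using the example 50% cut-off value."""
    return "apnea" if prob_sa >= cutoff else "normal"
```

A different cutoff could be chosen to trade sensitivity against specificity.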
  • the error between the known diagnosis and the diagnosis model output can be used to fine tune various sets of model parameters.
  • the model parameters may include: the Learning Rate, Dropout Rate, Regularization factor, etc. However, other suitable parameters may be used.
  • the diagnostic model is generated by taking a group of 800 people, where 700 have been diagnosed with SA and 100 have received a normal diagnosis. These 800 people are used to train the model using system 100.
  • a diagnostic model is generated using the same number and types of biological data from the same types of sensors as will be used in the home wearable portable diagnosis kits.
  • the biological data for each of the 800 people is converted into a scalogram, an image capable of time-frequency analysis through a scalogram converter.
  • the scalogram can be divided into patches and these patches are fed into the diagnostic model to generate an output (such as the probability of an SA diagnosis and the probability of a hypopnea).
  • That output probability can be binarized into normal, SA or hypopnea for the test subject, and the model parameters are then tuned to reduce the error.
  • the model parameters can be, for example, latency of each biosignal (e.g., oxygen desaturation occurs gradually for 30-60 seconds after a SA event), a Learning Rate of 0.001, a Dropout Rate of 0.3, and L2 regularization with a weight decay rate of 1e-5.
  • any suitable model parameters can be used.
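As a sketch, the numeric hyperparameters from the example above, together with one SGD update in which the L2 weight decay is folded into the gradient. The optimizer itself is an assumption (the source does not specify one), and signal latency handling is omitted:

```python
import numpy as np

LEARNING_RATE = 1e-3  # example value from the disclosure
DROPOUT_RATE = 0.3    # example value from the disclosure
WEIGHT_DECAY = 1e-5   # L2 regularization weight decay rate

def sgd_step(w, grad, lr=LEARNING_RATE, wd=WEIGHT_DECAY):
    """One gradient descent step with L2 weight decay: the decay term
    wd * w is added to the loss gradient before the update."""
    return w - lr * (grad + wd * w)
```

Fine tuning then amounts to repeating such updates while the error between model output and known diagnoses shrinks.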
  • the diagnostic model will have a high degree of accuracy, and is ready for use in home testing applications.
  • the method 200 may further comprise steps taken in conjunction with at home sleep apnea testing.
  • the method 200 further comprises generating, by a first sensor, first biological data corresponding to sleep apnea disorder of the subject, and generating, by a second sensor, second biological data corresponding to sleep apnea disorder of the subject (240).
  • the method 200 further comprises receiving, by a processor in communication with the first sensor, the first biological data, and receiving, by a processor in communication with the second sensor, the second biological data.
  • Method 200 may further comprise generating, by a scalogram converter 131 of the processor, a scalogram based on the first biological data and the second biological data and providing the scalogram to a classification unit 132 of the processor 130 (250).
  • the processor 130 may comprise scalogram converter 131 and classification unit 132.
  • Scalogram converter 131 is configured to receive the biological data, generate a scalogram by converting the biological data into the scalogram, and provide the scalogram to the classification unit 132.
  • the trained diagnosis model is provided to the classification unit 132 of device 120.
  • the classification unit 132 is configured to receive the output of scalogram converter 131.
  • Classification unit 132 may be configured to perform real time automatic SA event detection. The output of that detection may be, for example, a determination that an SA event has been detected or that no SA event has been detected.
  • classification unit 132 may be configured to further generate a patient diagnosis.
  • the diagnosis may be: a sleep apnea diagnosis, or a normal diagnosis.
  • Classification unit 132 may further be configured to generate a sleep apnea syndrome (SAS) severity classification of a patient. For example, the severity could be classified as normal, mild SAS, moderate SAS, or severe SAS.
  • method 200 may further comprise generating, in the classification unit 132, a sleep apnea diagnosis based on the scalogram and the trained diagnosis model (260).
  • the trained diagnostic model receives a patient's scalogram as input and, through the classification unit, performs the following roles: real-time sleep apnea event detection, patient diagnosis, and/or sleep apnea syndrome severity classification.
  • Method 200 may further comprise providing, by the processor, the sleep apnea diagnosis to the subject (270).
  • the diagnosis can be provided on the screen of the mobile device (see FIG. 1), for example. However, any suitable method of delivering the diagnosis can be used.
  • training of the diagnosis model is done using multimodal signals, which yields more reliable and more accurate results than a single-modality physiological signal. These multimodal results are compared to a human expert's diagnosis; because of their high reliability, the diagnosis can be provided in real time to the subject, and a human expert is no longer needed to diagnose SA syndromes. After-the-fact review can still be done as desired, but the case for its necessity is far less compelling with the disclosed system.
  • the real-time automatic SA detection can also be applied to OSAS treatment using continuous positive airway pressure (CPAP), as accurate SA detection can improve the adaptive air ventilation in CPAP treatment, which is needed when SA occurs.
  • the system detects SA events using a minimal number of measured input physiological signals.
  • method 200 may further comprise providing, by the processor, the first biological data, second biological data and the sleep apnea diagnosis to a remote system for approbation by a medical professional.
  • the at least two sensors are located on one or more portable wearable devices worn by the subject, and the processor is located on a mobile device.
  • the method comprises transmitting, by the mobile device, at least one of the biological data and/or an analysis of the biological data (e.g., the sleep apnea diagnosis) to a remote server, wherein the processor is located on the remote server.
  • the method comprises transmitting, by the mobile device, the sleep apnea diagnosis and the biological data to a sleep clinic server associated with at least one of sleep clinicians or a sleep doctor.
  • determining the sleep apnea diagnosis further includes at least one of detecting at least one sleep apnea event, determining a severity of the at least one sleep apnea event, and determining a score of the sleep apnea diagnosis.
  • determining the sleep apnea diagnosis includes each of detecting the at least one sleep apnea event, determining the severity of the at least one sleep apnea event, and determining the score of the sleep apnea diagnosis.
  • detecting the at least one sleep apnea event, determining the severity of the at least one sleep apnea event, and determining the score of the sleep apnea diagnosis are each performed using a classification unit of the deep learning algorithm.
  • providing the result to the subject includes providing the result in real-time.
  • One or more of the components of the system may include software, hardware, a platform, app, micro-app, algorithms, modules, etc.
  • the app may operate on any platform such as, for example, the iOS or Android platforms.
  • the diagnostic model may be trained by the use of artificial intelligence, machine learning, and other algorithms. Long-term follow-up of a large patient population on the app may provide data which can be processed to define treatment and analysis algorithms which correlate with the published literature. The process and analysis may adjust over time to account for improvements in the diagnosis and treatment of SA. Thus, a subject may initially have a normal diagnosis, but an update of the diagnostic model may result in an SA diagnosis at a later point in time. The update in the diagnostic model could be the result of additional subjects being tested, adding to the number of inputs impacting the model, or of improvements in the accuracy of the diagnosis. Thus, the system may be configured to reassess stored scalograms for a period of time after the first analysis. The system may be configured to provide updated scalograms to device 120 on a periodic, pull, or push basis.
  • the system may be agnostic to and/or interface with any sensors and mobile device.
  • the system may process any type of biological data using any suitable sensors and devices.
  • Sensors for measuring biological data are constantly evolving.
  • the machines to process and analyze biological data can change and improve over time.
  • the sleep clinic SA diagnosis methods are not rapidly changing, but to the extent they change, the method and system described herein can readily adapt by producing a diagnostic model consistent with the most current physician / sleep clinic diagnosis results.
  • the system may process the biological data to provide real-time SA diagnosis / notifications to the subject and/or to a remote system.
  • a SA diagnosis kit may be sent to the subject via the mail or other common carrier.
  • the subject may further download a software application to their mobile device, or to device 120.
  • the kit may include any indicator (e.g., a bar code or QR code). Scanning the indicator may facilitate syncing the sensors to the mobile device and syncing the mobile device to a patient account at the remote device.
  • the system may display the kit information, the sleep clinic (or other remote entity) information, the bar code data, the type of test, and/or the like.
  • the system may also provide entry questions for the user to complete to obtain more data from the user. The questions may assist in diagnosing and/or treating sleep apnea.
  • the system may register the test kit.
  • the data collection may be configurable and/or specific to a particular clinic, hospital system, clinical trial, or the like.
  • the system 100 may utilize artificial intelligence.
  • the system 100 may also convert the output from the classification unit 132 into a traditional lab report (e.g., in the form of a PDF report).
  • the system 100 may provide event triggers to the remote device 150 or to cause action on the mobile device.
  • the remote device 150 may send data to the user app (e.g., on the user mobile device, device 120).
  • the user app may provide the results to the user.
  • the user app may also provide notifications to the user and/or serve as a communications interface.
  • the user may send actionable events and/or triggers to the user app.
  • External devices are examples of triggers from the patient to the system.
  • Other examples are any types of configurable forms or surveys that can trigger the event, such as symptoms of sleep apnea.
  • the system may include configurable surveys.
  • the surveys may be pre-test surveys and/or patient surveys, and the data may be entered by the patient into the app.
  • the event triggers may be fully or partially integrated with a mobile app technology interface to the user.
  • the event triggers may trigger the providing of information.
  • the diagnosis may function as the event triggering feature to the user.
  • the diagnosis may have a diagnosis-specific interactive interface which defines the test result, provides communication interfaces to appropriate medical specialists and/or provides access to additional testing modalities. With respect to specialists, certain disease states benefit from an evaluation by specialists.
  • a telemedicine interface with the platform of the system provides a straightforward way of virtual communication with the appropriate specialist. This provides immediate actionable events for the patient. The ability to do this in the context of sleep apnea diagnosis with real time diagnosis and immediate triggers is ideal in the current medical environment.
  • the diagnosis may appear on the user interface in the app showing, for example, test results, up- to-date results analysis which can be adjusted in real time based on multiple variables, associated relevant data from monitoring devices, recommended relevant media inputs (e.g., audio, images, and video), relevant recommendations and information from a medical professional regarding the diagnosis, communications access to a health care provider and/or recommendations of additional tests or testing frequency.
  • the triggers may provide certain diagnosis results to a health care provider and/or establish a communication connection with an HCP (Health Care Provider) through the mobile device.
  • the system may establish the connection via being connected to a telemedicine platform. Telemedicine platforms have specialists that can be available virtually 24 hours a day. Triggers of a sleep apnea event can also result in patient requests for more information.
  • the system 100 may be configured to comply with all HIPAA requirements and functionality.
  • the system may also include or facilitate a payment functionality, transactional functionality and/or insurance reimbursement functionality.
  • the system may define a transactional cohort such as, for example, a cash pay model or an insurance model.
  • the system may also define a transactional type such as, for example, home test kit or doctor’s office kit.
  • the home test kits may be a direct to consumer (cash pay) model.
  • the system may analyze coverage. In particular, certain high-risk groups may merit certain types of tests, while healthy individuals may not merit similar tests. This may be a function of cost control.
  • components, modules, and/or engines of system may be implemented as micro-applications or micro-apps.
  • Micro-apps are typically deployed in the context of a mobile operating system, including for example, a WINDOWS® mobile operating system, an ANDROID® operating system, an APPLE® iOS operating system, a BLACKBERRY® company’s operating system, and the like.
  • the micro-app may be configured to leverage the resources of the larger operating system and associated hardware via a set of predetermined rules which govern the operations of various operating systems and hardware resources.
  • the micro-app may leverage the communication protocol of the operating system and associated device hardware under the predetermined rules of the mobile operating system.
  • the micro-app may be configured to request a response from the operating system which monitors various hardware components and then communicates a detected input from the hardware to the micro-app.
  • system and method may be described herein in terms of functional block components, screen shots, optional selections, and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions.
  • the system may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices.
  • the software elements of the system may be implemented with any or any combination of programming or scripting languages such as C, C++, C#, JAVA®, JAVASCRIPT®, JAVASCRIPT® Object Notation (JSON), VBScript, Macromedia COLD FUSION, COBOL, MICROSOFT® company’s Active Server Pages, assembly, PERL® , PHP, awk, PYTHON®, Visual Basic, SQL Stored Procedures, PL/SQL, any UNIX® shell script, and extensible markup language (XML) with the various algorithms being implemented with any combination of data structures, objects, processes, routines or other programming elements.
  • system may employ any number of conventional techniques for data transmission, signaling, data processing, network control, and the like. Still further, the system could be used to detect or prevent security issues with a client-side scripting language, such as JAVASCRIPT®, VBScript, or the like.
  • the software elements of the system may also be implemented using a JAVASCRIPT® run-time environment configured to execute JAVASCRIPT® code outside of a web browser.
  • the software elements of the system may also be implemented using NODE.JS® components.
  • NODE.JS® programs may implement several modules to handle various core functionalities.
  • a package management module such as NPM®, may be implemented as an open source library to aid in organizing the installation and management of third-party NODE.JS® programs.
  • NODE.JS® programs may also implement a process manager, such as, for example, Parallel Multithreaded Machine (“PM2”); a resource and performance monitoring tool, such as, for example, Node Application Metrics (“appmetrics”); a library module for building user interfaces, and/or any other suitable and/or desired module.
  • Middleware may include any hardware and/or software suitably configured to facilitate communications and/or process transactions between disparate computing systems.
  • Middleware components are commercially available and known in the art.
  • Middleware may be implemented through commercially available hardware and/or software, through custom hardware and/or software components, or through a combination thereof.
  • Middleware may reside in a variety of configurations and may exist as a standalone system or may be a software component residing on the internet server.
  • Middleware may be configured to process transactions between the various components of an application server and any number of internal or external systems for any of the purposes disclosed herein.
  • WEBSPHERE® MQ™ (formerly MQSeries) by IBM®, Inc. (Armonk, NY) is an example of a commercially available middleware product.
  • An Enterprise Service Bus (“ESB”) application is another example of middleware.
  • the computers discussed herein may provide a suitable website or other internet-based graphical user interface which is accessible by users.
  • MICROSOFT® company's Internet Information Services (IIS), Transaction Server (MTS) service, and an SQL SERVER® database may be used in conjunction with MICROSOFT® operating systems, WINDOWS NT® web server software, an SQL SERVER® database, and a MICROSOFT® Commerce Server.
  • components such as ACCESS® software, SQL SERVER® database, ORACLE® software, SYBASE® software, INFORMIX® software, MYSQL® software, INTERBASE® software, etc., may be used to provide an Active Data Object (ADO) compliant database management system.
  • the APACHE® web server is used in conjunction with a LINUX® operating system, a MYSQL® database, and PERL®, PHP, Ruby, and/or PYTHON®
  • the system and various components may integrate with one or more smart digital assistant technologies.
  • exemplary smart digital assistant technologies may include the ALEXA® system developed by the AMAZON® company, the GOOGLE HOME® system developed by Alphabet, Inc., the HOMEPOD® system of the APPLE® company, and/or similar digital assistant technologies.
  • the ALEXA® system, GOOGLE HOME® system, and HOMEPOD® system may each provide cloud-based voice activation services that can assist with tasks, entertainment, general information, and more.
  • All the ALEXA® devices such as the AMAZON ECHO®, AMAZON ECHO DOT®, AMAZON TAP®, and AMAZON FIRE® TV, have access to the ALEXA® system.
  • the ALEXA® system, GOOGLE HOME® system, and HOMEPOD® system may receive voice commands via its voice activation technology, activate other functions, control smart devices, and/or gather information.
  • the smart digital assistant technologies may be used to interact with music, emails, texts, phone calls, question answering, shopping, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing real time information, such as news.
  • the ALEXA®, GOOGLE HOME®, and HOMEPOD® systems may also allow the user to access information about eligible transaction accounts linked to an online account across all digital assistant-enabled devices.
  • the various system components discussed herein may include one or more of the following: a host server or other computing systems including a processor for processing digital data; a memory coupled to the processor for storing digital data; an input digitizer coupled to the processor for inputting digital data; an application program stored in the memory and accessible by the processor for directing processing of digital data by the processor; a display device coupled to the processor and memory for displaying information derived from digital data processed by the processor; and a plurality of databases.
  • Various databases used herein may include: client data; merchant data; financial institution data; and/or like data useful in the operation of the system.
  • user computer may include an operating system (e.g., WINDOWS®, UNIX®, LINUX®, SOLARIS®, MACOS®, etc.) as well as various conventional support software and drivers typically associated with computers.
  • the present system or any part(s) or function(s) thereof may be implemented using hardware, software, or a combination thereof and may be implemented in one or more computer systems or other processing systems.
  • the manipulations performed by embodiments may be referred to in terms, such as matching or selecting, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable, in most cases, in any of the operations described herein. Rather, the operations may be machine operations, or any of the operations may be conducted or enhanced by artificial intelligence (AI) or machine learning.
  • AI may refer generally to the study of agents (e.g., machines, computer-based systems, etc.) that perceive the world around them, form plans, and make decisions to achieve their goals.
  • Foundations of AI include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing. Useful machines for performing the various embodiments include general purpose digital computers or similar devices.
  • the embodiments are directed toward one or more computer systems capable of carrying out the functionalities described herein.
  • the computer system includes one or more processors.
  • the processor is connected to a communication infrastructure (e.g., a communications bus, cross-over bar, network, etc.).
  • Various software embodiments are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement various embodiments using other computer systems and/or architectures.
  • the computer system can include a display interface that forwards graphics, text, and other data from the communication infrastructure (or from a frame buffer not shown) for display on a display unit.
  • the computer system also includes a main memory, such as random access memory (RAM), and may also include a secondary memory.
  • the secondary memory may include, for example, a hard disk drive, a solid-state drive, and/or a removable storage drive.
  • the removable storage drive reads from and/or writes to a removable storage unit in a well-known manner.
  • the removable storage unit includes a computer usable storage medium having stored therein computer software and/or data.
  • secondary memory may include other similar devices for allowing computer programs or other instructions to be loaded into a computer system. Such devices may include, for example, a removable storage unit and an interface.
  • Examples of such may include a removable memory chip (such as an erasable programmable read only memory (EPROM), programmable read only memory (PROM)) and associated socket, or other removable storage units and interfaces, which allow software and data to be transferred from the removable storage unit to a computer system.
  • the computer system or device 120 may also include a communications interface.
  • a communications interface allows software and data to be transferred between the computer system and external devices. Examples of such a communications interface may include a modem, a network interface (such as an Ethernet card), a communications port, etc.
  • Software and data transferred via the communications interface are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface. These signals are provided to communications interface via a communications path (e.g., channel). This channel carries signals and may be implemented using wire, cable, fiber optics, a telephone line, a cellular link, a radio frequency (RF) link, wireless and other communications channels.
  • the server may include application servers (e.g., WEBSPHERE®, WEBLOGIC®, JBOSS®, POSTGRES PLUS ADVANCED SERVER®, etc.).
  • the server may include web servers (e.g., Apache, IIS, GOOGLE® Web Server, SUN JAVA® System Web Server, JAVA® Virtual Machine running on LINUX® or WINDOWS® operating systems).
  • a web client includes any device or software which communicates via any network, such as, for example any device or software discussed herein.
  • the web client may include internet browsing software installed within a computing unit or system to conduct online transactions and/or communications.
  • These computing units or systems may take the form of a computer or set of computers, although other types of computing units or systems may be used, including personal computers, laptops, notebooks, tablets, smart phones, cellular phones, personal digital assistants, servers, pooled servers, mainframe computers, distributed computing clusters, kiosks, terminals, point of sale (POS) devices or terminals, televisions, or any other device capable of receiving data over a network.
  • the web client may include an operating system (e.g., WINDOWS®, WINDOWS MOBILE® operating systems, UNIX® operating system, LINUX® operating systems, APPLE® OS® operating systems, etc.) as well as various conventional support software and drivers typically associated with computers.
  • the web-client may also run MICROSOFT® INTERNET EXPLORER® software, MOZILLA® FIREFOX® software, GOOGLE CHROME™ software, APPLE® SAFARI® software, or any other of the myriad software packages available for browsing the internet.
  • the web client may or may not be in direct contact with the server (e.g., application server, web server, etc., as discussed herein).
  • the web client may access the services of the server through another server and/or hardware component, which may have a direct or indirect connection to an internet server.
  • the web client may communicate with the server via a load balancer.
  • web client access is through a network or the internet through a commercially available web browser software package.
  • the web client may be in a home or business environment with access to the network or the internet.
  • the web client may implement security protocols such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS).
  • a web client may implement several application layer protocols including HTTP, HTTPS, FTP, and SFTP.
  • the various system components may be independently, separately, or collectively suitably coupled to the network via data links which includes, for example, a connection to an Internet Service Provider (ISP) over the local loop as is typically used in connection with standard modem communication, cable modem, DISH NETWORK®, ISDN, Digital Subscriber Line (DSL), or various wireless communication methods.
  • the network may be implemented as other types of networks, such as an interactive television (ITV) network.
  • the system contemplates the use, sale, or distribution of any goods, services, or information over any network having similar functionality described herein.
  • the system contemplates uses in association with web services, utility computing, pervasive and individualized computing, security and identity solutions, autonomic computing, cloud computing, commodity computing, mobility and wireless solutions, open source, biometrics, grid computing, and/or mesh computing.
  • web page as it is used herein is not meant to limit the type of documents and applications that might be used to interact with the user.
  • a typical website might include, in addition to standard HTML documents, various forms, JAVA® applets, JAVASCRIPT® programs, active server pages (ASP), common gateway interface scripts (CGI), extensible markup language (XML), dynamic HTML, cascading style sheets (CSS), AJAX (Asynchronous JAVASCRIPT And XML) programs, helper applications, plug-ins, and the like.
  • a server may include a web service that receives a request from a web server, the request including a URL and an IP address (192.168.1.1). The web server retrieves the appropriate web pages and sends the data or applications for the web pages to the IP address.
  • Web services are applications that are capable of interacting with other applications over a communications means, such as the internet. Web services are typically based on standards or protocols such as XML, SOAP, AJAX, WSDL and UDDI. Web services methods are well known in the art, and are covered in many standard texts. For example, representational state transfer (REST), or RESTful, web services may provide one way of enabling interoperability between applications.
  • the computing unit of the web client may be further equipped with an internet browser connected to the internet or an intranet using standard dial-up, cable, DSL, or any other internet protocol known in the art. Transactions originating at a web client may pass through a firewall in order to prevent unauthorized access from users of other networks. Further, additional firewalls may be deployed between the varying components of CMS to further enhance security.
  • Any databases discussed herein may include relational, hierarchical, graphical, blockchain, object-oriented structure, and/or any other database configurations.
  • Any database may also include a flat file structure wherein data may be stored in a single file in the form of rows and columns, with no structure for indexing and no structural relationships between records.
  • a flat file structure may include a delimited text file, a CSV (comma-separated values) file, and/or any other suitable flat file structure.
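A flat-file store of the kind described above can be read with nothing more than a CSV parser, since there is no indexing or inter-record structure to honor. The file contents and field names below are illustrative:

```python
import csv
import io

# One delimited text file, rows and columns, no indexes and no
# structural relationships between records (contents are illustrative).
flat = "patient_id,night,ahi\nP001,2021-08-01,22.4\nP002,2021-08-01,3.1\n"
rows = list(csv.DictReader(io.StringIO(flat)))
print(rows[0]["ahi"])  # 22.4
```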
  • DB2® by IBM® (Armonk, NY), various database products available from ORACLE® Corporation (Redwood Shores, CA), MICROSOFT ACCESS® or MICROSOFT SQL SERVER® by MICROSOFT® Corporation (Redmond, Washington), MYSQL® by MySQL AB (Uppsala, Sweden), MONGODB®, Redis, Apache Cassandra®, HBASE® by APACHE®, MapR-DB by the MAPR® corporation, or any other suitable database product.
  • any database may be organized in any suitable manner, for example, as data tables or lookup tables. Each record may be a single file, a series of files, a linked series of data fields, or any other data structure.
  • big data may refer to partially or fully structured, semi-structured, or unstructured data sets including millions of rows and hundreds of thousands of columns.
  • a big data set may be compiled, for example, from a history of purchase transactions over time, from web registrations, from social media, from records of charge (ROC), from summaries of charges (SOC), from internal data, or from other suitable sources. Big data sets may be compiled without descriptive metadata such as column types, counts, percentiles, or other interpretive-aid data points.
  • Association of certain data may be accomplished through any desired data association technique such as those known or practiced in the art.
  • the association may be accomplished either manually or automatically.
  • Automatic association techniques may include, for example, a database search, a database merge, GREP, AGREP, SQL, using a key field in the tables to speed searches, sequential searches through all the tables and files, sorting records in the file according to a known order to simplify lookup, and/or the like.
  • the association step may be accomplished by a database merge function, for example, using a “key field” in pre-selected databases or data sectors.
  • Various database tuning steps are contemplated to optimize database performance. For example, frequently used files such as indexes may be placed on separate file systems to reduce input/output ("I/O") bottlenecks.
  • a “key field” partitions the database according to the high-level class of objects defined by the key field. For example, certain types of data may be designated as a key field in a plurality of related data tables and the data tables may then be linked on the basis of the type of data in the key field.
  • the data corresponding to the key field in each of the linked data tables is preferably the same or of the same type.
  • data tables having similar, though not identical, data in the key fields may also be linked by using AGREP, for example.
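The key-field merge described above is, in effect, a join on the shared field. A minimal sketch using SQLite, where the table and field names are illustrative and "patient_id" plays the role of the key field:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Two related data tables that designate "patient_id" as the key field.
cur.execute("CREATE TABLE readings (patient_id TEXT, spo2 REAL)")
cur.execute("CREATE TABLE diagnoses (patient_id TEXT, result TEXT)")
cur.executemany("INSERT INTO readings VALUES (?, ?)",
                [("P001", 93.5), ("P002", 97.1)])
cur.executemany("INSERT INTO diagnoses VALUES (?, ?)",
                [("P001", "moderate SAS"), ("P002", "normal")])
# The database merge is a join on the key field.
rows = cur.execute(
    "SELECT r.patient_id, r.spo2, d.result "
    "FROM readings r JOIN diagnoses d ON r.patient_id = d.patient_id "
    "ORDER BY r.patient_id").fetchall()
print(rows)
```

Linking "similar, though not identical" key values (as with AGREP) would replace the exact equality in the `ON` clause with an approximate-match step performed outside the database.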
  • any suitable data storage technique may be utilized to store data without a standard format.
  • Data sets may be stored using any suitable technique, including, for example, storing individual files using an ISO/IEC 7816-4 file structure; implementing a domain whereby a dedicated file is selected that exposes one or more elementary files containing one or more data sets; using data sets stored in individual files using a hierarchical filing system; data sets stored as records in a single file (including compression, SQL accessible, hashed via one or more keys, numeric, alphabetical by first tuple, etc.); data stored as a Binary Large Object (BLOB); data stored as ungrouped data elements encoded using ISO/IEC 7816-6 data elements; data stored as ungrouped data elements encoded using ISO/IEC Abstract Syntax Notation (ASN.1) as in ISO/IEC 8824 and 8825; and other proprietary techniques that may include fractal compression methods, image compression methods, etc.
  • the ability to store a wide variety of information in different formats is facilitated by storing the information as a BLOB.
  • any binary information can be stored in a storage space associated with a data set.
  • the binary information may be stored in association with the system or external to but affiliated with the system.
  • the BLOB method may store data sets as ungrouped data elements formatted as a block of binary via a fixed memory offset using either fixed storage allocation, circular queue techniques, or best practices with respect to memory management (e.g., paged memory, least recently used, etc.).
  • the ability to store various data sets that have different formats facilitates the storage of data, in the database or associated with the system, by multiple and unrelated owners of the data sets.
  • a first data set which may be stored may be provided by a first party
  • a second data set which may be stored may be provided by an unrelated second party
  • a third data set which may be stored may be provided by a third party unrelated to the first and second party.
  • Each of these three exemplary data sets may contain different information that is stored using different data storage formats and/or techniques. Further, each data set may contain subsets of data that also may be distinct from other subsets.
  • the data can be stored without regard to a common format.
  • the data set (e.g., BLOB) may be annotated in a standard manner when provided for manipulating the data.
  • the annotation may comprise a short header, trailer, or other appropriate indicator related to each data set that is configured to convey information useful in managing the various data sets.
  • the annotation may be called a “condition header,” “header,” “trailer,” or “status,” herein, and may comprise an indication of the status of the data set or may include an identifier correlated to a specific issuer or owner of the data.
  • the first three bytes of each data set BLOB may be configured or configurable to indicate the status of that particular data set; e.g., LOADED, INITIALIZED, READY, BLOCKED, REMOVABLE, or DELETED. Subsequent bytes of data may be used to indicate, for example, the identity of the issuer, user, transaction/membership account identifier, or the like. Each of these condition annotations is further discussed herein.
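Purely as an illustrative sketch of such an annotation scheme (the specific status codes, byte widths, and issuer identifier below are hypothetical assumptions, not a layout prescribed by this disclosure):

```python
# Hypothetical sketch of a data set BLOB whose first three bytes carry a
# status annotation and whose subsequent bytes identify the issuer.

STATUS_CODES = {b"LDD": "LOADED", b"RDY": "READY", b"DEL": "DELETED"}

def annotate(status3, issuer4, payload):
    """Prefix a payload with a 3-byte status code and a 4-byte issuer id."""
    assert len(status3) == 3 and len(issuer4) == 4
    return status3 + issuer4 + payload

def read_annotation(blob):
    """Split an annotated blob back into (status, issuer id, payload)."""
    return STATUS_CODES[blob[:3]], blob[3:7], blob[7:]

blob = annotate(b"RDY", b"ISS1", b"\x01\x02\x03")
status, issuer, payload = read_annotation(blob)
# status == "READY", issuer == b"ISS1", payload == b"\x01\x02\x03"
```

A real condition header would also carry the access-level security information discussed below, but the principle (fixed-offset annotation bytes ahead of opaque binary data) is the same.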
  • the data set annotation may also be used for other types of status information as well as various other purposes.
  • the data set annotation may include security information establishing access levels.
  • the access levels may, for example, be configured to permit only certain individuals, levels of employees, companies, or other entities to access data sets, or to permit access to specific data sets based on the transaction, merchant, issuer, user, or the like.
  • the security information may restrict/permit only certain actions, such as accessing, modifying, and/or deleting data sets.
  • the data set annotation indicates that only the data set owner or the user are permitted to delete a data set, various identified users may be permitted to access the data set for reading, and others are altogether excluded from accessing the data set.
  • other access restriction parameters may also be used allowing various entities to access a data set with various permission levels as appropriate.
  • the data including the header or trailer, may be received by a standalone interaction device configured to add, delete, modify, or augment the data in accordance with the header or trailer.
  • the header or trailer is not stored on the transaction device along with the associated issuer-owned data, but instead the appropriate action may be taken by providing to the user, at the standalone device, the appropriate option for the action to be taken.
  • the system may contemplate a data storage arrangement wherein the header or trailer, or header or trailer history, of the data is stored on the system, device or transaction instrument in relation to the appropriate data.
  • any databases, systems, devices, servers, or other components of the system may consist of any combination thereof at a single location or at multiple locations, wherein each database or system includes any of various suitable security features, such as firewalls, access codes, encryption, decryption, compression, decompression, and/or the like.
  • Data may be represented as standard text or within a fixed list, scrollable list, drop-down list, editable text field, fixed text field, pop-up window, and the like.
  • Data in a web page may be modified using methods such as, for example, free text entry using a keyboard, selection of menu items, check boxes, option boxes, and the like.
  • the data may be big data that is processed by a distributed computing cluster.
  • the distributed computing cluster may be, for example, a HADOOP® software cluster configured to process and store big data sets, with some of the nodes comprising a distributed storage system and some of the nodes comprising a distributed processing system.
  • distributed computing cluster may be configured to support a HADOOP® software distributed file system (HDFS) as specified by the Apache Software Foundation at www.hadoop.apache.org/docs.
  • the term “network” includes any cloud, cloud computing system, or electronic communications system or method which incorporates hardware and/or software components. Communication among the parties may be accomplished through any suitable communication channels, such as, for example, a telephone network, an extranet, an intranet, internet, point of interaction device (point of sale device, personal digital assistant (e.g., an IPHONE® device, a BLACKBERRY® device), cellular phone, kiosk, etc.), online communications, satellite communications, off-line communications, wireless communications, transponder communications, local area network (LAN), wide area network (WAN), virtual private network (VPN), networked or linked devices, keyboard, mouse, and/or any suitable communication or data input modality.
  • the system may also be implemented using IPX, APPLETALK® program, IP-6, NetBIOS, OSI, any tunneling protocol (e.g. IPsec, SSH, etc.), or any number of existing or future protocols.
  • Cloud or “Cloud computing” includes a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing may include location-independent computing, whereby shared servers provide resources, software, and data to computers and other devices on demand.
  • Any communication, transmission, and/or channel discussed herein may include any system or method for delivering content (e.g. data, information, metadata, etc.), and/or the content itself.
  • the content may be presented in any form or medium, and in various embodiments, the content may be delivered electronically and/or capable of being presented electronically.
  • a channel may comprise a website, mobile application, or device (e.g., FACEBOOK®, YOUTUBE®, PANDORA®, APPLE TV®, MICROSOFT® XBOX®, ROKU®, AMAZON FIRE®, GOOGLE CHROMECAST™, SONY® PLAYSTATION®, NINTENDO® SWITCH®, etc.), a uniform resource locator (“URL”), a document (e.g., a MICROSOFT® Word or EXCEL™ document, an ADOBE® Portable Document Format (PDF) document, etc.), an “ebook,” an “emagazine,” an application or microapplication (as described herein), a short message service (SMS) or other type of text message, an email, a FACEBOOK® message, a TWITTER® tweet, multimedia messaging services (MMS), and/or other type of communication technology.
  • a channel may be hosted or provided by a data partner.
  • the distribution channel may comprise at least one of a merchant website, a social media website, affiliate or partner websites, an external vendor, a mobile device communication, social media network, and/or location based service.
  • Distribution channels may include at least one of a merchant website, a social media site, affiliate or partner websites, an external vendor, and a mobile device communication.
  • Examples of social media sites include FACEBOOK®, FOURSQUARE®, TWITTER®, LINKED IN®, INSTAGRAM®, PINTEREST®, TUMBLR®, REDDIT®, SNAPCHAT®, WHATSAPP®, FLICKR®, VK®, QZONE®, WECHAT®, and the like.
  • Examples of affiliate or partner websites include AMERICAN EXPRESS®, GROUPON®, LIVINGSOCIAL®, and the like.
  • examples of mobile device communications include texting, email, and mobile applications for smartphones.
  • While the disclosure includes a method, it is contemplated that it may be embodied as computer program instructions on a tangible computer-readable carrier, such as a magnetic or optical memory or a magnetic or optical disk.

Abstract

A system and method for detection of sleep apnea in a subject comprises first and second sensors providing, to the processor of a mobile device, biological data from among pulse oximeter data, heart rate data, electrocardiogram data and nasal pressure data. The processor is configured to detect sleep apnea in a subject by receiving the biological data, converting the biological data into a scalogram, and determining a sleep apnea diagnosis via a classification unit based on the scalogram and based on a diagnostic model that was built based on deep learning of the biological data and corresponding diagnosis in the training dataset. The processor is configured to output, by an output device, the sleep apnea diagnosis, facilitating home diagnosis of sleep apnea.

Description

DEEP LEARNING BASED SLEEP APNEA SYNDROME PORTABLE
DIAGNOSTIC SYSTEM AND METHOD
FIELD
[0001] This disclosure generally relates to a deep learning based sleep apnea syndrome portable diagnostic system and method, and more particularly, to measuring, generating scalograms, comparing scalograms, and generating sleep apnea related diagnoses based on those comparisons.
PRIORITY CLAIM
[0002] This application claims priority to US provisional patent application number 63/070,189, titled Deep Learning Based Portable Diagnostic System for Sleep Apnea Syndrome Detection, filed August 25, 2020, the contents of which are incorporated by reference.
BACKGROUND
[0003] Sleep apnea (SA) is a serious sleep disorder in which a person's breathing is intermittently interrupted during sleep, leading to a lack of provision of oxygen to the brain and the rest of the body. It is estimated that SA affects 26% of Americans between the ages of 30 and 70. Sleep apnea events and obstructive sleep apnea syndrome (OSAS) are quite prevalent. OSAS is characterized by instability of the upper airway during sleep, which results in markedly reduced (hypopnea) or absent (apnea) airflow during breathing. Episodes are typically accompanied by oxygen and oxyhemoglobin desaturation and terminated by brief microarousals that result in sleep fragmentation and diminished amounts of deep stage sleep (slow wave sleep and REM sleep). Patients often present with excessive daytime sleepiness due to low quality sleep as well as intermittent brain hypoxic damage due to oxyhemoglobin desaturation. If left untreated, SA can cause health problems, such as high blood pressure, stroke, heart failure, irregular heartbeats, heart attacks, diabetes, depression, vascular dementia, worsening of attention deficit/hyperactivity disorder (ADHD), headaches, etc. SA-related heart disease in particular can further lead to sudden death because of low blood oxygen. Therefore, it is important to diagnose SA accurately and early.
[0004] A typical SA diagnosis technique requires an overnight stay at a sleep laboratory with attending systems and specialized staff. Thus, patients have to set aside time to travel to the sleep laboratory, and spend the night there attempting to sleep in a new setting under artificial conditions, under observation, and with health monitors connected to their body. Clinical experts take observations and use recorded data to generate a diagnosis. While some individuals are desperate enough to go through this process, many find existing SA diagnosis techniques to be burdensome, uncomfortable, expensive, and time consuming to the extent that people are often deterred from being diagnosed. Therefore, diagnosis and treatment are often delayed or never occur, potentially resulting in significant, even life threatening, health impacts for the undiagnosed person suffering from SA. Excessive daytime sleepiness leads to significant impairments in quality of life, cognitive performance, and social functioning. Patients with OSAS have a high prevalence of other cardiovascular risk factors such as obesity, hyperlipidemia, and diabetes, which makes the identification of the independent association of OSAS with cardiovascular disease more difficult. OSAS also significantly relates to diabetes and the metabolic syndrome. Furthermore, the disorder is associated with a three- to sevenfold increase in the rate of road traffic accidents that can be substantially reduced by effective therapy. Moreover, OSAS is an independent risk factor for the development of cardiovascular disease, particularly hypertension, but also coronary artery disease, congestive cardiac failure, stroke, and vascular dementia. Thus, a strong need exists for a system that may address such daunting SA / OSAS diagnosis issues and reduce the attendant consequences of a lack of such diagnosis and treatment thereof.
SUMMARY
[0005] In an example embodiment, a system for detection of sleep apnea in a subject is disclosed. The system may comprise: a first sensor; and a second sensor, wherein the first and second sensors are configured to measure a first type of biological data and a second type of biological data corresponding to sleep apnea disorder of the subject, wherein the first sensor and second sensor are wearable devices. The system may further comprise a processor configured to receive the first type of biological data and the second type of biological data, wherein the processor comprises a scalogram converter for generating a scalogram from the first type of biological data and from the second type of biological data, and wherein the scalogram converter is configured to provide the scalogram to a classification unit comprising a diagnostic model built based on deep learning of a training dataset, wherein the classification unit is configured to generate a sleep apnea diagnosis based on the scalogram.
[0006] In an example embodiment, a method for detection of sleep apnea in a subject is disclosed. The method may comprise: testing a group of people by measuring, for each person in the group of people, first test biological data and second test biological data, each corresponding to sleep apnea disorder of the person, wherein the diagnosis for each person in the group of people is already known; generating, with a scalogram converter, a scalogram for each person tested in the group of people, the scalogram based on the first test biological data and the second test biological data and the known diagnosis for each person; training a diagnosis model, based on the scalogram and known diagnosis for each person; generating, by a first sensor, first biological data corresponding to sleep apnea disorder of the subject; generating, by a second sensor, second biological data corresponding to sleep apnea disorder of the subject; receiving, by a processor in communication with the first sensor, the first biological data; receiving, by a processor in communication with the second sensor, the second biological data; generating, by a scalogram converter of the processor, a scalogram based on the first biological data and the second biological data; providing the scalogram to a classification unit of the processor; generating, in the classification unit, a sleep apnea diagnosis based on the scalogram, wherein the classification unit uses the trained diagnosis model; and providing, by the processor, the sleep apnea diagnosis to the subject.
[0007] In an example embodiment, an article of manufacture is disclosed. The article of manufacture may include a non-transitory, tangible computer readable storage medium having instructions stored thereon that, in response to execution by a processor, cause the processor to perform operations to detect sleep apnea in a subject, the operations comprising: receiving, by the processor in communication with at least two sensors, biological data corresponding to sleep apnea disorder of the subject, wherein the biological data comprises at least pulse oximeter data (SpO2) and electrocardiogram data (ECG); converting, by a scalogram converter of the processor, the biological data to a scalogram; determining, by a classification unit of the processor, a sleep apnea diagnosis based on the scalogram and a diagnostic model that was built based on deep learning of the biological data and a corresponding diagnosis in a training dataset; and outputting, by an output device, the sleep apnea diagnosis.
BRIEF DESCRIPTION OF DRAWINGS
[0008] A more complete understanding of the present disclosure may be derived by referring to the detailed description and claims when considered in connection with the Figures, wherein like reference numbers refer to similar elements throughout the Figures, and:
[0009] Figure 1 is an exemplary high-level overview of a wearable mobile sleep apnea diagnosis system, in accordance with various embodiments.
[0010] Figure 2 is a flow diagram illustrating an exemplary method for implementing a wearable mobile sleep apnea diagnosis, in accordance with various embodiments.
DETAILED DESCRIPTION
[0011] As noted above, typical sleep apnea diagnosis techniques are expensive, burdensome, uncomfortable, and time consuming. One technique for diagnosing sleep apnea and hypopnea is polysomnography (PSG). PSG consists of an overnight recording of different physiological signals including electroencephalogram (EEG), electrocardiogram (ECG), airflow, oxygen saturation in arterial blood, respiratory efforts, snoring, body position, etc. A PSG is carried out only in sleep laboratories with attending systems and specialized staff. Clinical experts extensively trained to read these recordings are necessary for identification of sleep apnea events during a patient’s sleep. Therefore, due to various modal data acquisition and the need for a technical expert, sleep apnea identification based on PSG is expensive, time consuming, and labor intensive. Also, patients with sleep apnea are often reluctant to get diagnosed as this procedure requires them to have a full night sleep in a sleep clinic. Therefore, diagnosis and treatment are often delayed, and as a consequence, a chance to prevent SA-led brain and heart disease can be missed.
[0012] Described herein are various systems and methods for improved diagnosis of sleep apnea. In general, and with reference to Figure 1, a wearable mobile sleep apnea diagnosis system 100 may comprise sensors 110 and a device 120. The sensors 110 may be wearable sensors. The device 120 may comprise a processor 130. In an example embodiment, the system may further comprise a cloud server 140 and a remote device 150 associated with a medical professional 160. In an example embodiment, the system 100 is configured such that an individual at home can put on wearable portable sensors to obtain a sleep apnea diagnosis on their mobile device in the comfort of their own home. This can significantly improve the accessibility of, and reduce the impediments to, obtaining a sleep apnea diagnosis while maintaining diagnostic precision.
[0013] More specifically, the sensors 110 may comprise a first sensor 111. First sensor 111 may comprise a pulse oximeter sensor. A pulse oximeter sensor may be configured to measure the blood oxygen content (SpO2) of a subject. The pulse oximeter sensor may, for example, be configured to attach to the finger or toe of the subject. The pulse oximeter sensor may, for example, be a wearable and portable sensor that can be worn by the subject. By way of example, the pulse oximeter sensor can be a FL310 Pulse Oximeter by Facelake or it can be an O2Ring continuous ring oxygen monitor by Wellue. Moreover, any suitable pulse oximeter sensor may be used.
[0014] The sensors 110 may comprise a second sensor 112. Second sensor 112 may comprise an electrocardiogram sensor. An electrocardiogram sensor may be configured to measure the cardiac electrical potential waveforms (voltages produced during the contraction of the heart) of a subject. The electrocardiogram sensor may, for example, be configured to attach to the chest, arms and/or legs of the subject. The electrocardiogram sensor may, for example, be a wearable and portable sensor that can be worn by the subject. By way of example, the electrocardiogram sensor may be an ECG Recorder with AI Analysis by Wellue, or a Hipee 24-hour Smart Dynamic ECG Monitor High-precision Monitor Electrocardiographic Recorder, by Hipee. Moreover, any suitable electrocardiogram sensor may be used.
[0015] The sensors 110 may comprise a third sensor 113. Third sensor 113 may comprise a heart rate sensor. A heart rate sensor may be configured to measure the pulse of a subject. The heart rate sensor may, for example, be configured to attach to the wrist or finger of the subject. The heart rate sensor may, for example, be a wearable and portable sensor that can be worn by the subject. The heart rate sensor may measure pulse waves, based on changes in the volume of a blood vessel that occur when the heart pumps blood. In one example embodiment, the heart rate sensor detects the heart beat using an optical sensor and a green light emitting diode. In an example embodiment, the heart rate sensor may be combined with the SpO2 oximeter sensor. Moreover, any suitable heart rate sensor may be used.
[0016] The sensors 110 may comprise a fourth sensor 114. Fourth sensor 114 may comprise a nasal pressure sensor. A nasal pressure sensor may be configured to measure and track the airflow / breathing of a subject. The nasal pressure sensor may, for example, be configured to be attached to the face of the subject over the nose. The nasal pressure sensor may, for example, be a wearable and portable sensor that can be worn by the subject. In one example embodiment, the nasal pressure sensor comprises thermal sensors and/or ultrasonic sensors to measure the breathing of a subject. By way of example, the nasal pressure sensor may be a Sleep Breathing Monitor with App for iPhone & Android, by Mickcara. Moreover, any suitable nasal pressure sensor may be used.
[0017] In one example embodiment, the first sensor 111 and third sensor 113 may be located in the same device. For example, the first sensor 111 and third sensor 113 may both be located in a device that is attached to the finger of the subject.
[0018] In an example embodiment, each sensor may comprise an analog-to-digital converter configured to convert the measured biological condition to measured biological data. In another example embodiment, each sensor may provide analog measurements to one or more separate analog-to-digital converters configured to make a similar conversion.
[0019] In an example embodiment, the first sensor may be configured to generate first biological data, such as SpO2 data. In an example embodiment, the second sensor may be configured to generate second biological data, such as ECG data. In an example embodiment, the third sensor may be configured to generate third biological data, such as heart rate data. In an example embodiment, the fourth sensor may be configured to generate fourth biological data, such as nasal pressure data. Moreover, any suitable number and type of sensors may be used to generate different and/or additional types of biological data, in accordance with the disclosure herein. In an example embodiment, the first through fourth types of biological data are used to diagnose sleep apnea disorder of the subject.
[0020] In accordance with various example embodiments, the sensors 110 may be configured to communicate the biological data to the device 120. For example, the sensors 110 may be configured to transmit the biological data wirelessly to the device 120. For example, the biological data may be communicated via Bluetooth communication to the device 120. However, any wireless communication technique may be used. In other example embodiments, the sensors 110 may be configured to transmit the biological data to the device 120 via a connected cable. In some embodiments, some of the sensors may communicate wirelessly and other sensors may communicate via a wire with the device 120.
[0021] In a further example embodiment, the device 120 may comprise a mobile device. In other example embodiments, the device 120 is a local device, located convenient to the subject (such as at the subject’s home, or wherever the subject would be sleeping regularly). In an example embodiment, the mobile device is a smartphone. In other example embodiments, the mobile device is a tablet, a laptop, a desktop computer, a smart watch, and/or the like.
[0022] In accordance with various example embodiments, device 120 itself may be a wearable device, such as a smart watch or other health electronic component. In this case, one or more of the first through fourth sensors may be part of device 120 and may communicate the biological data internally to the processor 130.
[0023] In an example embodiment, the device 120 may be configured to receive the biological data. For example, the device 120 may comprise a wireless receiver/transceiver for receiving the biological data. In another example embodiment, the device 120 may be configured to receive the biological data via a wired connection. Thus, device 120 may be configured to receive two or more of the first, second, third, and fourth biological data from sensors 110.
[0024] In an example embodiment, the first biological data and second biological data are selected from among at least two of: pulse oximeter data (SpO2); heart rate data; electrocardiogram data (ECG); and nasal pressure data. Moreover, in one example embodiment, the first biological data is pulse oximeter data, and the second biological data is electrocardiogram data.
[0025] In an example embodiment, device 120 further comprises a processor 130. Processor 130 may be configured to process the biological data. Processor 130 may comprise a scalogram converter 131 and a classification unit 132. In an example embodiment, scalogram converter 131 is configured to generate a scalogram from the biological data. In one example embodiment, the first biological data is pulse oximeter data (SpO2) and the second biological data is electrocardiogram data (ECG), and the scalogram is formed based on each of the pulse oximeter data and electrocardiogram data. However, in accordance with various example embodiments, the multi-channel scalograms may be formed based on any two or more of the first, second, third, and fourth biological data.
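By way of illustration only, a scalogram of the kind described above (the magnitude of a continuous wavelet transform, plotted as time versus frequency) might be sketched with NumPy as follows; the Morlet wavelet parameters, sampling rate, and toy signal are arbitrary placeholders rather than the disclosed implementation.

```python
import numpy as np

def morlet(points, scale, w=5.0):
    """Complex Morlet wavelet sampled at `points` points for a given scale."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / scale
    return np.exp(1j * w * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def scalogram(signal, scales):
    """|CWT| of a 1D signal: rows are scales (frequency axis), columns are time."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        wavelet = morlet(min(10 * int(s), len(signal)), s)
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

# Toy 1D temporal signal standing in for one biological channel (e.g., ECG):
# two superimposed tones sampled at 100 Hz for 4 seconds.
fs = 100.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 8.0 * t)

img = scalogram(sig, scales=np.arange(1, 31))
# img is a 2D image (scale/frequency on the y-axis, time on the x-axis);
# one such image per sensor channel yields the multi-channel scalogram.
```

Stacking the per-channel images along a leading axis produces the 4-channel 2D input described for the classification unit.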
[0026] In an example embodiment, the system is configured to detect sleep apnea in a subject. In one example embodiment, the processor 130 is configured to generate a sleep apnea diagnosis from a test subject’s scalogram using the classification unit 132. Thus, in an example embodiment, system 100 is configured to automatically and accurately detect sleep apnea events using information obtained from portable and inexpensive devices which can be used at a subject’s home.
[0027] In an example embodiment, the device 120 is further configured to transmit the biological data and/or the diagnosis to a remote device 150. The information may be transmitted to remote device 150 wirelessly and/or via wired communication channels. In one example embodiment, the information may be transmitted via cloud server 140 where the information may be temporarily or permanently stored, for example. Moreover, the information may be transmitted to remote device 150 via any suitable communication method.
[0028] In an example embodiment, the remote device 150 is located at a distance from the subject and from device 120. For example, remote device 150 may be located at a physician’s office or hospital 160. Moreover, remote device 150 may be located anywhere that is not proximate device 120. Remote device 150 may be another mobile device, such as a doctor’s smartphone, laptop, desktop, tablet, or the like. Moreover, the remote device 150 may be associated with any diagnosis confirmation / data management system. In an example embodiment, a medical professional (e.g., a doctor or nurse practitioner) may review the diagnosis and the underlying data to provide approbation, approval, confirmation or the like of the diagnosis as well as prescription of treatment such as Continuous Positive Airway Pressure therapy (CPAP). In another example embodiment, the remote device 150 stores the official patient ‘chart’ with the data and/or diagnosis.
[0029] In accordance with an alternative example embodiment, the device 120 is configured to send the biological data to a remote device, and the remote device is configured to generate the scalogram based on the biological data (as that process is described herein) and to make a determination whether the subject has sleep apnea at the remote device. In this example embodiment, the device 120 may need less storage or processing power. In this example embodiment, the results of the sleep apnea determination can then be conveyed from the remote device to the patient in any suitable way, such as by transmission back to the mobile device, by a phone call from the doctor, by mail, by e-mail, or the like.
[0030] In accordance with another alternative example embodiment, the sensors may provide the biological data to a processor that is separate from the mobile device 120, and the conversion and classification may be performed outside of mobile device 120. In this example embodiment, the processor may provide the results and/or data from the conversion and classification to device 120 for display and/or for subsequent communication with remote device 150.
[0031] With reference now to FIG. 2, a method 200 for detection of sleep apnea in a subject is disclosed. In an example embodiment, a diagnostic model is constructed by deep learning of a large dataset collected at a sleep clinic. Thus, method 200 may comprise obtaining a training dataset from a group of people and a training process.
[0032] The training process may comprise measuring, for each person in the group of people, biological data (210). For example, the training process may comprise measuring at least first test biological data and second test biological data, each corresponding to sleep apnea diagnosis of one of the people in the group of people. In an example embodiment, each test subject has a known diagnosis. In an example embodiment, the sensors measure data during a test period. The test period may be any suitable length, but in one example embodiment the test period is a 6-hour sleep test period. In an example embodiment, the test system uses the same number and type of biological data from the same type of sensors as will be used in the home, wearable, portable diagnosis kits. The training process may use sensors similar to sensors 110 of system 100.
[0033] In an example embodiment, the method 200 further comprises converting the measured biological data to a scalogram (220). The conversion of biological data to a scalogram may be performed by a scalogram converter similar to scalogram converter 131. In an example embodiment, the biological data (e.g., ECG, heart rate, oxygen saturation, snoring air pressure) measured using the portable sleep devices (sensors) is in the form of one-dimensional (1D) temporal data. However, CNNs, which are deep learning approaches from the fields of image processing and computer vision, are very effective at learning information from 2D (or 3D) data. To overcome this unequal dimensionality issue, in an example embodiment, scalograms of the biological data are generated as 4-channel 2D images that describe the absolute value of the continuous wavelet transform (CWT), yielding a 2D image with time (x-axis) and frequency (y-axis). Thus, the biological data (from two or more sensors) for each of the test subjects is converted, through a scalogram converter, into an image suitable for time-frequency analysis.

[0034] In an example embodiment, the method 200 further comprises partitioning the whole sleep scalogram into 10-second patches, and inputting the patches to a deep learning algorithm while teaching the algorithm the label (normal, sleep apnea or hypopnea) of each patch (230). It is noted here that although disclosed herein as using 10-second patches for the Xception CNN, any suitable time duration / frequency that yields a more accurate diagnosis using different deep learning classifiers may be used. Thus, in an example embodiment, the deep learning algorithm results in parameter optimization that leads to completion of model training with the least diagnosis error for the training dataset compared to a human expert's labeling.
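The scalogram conversion and 10-second patch partitioning described above can be sketched as follows. This is an illustrative sketch only: it uses a real-valued Morlet wavelet and a naive direct-summation CWT for clarity, whereas a practical implementation would use an optimized library transform and the complex wavelet; the function names, scales, and toy sampling rate are illustrative assumptions, not part of this disclosure.

```python
import math

def morlet(t, w0=6.0):
    # Real part of a Morlet wavelet (a full CWT uses the complex wavelet).
    return math.exp(-t * t / 2.0) * math.cos(w0 * t)

def cwt_scalogram(signal, scales, dt=1.0):
    """Absolute value of a naive CWT of a 1D signal -> 2D scalogram
    (one row per scale/frequency, one column per time sample)."""
    n = len(signal)
    scalogram = []
    for s in scales:
        half = int(4 * s / dt)  # truncate the wavelet support to +/- 4 s
        row = []
        for i in range(n):
            acc = 0.0
            for k in range(-half, half + 1):
                j = i + k
                if 0 <= j < n:
                    acc += signal[j] * morlet(k * dt / s)
            row.append(abs(acc) / math.sqrt(s))
        scalogram.append(row)
    return scalogram

def partition(scalogram, fs, patch_seconds=10):
    """Split a scalogram (scales x time) into fixed-duration patches,
    one patch per labeling window."""
    width = int(patch_seconds * fs)
    n = len(scalogram[0])
    return [[row[i:i + width] for row in scalogram]
            for i in range(0, n - width + 1, width)]
```

For example, a 60-second signal sampled at 10 Hz yields six 10-second patches, each of which would be labeled normal, sleep apnea or hypopnea for training.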
[0035] Discussing the training of the diagnostic model further, in an example embodiment, the diagnostic model may be based on the training dataset and training process for the group of people and the related first test biological data and second test biological data. In another example embodiment, the method comprises training the deep learning algorithm based on a previously measured biological dataset and corresponding previously determined sleep apnea diagnoses.
[0036] In one example embodiment, the diagnostic model is trained through use of neural networks. For example, the diagnostic model may be implemented using a deep learning approach including various Convolutional Neural Networks (CNN). In order to reduce the number of trainable parameters, the Xception architecture, which is a CNN with depth-wise separable convolution operations, may be used as the foundation network, while other CNN architectures can be used with different parameter settings. This CNN architecture, with its high-order context feature extraction capabilities, has demonstrated better classification performance compared to other CNN models. That said, other suitable CNN architectures may be used.
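The parameter saving from depth-wise separable convolution can be illustrated with a simple count (bias terms are omitted for brevity, and the function names are illustrative):

```python
def standard_conv_params(k, cin, cout):
    # A standard k x k convolution learns one k x k x cin kernel
    # per output channel.
    return k * k * cin * cout

def separable_conv_params(k, cin, cout):
    # Depth-wise separable convolution: one k x k depth-wise filter per
    # input channel, followed by a 1 x 1 point-wise convolution across
    # channels.
    return k * k * cin + cin * cout
```

For a 3 x 3 convolution mapping 64 channels to 128, the standard form needs 73,728 weights while the separable form needs 8,768, roughly an 8x reduction, which is why such layers reduce the number of trainable parameters.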
[0037] In an example embodiment, the scalograms are based on Continuous Wavelet Transforms (CWT). The CWT may comprise Wavelet parameters. The diagnosis model can learn the pattern of Wavelet parameters, for example by applying a scalogram patch to the model and comparing the output to the known diagnosis for the person in the group of test subjects. For example, the Xception CNN model may automatically output the probability of each SA event (i.e., likelihood of SA, 0-100%). The probability of each SA event can then be binarized as normal sleep or apnea using, for example, a cut-off value of 50%.
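The binarization described above, thresholding the model's per-patch SA probability at a 50% cut-off and comparing against the expert label, can be sketched as follows (the function names and the simple disagreement-rate error measure are illustrative assumptions):

```python
def binarize_sa_probability(prob_percent, cutoff=50.0):
    """Map the model's SA-event probability (0-100%) to a binary label."""
    return "apnea" if prob_percent >= cutoff else "normal"

def patch_error(predictions, expert_labels):
    """Fraction of patches whose binarized label disagrees with the
    human expert's annotation; this is the error the training reduces."""
    wrong = sum(1 for p, e in zip(predictions, expert_labels) if p != e)
    return wrong / len(predictions)
```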
[0038] The error between the known diagnosis and the diagnosis model output can be used to fine-tune various sets of model parameters. In an example embodiment, the model parameters may include the Learning Rate, Dropout Rate, Regularization factor, etc. However, other suitable parameters may be used.
[0039] Example
[0040] In one example embodiment, the diagnostic model is generated by taking a group of 800 people, where 700 have been diagnosed with SA and 100 have received a normal diagnosis. These 800 people are used to train the model using system 100. In an example embodiment, a diagnostic model is generated using the same number / type of biological data from the same type of sensors as will be used in the home, wearable, portable diagnosis kits. The biological data for each of the 800 people is converted into a scalogram, an image capable of time-frequency analysis, through a scalogram converter. The scalogram can be divided into patches, and these patches are fed into the diagnostic model to generate an output (such as the probability of a SA diagnosis and the probability of a hypopnea). That output probability can be binarized into normal, SA or hypopnea for the test subject, and the model parameters are then tuned to reduce the error. The model parameters can be, for example, the latency of each biosignal (e.g., oxygen desaturation occurs gradually for 30-60 secs after a SA event), a Learning Rate of 0.001, a Dropout Rate of 0.3, and L2 regularization with a weight decay rate of 1e-5. However, any suitable model parameters can be used.
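The example hyperparameters above (Learning Rate 0.001, Dropout Rate 0.3, L2 weight decay 1e-5) can be made concrete with a minimal sketch of the update rules they govern. This is a simplified illustration of how such parameters act during training, not the actual Xception training code:

```python
import random

LR = 0.001           # Learning Rate from the example embodiment
DROPOUT = 0.3        # Dropout Rate
WEIGHT_DECAY = 1e-5  # L2 regularization weight decay rate

def sgd_l2_step(weights, grads, lr=LR, wd=WEIGHT_DECAY):
    """One gradient-descent step with L2 weight decay:
    w <- w - lr * (g + wd * w)."""
    return [w - lr * (g + wd * w) for w, g in zip(weights, grads)]

def dropout(activations, rate=DROPOUT, rng=random):
    """Inverted dropout as applied at training time: zero a random
    fraction of units and rescale the survivors so the expected
    activation is unchanged."""
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```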
[0041] Once a large number of patches for each of the 800 test subjects has been used to train the diagnostic model, the diagnostic model will have a high degree of accuracy and is ready for use in home testing applications.
[0042] At home testing
[0043] The method 200 may further comprise steps taken in conjunction with at-home sleep apnea testing. In an example embodiment, the method 200 further comprises generating, by a first sensor, first biological data corresponding to sleep apnea disorder of the subject, and generating, by a second sensor, second biological data corresponding to sleep apnea disorder of the subject (240). The method 200 further comprises receiving, by a processor in communication with the first sensor, the first biological data, and receiving, by the processor, which is also in communication with the second sensor, the second biological data.
[0044] Method 200 may further comprise generating, by a scalogram converter 131 of the processor, a scalogram based on the first biological data and the second biological data and providing the scalogram to a classification unit 132 of the processor 130 (250). In an example embodiment, and with reference again to FIG 1, the processor 130 may comprise scalogram converter 131 and classification unit 132. Scalogram converter 131 is configured to receive the biological data, generate a scalogram by converting the biological data into the scalogram, and provide the scalogram to the classification unit 132.
[0045] In an example embodiment, the trained diagnosis model is provided to the classification unit 132 of device 120. In an example embodiment, the classification unit 132 is configured to receive the output of scalogram converter 131. Classification unit 132 may be configured to perform real-time automatic SA event detection. The output of that detection, for example, may be an indication that a SA event has been detected or that no SA event has been detected. In another example embodiment, classification unit 132 may be configured to further generate a patient diagnosis. For example, the diagnosis may be a sleep apnea diagnosis or a normal diagnosis. Classification unit 132 may further be configured to generate a sleep apnea syndrome (SAS) severity classification of a patient. For example, the severity could be divided into: normal, mild SAS, moderate SAS, and severe SAS.
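The four-way severity classification described above can be sketched as a mapping from the apnea-hypopnea index (AHI, events per hour of sleep) to the classes listed; the 5/15/30 events-per-hour cut-offs are the conventional clinical thresholds, assumed here for illustration rather than specified by this disclosure:

```python
def sas_severity(apnea_events, hypopnea_events, sleep_hours):
    """Classify SAS severity from the apnea-hypopnea index (AHI).
    The 5/15/30 cut-offs are assumed conventional thresholds,
    not values taken from this disclosure."""
    ahi = (apnea_events + hypopnea_events) / sleep_hours
    if ahi < 5:
        return "normal"
    elif ahi < 15:
        return "mild SAS"
    elif ahi < 30:
        return "moderate SAS"
    return "severe SAS"
```

For example, 24 detected events over a 6-hour recording gives an AHI of 4 and a normal classification.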
[0046] Thus, method 200 may further comprise generating, in the classification unit 132, a sleep apnea diagnosis based on the scalogram and the trained diagnosis model (260). In an example embodiment, the trained diagnostic model receives a patient's scalogram as input and performs the following roles through the classification unit: real-time sleep apnea event detection, patient diagnosis, and/or sleep apnea syndrome severity classification.
[0047] Method 200 may further comprise providing, by the processor, the sleep apnea diagnosis to the subject (270). The diagnosis can be provided on the screen of the mobile device (see FIG. 1), for example. However, any suitable method of delivery of the diagnosis can be used. In accordance with an example embodiment, training of the diagnosis model is done using multimodal signals in order to obtain more reliable and more accurate results than using a single-modality physiological signal. These multimodal results are compared to a human expert's diagnosis and, due to their high reliability, the diagnosis can be provided in real-time to the subject and a human expert is no longer needed to diagnose SA syndromes. After-the-fact checking can be done as desired, but the case for its necessity would be far less compelling with the disclosed system. Moreover, the automated nature of the diagnosis makes it easier, more comfortable, and less expensive to obtain a diagnosis.

[0048] The real-time automatic SA detection can also be applied to an OSAS treatment using continuous positive air pressure (CPAP), as accurate SA detection can improve the adaptive air ventilation in CPAP treatment, which is needed when SA occurs. The system to detect SA events will use a minimal number of input physiological signals that are measured
[0049] In accordance with various example embodiments, method 200 may further comprise providing, by the processor, the first biological data, the second biological data and the sleep apnea diagnosis to a remote system for approval by a medical professional. In an example embodiment, the at least two sensors are located on one or more portable wearable devices worn by the subject, and the processor is located on a mobile device. In accordance with various example embodiments, the method comprises transmitting, by the mobile device, at least one of the biological data and/or an analysis of the biological data (the sleep apnea diagnosis) to a remote server, wherein the processor is located on the remote server. In another example embodiment, the method comprises transmitting, by the mobile device, the sleep apnea diagnosis and the biological data to a sleep clinic server associated with at least one of a sleep clinician or a sleep doctor.
[0050] In one example embodiment, determining the sleep apnea diagnosis further includes at least one of detecting at least one sleep apnea event, determining a severity of the at least one sleep apnea event, and determining a score of the sleep apnea diagnosis. In another example embodiment, determining the sleep apnea diagnosis includes each of detecting the at least one sleep apnea event, determining the severity of the at least one sleep apnea event, and determining the score of the sleep apnea diagnosis. And in another example embodiment, detecting the at least one sleep apnea event, determining the severity of the at least one sleep apnea event, and determining the score of the sleep apnea diagnosis are each performed using a classification unit of the deep learning algorithm.
[0051] In an example embodiment, providing the result to the subject includes providing the result in real-time.
[0052] One or more of the components of the system may include software, hardware, a platform, app, micro-app, algorithms, modules, etc. The app may operate on any platform such as, for example, the IOS or Android platforms.
[0053] The diagnostic model may be trained by the use of artificial intelligence, machine learning and other algorithms. Long term follow-up of a large patient population on the app may provide data which can be processed to define treatment and analysis algorithms which can correlate with the published literature. The process and analysis may adjust over time, to account for improvements in diagnosis and treatment of SA. Thus, a subject may initially have a normal diagnosis, but an update of the diagnostic model may result in a SA diagnosis at a later point in time. The update in the diagnostic model could be the result of additional subjects being tested to add to the number of inputs impacting the model, or improvements in the accuracy of the diagnosis. Thus, the system may be configured to reassess stored scalograms for a period of time after the first analysis. The system may be configured to provide updated scalograms to device 120 on a periodic, pull, or push basis.
[0054] The system may be agnostic to and/or interface with any sensors and mobile device. In other words, the system may process any type of biological data using any suitable sensors and devices. Sensors for measuring biological data are constantly evolving. The machines to process and analyze biological data can change and improve over time. The sleep clinic SA diagnosis methods are not rapidly changing, but to the extent they change, the method and system described herein can readily adapt by producing a diagnostic model consistent with the most current physician / sleep clinic diagnosis results.
[0055] The system may process the biological data to provide real-time SA diagnosis / notifications to the subject and/or to a remote system. In an example embodiment, a SA diagnosis kit may be sent to the subject via the mail or other common carrier. The subject may further download a software application to their mobile device, or to device 120. The kit may include any indicator (e.g., a bar code or QR code). Scanning the indicator may facilitate syncing the sensors to the mobile device and syncing the mobile device to a patient account at the remote device. After scanning the bar code, the system may display the kit information, the sleep clinic (or other remote entity) information, the bar code data, the type of test, and/or the like. The system may also provide entry questions for the user to complete to obtain more data from the user. The questions may assist in diagnosing and/or treating sleep apnea. After the information is verified, the system may register the test kit. The data collection may be configurable and/or specific to a particular clinic, hospital system, clinical trial, or the like.
[0056] The system 100 may utilize artificial intelligence.
[0057] The system 100 may also convert the output from the classification unit 132 into a traditional lab report (e.g., in the form of a PDF report).

[0058] The system 100 may provide event triggers to the remote device 150 or to cause action on the mobile device. The remote device 150 may send data to the user app (e.g., on the user mobile device, device 120). The user app may provide the results to the user. The user app may also provide notifications to the user and/or serve as a communications interface. The user may send actionable events and/or triggers to the user app. External devices are examples of triggers from the patient to the system. Other examples are any types of configurable forms or surveys that can trigger the event, such as symptoms of sleep apnea. The system may include configurable surveys. The surveys may be pre-test surveys and/or patient surveys, and the data may be entered by the patient into the app.
[0059] The event triggers may be fully or partially integrated with a mobile app technology interface to the user. The event triggers may trigger the providing of information.
[0060] The diagnosis may function as the event triggering feature to the user. The diagnosis may have a diagnosis-specific interactive interface which defines the test result, provides communication interfaces to appropriate medical specialists and/or provides access to additional testing modalities. With respect to specialists, certain disease states benefit from an evaluation by specialists. A telemedicine interface with the platform of the system provides a straightforward way of virtual communication with the appropriate specialist. This provides immediate actionable events for the patient. The ability to do this in the context of sleep apnea diagnosis with real time diagnosis and immediate triggers is ideal in the current medical environment.
[0061] The diagnosis may appear on the user interface in the app showing, for example, test results, up-to-date results analysis which can be adjusted in real time based on multiple variables, associated relevant data from monitoring devices, recommended relevant media inputs (e.g., audio, images, and video), relevant recommendations and information from a medical professional regarding the diagnosis, communications access to a health care provider and/or recommendations of additional tests or testing frequency.
[0062] The triggers may provide certain diagnosis results to a health care provider and/or establish a communication connection with an HCP (Health Care Provider) through the mobile device. The system may establish the connection via being connected to a telemedicine platform. Telemedicine platforms have specialists that can be available virtually 24 hours a day. Triggers of a sleep apnea event can also result in patient requests for more information.

[0063] The system 100 may be configured to comply with all HIPAA requirements and functionality. The system may also include or facilitate a payment functionality, transactional functionality and/or insurance reimbursement functionality. In various embodiments, the system may define a transactional cohort such as, for example, a cash pay model or an insurance model. The system may also define a transactional type such as, for example, home test kit or doctor's office kit. The home test kits may be a direct-to-consumer (cash pay) model. The system may analyze coverage. In particular, certain high-risk groups may merit certain types of tests, while healthy individuals may not merit similar tests. This may be a function of cost control.
[0064] In various embodiments, components, modules, and/or engines of system may be implemented as micro-applications or micro-apps. Micro-apps are typically deployed in the context of a mobile operating system, including for example, a WINDOWS® mobile operating system, an ANDROID® operating system, an APPLE® iOS operating system, a BLACKBERRY® company’s operating system, and the like. The micro-app may be configured to leverage the resources of the larger operating system and associated hardware via a set of predetermined rules which govern the operations of various operating systems and hardware resources. For example, where a micro-app desires to communicate with a device or network other than the mobile device or mobile operating system, the micro-app may leverage the communication protocol of the operating system and associated device hardware under the predetermined rules of the mobile operating system. Moreover, where the micro-app desires an input from a user, the micro-app may be configured to request a response from the operating system which monitors various hardware components and then communicates a detected input from the hardware to the micro-app.
[0065] The system and method may be described herein in terms of functional block components, screen shots, optional selections, and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. For example, the system may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, the software elements of the system may be implemented with any programming or scripting language, or any combination thereof, such as C, C++, C#, JAVA®, JAVASCRIPT®, JAVASCRIPT® Object Notation (JSON), VBScript, Macromedia COLD FUSION, COBOL, MICROSOFT® company's Active Server Pages, assembly, PERL®, PHP, awk, PYTHON®, Visual Basic, SQL Stored Procedures, PL/SQL, any UNIX® shell script, and extensible markup language (XML), with the various algorithms being implemented with any combination of data structures, objects, processes, routines or other programming elements. Further, it should be noted that the system may employ any number of conventional techniques for data transmission, signaling, data processing, network control, and the like. Still further, the system could be used to detect or prevent security issues with a client-side scripting language, such as JAVASCRIPT®, VBScript, or the like.
[0066] The system and method are described herein with reference to screen shots, block diagrams and flowchart illustrations of methods, apparatus, and computer program products according to various embodiments. It will be understood that each functional block of the block diagrams and the flowchart illustrations, and combinations of functional blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions.
[0067] Accordingly, functional blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each functional block of the block diagrams and flowchart illustrations, and combinations of functional blocks in the block diagrams and flowchart illustrations, can be implemented by either special purpose hardware-based computer systems which perform the specified functions or steps, or suitable combinations of special purpose hardware and computer instructions. Further, illustrations of the process flows and the descriptions thereof may make reference to user WINDOWS® applications, webpages, websites, web forms, prompts, etc. Practitioners will appreciate that the illustrated steps described herein may be implemented in any number of configurations, including the use of WINDOWS® applications, webpages, web forms, popup WINDOWS® applications, prompts, and the like.
[0068] In various embodiments, the software elements of the system may also be implemented using a JAVASCRIPT® run-time environment configured to execute JAVASCRIPT® code outside of a web browser. For example, the software elements of the system may also be implemented using NODE.JS® components. NODE.JS® programs may implement several modules to handle various core functionalities. For example, a package management module, such as NPM®, may be implemented as an open source library to aid in organizing the installation and management of third-party NODE.JS® programs. NODE.JS® programs may also implement a process manager, such as, for example, Parallel Multithreaded Machine (“PM2”); a resource and performance monitoring tool, such as, for example, Node Application Metrics (“appmetrics”); a library module for building user interfaces, and/or any other suitable and/or desired module.
[0069] Middleware may include any hardware and/or software suitably configured to facilitate communications and/or process transactions between disparate computing systems. Middleware components are commercially available and known in the art. Middleware may be implemented through commercially available hardware and/or software, through custom hardware and/or software components, or through a combination thereof. Middleware may reside in a variety of configurations and may exist as a standalone system or may be a software component residing on the internet server. Middleware may be configured to process transactions between the various components of an application server and any number of internal or external systems for any of the purposes disclosed herein. WEBSPHERE® MQ™ (formerly MQSeries) by IBM®, Inc. (Armonk, NY) is an example of a commercially available middleware product. An Enterprise Service Bus (“ESB”) application is another example of middleware.
[0070] The computers discussed herein may provide a suitable website or other internet-based graphical user interface which is accessible by users. In one embodiment, MICROSOFT® company’s Internet Information Services (IIS), Transaction Server (MTS) service, and an SQL SERVER® database, are used in conjunction with MICROSOFT® operating systems, WINDOWS NT® web server software, SQL SERVER® database, and MICROSOFT® Commerce Server. Additionally, components such as ACCESS® software, SQL SERVER® database, ORACLE® software, SYBASE® software, INFORMIX® software, MYSQL® software, INTERBASE® software, etc., may be used to provide an Active Data Object (ADO) compliant database management system. In one embodiment, the APACHE® web server is used in conjunction with a LINUX® operating system, a MYSQL® database, and PERL®, PHP, Ruby, and/or PYTHON® programming languages.
[0071] For the sake of brevity, conventional data networking, application development, and other functional aspects of the systems (and components of the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent exemplary functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in a practical system.
[0072] In various embodiments, the system and various components may integrate with one or more smart digital assistant technologies. For example, exemplary smart digital assistant technologies may include the ALEXA® system developed by the AMAZON® company, the GOOGLE HOME® system developed by Alphabet, Inc., the HOMEPOD® system of the APPLE® company, and/or similar digital assistant technologies. The ALEXA® system, GOOGLE HOME® system, and HOMEPOD® system, may each provide cloud-based voice activation services that can assist with tasks, entertainment, general information, and more. All the ALEXA® devices, such as the AMAZON ECHO®, AMAZON ECHO DOT®, AMAZON TAP®, and AMAZON FIRE® TV, have access to the ALEXA® system. The ALEXA® system, GOOGLE HOME® system, and HOMEPOD® system may receive voice commands via their voice activation technologies, activate other functions, control smart devices, and/or gather information. For example, the smart digital assistant technologies may be used to interact with music, emails, texts, phone calls, question answering, shopping, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing real time information, such as news. The ALEXA®, GOOGLE HOME®, and HOMEPOD® systems may also allow the user to access information about eligible transaction accounts linked to an online account across all digital assistant-enabled devices.
[0073] The various system components discussed herein may include one or more of the following: a host server or other computing systems including a processor for processing digital data; a memory coupled to the processor for storing digital data; an input digitizer coupled to the processor for inputting digital data; an application program stored in the memory and accessible by the processor for directing processing of digital data by the processor; a display device coupled to the processor and memory for displaying information derived from digital data processed by the processor; and a plurality of databases. Various databases used herein may include: client data; merchant data; financial institution data; and/or like data useful in the operation of the system. As those skilled in the art will appreciate, user computer may include an operating system (e.g., WINDOWS®, UNIX®, LINUX®, SOLARIS®, MACOS®, etc.) as well as various conventional support software and drivers typically associated with computers.
[0074] The present system or any part(s) or function(s) thereof may be implemented using hardware, software, or a combination thereof and may be implemented in one or more computer systems or other processing systems. However, the manipulations performed by embodiments may be referred to in terms, such as matching or selecting, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable, in most cases, in any of the operations described herein. Rather, the operations may be machine operations or any of the operations may be conducted or enhanced by artificial intelligence (AI) or machine learning. AI may refer generally to the study of agents (e.g., machines, computer-based systems, etc.) that perceive the world around them, form plans, and make decisions to achieve their goals. Foundations of AI include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing. Useful machines for performing the various embodiments include general purpose digital computers or similar devices.
[0075] In various embodiments, the embodiments are directed toward one or more computer systems capable of carrying out the functionalities described herein. The computer system includes one or more processors. The processor is connected to a communication infrastructure (e.g., a communications bus, cross-over bar, network, etc.). Various software embodiments are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement various embodiments using other computer systems and/or architectures. The computer system can include a display interface that forwards graphics, text, and other data from the communication infrastructure (or from a frame buffer not shown) for display on a display unit.
[0076] The computer system also includes a main memory, such as random access memory (RAM), and may also include a secondary memory. The secondary memory may include, for example, a hard disk drive, a solid-state drive, and/or a removable storage drive. The removable storage drive reads from and/or writes to a removable storage unit in a well-known manner. As will be appreciated, the removable storage unit includes a computer usable storage medium having stored therein computer software and/or data.

[0077] In various embodiments, secondary memory may include other similar devices for allowing computer programs or other instructions to be loaded into a computer system. Such devices may include, for example, a removable storage unit and an interface. Examples of such may include a removable memory chip (such as an erasable programmable read only memory (EPROM), programmable read only memory (PROM)) and associated socket, or other removable storage units and interfaces, which allow software and data to be transferred from the removable storage unit to a computer system.
[0078] The computer system or device 120 may also include a communications interface. A communications interface allows software and data to be transferred between the computer system and external devices. Examples of such a communications interface may include a modem, a network interface (such as an Ethernet card), a communications port, etc. Software and data transferred via the communications interface are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface. These signals are provided to communications interface via a communications path (e.g., channel). This channel carries signals and may be implemented using wire, cable, fiber optics, a telephone line, a cellular link, a radio frequency (RF) link, wireless and other communications channels.
[0079] In various embodiments, the server may include application servers (e.g., WEBSPHERE®, WEBLOGIC®, JBOSS®, POSTGRES PLUS ADVANCED SERVER®, etc.). In various embodiments, the server may include web servers (e.g., Apache, IIS, GOOGLE® Web Server, SUN JAVA® System Web Server, JAVA® Virtual Machine running on LINUX® or WINDOWS® operating systems).
[0080] A web client includes any device or software which communicates via any network, such as, for example, any device or software discussed herein. The web client may include internet browsing software installed within a computing unit or system to conduct online transactions and/or communications. These computing units or systems may take the form of a computer or set of computers, although other types of computing units or systems may be used, including personal computers, laptops, notebooks, tablets, smart phones, cellular phones, personal digital assistants, servers, pooled servers, mainframe computers, distributed computing clusters, kiosks, terminals, point of sale (POS) devices or terminals, televisions, or any other device capable of receiving data over a network. The web client may include an operating system (e.g., WINDOWS®, WINDOWS MOBILE® operating systems, UNIX® operating system, LINUX® operating systems, APPLE® OS® operating systems, etc.) as well as various conventional support software and drivers typically associated with computers. The web client may also run MICROSOFT® INTERNET EXPLORER® software, MOZILLA® FIREFOX® software, GOOGLE CHROME™ software, APPLE® SAFARI® software, or any other of the myriad software packages available for browsing the internet.
[0081] As those skilled in the art will appreciate, the web client may or may not be in direct contact with the server (e.g., application server, web server, etc., as discussed herein). For example, the web client may access the services of the server through another server and/or hardware component, which may have a direct or indirect connection to an internet server. For example, the web client may communicate with the server via a load balancer. In various embodiments, web client access is through a network or the internet through a commercially-available web browser software package. In that regard, the web client may be in a home or business environment with access to the network or the internet. The web client may implement security protocols such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS). A web client may implement several application layer protocols including HTTP, HTTPS, FTP, and SFTP.
[0082] The various system components may be independently, separately, or collectively suitably coupled to the network via data links, which include, for example, a connection to an Internet Service Provider (ISP) over the local loop as is typically used in connection with standard modem communication, cable modem, DISH NETWORK®, ISDN, Digital Subscriber Line (DSL), or various wireless communication methods. It is noted that the network may be implemented as other types of networks, such as an interactive television (ITV) network. Moreover, the system contemplates the use, sale, or distribution of any goods, services, or information over any network having similar functionality described herein.
[0083] The system contemplates uses in association with web services, utility computing, pervasive and individualized computing, security and identity solutions, autonomic computing, cloud computing, commodity computing, mobility and wireless solutions, open source, biometrics, grid computing, and/or mesh computing.
[0084] Any of the communications, inputs, storage, databases or displays discussed herein may be facilitated through a website having web pages. The term “web page” as it is used herein is not meant to limit the type of documents and applications that might be used to interact with the user. For example, a typical website might include, in addition to standard HTML documents, various forms, JAVA® applets, JAVASCRIPT® programs, active server pages (ASP), common gateway interface scripts (CGI), extensible markup language (XML), dynamic HTML, cascading style sheets (CSS), AJAX (Asynchronous JAVASCRIPT And XML) programs, helper applications, plug-ins, and the like. A server may include a web service that receives a request from a web server, the request including a URL and an IP address (192.168.1.1). The web server retrieves the appropriate web pages and sends the data or applications for the web pages to the IP address. Web services are applications that are capable of interacting with other applications over a communications means, such as the internet. Web services are typically based on standards or protocols such as XML, SOAP, AJAX, WSDL and UDDI. Web services methods are well known in the art, and are covered in many standard texts. For example, representational state transfer (REST), or RESTful, web services may provide one way of enabling interoperability between applications.
[0085] The computing unit of the web client may be further equipped with an internet browser connected to the internet or an intranet using standard dial-up, cable, DSL, or any other internet protocol known in the art. Transactions originating at a web client may pass through a firewall in order to prevent unauthorized access from users of other networks. Further, additional firewalls may be deployed between the varying components of CMS to further enhance security.
[0086] Any databases discussed herein may include relational, hierarchical, graphical, blockchain, object-oriented structure, and/or any other database configurations. Any database may also include a flat file structure wherein data may be stored in a single file in the form of rows and columns, with no structure for indexing and no structural relationships between records. For example, a flat file structure may include a delimited text file, a CSV (comma-separated values) file, and/or any other suitable flat file structure. Common database products that may be used to implement the databases include DB2® by IBM® (Armonk, NY), various database products available from ORACLE® Corporation (Redwood Shores, CA), MICROSOFT ACCESS® or MICROSOFT SQL SERVER® by MICROSOFT® Corporation (Redmond, Washington), MYSQL® by MySQL AB (Uppsala, Sweden), MONGODB®, Redis, Apache Cassandra®, HBASE® by APACHE®, MapR-DB by the MAPR® corporation, or any other suitable database product. Moreover, any database may be organized in any suitable manner, for example, as data tables or lookup tables. Each record may be a single file, a series of files, a linked series of data fields, or any other data structure.
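For illustration, the flat file structure described above — rows and columns in a single delimited file, with no indexing and no structural relationships between records — can be read with nothing more than a CSV parser. The field names below are hypothetical and not taken from the disclosure.

```python
import csv
import io

# A flat file structure: records as rows, fields as delimited columns,
# with no indexing and no structural relationships between records.
flat = "subject_id,avg_spo2,avg_hr\ns1,94.2,72.0\ns2,97.8,64.5\n"

rows = list(csv.reader(io.StringIO(flat)))
print(rows[0])  # header row: ['subject_id', 'avg_spo2', 'avg_hr']
print(rows[1])  # first record: ['s1', '94.2', '72.0']
```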
[0087] As used herein, big data may refer to partially or fully structured, semi-structured, or unstructured data sets including millions of rows and hundreds of thousands of columns. A big data set may be compiled, for example, from a history of purchase transactions over time, from web registrations, from social media, from records of charge (ROC), from summaries of charges (SOC), from internal data, or from other suitable sources. Big data sets may be compiled without descriptive metadata such as column types, counts, percentiles, or other interpretive-aid data points.
[0088] Association of certain data may be accomplished through any desired data association technique such as those known or practiced in the art. For example, the association may be accomplished either manually or automatically. Automatic association techniques may include, for example, a database search, a database merge, GREP, AGREP, SQL, using a key field in the tables to speed searches, sequential searches through all the tables and files, sorting records in the file according to a known order to simplify lookup, and/or the like. The association step may be accomplished by a database merge function, for example, using a “key field” in pre-selected databases or data sectors. Various database tuning steps are contemplated to optimize database performance. For example, frequently used files such as indexes may be placed on separate file systems to reduce input/output (“I/O”) bottlenecks.
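A minimal sketch of the key-field database merge described above, using an in-memory SQLite database. The table and column names (e.g., `subject_id` as the key field) are illustrative assumptions, not taken from the disclosure.

```python
import sqlite3

# Two related data tables linked on a shared "key field" (subject_id).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spo2 (subject_id TEXT, avg_spo2 REAL)")
con.execute("CREATE TABLE ecg (subject_id TEXT, avg_hr REAL)")
con.executemany("INSERT INTO spo2 VALUES (?, ?)", [("s1", 94.2), ("s2", 97.8)])
con.executemany("INSERT INTO ecg VALUES (?, ?)", [("s1", 72.0), ("s2", 64.5)])

# The key field drives the merge, as a database merge function would:
rows = con.execute(
    "SELECT spo2.subject_id, avg_spo2, avg_hr "
    "FROM spo2 JOIN ecg ON spo2.subject_id = ecg.subject_id "
    "ORDER BY spo2.subject_id"
).fetchall()
print(rows)  # [('s1', 94.2, 72.0), ('s2', 97.8, 64.5)]
```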
[0089] More particularly, a “key field” partitions the database according to the high-level class of objects defined by the key field. For example, certain types of data may be designated as a key field in a plurality of related data tables and the data tables may then be linked on the basis of the type of data in the key field. The data corresponding to the key field in each of the linked data tables is preferably the same or of the same type. However, data tables having similar, though not identical, data in the key fields may also be linked by using AGREP, for example. In accordance with one embodiment, any suitable data storage technique may be utilized to store data without a standard format. Data sets may be stored using any suitable technique, including, for example, storing individual files using an ISO/IEC 7816-4 file structure; implementing a domain whereby a dedicated file is selected that exposes one or more elementary files containing one or more data sets; using data sets stored in individual files using a hierarchical filing system; data sets stored as records in a single file (including compression, SQL accessible, hashed via one or more keys, numeric, alphabetical by first tuple, etc.); data stored as Binary Large Object (BLOB); data stored as ungrouped data elements encoded using ISO/IEC 7816-6 data elements; data stored as ungrouped data elements encoded using ISO/IEC Abstract Syntax Notation (ASN.1) as in ISO/IEC 8824 and 8825; other proprietary techniques that may include fractal compression methods, image compression methods, etc.
[0090] In various embodiments, the ability to store a wide variety of information in different formats is facilitated by storing the information as a BLOB. Thus, any binary information can be stored in a storage space associated with a data set. As discussed above, the binary information may be stored in association with the system or external to but affiliated with the system. The BLOB method may store data sets as ungrouped data elements formatted as a block of binary via a fixed memory offset using either fixed storage allocation, circular queue techniques, or best practices with respect to memory management (e.g., paged memory, least recently used, etc.). By using BLOB methods, the ability to store various data sets that have different formats facilitates the storage of data, in the database or associated with the system, by multiple and unrelated owners of the data sets. For example, a first data set which may be stored may be provided by a first party, a second data set which may be stored may be provided by an unrelated second party, and yet a third data set which may be stored may be provided by a third party unrelated to the first and second party. Each of these three exemplary data sets may contain different information that is stored using different data storage formats and/or techniques. Further, each data set may contain subsets of data that also may be distinct from other subsets.
[0091] As stated above, in various embodiments, the data can be stored without regard to a common format. However, the data set (e.g., BLOB) may be annotated in a standard manner when provided for manipulating the data in the database or system. The annotation may comprise a short header, trailer, or other appropriate indicator related to each data set that is configured to convey information useful in managing the various data sets. For example, the annotation may be called a “condition header,” “header,” “trailer,” or “status,” herein, and may comprise an indication of the status of the data set or may include an identifier correlated to a specific issuer or owner of the data. In one example, the first three bytes of each data set BLOB may be configured or configurable to indicate the status of that particular data set; e.g., LOADED, INITIALIZED, READY, BLOCKED, REMOVABLE, or DELETED. Subsequent bytes of data may be used to indicate for example, the identity of the issuer, user, transaction/membership account identifier or the like. Each of these condition annotations are further discussed herein.
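A sketch of reading such a condition header. The assumptions here are invented for illustration: the first byte of the three-byte header encodes the status, the status codes themselves are hypothetical, and the eight bytes following the header carry an issuer identifier.

```python
# Hypothetical status codes for the first bytes of a data-set BLOB.
STATUS = {0x01: "LOADED", 0x02: "INITIALIZED", 0x03: "READY",
          0x04: "BLOCKED", 0x05: "REMOVABLE", 0x06: "DELETED"}

def parse_blob(blob: bytes):
    # First three bytes: status indicator (only byte 0 is inspected in
    # this sketch); subsequent eight bytes: issuer/user identifier.
    status = STATUS.get(blob[0], "UNKNOWN")
    issuer_id = blob[3:11].decode("ascii").rstrip("\x00")
    return status, issuer_id

blob = bytes([0x03, 0x00, 0x00]) + b"ISSUER01" + b"\x00payload"
print(parse_blob(blob))  # ('READY', 'ISSUER01')
```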
[0092] The data set annotation may also be used for other types of status information as well as various other purposes. For example, the data set annotation may include security information establishing access levels. The access levels may, for example, be configured to permit only certain individuals, levels of employees, companies, or other entities to access data sets, or to permit access to specific data sets based on the transaction, merchant, issuer, user, or the like. Furthermore, the security information may restrict/permit only certain actions, such as accessing, modifying, and/or deleting data sets. In one example, the data set annotation indicates that only the data set owner or the user are permitted to delete a data set, various identified users may be permitted to access the data set for reading, and others are altogether excluded from accessing the data set. However, other access restriction parameters may also be used allowing various entities to access a data set with various permission levels as appropriate.
[0093] The data, including the header or trailer, may be received by a standalone interaction device configured to add, delete, modify, or augment the data in accordance with the header or trailer. As such, in one embodiment, the header or trailer is not stored on the transaction device along with the associated issuer-owned data, but instead the appropriate action may be taken by providing to the user, at the standalone device, the appropriate option for the action to be taken. The system may contemplate a data storage arrangement wherein the header or trailer, or header or trailer history, of the data is stored on the system, device or transaction instrument in relation to the appropriate data.
[0094] One skilled in the art will also appreciate that, for security reasons, any databases, systems, devices, servers, or other components of the system may consist of any combination thereof at a single location or at multiple locations, wherein each database or system includes any of various suitable security features, such as firewalls, access codes, encryption, decryption, compression, decompression, and/or the like.
[0095] Practitioners will also appreciate that there are a number of methods for displaying data within a browser-based document. Data may be represented as standard text or within a fixed list, scrollable list, drop-down list, editable text field, fixed text field, pop-up window, and the like. Likewise, there are a number of methods available for modifying data in a web page such as, for example, free text entry using a keyboard, selection of menu items, check boxes, option boxes, and the like.
[0096] The data may be big data that is processed by a distributed computing cluster. The distributed computing cluster may be, for example, a HADOOP® software cluster configured to process and store big data sets, with some of the nodes comprising a distributed storage system and some of the nodes comprising a distributed processing system. In that regard, the distributed computing cluster may be configured to support a HADOOP® software distributed file system (HDFS) as specified by the Apache Software Foundation at www.hadoop.apache.org/docs.
[0097] As used herein, the term “network” includes any cloud, cloud computing system, or electronic communications system or method which incorporates hardware and/or software components. Communication among the parties may be accomplished through any suitable communication channels, such as, for example, a telephone network, an extranet, an intranet, internet, point of interaction device (point of sale device, personal digital assistant (e.g., an IPHONE® device, a BLACKBERRY® device), cellular phone, kiosk, etc.), online communications, satellite communications, off-line communications, wireless communications, transponder communications, local area network (LAN), wide area network (WAN), virtual private network (VPN), networked or linked devices, keyboard, mouse, and/or any suitable communication or data input modality. Moreover, although the system is frequently described herein as being implemented with TCP/IP communications protocols, the system may also be implemented using IPX, APPLETALK® program, IP-6, NetBIOS, OSI, any tunneling protocol (e.g. IPsec, SSH, etc.), or any number of existing or future protocols. If the network is in the nature of a public network, such as the internet, it may be advantageous to presume the network to be insecure and open to eavesdroppers. Specific information related to the protocols, standards, and application software utilized in connection with the internet is generally known to those skilled in the art and, as such, need not be detailed herein.
[0098] “Cloud” or “Cloud computing” includes a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing may include location-independent computing, whereby shared servers provide resources, software, and data to computers and other devices on demand.

[0099] Any communication, transmission, and/or channel discussed herein may include any system or method for delivering content (e.g. data, information, metadata, etc.), and/or the content itself. The content may be presented in any form or medium, and in various embodiments, the content may be delivered electronically and/or capable of being presented electronically. For example, a channel may comprise a website, mobile application, or device (e.g., FACEBOOK®, YOUTUBE®, PANDORA®, APPLE TV®, MICROSOFT® XBOX®, ROKU®, AMAZON FIRE®, GOOGLE CHROMECAST™, SONY® PLAYSTATION®, NINTENDO® SWITCH®, etc.), a uniform resource locator (“URL”), a document (e.g., a MICROSOFT® Word or EXCEL™ document, an ADOBE® Portable Document Format (PDF) document, etc.), an “ebook,” an “emagazine,” an application or microapplication (as described herein), a short message service (SMS) or other type of text message, an email, a FACEBOOK® message, a TWITTER® tweet, multimedia messaging services (MMS), and/or other type of communication technology. In various embodiments, a channel may be hosted or provided by a data partner. In various embodiments, the distribution channel may comprise at least one of a merchant website, a social media website, affiliate or partner websites, an external vendor, a mobile device communication, social media network, and/or location based service.
Distribution channels may include at least one of a merchant website, a social media site, affiliate or partner websites, an external vendor, and a mobile device communication. Examples of social media sites include FACEBOOK®, FOURSQUARE®, TWITTER®, LINKED IN®, INSTAGRAM®, PINTEREST®, TUMBLR®, REDDIT®, SNAPCHAT®, WHATSAPP®, FLICKR®, VK®, QZONE®, WECHAT®, and the like. Examples of affiliate or partner websites include AMERICAN EXPRESS®, GROUPON®, LIVINGSOCIAL®, and the like. Moreover, examples of mobile device communications include texting, email, and mobile applications for smartphones.
[0100] The detailed description of various embodiments herein makes reference to the accompanying drawings and pictures, which show various embodiments by way of illustration. While these various embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, it should be understood that other embodiments may be realized and that logical and mechanical changes may be made without departing from the spirit and scope of the disclosure. Thus, the detailed description herein is presented for purposes of illustration only and not for purposes of limitation. For example, the steps recited in any of the method or process descriptions may be executed in any order and are not limited to the order presented. Moreover, any of the functions or steps may be outsourced to or performed by one or more third parties. Modifications, additions, or omissions may be made to the systems, apparatuses, and methods described herein without departing from the scope of the disclosure. For example, the components of the systems and apparatuses may be integrated or separated. Moreover, the operations of the systems and apparatuses disclosed herein may be performed by more, fewer, or other components and the methods described may include more, fewer, or other steps. Additionally, steps may be performed in any suitable order. As used in this document, “each” refers to each member of a set or each member of a subset of a set. Furthermore, any reference to singular includes plural embodiments, and any reference to more than one component may include a singular embodiment. Although specific advantages have been enumerated herein, various embodiments may include some, none, or all of the enumerated advantages.
[0101] Systems, methods, and computer program products are provided. In the detailed description herein, references to “various embodiments,” “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. After reading the description, it will be apparent to one skilled in the relevant art(s) how to implement the disclosure in alternative embodiments.
[0102] Benefits, other advantages, and solutions to problems have been described herein with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any elements that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of the disclosure. The scope of the disclosure is accordingly limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather “one or more.” Moreover, where a phrase similar to 'at least one of A, B, and C' or 'at least one of A, B, or C' is used in the claims or specification, it is intended that the phrase be interpreted to mean that A alone may be present in an embodiment, B alone may be present in an embodiment, C alone may be present in an embodiment, or that any combination of the elements A, B and C may be present in a single embodiment; for example, A and B, A and C, B and C, or A and B and C. Although the disclosure includes a method, it is contemplated that it may be embodied as computer program instructions on a tangible computer-readable carrier, such as a magnetic or optical memory or a magnetic or optical disk. All structural, chemical, and functional equivalents to the elements of the above-described various embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present disclosure for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims.
No claim element is intended to invoke 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or “step for”. As used herein, the terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.

Claims

We claim:
1. A system for detection of sleep apnea in a subject, the system comprising: a first sensor; a second sensor, wherein the first and second sensors are configured to measure a first type of biological data and a second type of biological data corresponding to sleep apnea disorder of the subject, wherein the first sensor and the second sensor are wearable devices; and a processor configured to receive the first type of biological data and the second type of biological data, wherein the processor comprises a scalogram converter for generating a scalogram from the first type of biological data and from the second type of biological data, and wherein the scalogram converter is configured to provide the scalogram to a classification unit comprising a diagnostic model built based on deep learning of a training dataset, wherein the classification unit is configured to generate a sleep apnea diagnosis based on the scalogram.
2. The system of claim 1, wherein the processor is part of a mobile device.
3. The system of claim 1, further comprising a tangible, non-transitory memory configured to communicate with the processor, the tangible, non-transitory memory having instructions stored thereon that, in response to execution by the processor, cause the processor to perform operations comprising: receiving, by the processor in communication with the first sensor, the first type of biological data; receiving, by the processor in communication with the second sensor, the second type of biological data; generating the scalogram based on at least the first type of biological data and the second type of biological data; providing the scalogram to the classification unit comprising the diagnostic model; and generating, in the classification unit, a sleep apnea diagnosis based on the scalogram and the diagnostic model.
4. The system of claim 3, wherein the tangible, non-transitory memory has further instructions stored thereon that, in response to execution by the processor, cause the processor to perform operations comprising: providing, by the processor, the first type of biological data, the second type of biological data, and the sleep apnea diagnosis to a remote system for approbation of the sleep apnea diagnosis by a physician.
5. The system of claim 1, wherein the first type of biological data and the second type of biological data are selected from among at least two of: pulse oximeter data (SpO2); heart rate data; electrocardiogram data (ECG); and nasal pressure data.
6. The system of claim 1, wherein the first type of biological data is pulse oximeter data (SpO2) and the second type of biological data is electrocardiogram data (ECG), and wherein the scalogram is formed based on the pulse oximeter data and electrocardiogram data.
7. The system of claim 1, wherein the first sensor and the second sensor are both portable wearable devices that can be worn by the subject.
8. A method for detection of sleep apnea in a subject, the method comprising: testing a group of people by measuring, for each person in the group of people, first test biological data and second test biological data, each corresponding to sleep apnea disorder of the person, wherein the diagnosis for each person in the group of people is already known; generating, with a scalogram converter, a scalogram for each person tested in the group of people, the scalogram based on the first test biological data and the second test biological data and the known diagnosis for each person; training a diagnosis model, based on the scalogram and known diagnosis for each person; generating, by a first sensor, first biological data corresponding to sleep apnea disorder of the subject; generating, by a second sensor, second biological data corresponding to sleep apnea disorder of the subject; receiving, by a processor in communication with the first sensor, the first biological data; receiving, by the processor in communication with the second sensor, the second biological data; generating, by a scalogram converter of the processor, a scalogram based on the first biological data and the second biological data; providing the scalogram to a classification unit of the processor; generating, in the classification unit, a sleep apnea diagnosis based on the scalogram, wherein the classification unit uses the trained diagnosis model; and providing, by the processor, the sleep apnea diagnosis to the subject.
9. The method of claim 8, further comprising providing, by the processor, the first biological data, second biological data and the sleep apnea diagnosis to a remote system for approbation by a medical professional.
10. The method of claim 8, wherein the first biological data and the second biological data include at least two of: pulse oximeter data (SpO2); heart rate data; electrocardiogram data (ECG); and nasal pressure data.
11. The method of claim 8, wherein the scalogram is based on each of the pulse oximeter data (SpO2), heart rate data, electrocardiogram data (ECG), and nasal pressure data.
12. The method of claim 8, wherein the first sensor and the second sensor are located on one or more portable wearable devices worn by the subject, and wherein the processor is located on a mobile device.
13. The method of claim 8, further comprising transmitting, by the processor of a mobile device, at least one of the biological data or an analysis of the biological data to a remote server, wherein the processor is located on the remote server.
14. The method of claim 8, further comprising converting, by the processor, the biological data to image data capable of time-frequency analysis through the scalogram converter, wherein determining the sleep apnea diagnosis further includes analyzing the image data.
15. The method of claim 14, wherein determining the sleep apnea diagnosis further includes at least one of detecting at least one sleep apnea event, determining a severity of the at least one sleep apnea event, and determining a score of the sleep apnea diagnosis.
16. The method of claim 15, wherein determining the sleep apnea diagnosis includes each of detecting possible sleep apnea events from whole sleep data, determining the severity of each of the detected sleep apnea events, and determining the score of the sleep apnea diagnosis.
17. The method of claim 16, wherein detecting possible sleep apnea events, determining the severity of each of the detected sleep apnea events, and determining the score of the sleep apnea diagnosis are each performed using the classification unit of a deep learning algorithm.
18. The method of claim 8, further comprising transmitting, by the processor of a mobile device, the sleep apnea diagnosis and the biological data to a sleep clinic server associated with at least one of a sleep clinician or a sleep doctor.
19. The method of claim 8, further comprising training a deep learning algorithm based on previously measured biological data and corresponding previously determined sleep apnea diagnoses in a training dataset.
20. The method of claim 8, wherein providing the sleep apnea diagnosis to the subject includes providing the sleep apnea diagnosis in real-time.
21. An article of manufacture including a non-transitory, tangible computer readable storage medium having instructions stored thereon that, in response to execution by a processor, cause the processor to perform operations to detect sleep apnea in a subject, the operations comprising: receiving, by the processor in communication with at least two sensors, biological data corresponding to sleep apnea disorder of the subject, wherein the biological data comprises at least pulse oximeter data (SpO2) and electrocardiogram data (ECG); converting, by a scalogram converter of the processor, the biological data to a scalogram; determining, by a classification unit of the processor, a sleep apnea diagnosis based on the scalogram and a diagnostic model that was built based on deep learning of the biological data and a corresponding diagnosis in a training dataset; and outputting, by an output device, the sleep apnea diagnosis.
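The claims do not specify a wavelet family, scales, or sampling rates, but the scalogram conversion they recite can be sketched as a continuous wavelet transform magnitude computed over a biological signal. Everything below is an illustrative assumption: a Ricker (Mexican-hat) wavelet, a synthetic SpO2-like signal at 10 Hz, and a simulated desaturation dip.

```python
import numpy as np

def ricker(points, a):
    # Ricker (Mexican-hat) wavelet at scale a, classic normalization.
    A = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    t = np.arange(points) - (points - 1) / 2
    return A * (1 - (t / a) ** 2) * np.exp(-((t / a) ** 2) / 2)

def scalogram(signal, scales):
    # |CWT| magnitude: convolve the signal with the wavelet at each scale.
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        w = ricker(min(10 * int(a), len(signal)), a)
        out[i] = np.abs(np.convolve(signal, w, mode="same"))
    return out

# Toy "SpO2-like" signal: slow oscillation plus a transient dip
# standing in for a desaturation event (purely synthetic data).
fs = 10.0
t = np.arange(0, 30, 1 / fs)          # 30 s at 10 Hz -> 300 samples
sig = np.sin(2 * np.pi * 0.1 * t)
sig[100:120] -= 1.0                    # simulated desaturation event

S = scalogram(sig, scales=np.arange(1, 31))
print(S.shape)  # (30, 300): one row per scale, one column per sample
```

In the claimed system, an image of this kind (one per sensor channel, or a combined image) would be the input handed to the deep-learning classification unit.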
PCT/US2021/047605 2020-08-25 2021-08-25 Deep learning based sleep apnea syndrome portable diagnostic system and method WO2022046939A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063070189P 2020-08-25 2020-08-25
US63/070,189 2020-08-25

Publications (1)

Publication Number Publication Date
WO2022046939A1 true WO2022046939A1 (en) 2022-03-03

Family

ID=80353888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/047605 WO2022046939A1 (en) 2020-08-25 2021-08-25 Deep learning based sleep apnea syndrome portable diagnostic system and method

Country Status (1)

Country Link
WO (1) WO2022046939A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4327738A1 (en) * 2022-08-26 2024-02-28 Nihon Kohden Corporation Respiration information obtaining apparatus, respiration information obtaining system, processing apparatus, and respiration information obtaining method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110270114A1 (en) * 2008-10-03 2011-11-03 Nellcor Puritan Bennett Ireland Methods and apparatus for calibrating respiratory effort from photoplethysmograph signals
US20180184970A1 (en) * 2016-12-29 2018-07-05 Hill-Rom Services, Inc. Video Monitoring to Detect Sleep Apnea
US10213152B2 (en) * 2011-02-14 2019-02-26 The Board Of Regents Of The University Of Texas System System and method for real-time measurement of sleep quality
WO2020132315A1 (en) * 2018-12-19 2020-06-25 Northwestern University Systems and methods to detect and treat obstructive sleep apnea and upper airway obstruction
KR20200079676A (en) * 2018-12-26 2020-07-06 (주)허니냅스 Apparatus and method for inspecting sleep disorder based on deep-learning



Similar Documents

Publication Publication Date Title
US20230197281A1 (en) Predicting intensive care transfers and other unforeseen events using machine learning
JP7240789B2 (en) Systems for screening and monitoring of encephalopathy/delirium
US20230197223A1 (en) Health care information system providing additional data fields in patient data
US20120179478A1 (en) Devices, Systems, and Methods for the Real-Time and Individualized Prediction of Health and Economic Outcomes
US20160196399A1 (en) Systems and methods for interpretive medical data management
Faust et al. A smart service platform for cost efficient cardiac health monitoring
Moreira et al. SAREF4health: IoT Standard-Based Ontology-Driven Healthcare Systems.
WO2019215055A1 (en) Personalized recommendations for health management
Pais et al. Suitability of fast healthcare interoperability resources (FHIR) for wellness data
Zhu et al. Wearable technologies and telehealth in care management for chronic illness
Zhou et al. Electrocardiogram quality assessment with a generalized deep learning model assisted by conditional generative adversarial networks
US20210313021A1 (en) Health information exchange system
Neri et al. Electrocardiogram monitoring wearable devices and artificial-intelligence-enabled diagnostic capabilities: a review
Calderón-Gómez et al. Evaluating service-oriented and microservice architecture patterns to deploy ehealth applications in cloud computing environment
Gulzar Ahmad et al. Sensing and artificial intelligent maternal-infant health care systems: a review
Cohoon et al. Toward precision health: applying artificial intelligence analytics to digital health biometric datasets
WO2022046939A1 (en) Deep learning based sleep apnea syndrome portable diagnostic system and method
Ehiabhi et al. A systematic review of machine learning models in mental health analysis based on multi-channel multi-modal biometric signals
Alamri Big data with integrated cloud computing for prediction of health conditions
Jagatheesaperumal et al. An iot-based framework for personalized health assessment and recommendations using machine learning
CN114190897A (en) Training method of sleep staging model, sleep staging method and device
Albaqami et al. Automatic detection of abnormal eeg signals using wavenet and lstm
Staszak et al. From data to diagnosis: How machine learning is changing heart health monitoring
Bae et al. Vital block and vital sign server for ECG and vital sign monitoring in a portable u-vital system
Sandi et al. Mobile health monitoring and consultation to support hypertension treatment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21862678

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21862678

Country of ref document: EP

Kind code of ref document: A1