US11931168B2 - Speech-controlled health monitoring systems and methods - Google Patents

Speech-controlled health monitoring systems and methods

Info

Publication number
US11931168B2
Authority
US
United States
Prior art keywords
subject
speech
acoustic signals
signals
verbal command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/112,177
Other versions
US20210307681A1 (en)
Inventor
Omid Sayadi
Steven Jay Young
Carl Hewitt
Alan Luckow
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sleep Number Corp
Original Assignee
Sleep Number Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sleep Number Corp
Priority to US17/112,177
Assigned to UDP Labs, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWITT, CARL, LUCKOW, ALAN, SAYADI, Omid, YOUNG, STEVEN JAY
Publication of US20210307681A1
Assigned to SLEEP NUMBER CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DOBKIN, ROBERT, HEWITT, CARL, LUCKOW, ALAN, OLSON, JONATHAN, UDP Labs, Inc., YOUNG, STEVEN JAY
Assigned to SLEEP NUMBER CORPORATION CORRECTIVE ASSIGNMENT TO CORRECT THE CONVEYING PARTY NAMES AS LISTED ON THE ASSIGNMENT COVERSHEET PREVIOUSLY RECORDED AT REEL: 062787 FRAME: 0247. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: HEWITT, CARL, UDP Labs, Inc., YOUNG, STEVEN JAY
Application granted
Publication of US11931168B2
Legal status: Active
Adjusted expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4806 Sleep evaluation
    • A61B5/4815 Sleep quality
    • A61B5/4818 Sleep apnoea
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205 Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0823 Detecting or evaluating cough events
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/4842 Monitoring progression or stage of a disease
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/7405 Details of notification to user or communication with user or patient; user input means using sound
    • A61B5/746 Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
    • A61B5/7465 Arrangements for interactive communication between patient and care services, e.g. by using a telephone network
    • A61B5/747 Arrangements for interactive communication between patient and care services, e.g. by using a telephone network in case of emergency, i.e. alerting emergency services
    • A61B5/7475 User input or interface means, e.g. keyboard, pointing device, joystick
    • A61B5/749 Voice-controlled interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/1815 Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/66 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
    • G10L25/78 Detection of presence or absence of voice signals
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0204 Acoustic sensors
    • A61B2562/0219 Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
    • A61B2562/0247 Pressure sensors
    • A61B5/01 Measuring temperature of body parts; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/0816 Measuring devices for examining respiratory frequency
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/14542 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue for measuring blood gases
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A61B7/00 Instruments for auscultation
    • A61B7/003 Detecting lung or respiration noise
    • G10L2015/223 Execution procedure of a spoken command

Definitions

  • This disclosure relates to systems and methods for health monitoring of a subject.
  • Speech-enabled technology has become a standard method of interacting with consumer electronic devices because of its convenience and accessibility, enabling faster and more efficient operation.
  • The medical applications of speech technology have been mostly limited to care checklists, panic calls, and prescription refills. This is mainly because these voice-enabled devices cannot directly measure and monitor the physiological parameters of the subject.
  • Paroxysmal conditions with sudden or intermittent onset require an at-home screening solution that can be used immediately and continuously, along with a simple way, such as speech, to initiate a health check.
  • Many people are bedbound or live with poor health conditions. These people are at risk of falling or of experiencing sudden health episodes, such as apnea, pressure ulcers, atrial fibrillation, or heart attack. If the person lives alone, there is no one to notice the early warnings, observe the situation, or call for help.
  • In implementations, a device includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject, an audio interface configured to communicate with the subject, and a processor in connection with the plurality of sensors and the audio interface.
  • The processor is configured to determine biosignals from one or more of the acoustic signals and the force signals to monitor the subject's health status, and to detect the presence of speech in the acoustic signals.
  • The audio interface is configured to interactively communicate with at least one of the subject or an entity associated with the subject, based on at least one of an action needed due to the subject's health status and a verbal command in the detected speech.
  • FIG. 1 is a system architecture for a speech-controlled health monitoring system.
  • FIGS. 2A-2J are illustrations of sensor placements and configurations.
  • FIG. 3 is a processing pipeline for obtaining sensor data.
  • FIG. 4 is a processing pipeline for analyzing force sensor data.
  • FIG. 5 is a processing pipeline for analyzing audio sensor data.
  • FIG. 6 is a processing pipeline for analyzing audio sensor data using a speech capable device.
  • FIG. 7 is a processing pipeline for recognizing speech.
  • FIG. 8 is a processing pipeline for sleep disordered breathing (SDB) detection and response.
  • the systems and methods can be used to passively and continuously monitor the subject's health and verbally interact with the subject to initiate a health check, provide information about the subject's health status, or perform an action such as recording a health related episode or calling emergency services.
  • a subject's health and wellbeing can be monitored using a system that verbally interacts with the subject. Sleep, cardiac, respiration, motion, and sleep disordered breathing monitoring are examples.
  • the subject can use his/her speech to interact with the system to request an action to be performed by the system or to obtain information about the subject's health status.
  • The systems can also be used to respond to the commands of the subject's partner in the event the subject is unable to respond or is incapacitated.
  • The systems and methods use one or more non-contact sensors such as audio or acoustic sensors, accelerometers, pressure sensors, load sensors, weight sensors, force sensors, motion sensors, or vibration sensors to capture sounds (speech and disordered breathing) as well as mechanical vibrations of the body (motion and physiological movements of the heart and lungs), and translate them into biosignal information used for screening and identifying health status and disease conditions.
  • the system includes one or more microphones or audio sensors placed near the subject to record acoustic signals, one or more speakers placed near the subject to play back audio, a physiological measurement system that uses one or more non-contact sensors such as accelerometers, pressure sensors, load sensors, weight sensors, force sensors, motion sensors, or vibration sensors to record mechanical vibrations of the body, a speech recognition system, a speech synthesizer, and a processor configured to record the subject's audio and biosignals, process them, detect the subject's speech, process the subject's speech, and initiate a response to the subject's speech.
  • The one or more microphones or audio sensors and the one or more non-contact sensors can be placed under, or built into, a substrate such as a bed, couch, chair, exam table, or floor.
  • the one or more microphones or audio sensors and the one or more non-contact sensors can be placed or positioned inside, under, or attached to a control box, legs, bed frame, headboard, or wall.
  • the processor can be in the device (control box) or in the computing platform (cloud).
  • the processor is configured to record mechanical force and vibrations of the body, including motion and physiological movements of heart and lungs using one or more non-contact sensors such as accelerometers, pressure sensors, load sensors, weight sensors, force sensors, motion sensors, or vibration sensors.
  • The processor further enhances such data to perform cardiac analysis (including determining heart rate, heartbeat timing, variability, and heartbeat morphology and their corresponding changes from a baseline or range), respiratory analysis (including determining breathing rate, breathing phase, depth, timing and variability, and breathing morphology and their corresponding changes from a baseline or range), and motion analysis (including determining movement amplitude, time, periodicity, and pattern and their corresponding changes from a baseline or range).
  • the processor is configured to record acoustic information, filter unwanted interferences, and enhance it for analytical determinations.
  • the processor can use the enhanced acoustic information to identify sleep disordered breathing.
  • the processor can then determine a proper response to the detected sleep disordered breathing such as by changing an adjustable feature of the bed (for example, firmness) or bedroom (for example, lighting), or play a sound to make the sleeper change position or transition into a lighter state of sleep and therefore, help stop, reduce, or alter the disordered breathing.
  • The processor can use the enhanced acoustic information to correlate irregular lung or body movements with lung or body sounds; wheezing or other abnormal sounds are an example.
  • the processor can use the enhanced acoustic information to detect if speech has been initiated.
  • the processor compares the audio stream against a dictionary of electronic commands to discard unrelated conversations and to determine if a verbal command to interact with the system has been initiated.
  • the processor is configured to handle speech recognition.
  • the processor can perform speech recognition. This can include detecting a trigger (for example, a preset keyword or phrase) and determining the context.
  • A keyword could be, for example, "Afib" to trigger annotating (marking) the cardiac recording or generating alerts.
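  • As a minimal sketch of this kind of dictionary-based trigger detection (the trigger phrases, function name, and matching rule below are illustrative assumptions, not the patent's implementation):

      from typing import Optional

      # Illustrative trigger dictionary; the patent's actual command set is not listed here.
      TRIGGERS = ("afib", "health check", "stress test", "call emergency")

      def detect_trigger(transcript: str) -> Optional[str]:
          """Return the first trigger phrase found in a transcribed audio frame, else None."""
          text = transcript.lower()
          for phrase in TRIGGERS:
              if phrase in text:
                  return phrase
          return None  # unrelated conversation; discard

      assert detect_trigger("Hey, mark this as Afib") == "afib"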
  • the processor can communicate through APIs with other speech capable devices (such as Alexa®, Siri®, and Google®) responsible for recognizing and synthesizing speech.
  • the processor is configured to categorize and initiate a response to the recognized speech.
  • the response can be starting an interactive session with the subject (for example, playing back a tone or playing a synthesized speech) or performing a responsive action (for example, turning on/off a home automation feature, labeling the data with health status markers for future access of the subject or subject's physician, or calling emergency services).
  • the response can also include communicating with other speech capable devices connected to home automation systems or notification systems.
  • The system can also be used to create events based on the analysis; an event may be an audible tone or a message sent to the cloud for a critical condition.
  • The sensors are connected to the processor by wire, wirelessly, or optically; the processor may be on the internet and running artificial intelligence software.
  • the signals from the sensors can be analyzed locally with a locally present processor or the data can be networked by wire or other means to another computer and remote storage that can process and analyze the real-time and/or historical data.
  • the processor can be a single processor for both mechanical force sensors and audio sensors, or a set of processors to process mechanical force and interact with other speech capable devices.
  • Other sensors such as blood pressure, temperature, blood oxygen and pulse oximetry sensors can be added for enhanced monitoring or health status evaluation.
  • the system can use artificial intelligence and/or machine learning to train classifiers used to process force, audio, and other sensor signals.
  • the speech enabled device can act as a speech recognizer or speech synthesizer to support unidirectional and bidirectional communication with the subject.
  • The speech recognizer uses speech-to-text and the speech synthesizer uses text-to-speech, both based on dictionaries of predefined keywords or phrases.
  • the system includes bidirectional audio (microphone and speakers) to enable two-way communication with the patient (the subject's speech serves as a command, and the device responds upon receiving a command).
  • the system can additionally include interfaces to other voice assistant devices (such as Alexa®, Siri®, and Google®) to process the subject's speech, or to play the synthesized response, or both.
  • voice assistant devices such as Alexa®, Siri®, and Google®
  • the systems and methods described herein can be used by a subject when experiencing symptoms of a complication or condition or exhibiting the early warning signs of a health related condition, or can be used when instructed by a physician in a telehealth application.
  • The system can be used for in-home stress testing, where sensor data can be used to monitor indices of heart rate variability to quantify dynamic autonomic modulation or heart rate recovery.
  • The system can be programmed to limit the number of individuals who can verbally interact with it.
  • the system may accept and respond to verbal commands only from one person (the subject) or the subject's partner.
  • The speech recognition can include voice recognition so that the system responds only to certain individuals.
  • The electronic commands can include, but are not limited to, a verbal request to perform a specific health check on the subject (for example, a cardiac check or stress test), give updates about the health status of the subject, mark the data when the subject is experiencing a health episode or condition, send a health report to the subject's physician, call emergency services, order a product through API integrations with third parties (for example, purchasing something from an internet seller), and/or interact with adjustable features of home automation.
  • the system can integrate with other means of communication such as a tablet or smartphone to provide video communication.
  • FIG. 1 is a system architecture for a speech-controlled or speech-enabled health monitoring system (SHMS) 100 .
  • the SHMS 100 includes one or more devices 110 which are connected to or in communication with (collectively “connected to”) a computing platform 120 .
  • a machine learning training platform 130 may be connected to the computing platform 120 .
  • a speech capable device 150 may be connected to the computing platform 120 and the one or more devices 110 .
  • users may access the data via a connected device 140 , which may receive data from the computing platform 120 , the device 110 , the speech capable device 150 , or combinations thereof.
  • the connections between the one or more devices 110 , the computing platform 120 , the machine learning training platform 130 , the speech capable device 150 , and the connected device 140 can be wired, wireless, optical, combinations thereof and/or the like.
  • the system architecture of the SHMS 100 is illustrative and may include additional, fewer or different devices, entities and the like which may be similarly or differently architected without departing from the scope of the specification and claims herein. Moreover, the illustrated devices may perform other functions without departing from the scope of the specification and claims herein.
  • the device 110 can include an audio interface 111 , one or more sensors 112 , a controller 114 , a database 116 , and a communications interface 118 .
  • the device 110 can include a classifier 119 for applicable and appropriate machine learning techniques as described herein.
  • the one or more sensors 112 can detect sound, wave patterns, and/or combinations of sound and wave patterns of vibration, pressure, force, weight, presence, and motion due to subject(s) activity and/or configuration with respect to the one or more sensors 112 .
  • the one or more sensors 112 can generate more than one data stream.
  • The one or more sensors 112 can be of the same type.
  • the one or more sensors 112 can be time synchronized.
  • The one or more sensors 112 can measure the partial force of gravity on a substrate, furniture, or another object.
  • the one or more sensors 112 can independently capture multiple external sources of data in one stream (i.e. multivariate signal), for example, weight, heart rate, breathing rate, vibration, and motion from one or more subjects or objects.
  • the data captured by each sensor 112 is correlated with the data captured by at least one, some, all or a combination of the other sensors 112 .
  • amplitude changes are correlated.
  • rate and magnitude of changes are correlated.
  • phase and direction of changes are correlated.
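  • A sketch of how such correlations might be computed across time-synchronized sensor streams (NumPy-based; the four-sensor layout and frame length are assumptions for illustration):

      import numpy as np

      # Rows = sensors, columns = time samples within one synchronized frame.
      frames = np.random.randn(4, 1000)  # stand-in for four force-sensor streams

      deltas = np.diff(frames, axis=1)      # amplitude changes per sensor
      amplitude_corr = np.corrcoef(deltas)  # pairwise correlation of the changes

      signs = np.sign(deltas)               # direction (phase) of each change
      direction_agreement = signs @ signs.T / signs.shape[1]  # pairwise agreement in [-1, 1]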
  • The placement of the one or more sensors 112 triangulates the location of the center of mass.
  • The one or more sensors 112 can be placed under or built into the legs of a bed, chair, couch, etc. In implementations, the one or more sensors 112 can be placed under or built into the edges of a crib. In implementations, the one or more sensors 112 can be placed under or built into the floor. In implementations, the one or more sensors 112 can be placed under or built into a surface area. In implementations, the one or more sensors 112 locations are used to create a surface map that covers the entire area surrounded by the sensors.
  • the one or more sensors 112 can measure data from sources that are anywhere within the area surrounded by the one or more sensors 112 , which can be directly on top of the one or more sensors 112 , near the one or more sensors 112 , or distant from the one or more sensors 112 .
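  • For example, with one load sensor under each corner of a bed, the center of mass can be triangulated as the force-weighted average of the known sensor positions (a standard load-cell calculation; the coordinates and readings below are made up for illustration):

      import numpy as np

      positions = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.5], [2.0, 1.5]])  # sensor (x, y), meters
      forces = np.array([210.0, 190.0, 240.0, 160.0])                         # example readings, newtons

      # Center of mass = force-weighted average of the sensor positions.
      center_of_mass = (forces[:, None] * positions).sum(axis=0) / forces.sum()
      print(center_of_mass)  # subject's location on the surface map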
  • the one or more sensors 112 are not intrusive with respect to the subject(s).
  • The one or more sensors 112 can include non-contact sensors such as audio, microphone, or acoustic sensors to capture sound (speech and sleep disordered breathing), as well as sensors (accelerometer, pressure, load, weight, force, motion, or vibration) that measure the partial force of gravity on a substrate, furniture, or another object and the mechanical vibrations of the body (motion and physiological movements of the heart and lungs).
  • the audio interface 111 provides a bi-directional audio interface (microphone and speakers) to enable two-way communication with the patient (the subject's speech serves as a command, and the device responds upon receiving a command).
  • the controller 114 can apply the processes and algorithms described herein with respect to FIGS. 3 - 8 to the sensor data to determine biometric parameters and other person-specific information for single or multiple subjects at rest and in motion.
  • the classifier 119 can apply the processes and algorithms described herein with respect to FIGS. 3 - 8 to the sensor data to determine biometric parameters and other person-specific information for single or multiple subjects at rest and in motion.
  • the classifier 119 can apply classifiers to the sensor data to determine the biometric parameters and other person-specific information via machine learning.
  • the classifier 119 may be implemented by the controller 114 .
  • the sensor data and the biometric parameters and other person-specific information can be stored in the database 116 .
  • the sensor data, the biometric parameters and other person-specific information, and/or combinations thereof can be transmitted or sent via the communication interface 118 to the computing platform 120 for processing, storage, and/or combinations thereof.
  • the communication interface 118 can be any interface and use any communications protocol to communicate or transfer data between origin and destination endpoints.
  • the device 110 can be any platform or structure which uses the one or more sensors 112 to collect the data from a subject(s) for use by the controller 114 and/or computing platform 120 as described herein.
  • the device 110 may be a combination of a substrate, frame, legs, and multiple load or other sensors 112 as described in FIG. 2 .
  • the device 110 and the elements therein may include other elements which may be desirable or necessary to implement the devices, systems, and methods described herein. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the disclosed embodiments, a discussion of such elements and steps may not be provided herein.
  • the computing platform 120 can include a processor 122 , a database 124 , and a communication interface 126 .
  • the computing platform 120 may include a classifier 129 for applicable and appropriate machine learning techniques as described herein.
  • the processor 122 can obtain the sensor data from the sensors 112 or the controller 114 and can apply the processes and algorithms described herein with respect to FIGS. 3 - 8 to the sensor data to determine biometric parameters and other person-specific information for single or multiple subjects at rest and in motion.
  • the processor 122 can obtain the biometric parameters and other person-specific information from the controller 114 to store in database 124 for temporal and other types of analysis.
  • the classifier 129 can apply the processes and algorithms described herein with respect to FIGS.
  • the classifier 129 can apply classifiers to the sensor data to determine the biometric parameters and other person-specific information via machine learning.
  • the classifier 129 may be implemented by the processor 122 .
  • the sensor data and the biometric parameters and other person-specific information can be stored in the database 124 .
  • the communication interface 126 can be any interface and use any communications protocol to communicate or transfer data between origin and destination endpoints.
  • the computing platform 120 may be a cloud-based platform.
  • the processor 122 can be a cloud-based computer or off-site controller.
  • the processor 122 can be a single processor for both mechanical force sensors and audio sensors, or a set of processors to process mechanical force and interact with the speech capable device 150 .
  • the computing platform 120 and elements therein may include other elements which may be desirable or necessary to implement the devices, systems, and methods described herein. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the disclosed embodiments, a discussion of such elements and steps may not be provided herein.
  • the machine learning training platform 130 can access and process sensor data to train and generate classifiers.
  • the classifiers can be transmitted or sent to the classifier 129 or to the classifier 119 .
  • the SHMS 100 can interchangeably or additionally include the speech enabled device 150 as a bi-directional speech interface.
  • the speech enabled device 150 could replace the audio interface 111 or could work with the audio interface 111 .
  • the speech enabled device 150 can communicate with the device 110 and/or the computing platform 120 .
  • the speech capable device 150 can be a voice assistant device (such as Alexa®, Siri®, and Google®) that communicates with the device 110 or the computing platform 120 through APIs.
  • the speech enabled device 150 can act as a speech recognizer or speech synthesizer to support unidirectional and bi-directional communication with the subject.
  • FIGS. 2A-2J are illustrations of sensor placements and configurations.
  • the SHMS 100 can include one or more audio input sensors 200 such as microphones or acoustic sensors.
  • the sensor placements and configurations shown in FIGS. 2A-2J are with respect to a bed 230 and surrounding environment.
  • U.S. patent application Ser. No. 16/595,848, filed Oct. 8, 2019, the entire disclosure of which is hereby incorporated by reference, describes example beds and environments applicable to the sensor placements and configurations described herein.
  • FIG. 2A shows an example of the one or more audio input sensors 200 inside a control box (controller) 240 .
  • FIG. 2B shows an example of the one or more audio input sensors 200 attached to a headboard 250 proximate the bed 230 .
  • FIG. 2C shows an example of the one or more audio input sensors 200 mounted to a wall 260 proximate the bed 230 .
  • FIG. 2D shows an example of the one or more audio input sensors 200 inside or attached to legs 270 of the bed 230 .
  • FIG. 2E shows an example of the one or more audio input sensors 200 integrated inside a force sensors box 280 under the legs 270 of the bed 230 .
  • FIG. 2F shows an example of the one or more audio input sensors 200 placed into or attached to a bed frame 290 of the bed 230 .
  • the SHMS 100 can include one or more speakers 210 .
  • FIG. 2G shows an example of the one or more speakers 210 inside the control box (controller) 240 .
  • FIG. 2F shows an example of the one or more speakers 210 placed into or attached to a bed frame 290 of the bed 230 .
  • FIG. 2H shows an example of the one or more speakers 210 integrated inside a force sensors box 280 under the legs 270 of the bed 230 .
  • FIG. 2I shows an example of the one or more speakers 210 mounted to a wall 260 proximate the bed 230 .
  • FIG. 2J shows an example of the one or more speakers 210 attached to a headboard 250 proximate the bed 230 .
  • FIGS. 2A-2E and 2G are examples of systems with unidirectional audio communications, and FIGS. 2F and 2H-2J are examples of systems with bidirectional audio communications.
  • FIG. 3 is a processing pipeline 300 for obtaining and processing sensor data such as, but not limited to, force sensor data, audio sensor data, and other sensor data.
  • An analog sensors data stream 320 is received from sensors 310 .
  • the sensors 310 can record mechanical force and vibrations of the body, including motion and physiological movements of heart and lungs using one or more non-contact sensors such as accelerometer, pressure, load, weight, force, motion or vibration sensors.
  • a digitizer 330 digitizes the analog sensors data stream into a digital sensors data stream 340 .
  • a framer 350 generates digital sensors data frames 360 from the digital sensors data stream 340 which includes all the digital sensors data stream values within a fixed or adaptive time window.
  • An encryption engine 370 encodes the digital sensors data frames 360 such that the data is protected from unauthorized access.
  • a compression engine 380 compresses the encrypted data to reduce the size of the data that is going to be saved in the database 390 . This reduces cost and provides faster access during read time.
  • the database 390 can be local, offsite storage, cloud-based storage, or combinations thereof.
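  • A minimal sketch of this framing, encryption, and compression chain (Python, using the third-party cryptography package; the window size is an assumption, and the encrypt-then-compress order simply mirrors the pipeline described above, even though compress-then-encrypt is the more common ordering since ciphertext is effectively incompressible):

      import zlib
      from cryptography.fernet import Fernet  # assumes the cryptography package is installed

      cipher = Fernet(Fernet.generate_key())

      def frame(stream: bytes, window: int = 1024) -> list:
          """Split a digitized sensor stream into fixed-size frames."""
          return [stream[i:i + window] for i in range(0, len(stream), window)]

      def protect(data_frame: bytes) -> bytes:
          """Encrypt a frame, then compress it, mirroring engines 370 and 380."""
          return zlib.compress(cipher.encrypt(data_frame))

      records = [protect(f) for f in frame(b"\x00\x01" * 4096)]  # ready for the database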
  • An analog sensors data stream 321 is received from sensors 311 .
  • the sensors 311 can record audio information including the subject's breathing and speech.
  • a digitizer 331 digitizes the analog sensors data stream into a digital sensors data stream 341 .
  • a framer 351 generates digital sensors data frames 361 from the digital sensors data stream 341 which includes all the digital sensors data stream values within a fixed or adaptive time window.
  • An encryption engine 371 encodes the digital sensors data frames 361 such that the data is protected from unauthorized access. In implementations, the encryption engine 371 can filter the digital audio sensors data frames 361 to a lower and narrower frequency band. In implementations, the encryption engine 371 can mask the digital audio sensors data frames 361 using a mask template.
  • the encryption engine 371 can transform the digital audio sensors data frames 361 using a mathematical formula.
  • a compression engine 380 compresses the encrypted data to reduce the size of the data that is going to be saved in the database 390 . This reduces cost and provides faster access during read time.
  • the database 390 can be local, offsite storage, cloud-based storage, or combinations thereof.
  • the processing pipeline 300 shown in FIG. 3 is illustrative and can include any, all, none or a combination of the blocks or modules shown in FIG. 3 .
  • the processing order shown in FIG. 3 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
  • FIG. 4 is a pre-processing pipeline 400 for processing the force sensor data.
  • the pre-processing pipeline 400 processes digital force sensor data frames 410 .
  • a noise reduction unit 420 removes or attenuates noise sources that might have the same or different level of impact on each sensor.
  • the noise reduction unit 420 can use a variety of techniques including, but not limited to, subtraction, combination of the input data frames, adaptive filtering, wavelet transform, independent component analysis, principal component analysis, and/or other linear or nonlinear transforms.
  • a signal enhancement unit 430 can improve the signal to noise ratio of the input data.
  • the signal enhancement unit 430 can be implemented as a linear or nonlinear combination of input data frames. For example, the signal enhancement unit 430 may combine the signal deltas to increase the signal strength for higher resolution algorithmic analysis.
  • Subsampling units 440 , 441 and 442 sample the digital enhanced sensor data and can include downsampling, upsampling, or resampling.
  • the subsampling can be implemented as a multi-stage sampling or multi-phase sampling, and can use the same or different sampling rates for cardiac, respiratory and coughing analysis.
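  • A sketch of the multi-rate subsampling, assuming SciPy's polyphase resampler and illustrative input and output rates (none of these rates are specified by the patent):

      import numpy as np
      from scipy.signal import resample_poly

      enhanced = np.random.randn(60_000)  # stand-in: 60 s of enhanced force data at 1 kHz

      cardiac_input = resample_poly(enhanced, up=1, down=10)      # 100 Hz for cardiac analysis
      respiratory_input = resample_poly(enhanced, up=1, down=40)  # 25 Hz for respiratory analysis
      coughing_input = resample_poly(enhanced, up=1, down=5)      # 200 Hz for coughing analysis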
  • Cardiac analysis 450 determines the heart rate, heartbeat timing, variability, and heartbeat morphology and their corresponding changes from a baseline or a predefined range.
  • An example process for cardiac analysis is shown in U.S. Provisional Patent Application Ser. No. 63/003,551, filed Apr. 1, 2020, the entire disclosure of which is hereby incorporated by reference.
  • Respiratory analysis 460 determines the breathing rate, breathing phase, depth, timing and variability, and breathing morphology and their corresponding changes from a baseline or a predefined range.
  • An example process for respiratory analysis is shown in U.S. Provisional Patent Application Ser. No. 63/003,551, filed Apr. 1, 2020, the entire disclosure of which is hereby incorporated by reference.
  • Motion analysis 470 determines the movement amplitude, time, periodicity, and pattern and their corresponding changes from a baseline or a predefined range.
  • Health and sleep status analysis 480 combines the data from cardiac analysis 450, respiratory analysis 460, and motion analysis 470 to determine the subject's health status, sleep quality, out-of-the-norm events, diseases, and conditions.
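  • As a sketch of how the cardiac and respiratory analyses might estimate rates from the subsampled force signal (band-pass filtering plus peak detection; the band edges, sampling rate, and peak spacing below are illustrative assumptions, not values from the patent):

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      FS = 100.0  # assumed sampling rate, Hz

      def band_rate(signal, low_hz, high_hz, min_spacing_s):
          """Band-pass the signal and estimate an events-per-minute rate from peak spacing."""
          b, a = butter(2, [low_hz / (FS / 2), high_hz / (FS / 2)], btype="band")
          filtered = filtfilt(b, a, signal)
          peaks, _ = find_peaks(filtered, distance=int(min_spacing_s * FS))
          return 60.0 * FS / np.median(np.diff(peaks))

      force = np.random.randn(6000)                     # stand-in: one minute of force data
      heart_rate = band_rate(force, 0.7, 3.0, 0.3)      # roughly 42-180 beats/min band
      breathing_rate = band_rate(force, 0.1, 0.5, 1.5)  # roughly 6-30 breaths/min band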
  • the processing pipeline 400 shown in FIG. 4 is illustrative and can include any, all, none or a combination of the blocks or modules shown in FIG. 4 .
  • the processing order shown in FIG. 4 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
  • FIG. 5 is an example process 500 for analyzing the audio sensor data.
  • the pipeline 500 processes digital audio sensor data frames 510 .
  • a noise reduction unit 520 removes or attenuates environmental or other noise sources that might have the same or different level of impact on each sensor.
  • the noise reduction unit 520 can use a variety of techniques including, but not limited to, subtraction, combination of the input data frames, adaptive filtering, wavelet transform, independent component analysis, principal component analysis, and/or other linear or nonlinear transforms.
  • a signal enhancement unit 530 can improve the signal to noise ratio of the input data.
  • Speech initiation detector 540 determines if the subject is verbally communicating with the system. The detector 540 compares the audio stream against a dictionary of electronic commands to discard unrelated conversations and determines 545 if a verbal command to interact has been initiated.
  • In the event a verbal command has not been initiated, the enhanced digital audio sensor data frames will be analyzed using sleep disordered breathing analyzer 550 to detect breathing disturbances.
  • Sleep disordered breathing analyzer 550 uses digital audio sensors data frames 510 , digital force sensors data frames 410 , or both to determine breathing disturbances.
  • the sleep disordered breathing analyzer 550 uses envelope detection algorithms, time domain, spectral domain, or time frequency domain analysis to identify the presence, intensity, magnitude, duration and type of sleep disordered breathing.
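  • A sketch of the envelope and time-frequency analysis, assuming a Hilbert-transform envelope and a SciPy spectrogram (the sampling rate, segment length, and quiet-interval threshold are assumptions for illustration):

      import numpy as np
      from scipy.signal import hilbert, spectrogram

      FS = 8000  # assumed audio sampling rate, Hz

      audio = np.random.randn(FS * 10)   # stand-in: 10 s of enhanced audio
      envelope = np.abs(hilbert(audio))  # breathing-sound amplitude envelope
      freqs, times, sxx = spectrogram(audio, fs=FS, nperseg=1024)  # time-frequency view

      # Apnea-like pauses appear as sustained low-envelope intervals; snoring appears
      # as periodic low-frequency energy in the spectrogram.
      quiet = envelope < 0.2 * envelope.mean()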
  • the speech recognizer 560 processes the enhanced digital audio sensor data frames to identify the context of speech.
  • the speech recognizer 560 includes an electronic command recognizer that compares the subject's speech against a dictionary of electronic commands.
  • the speech recognizer uses artificial intelligence algorithms to identify speech.
  • the speech recognizer 560 uses a speech to text engine to translate the subject's verbal commands into strings of text.
  • Response categorizer 570 processes the output from the speech recognizer and determines whether an interactive session 580 should be initiated or a responsive action 590 should be performed. Examples of an interactive session are playing back a tone or playing synthesized speech. Examples of a responsive action are turning on/off a home automation feature, labeling the data with health status markers for future access by the subject or the subject's physician, calling emergency services, or interacting with another speech capable device.
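  • A minimal sketch of the response categorizer as a context-to-response lookup (the context names and handler names are illustrative placeholders, not the patent's vocabulary):

      # Maps a recognized context to an interactive session or a responsive action.
      RESPONSES = {
          "health_check": ("action", "run_cardiac_check"),
          "annotate_episode": ("action", "label_data_with_marker"),
          "emergency": ("action", "call_emergency_services"),
          "status_query": ("interactive", "speak_health_summary"),
      }

      def categorize(context: str):
          """Return (kind, handler) for a recognized context; default to a tone."""
          return RESPONSES.get(context, ("interactive", "play_tone"))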
  • the processing pipeline 500 shown in FIG. 5 is illustrative and can include any, all, none or a combination of the components, blocks or modules shown in FIG. 5 .
  • the processing order shown in FIG. 5 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
  • FIG. 6 is an example process 600 for analyzing the audio sensor data by interacting with a speech capable device.
  • the speech capable device can be a voice assistant device (such as Alexa®, Siri®, and Google®) acting as a speech recognizer that communicates through APIs.
  • the pipeline 600 receives speech data 610 from the speech capable device.
  • a noise reduction unit 620 removes or attenuates environmental or other noise sources that might have the same or different level of impact on the speech data.
  • the noise reduction unit 620 can use a variety of techniques including, but not limited to, subtraction, combination of the input data frames, adaptive filtering, wavelet transform, independent component analysis, principal component analysis, and/or other linear or nonlinear transforms.
  • a signal enhancement unit 630 can improve the signal to noise ratio of the speech data.
  • Speech initiation detector 640 determines if the subject is verbally communicating with the system. The detector 640 compares the speech data against a dictionary of electronic commands to discard unrelated conversations and determines 645 if a verbal command to interact has been initiated.
  • In the event a verbal command has not been initiated, the enhanced digital speech data frames will be analyzed using sleep disordered breathing analyzer 650 to detect breathing disturbances.
  • Sleep disordered breathing analyzer 650 uses speech data 610 , digital force sensors data frames 410 , or both to determine breathing disturbances.
  • the sleep disordered breathing analyzer 650 uses envelope detection algorithms, time domain, spectral domain, or time frequency domain analysis to identify the presence, intensity, magnitude, duration and type of sleep disordered breathing.
  • the speech recognizer 660 processes the speech data frames to identify the context of speech.
  • the speech recognizer 660 includes an electronic command recognizer that compares the subject's speech against a dictionary of electronic commands.
  • the speech recognizer uses artificial intelligence algorithms to identify speech.
  • the speech recognizer 660 uses a speech to text engine to translate the subject's verbal commands into strings of text.
  • Response categorizer 670 processes the output from the speech recognizer and determines if an interactive session 680 should be initiated or a responsive action 690 should be performed. Commands corresponding to the categorized response are sent 675 to the speech capable device through APIs.
  • the speech enabled device can act as a speech synthesizer to initiate interactive session 680 .
  • the speech enabled device can also connect to home automation systems or notification systems to perform responsive action 690 . Examples of an interactive session are playing back a tone or playing synthesized speech. Examples of a responsive action are turning on/off a home automation feature, labeling the data with health status markers for future access by the subject or the subject's physician, calling emergency services, or interacting with another speech capable device.
  • the processing pipeline 600 shown in FIG. 6 is illustrative and can include any, all, none or a combination of the components, blocks or modules shown in FIG. 6 .
  • the processing order shown in FIG. 6 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
  • FIG. 7 is an example process 700 for recognizing speech by a speech recognizer.
  • the speech recognizer receives 710 the enhanced audio data streams after it is determined that speech has been initiated as described in FIG. 5 .
  • the speech recognizer detects 720 parts of the electronic command that match a specific request through speech processing, i.e., detects a trigger.
  • the speech recognizer translates 730 the speech into text.
  • the speech recognizer matches 740 the strings of text against a dictionary of electronic commands 750 .
  • the speech recognizer determines 760 the context of the speech.
  • a context is the general category of the subject's verbal request.
  • Examples are running a health check, labeling or annotating the data for a health relate episode, communication with the subject's physician, communication with the emergency services, ordering a product, and interacting with home automation.
  • the speech recognizer encodes 770 the context and prepares it for the response categorizer 570 .
  • the processing pipeline 700 shown in FIG. 7 is illustrative and can include any, all, none or a combination of the components, blocks or modules shown in FIG. 7 .
  • the processing order shown in FIG. 7 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
  • FIG. 8 is an example process 800 for sleep disordered breathing (SDB) detection and response.
  • Digital force sensors frames 810 are received as processed in FIG. 3 and FIG. 4 .
  • a respiration analysis 830 is performed on the digital force sensors frames 810 .
  • the respiration analysis 830 can include filtering, combining, envelope detection, and other algorithms.
  • a spectrum or time frequency spectrum is computed 850 on the output of the respiration analysis 830 .
  • Digital audio force sensors frames 820 are received as processed in FIG. 3 and FIG. 5 .
  • Envelope detection 840 is performed on the digital audio force sensors frames 820 .
  • a spectrum or time frequency spectrum is computed 860 on the output of the envelope detection 840 .
  • Fused sensor processing 870 is performed on the digital force sensors frames 810 and the digital audio sensors frames 820 such as normalized amplitude or frequency parameters, cross correlation, or coherence or similar metrics of similarity to create combined signals or feature sets.
  • Sleep disordered breathing is determined 880 using the envelope, time domain, frequency domain, time-frequency and parameters from the fusion of force and audio sensors. Implementations include threshold based techniques, template matching methods, or use of classifiers to detect sleep disordered breathing. Once sleep disordered breathing is detected, process 880 determines the intensity (for example, light, mild, moderate, severe), magnitude, duration and type of sleep disordered breathing. If sleep disordered breathing is detected 885 , a proper response 890 is determined for the detected SDB such as changing an adjustable feature of the bed (for example, firmness), bedroom (for example, lighting), play a sound to make the sleeper change position, or transition into a lighter state of sleep and therefore, help stop, reduce or alter the disordered breathing.
  • an adjustable feature of the bed for example, firmness
  • bedroom for example, lighting
  • the processing pipeline 800 shown in FIG. 8 is illustrative and can include any, all, none or a combination of the components, blocks or modules shown in FIG. 8 .
  • the processing order shown in FIG. 8 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
  • FIG. 7 is a flowchart of a method 700 for determining weight from the MSMDA data.
  • the method 700 includes: obtaining 710 the MSMDA data; calibrating 720 the MSMDA data; performing 730 superposition analysis on the calibrated MSMDA data; transforming 740 the MSMDA data to weight; finalizing 750 the weight; and outputting 760 the weight.
  • the method 700 includes obtaining 710 the MSMDA data.
  • the MSMDA data is generated from the pre-processing pipeline 600 as described.
  • the method 700 includes calibrating 720 the MSMDA data.
  • the calibration process compares the multiple sensors' readings against an expected value or range. If the values differ, the MSMDA data is adjusted to calibrate to the expected value range. Calibration is implemented by turning off all other sources (i.e., setting them to zero) in order to determine the weight of the new object. For example, the weights of the bed, bedding, and pillow are determined prior to adding the new object.
  • a baseline of the device is established, for example, prior to use. In an implementation, once a subject or object (collectively “item”) is on the device, an item baseline is determined and saved. This is done so that data from a device having multiple items can be correctly processed using the methods described herein.
  • the method 700 includes performing 730 superposition analysis on the calibrated MSMDA data.
  • Superposition analysis provides the sum of the readings caused by each independent sensor acting alone.
  • the superposition analysis can be implemented as an algebraic sum, a weighted sum, or a nonlinear sum of the responses from all the sensors; a minimal sketch appears after this list.
  • the method 700 includes transforming 740 the MSMDA data to weight.
  • a variety of known or to-be-known techniques can be used to transform the sensor data, i.e., the MSMDA data, to weight.
  • the method 700 includes finalizing 750 the weight.
  • finalizing the weight can include smoothing, checking against a range, checking against a dictionary, or checking against a past value.
  • finalizing the weight can include adjustments due to other factors such as bed type, bed size, location of the sleeper, position of the sleeper, orientation of the sleeper, and the like.
  • the method 700 includes outputting 760 the weight.
  • the weight is stored for use in the methods described herein.
  • controller 200 can be realized in hardware, software, or any combination thereof.
  • the hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors or any other suitable circuit.
  • controller should be understood as encompassing any of the foregoing hardware, either singly or in combination.
  • controller 200, controller 214, processor 422, and/or controller 414 can be implemented using a general purpose computer or general purpose processor with a computer program that, when executed, carries out any of the respective methods, algorithms and/or instructions described herein.
  • a special purpose computer/processor can be utilized which can contain other hardware for carrying out any of the methods, algorithms, or instructions described herein.
  • Controller 200, controller 214, processor 422, and/or controller 414 can be one or multiple special purpose processors, digital signal processors, microprocessors, controllers, microcontrollers, application processors, central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays, any other type or combination of integrated circuits, state machines, or any combination thereof in a distributed, centralized, or cloud-based architecture, and/or combinations thereof.
  • a device in general, includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject, an audio interface configured to communicate with the subject, and a processor in connection with the plurality of sensors and the audio interface.
  • the processor configured to determine biosignals from one or more of the acoustic signals and the force signals to monitor a subject's health status, and detect presence of speech in the acoustic signals.
  • the audio interface configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command in detected speech.
  • the processor further configured to encrypt digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency band, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula.
  • the processor further configured to compare the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determine the presence of the verbal command, identify a context of speech upon determination of the verbal command, and perform at least one of: initiate an interactive session, via the audio interface, with the at least one of the subject or another entity based on the verbal command and the context of speech, and determine a responsive action based on the verbal command and the context of speech.
  • the audio interface is further configured to recognize and respond to voice commands from designated individuals.
  • the processor further configured to: compare the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determine the presence of the verbal command, analyze the acoustic signals to detect breathing disturbances upon failure to detect the verbal command, and determine a responsive action to detection of sleep disordered breathing (SDB).
  • the plurality of non-contact sensors configured to capture force signals from subject actions with respect to the substrate, the processor further configured to perform at least one of cardiac analysis, respiratory analysis, and motion analysis based on the force signals to determine the subject's health status.
  • when performing breathing disturbance analysis to determine the subject's health status, the processor further configured to: fuse the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals, detect sleep disordered breathing (SDB) using the fusion signals, the force signals, and the acoustic signals, and determine a responsive action to detection of the SDB.
  • the responsive action is one or more of: an audible tone, an audible message, a trigger for a home automation device, a trigger for a speech assistant device, a call to an entity or emergency services, marking data for future access, a database entry, and a health check-up.
  • the processor further configured to determine an intensity, magnitude, duration, and type of the SDB.
  • a system in general, includes a speech capable device configured to communicate with at least one of a subject or an entity associated with the subject, and a device in communication with the speech capable device,
  • the device including a substrate configured to support the subject, a plurality of non-contact sensors configured to capture acoustic signals with respect to the subject and force signals from subject actions with respect to the substrate, and a processor in connection with the plurality of sensors and the speech capable device.
  • the processor configured to: monitor a subject's health status based on the force signals and the acoustic signals, and detect a verbal command in the acoustic signals.
  • the speech capable device configured to interactively communicate with at least the subject or the entity based on at least one of a responsive action needed due to the subject's health status and detection of the verbal command.
  • the processor further configured to encrypt digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency band, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula.
  • the processor further configured to compare the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determine the presence of the verbal command, identify a context of speech upon determination of the verbal command, and perform at least one of: initiate an interactive session, via the speech capable device, with the at least one of the subject or the entity based on the verbal command and the context of speech, and determine the responsive action based on the verbal command and the context of speech.
  • the speech capable device is further configured to recognize and respond to voice commands from designated individuals.
  • the processor further configured to perform at least respiratory analysis based on the force signals, compare the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determine the presence of the verbal command, fuse the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals upon failure to detect the verbal command, detect sleep disordered breathing (SDB) using the fusion signals, the force signals, and the acoustic signals, and determine a responsive action to detection of the SDB.
  • the processor further configured to determine an intensity, magnitude, duration, and type of the SDB.
  • the responsive action is one or more of: an audible tone, an audible message, a trigger for a home automation device, a trigger for a speech assistant device, a call to an entity or emergency services, marking data for future access, a database entry, and a health check-up.
  • a method for determining item specific parameters includes capturing acoustic signals and force signals from a plurality of non-contact sensors placed relative to a subject on a substrate, determining at least biosignal information from the acoustic signals and the force signals, detecting a presence of speech in the acoustic signals, and interactively communicating with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to a subject's health status and a verbal command found in detected speech.
  • the method further includes encrypting digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency band, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula.
  • the method further includes comparing the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determining the presence of the verbal command, identifying a context of speech upon determination of the verbal command, and performing at least one of: initiating an interactive session, via the audio interface, with the at least one of the subject or another entity based on the verbal command and the context of speech, and determining a responsive action based on the verbal command and the context of speech.
  • the method further includes recognizing and responding to voice commands from designated individuals.
  • the method further includes comparing the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determining the presence of the verbal command, analyzing the acoustic signals to detect breathing disturbances upon failure to detect the verbal command, and determining a responsive action to detection of sleep disordered breathing (SDB).
  • the method further includes performing at least one of cardiac analysis, respiratory analysis, and motion analysis based on the force signals to determine the subject's health status.
  • the method further includes performing breathing disturbances analysis to determine the subject's health status, the performing further includes fusing the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals, detecting sleep disordered breathing (SDB) using the fusion signals, the force signals, and the acoustic signals, and determining a responsive action to detection of the SDB.
  • the responsive action is one or more of: an audible tone, an audible message, a trigger for a home automation device, a trigger for a speech assistant device, a call to an entity or emergency services, marking data for future access, a database entry, and a health check-up.
  • the method further includes determining an intensity, magnitude, duration, and type of the SDB.
  • the method further includes performing at least respiratory analysis based on captured force signals, comparing the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determining the presence of the verbal command, fusing the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals upon failure to detect the verbal command, detecting sleep disordered breathing (SDB) using the fusion signals, the force signals, and the acoustic signals, and determining a responsive action to detection of the SDB.
  • a device in general, includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture force signals with respect to the subject, a processor in connection with the plurality of sensors, the processor configured to determine biosignals from the force signals to monitor a subject's health status, and an audio interface configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command received via a speech capable device.
  • a device in general, includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject, an audio interface configured to communicate with the subject, and a processor in connection with the plurality of sensors and the audio interface.
  • the processor configured to determine biosignals from one or more of the acoustic signals and the force signals to monitor a subject's health status, and receive, from a speech detection entity, speech detected in the acoustic signals.
  • the audio interface configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command in detected speech.
  • the word “example” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as using one or more of these words is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example,” “aspect,” or “embodiment” is intended to present concepts in a concrete fashion.
  • the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations.
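As referenced earlier in this list (the superposition analysis item), the following is a minimal sketch of that step under stated assumptions: the per-sensor readings are already calibrated, and the values and optional weights are hypothetical, since the description permits algebraic, weighted, or nonlinear sums.

```python
# Minimal superposition sketch. Sensor readings and optional per-sensor
# weights are hypothetical; the description allows algebraic, weighted,
# or nonlinear sums of the responses from all sensors.

def superpose(readings, weights=None):
    """Sum calibrated per-sensor readings, optionally as a weighted sum."""
    if weights is None:
        return sum(readings)                       # plain algebraic sum
    return sum(r * w for r, w in zip(readings, weights))

calibrated = [24.9, 25.3, 24.7, 25.1]              # e.g., four load sensors (kg)
print(superpose(calibrated))                       # ~100 kg total item weight
```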

Abstract

Systems and methods for speech-controlled or speech-enabled health monitoring of a subject are described. A device includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject, an audio interface configured to communicate with the subject, and a processor in connection with the plurality of sensors and the audio interface. The processor configured to determine biosignals from one or more of the acoustic signals and the force signals to monitor a subject's health status, and detect presence of speech in the acoustic signals. The audio interface configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command in detected speech.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 63/003,551, filed Apr. 1, 2020, the entire disclosure of which is hereby incorporated by reference.
TECHNICAL FIELD
This disclosure relates to systems and methods for health monitoring of a subject.
BACKGROUND
Speech enabled technology has become a standard method of interaction with consumer electronic devices for its convenience and simple accessibility, enabling more efficient and faster operations. The medical applications of speech technology have been mostly limited to care checklists, panic calls, and prescription refills. This is mainly due to the fact that these voice enabled devices do not have the ability to directly measure and monitor the physiological parameters of the subject. Unlike persistent conditions, paroxysmal conditions with sudden or intermittent onset require an at-home screening solution that can be used immediately and continuously, and need a simple way, such as speech, to initiate a health check. In addition, many people are bedbound or live with poor health conditions. These people are at risk of falling or experiencing sudden health episodes, such as apnea, pressure ulcers, atrial fibrillation, or heart attack. If the person lives alone, there is no one to notice the early warnings, observe the situation, or call for help.
SUMMARY
Disclosed herein are implementations of systems and methods for speech-controlled or speech-enabled health monitoring of a subject.
In implementations, a device includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject, an audio interface configured to communicate with the subject, and a processor in connection with the plurality of sensors and the audio interface. The processor configured to determine biosignals from one or more of the acoustic signals and the force signals to monitor a subject's health status, and detect presence of speech in the acoustic signals. The audio interface configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command in detected speech.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure is best understood from the following detailed description when read in conjunction with the accompanying drawings. It is emphasized that, according to common practice, the various features of the drawings are not to-scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity.
FIG. 1 is a system architecture for a speech-controlled health monitoring system.
FIGS. 2A-2J are illustrations of sensor placements and configurations.
FIG. 3 is a processing pipeline for obtaining sensors data.
FIG. 4 is a processing pipeline for analyzing force sensors data.
FIG. 5 is a processing pipeline for analyzing audio sensors data from audio sensors.
FIG. 6 is a processing pipeline for analyzing audio sensors data using a speech capable device.
FIG. 7 is a processing pipeline for recognizing speech.
FIG. 8 is a processing pipeline for sleep disordered breathing (SDB) detection and response.
DETAILED DESCRIPTION
Disclosed herein are implementations of systems and methods for speech-controlled or speech-enabled health monitoring of a subject. The systems and methods can be used to passively and continuously monitor the subject's health and verbally interact with the subject to initiate a health check, provide information about the subject's health status, or perform an action such as recording a health related episode or calling emergency services. A subject's health and wellbeing can be monitored using a system that verbally interacts with the subject. Sleep, cardiac, respiration, motion, and sleep disordered breathing monitoring are examples. The subject can use his/her speech to interact with the system to request an action to be performed by the system or to obtain information about the subject's health status. The systems can be used to respond to the commands of a subject's partner in the event the subject is unable or incapacitated.
The systems and methods use one or more non-contact sensors such as audio or acoustic sensors, accelerometers, pressure sensors, load sensors, weight sensors, force sensors, motion sensors, or vibration sensors to capture sounds (speech and disordered breathing) as well as mechanical vibrations of the body (motion and physiological movements of the heart and lungs) and translate them into biosignal information used for screening and identifying health status and disease conditions.
In implementations, the system includes one or more microphones or audio sensors placed near the subject to record acoustic signals, one or more speakers placed near the subject to play back audio, a physiological measurement system that uses one or more non-contact sensors such as accelerometers, pressure sensors, load sensors, weight sensors, force sensors, motion sensors, or vibration sensors to record mechanical vibrations of the body, a speech recognition system, a speech synthesizer, and a processor configured to record the subject's audio and biosignals, process them, detect the subject's speech, process the subject's speech, and initiate a response to the subject's speech. In implementations, the one or more microphones or audio sensors and the one or more non-contact sensors can be placed under, or be built into a substrate, such as a bed, couch, chair, exam table, floor, etc. For example, the one or more microphones or audio sensors and the one or more non-contact sensors can be placed or positioned inside, under, or attached to a control box, legs, bed frame, headboard, or wall. In implementations, the processor can be in the device (control box) or in the computing platform (cloud).
In implementations, the processor is configured to record mechanical force and vibrations of the body, including motion and physiological movements of heart and lungs using one or more non-contact sensors such as accelerometers, pressure sensors, load sensors, weight sensors, force sensors, motion sensors, or vibration sensors. The processor further enhances such data to perform cardiac analysis (including determining heart rate, heartbeat timing, variability, and heartbeat morphology and their corresponding changes from a baseline or range), respiratory analysis (including determining breathing rate, breathing phase, depth, timing and variability, and breathing morphology and their corresponding changes from a baseline or range), and motion analysis (including determining movements amplitude, time, periodicity, and pattern and their corresponding changes from a baseline or range). The processor is configured to record acoustic information, filter unwanted interferences, and enhance it for analytical determinations.
For example, the processor can use the enhanced acoustic information to identify sleep disordered breathing. The processor can then determine a proper response to the detected sleep disordered breathing, such as by changing an adjustable feature of the bed (for example, firmness) or bedroom (for example, lighting), or playing a sound to make the sleeper change position or transition into a lighter state of sleep and, therefore, help stop, reduce, or alter the disordered breathing. For example, the processor can use the enhanced acoustic information to correlate irregular lung or body movements with lung or body sounds. Wheezing or other abnormal sounds are an example. For example, the processor can use the enhanced acoustic information to detect if speech has been initiated. The processor compares the audio stream against a dictionary of electronic commands to discard unrelated conversations and to determine if a verbal command to interact with the system has been initiated.
In implementations, the processor is configured to handle speech recognition. For example, the processor can perform speech recognition. This can include detecting a trigger (for example, a preset keyword or phrase) and determining the context. A key word could be, for example, “Afib” to trigger annotating (marking) cardiac recording or generating alerts. For example, the processor can communicate through APIs with other speech capable devices (such as Alexa®, Siri®, and Google®) responsible for recognizing and synthesizing speech.
In implementations, the processor is configured to categorize and initiate a response to the recognized speech. The response can be starting an interactive session with the subject (for example, playing back a tone or playing synthesized speech) or performing a responsive action (for example, turning on/off a home automation feature, labeling the data with health status markers for future access by the subject or the subject's physician, or calling emergency services). The response can also include communicating with other speech capable devices connected to home automation systems or notification systems. The system can also be used to create events based on the analysis; the event may be an audible tone or a message sent to the cloud for a critical condition.
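One way such a categorizer might be organized is sketched below; the context labels and handler names are hypothetical, since the description does not prescribe them.

```python
# Hypothetical contexts that call for a two-way interactive session.
INTERACTIVE_CONTEXTS = {"health_check", "health_status_query"}

# Hypothetical contexts that map onto one-shot responsive actions.
RESPONSIVE_ACTIONS = {
    "emergency": "call_emergency_services",
    "annotate_episode": "label_data_with_health_marker",
    "home_automation": "toggle_home_automation_feature",
}

def categorize_response(context):
    """Map a recognized speech context to a session type and an action."""
    if context in INTERACTIVE_CONTEXTS:
        return ("interactive_session", "play_synthesized_speech")
    if context in RESPONSIVE_ACTIONS:
        return ("responsive_action", RESPONSIVE_ACTIONS[context])
    return ("interactive_session", "play_clarification_tone")

print(categorize_response("emergency"))
# ('responsive_action', 'call_emergency_services')
```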
The sensors are connected to the processor by wire, wirelessly, or optically; the processor may be on the internet and running artificial intelligence software. The signals from the sensors can be analyzed locally with a locally present processor, or the data can be networked by wire or other means to another computer and remote storage that can process and analyze the real-time and/or historical data. The processor can be a single processor for both mechanical force sensors and audio sensors, or a set of processors to process mechanical force data and interact with other speech capable devices. Other sensors, such as blood pressure, temperature, blood oxygen, and pulse oximetry sensors, can be added for enhanced monitoring or health status evaluation. The system can use artificial intelligence and/or machine learning to train classifiers used to process force, audio, and other sensor signals.
In implementations, the speech enabled device can act as a speech recognizer or speech synthesizer to support unidirectional and bidirectional communication with the subject. The speech recognizer uses speech to text, and the speech synthesizer uses text to speech, both based on dictionaries of predefined keywords or phrases. The system includes bidirectional audio (microphone and speakers) to enable two-way communication with the patient (the subject's speech serves as a command, and the device responds upon receiving a command). The system can additionally include interfaces to other voice assistant devices (such as Alexa®, Siri®, and Google®) to process the subject's speech, or to play the synthesized response, or both.
The systems and methods described herein can be used by a subject when experiencing symptoms of a complication or condition or exhibiting the early warning signs of a health related condition, or can be used when instructed by a physician in a telehealth application. For example, the system can be used for in home stress testing where sensors data can be used to monitor indices of heart rate variability to quantify dynamic autonomic modulation or heart rate recovery.
The system can be programmed to limit the number of individuals who can verbally interact with it. For example, the system may accept and respond to verbal commands only from one person (the subject) or the subject's partner. In such cases, the speech recognition will include voice recognition to respond only to certain individuals. The electronic commands can include, but are not limited to, a verbal request to perform a specific health check on the subject (for example, a cardiac check or stress test), give updates about the health status of the subject, mark the data when the subject is experiencing a health episode or condition, send a health report to the subject's physician, call emergency services, order a product through API integrations with third parties (for example, purchasing something from an internet seller), and/or interact with adjustable features of home automation. The system can integrate with other means of communication such as a tablet or smartphone to provide video communication.
FIG. 1 is a system architecture for a speech-controlled or speech-enabled health monitoring system (SHMS) 100. The SHMS 100 includes one or more devices 110 which are connected to or in communication with (collectively “connected to”) a computing platform 120. In implementations, a machine learning training platform 130 may be connected to the computing platform 120. In implementations, a speech capable device 150 may be connected to the computing platform 120 and the one or more devices 110. In implementations, users may access the data via a connected device 140, which may receive data from the computing platform 120, the device 110, the speech capable device 150, or combinations thereof. The connections between the one or more devices 110, the computing platform 120, the machine learning training platform 130, the speech capable device 150, and the connected device 140 can be wired, wireless, optical, combinations thereof, and/or the like. The system architecture of the SHMS 100 is illustrative and may include additional, fewer or different devices, entities and the like which may be similarly or differently architected without departing from the scope of the specification and claims herein. Moreover, the illustrated devices may perform other functions without departing from the scope of the specification and claims herein.
In an implementation, the device 110 can include an audio interface 111, one or more sensors 112, a controller 114, a database 116, and a communications interface 118. In an implementation, the device 110 can include a classifier 119 for applicable and appropriate machine learning techniques as described herein. The one or more sensors 112 can detect sound, wave patterns, and/or combinations of sound and wave patterns of vibration, pressure, force, weight, presence, and motion due to subject(s) activity and/or configuration with respect to the one or more sensors 112. In implementations, the one or more sensors 112 can generate more than one data stream. In implementations, the one or more sensors 112 can be the same type. In implementations, the one or more sensors 112 can be time synchronized. In implementations, the one or more sensors 112 can measure the partial force of gravity on a substrate, furniture, or other object. In implementations, the one or more sensors 112 can independently capture multiple external sources of data in one stream (i.e., a multivariate signal), for example, weight, heart rate, breathing rate, vibration, and motion from one or more subjects or objects. In an implementation, the data captured by each sensor 112 is correlated with the data captured by at least one, some, all, or a combination of the other sensors 112. In implementations, amplitude changes are correlated. In implementations, rate and magnitude of changes are correlated. In implementations, phase and direction of changes are correlated. In implementations, the placement of the one or more sensors 112 triangulates the location of the center of mass. In implementations, the one or more sensors 112 can be placed under or built into the legs of a bed, chair, couch, etc. In implementations, the one or more sensors 112 can be placed under or built into the edges of a crib. In implementations, the one or more sensors 112 can be placed under or built into the floor. In implementations, the one or more sensors 112 can be placed under or built into a surface area. In implementations, the locations of the one or more sensors 112 are used to create a surface map that covers the entire area surrounded by the sensors. In implementations, the one or more sensors 112 can measure data from sources that are anywhere within the area surrounded by the one or more sensors 112, which can be directly on top of the one or more sensors 112, near the one or more sensors 112, or distant from the one or more sensors 112. The one or more sensors 112 are not intrusive with respect to the subject(s).
The one or more sensors 112 can include one or more non-contact sensors, such as audio, microphone, or acoustic sensors to capture sound (speech and sleep disordered breathing), as well as sensors to measure the partial force of gravity on a substrate, furniture, or other object, including accelerometer, pressure, load, weight, force, motion, or vibration sensors, which capture mechanical vibrations of the body (motion and physiological movements of the heart and lungs).
The audio interface 111 provides a bi-directional audio interface (microphone and speakers) to enable two-way communication with the patient (the subject's speech serves as a command, and the device responds upon receiving a command).
The controller 114 can apply the processes and algorithms described herein with respect to FIGS. 3-8 to the sensor data to determine biometric parameters and other person-specific information for single or multiple subjects at rest and in motion. The classifier 119 can apply the processes and algorithms described herein with respect to FIGS. 3-8 to the sensor data to determine biometric parameters and other person-specific information for single or multiple subjects at rest and in motion. The classifier 119 can apply classifiers to the sensor data to determine the biometric parameters and other person-specific information via machine learning. In implementations, the classifier 119 may be implemented by the controller 114. In implementations, the sensor data and the biometric parameters and other person-specific information can be stored in the database 116. In implementations, the sensor data, the biometric parameters and other person-specific information, and/or combinations thereof can be transmitted or sent via the communication interface 118 to the computing platform 120 for processing, storage, and/or combinations thereof. The communication interface 118 can be any interface and use any communications protocol to communicate or transfer data between origin and destination endpoints. In an implementation, the device 110 can be any platform or structure which uses the one or more sensors 112 to collect the data from a subject(s) for use by the controller 114 and/or computing platform 120 as described herein. For example, the device 110 may be a combination of a substrate, frame, legs, and multiple load or other sensors 112 as described in FIG. 2. The device 110 and the elements therein may include other elements which may be desirable or necessary to implement the devices, systems, and methods described herein. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the disclosed embodiments, a discussion of such elements and steps may not be provided herein.
In implementations, the computing platform 120 can include a processor 122, a database 124, and a communication interface 126. In implementations, the computing platform 120 may include a classifier 129 for applicable and appropriate machine learning techniques as described herein. The processor 122 can obtain the sensor data from the sensors 112 or the controller 114 and can apply the processes and algorithms described herein with respect to FIGS. 3-8 to the sensor data to determine biometric parameters and other person-specific information for single or multiple subjects at rest and in motion. In implementations, the processor 122 can obtain the biometric parameters and other person-specific information from the controller 114 to store in database 124 for temporal and other types of analysis. In implementations, the classifier 129 can apply the processes and algorithms described herein with respect to FIGS. 3-8 to the sensor data to determine biometric parameters and other person-specific information for single or multiple subjects at rest and in motion. The classifier 129 can apply classifiers to the sensor data to determine the biometric parameters and other person-specific information via machine learning. In implementations, the classifier 129 may be implemented by the processor 122. In implementations, the sensor data and the biometric parameters and other person-specific information can be stored in the database 124. The communication interface 126 can be any interface and use any communications protocol to communicate or transfer data between origin and destination endpoints. In implementations, the computing platform 120 may be a cloud-based platform. In implementations, the processor 122 can be a cloud-based computer or off-site controller. In implementations, the processor 122 can be a single processor for both mechanical force sensors and audio sensors, or a set of processors to process mechanical force data and interact with the speech capable device 150. The computing platform 120 and elements therein may include other elements which may be desirable or necessary to implement the devices, systems, and methods described herein. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the disclosed embodiments, a discussion of such elements and steps may not be provided herein.
In implementations, the machine learning training platform 130 can access and process sensor data to train and generate classifiers. The classifiers can be transmitted or sent to the classifier 129 or to the classifier 119.
In implementations, the SHMS 100 can interchangeably or additionally include the speech enabled device 150 as a bi-directional speech interface. In implementations, the speech enabled device 150 could replace the audio interface 111 or could work with the audio interface 111. The speech enabled device 150 can communicate with the device 110 and/or computing platform 120. In an implementation, the speech capable device 150 can be a voice assistant device (such as Alexa®, Siri®, and Google®) that communicates with the device 110 or the computing platform 120 through APIs. The speech enabled device 150 can act as a speech recognizer or speech synthesizer to support unidirectional and bi-directional communication with the subject.
FIGS. 2A-2J are illustrations of sensor placements and configurations. As described herein, the SHMS 100 can include one or more audio input sensors 200 such as microphones or acoustic sensors. The sensor placements and configurations shown in FIGS. 2A-2J are with respect to a bed 230 and surrounding environment. For example, U.S. patent application Ser. No. 16/595,848, filed Oct. 8, 2019, the entire disclosure of which is hereby incorporated by reference, describes example beds and environments applicable to the sensor placements and configurations described herein.
FIG. 2A shows an example of the one or more audio input sensors 200 inside a control box (controller) 240. FIG. 2B shows an example of the one or more audio input sensors 200 attached to a headboard 250 proximate the bed 230. FIG. 2C shows an example of the one or more audio input sensors 200 mounted to a wall 260 proximate the bed 230. FIG. 2D shows an example of the one or more audio input sensors 200 inside or attached to legs 270 of the bed 230. FIG. 2E shows an example of the one or more audio input sensors 200 integrated inside a force sensors box 280 under the legs 270 of the bed 230. FIG. 2F shows an example of the one or more audio input sensors 200 placed into or attached to a bed frame 290 of the bed 230.
In implementations, the SHMS 100 can include one or more speakers 210. FIG. 2G shows an example of the one or more speakers 210 inside the control box (controller) 240. FIG. 2F shows an example of the one or more speakers 210 placed into or attached to a bed frame 290 of the bed 230. FIG. 2H shows an example of the one or more speakers 210 integrated inside a force sensors box 280 under the legs 270 of the bed 230. FIG. 2I shows an example of the one or more speakers 210 mounted to a wall 260 proximate the bed 230. FIG. 2J shows an example of the one or more speakers 210 attached to a headboard 250 proximate the bed 230.
FIGS. 2A-2E and 2G are examples of systems with unidirectional audio communications and FIGS. 2F and 2H-2J are examples of systems with bidirectional audio communications.
FIG. 3 is a processing pipeline 300 for obtaining sensor data such as, but not limited to, force sensor data, audio sensor data, and other sensor data, and processing the force sensor data, audio sensor data, and other sensor data.
An analog sensors data stream 320 is received from sensors 310. The sensors 310 can record mechanical force and vibrations of the body, including motion and physiological movements of heart and lungs using one or more non-contact sensors such as accelerometer, pressure, load, weight, force, motion or vibration sensors. A digitizer 330 digitizes the analog sensors data stream into a digital sensors data stream 340. A framer 350 generates digital sensors data frames 360 from the digital sensors data stream 340, which include all the digital sensors data stream values within a fixed or adaptive time window. An encryption engine 370 encodes the digital sensors data frames 360 such that the data is protected from unauthorized access. A compression engine 380 compresses the encrypted data to reduce the size of the data that is going to be saved in the database 390. This reduces cost and provides faster access during read time. The database 390 can be local, offsite storage, cloud-based storage, or combinations thereof.
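A minimal sketch of this frame-then-encrypt-then-compress flow follows, assuming byte-packed samples and a fixed window length; the XOR step is only a placeholder for a real encryption engine (not production cryptography), and zlib stands in for the compression engine.

```python
import zlib

FRAME_LEN = 256  # samples per frame; a fixed time window is assumed here

def frame_stream(samples, frame_len=FRAME_LEN):
    """Split a digital sensors data stream into fixed-length frames."""
    return [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]

def encrypt_frame(frame_bytes, key):
    """Placeholder for the encryption engine: repeating-key XOR mask.
    A deployed system would use a vetted cipher; this only marks the stage."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(frame_bytes))

def process_stream(samples, key):
    """Frame, encrypt, then compress each frame before database storage."""
    stored = []
    for frame in frame_stream(samples):
        raw = bytes(s & 0xFF for s in frame)   # byte-pack samples for the sketch
        stored.append(zlib.compress(encrypt_frame(raw, key)))
    return stored

print(len(process_stream(list(range(1000)), key=b"demo-key")), "frames stored")
```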
An analog sensors data stream 321 is received from sensors 311. The sensors 311 can record audio information including the subject's breathing and speech. A digitizer 331 digitizes the analog sensors data stream into a digital sensors data stream 341. A framer 351 generates digital sensors data frames 361 from the digital sensors data stream 341, which include all the digital sensors data stream values within a fixed or adaptive time window. An encryption engine 371 encodes the digital sensors data frames 361 such that the data is protected from unauthorized access. In implementations, the encryption engine 371 can filter the digital audio sensors data frames 361 to a lower and narrower frequency band. In implementations, the encryption engine 371 can mask the digital audio sensors data frames 361 using a mask template. In implementations, the encryption engine 371 can transform the digital audio sensors data frames 361 using a mathematical formula. A compression engine 380 compresses the encrypted data to reduce the size of the data that is going to be saved in the database 390. This reduces cost and provides faster access during read time. The database 390 can be local, offsite storage, cloud-based storage, or combinations thereof.
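Two of the audio-specific options named above can be sketched as follows, assuming numpy; the cutoff frequency and mask template are illustrative values, not parameters given in this disclosure.

```python
import numpy as np

def bandlimit(frame, fs, cutoff_hz=300.0):
    """Filter an audio frame to a lower, narrower band by zeroing FFT bins
    above an assumed cutoff, one reading of the filtering option above."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=frame.size)

def apply_mask(frame, mask):
    """Mask an audio frame element-wise with a template of equal length."""
    return frame * mask

fs = 8000.0
frame = np.random.randn(1024)                   # stand-in digitized audio frame
obscured = apply_mask(bandlimit(frame, fs), np.hanning(frame.size))
print(obscured.shape)
```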
The processing pipeline 300 shown in FIG. 3 is illustrative and can include any, all, none or a combination of the blocks or modules shown in FIG. 3. The processing order shown in FIG. 3 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
FIG. 4 is a pre-processing pipeline 400 for processing the force sensor data. The pre-processing pipeline 400 processes digital force sensor data frames 410. A noise reduction unit 420 removes or attenuates noise sources that might have the same or different level of impact on each sensor. The noise reduction unit 420 can use a variety of techniques including, but not limited to, subtraction, combination of the input data frames, adaptive filtering, wavelet transform, independent component analysis, principal component analysis, and/or other linear or nonlinear transforms. A signal enhancement unit 430 can improve the signal to noise ratio of the input data. The signal enhancement unit 430 can be implemented as a linear or nonlinear combination of input data frames. For example, the signal enhancement unit 430 may combine the signal deltas to increase the signal strength for higher resolution algorithmic analysis. Subsampling units 440, 441, and 442 sample the digital enhanced sensor data and can include downsampling, upsampling, or resampling. The subsampling can be implemented as multi-stage or multi-phase sampling, and can use the same or different sampling rates for cardiac, respiratory, and motion analysis.
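A compact sketch of these three stages follows, assuming numpy; the window length, the delta-combination rule, and the decimation steps are illustrative assumptions, not values fixed by this disclosure.

```python
import numpy as np

def reduce_noise(x, win=32):
    """Subtract a moving-average baseline, one simple stand-in for the
    subtraction/adaptive-filtering options listed above."""
    return x - np.convolve(x, np.ones(win) / win, mode="same")

def enhance(frames):
    """Linearly combine per-sensor deltas into one higher-SNR signal."""
    return np.sum(np.diff(frames, axis=1), axis=0)

def subsample(x, step):
    """Crude decimation; each analysis path may use a different rate."""
    return x[::step]

frames = np.random.randn(4, 2048)               # four force sensors, one window
clean = reduce_noise(enhance(frames))
cardiac_in, resp_in = subsample(clean, 2), subsample(clean, 8)
print(cardiac_in.shape, resp_in.shape)
```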
Cardiac analysis 450 determines the heart rate, heartbeat timing, variability, and heartbeat morphology and their corresponding changes from a baseline or a predefined range. An example process for cardiac analysis is shown in U.S. Provisional Patent Application Ser. No. 63/003,551, filed Apr. 1, 2020, the entire disclosure of which is hereby incorporated by reference. Respiratory analysis 460 determines the breathing rate, breathing phase, depth, timing and variability, and breathing morphology and their corresponding changes from a baseline or a predefined range. An example process for respiratory analysis is shown in U.S. Provisional Patent Application Ser. No. 63/003,551, filed Apr. 1, 2020, the entire disclosure of which is hereby incorporated by reference. Motion analysis 470 determines the movement amplitude, time, periodicity, and pattern and their corresponding changes from a baseline or a predefined range. Health and sleep status analysis 480 combines the data from cardiac analysis 450, respiratory analysis 460, and motion analysis 470 to determine the subject's health status, sleep quality, out-of-the-norm events, diseases, and conditions.
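For the respiratory branch, a minimal spectral-peak estimator is sketched below; the sample rate and the 0.1-0.5 Hz search band are assumptions, not values fixed by this disclosure.

```python
import numpy as np

def breathing_rate_bpm(resp, fs):
    """Estimate breathing rate as the dominant spectral peak inside an
    assumed respiration band (0.1-0.5 Hz)."""
    spectrum = np.abs(np.fft.rfft(resp - resp.mean()))
    freqs = np.fft.rfftfreq(resp.size, d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs = 25.0                                       # assumed force-sensor rate (Hz)
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)             # synthetic 15 breaths/min trace
print(round(breathing_rate_bpm(resp, fs), 1))   # ~15.0
```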
The processing pipeline 400 shown in FIG. 4 is illustrative and can include any, all, none or a combination of the blocks or modules shown in FIG. 4. The processing order shown in FIG. 4 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
FIG. 5 is an example process 500 for analyzing the audio sensor data. The pipeline 500 processes digital audio sensor data frames 510. A noise reduction unit 520 removes or attenuates environmental or other noise sources that might have the same or different level of impact on each sensor. The noise reduction unit 520 can use a variety of techniques including, but not limited to, subtraction, combination of the input data frames, adaptive filtering, wavelet transform, independent component analysis, principal component analysis, and/or other linear or nonlinear transforms. A signal enhancement unit 530 can improve the signal to noise ratio of the input data. Speech initiation detector 540 determines if the subject is verbally communicating with the system. The detector 540 compares the audio stream against a dictionary of electronic commands to discard unrelated conversations and determines 545 if a verbal command to interact has been initiated.
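A minimal sketch of such a dictionary-based initiation check follows; the command phrases and context labels are hypothetical, as the disclosure only requires a dictionary of electronic commands used to discard unrelated conversation.

```python
# Hypothetical command dictionary; the actual phrases are not specified
# in this disclosure.
COMMAND_DICTIONARY = {
    "run a health check": "health_check",
    "how is my heart": "health_status_query",
    "mark this episode": "annotate_episode",
    "call emergency services": "emergency",
}

def detect_command(transcript):
    """Return a command context if the utterance matches the dictionary,
    or None so that unrelated conversation is discarded."""
    text = transcript.lower().strip()
    for phrase, context in COMMAND_DICTIONARY.items():
        if phrase in text:
            return context
    return None

print(detect_command("Please run a health check now"))  # health_check
print(detect_command("What's for dinner?"))             # None
```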
If no verbal command has been initiated, the enhanced digital audio sensor data frames will be analyzed using sleep disordered breathing analyzer 550 to detect breathing disturbances. Sleep disordered breathing analyzer 550 uses digital audio sensors data frames 510, digital force sensors data frames 410, or both to determine breathing disturbances. The sleep disordered breathing analyzer 550 uses envelope detection algorithms, time domain, spectral domain, or time frequency domain analysis to identify the presence, intensity, magnitude, duration, and type of sleep disordered breathing.
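One plausible reading of the envelope-plus-spectrum analysis is sketched below, assuming numpy; the modulation band and threshold are placeholders rather than clinically validated values.

```python
import numpy as np

def envelope(x, win=64):
    """Rectify-and-smooth envelope detector (one of the options named)."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def sdb_present(audio, fs, thresh):
    """Flag disturbed breathing when envelope energy concentrates in an
    assumed 0.1-1 Hz modulation band; thresh is a placeholder value."""
    env = envelope(audio)
    spectrum = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    return spectrum[(freqs >= 0.1) & (freqs <= 1.0)].sum() > thresh

fs = 2000.0
audio = np.random.randn(int(10 * fs))           # stand-in audio frame (10 s)
print(sdb_present(audio, fs, thresh=1e4))
```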
If it is determined that a verbal command has been initiated, the speech recognizer 560 processes the enhanced digital audio sensor data frames to identify the context of speech. In implementations, the speech recognizer 560 includes an electronic command recognizer that compares the subject's speech against a dictionary of electronic commands. In implementations, the speech recognizer uses artificial intelligence algorithms to identify speech. In implementations, the speech recognizer 560 uses a speech to text engine to translate the subject's verbal commands into strings of text. Response categorizer 570 processes the output from the speech recognizer and determines if an interactive session 580 should be initiated or a responsive action 590 should be performed. Examples of an interactive session are playing back a tone or playing synthesized speech. Examples of a responsive action are turning on/off a home automation feature, labeling the data with health status markers for future access by the subject or the subject's physician, calling emergency services, or interacting with another speech capable device.
The processing pipeline 500 shown in FIG. 5 is illustrative and can include any, all, none or a combination of the components, blocks or modules shown in FIG. 5. The processing order shown in FIG. 5 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
FIG. 6 is an example process 600 for analyzing the audio sensor data by interacting with a speech capable device. In implementations, the speech capable device can be a voice assistant device (such as Alexa®, Siri®, and Google®) acting as a speech recognizer that communicates through APIs.
The pipeline 600 receives speech data 610 from the speech capable device. A noise reduction unit 620 removes or attenuates environmental or other noise sources that might have the same or different level of impact on the speech data. The noise reduction unit 620 can use a variety of techniques including, but not limited to, subtraction, combination of the input data frames, adaptive filtering, wavelet transform, independent component analysis, principal component analysis, and/or other linear or nonlinear transforms. A signal enhancement unit 630 can improve the signal to noise ratio of the speech data. Speech initiation detector 640 determines if the subject is verbally communicating with the system. The detector 640 compares the speech data against a dictionary of electronic commands to discard unrelated conversations and determines 645 if a verbal command to interact has been initiated.
If no verbal command has been initiated, the enhanced digital speech data frames will be analyzed using sleep disordered breathing analyzer 650 to detect breathing disturbances. Sleep disordered breathing analyzer 650 uses speech data 610, digital force sensors data frames 410, or both to determine breathing disturbances. The sleep disordered breathing analyzer 650 uses envelope detection algorithms, time domain, spectral domain, or time frequency domain analysis to identify the presence, intensity, magnitude, duration, and type of sleep disordered breathing.
If it is determined that a verbal command has been initiated, the speech recognizer 660 processes the speech data frames to identify the context of speech. In implementations, the speech recognizer 660 includes an electronic command recognizer that compares the subject's speech against a dictionary of electronic commands. In implementations, the speech recognizer uses artificial intelligence algorithms to identify speech. In implementations, the speech recognizer 660 uses a speech to text engine to translate the subject's verbal commands into strings of text. Response categorizer 670 processes the output from the speech recognizer and determines if an interactive session 680 should be initiated or a responsive action 690 should be performed. Commands corresponding to the categorized response are sent 675 to the speech capable device through APIs. In implementations, the speech enabled device can act as a speech synthesizer to initiate the interactive session 680. In implementations, the speech enabled device can also connect to home automation systems or notification systems to perform the responsive action 690. Examples of an interactive session are playing back a tone or playing synthesized speech. Examples of a responsive action are turning on/off a home automation feature, labeling the data with health status markers for future access by the subject or the subject's physician, calling emergency services, or interacting with another speech capable device.
The processing pipeline 600 shown in FIG. 6 is illustrative and can include any, all, none or a combination of the components, blocks or modules shown in FIG. 6 . The processing order shown in FIG. 6 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
FIG. 7 is an example process 700 for recognizing speech by a speech recognizer. The speech recognizer receives 710 the enhanced audio data streams after it is determined that speech has been initiated, as described with respect to FIG. 5. The speech recognizer detects 720 parts of the electronic command that match a specific request through speech processing, i.e., detects a trigger. The speech recognizer translates 730 the speech into text. The speech recognizer matches 740 the strings of text against a dictionary of electronic commands 750. The speech recognizer determines 760 the context of the speech. A context is the general category of the subject's verbal request. Examples include running a health check, labeling or annotating the data for a health-related episode, communicating with the subject's physician, communicating with emergency services, ordering a product, and interacting with home automation. The speech recognizer encodes 770 the context and prepares it for the response categorizer 570.
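A hedged Python sketch of steps 720 through 770 follows; the trigger phrase, dictionary entries, and context codes are hypothetical stand-ins, since the specification does not fix any of them.

```python
# Hypothetical sketch of trigger detection, dictionary matching, and
# context encoding. All phrases and context codes are assumptions.

TRIGGER = "hey bed"
COMMAND_CONTEXTS = {
    "run a health check": "HEALTH_CHECK",
    "label this episode": "ANNOTATE_EPISODE",
    "call my doctor": "CONTACT_PHYSICIAN",
    "call emergency services": "EMERGENCY",
    "order a product": "ORDER_PRODUCT",
    "dim the lights": "HOME_AUTOMATION",
}

def recognize(transcribed_text):
    """Return an encoded context for the response categorizer, or None
    when no trigger or no dictionary match is found."""
    text = transcribed_text.lower()
    if TRIGGER not in text:                 # step 720: detect the trigger
        return None
    request = text.split(TRIGGER, 1)[1]     # text following the trigger
    for phrase, context in COMMAND_CONTEXTS.items():  # step 740: match
        if phrase in request:
            return {"context": context, "utterance": request.strip(" ,.")}
    return None

print(recognize("hey bed, run a health check"))
# {'context': 'HEALTH_CHECK', 'utterance': 'run a health check'}
```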
The processing pipeline 700 shown in FIG. 7 is illustrative and can include any, all, none or a combination of the components, blocks or modules shown in FIG. 7 . The processing order shown in FIG. 7 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
FIG. 8 is an example process 800 for sleep disordered breathing (SDB) detection and response. Digital force sensor frames 810 are received as processed in FIG. 3 and FIG. 4. A respiration analysis 830 is performed on the digital force sensor frames 810. The respiration analysis 830 can include filtering, combining, envelope detection, and other algorithms. A spectrum or time-frequency spectrum is computed 850 on the output of the respiration analysis 830. Digital audio sensor frames 820 are received as processed in FIG. 3 and FIG. 5. Envelope detection 840 is performed on the digital audio sensor frames 820. A spectrum or time-frequency spectrum is computed 860 on the output of the envelope detection 840. Fused sensor processing 870 is performed on the digital force sensor frames 810 and the digital audio sensor frames 820, using normalized amplitude or frequency parameters, cross-correlation, coherence, or similar similarity metrics to create combined signals or feature sets.
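The fused sensor processing 870 can be illustrated with similarity metrics computed between a force-derived respiration signal and an audio envelope; the respiratory band limits and window length below are assumptions, not specified values.

```python
import numpy as np
from scipy.signal import coherence, correlate

def fuse_force_audio(resp_force, audio_env, fs):
    """Sketch of fused sensor processing: normalized cross-correlation
    and spectral coherence between the force-derived respiration signal
    and the audio envelope, returned as similarity features."""
    a = (resp_force - resp_force.mean()) / resp_force.std()
    b = (audio_env - audio_env.mean()) / audio_env.std()
    xcorr = correlate(a, b, mode="full") / len(a)
    f, coh = coherence(a, b, fs=fs, nperseg=min(256, len(a)))
    band = (f >= 0.1) & (f <= 0.5)   # ~6 to 30 breaths per minute
    return {
        "peak_xcorr": float(np.max(np.abs(xcorr))),
        "respiratory_coherence": float(np.mean(coh[band])),
    }

# Toy usage: two noisy copies of the same 0.25 Hz breathing waveform.
fs = 10
t = np.arange(0, 60, 1 / fs)
breath = np.sin(2 * np.pi * 0.25 * t)
rng = np.random.default_rng(2)
print(fuse_force_audio(breath + 0.1 * rng.standard_normal(t.size),
                       breath + 0.2 * rng.standard_normal(t.size), fs))
```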
Sleep disordered breathing (SDB) is determined 880 using the envelope, time domain, frequency domain, and time-frequency parameters, as well as parameters from the fusion of the force and audio sensors. Implementations include threshold-based techniques, template matching methods, or the use of classifiers to detect sleep disordered breathing. Once sleep disordered breathing is detected, process 880 determines the intensity (for example, light, mild, moderate, severe), magnitude, duration, and type of sleep disordered breathing. If sleep disordered breathing is detected 885, a proper response 890 is determined for the detected SDB, such as changing an adjustable feature of the bed (for example, firmness) or of the bedroom (for example, lighting), or playing a sound to make the sleeper change position or transition into a lighter state of sleep, thereby helping to stop, reduce, or alter the disordered breathing.
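As a non-normative sketch, a threshold-based grading of the detected event rate and a corresponding response choice might look as follows; the cut points loosely follow conventional apnea-hypopnea index bands, and the response hooks are hypothetical names, not actuators defined in the specification.

```python
def grade_sdb(events_per_hour):
    """Map a detected event rate onto the intensity labels above. The
    cut points are assumptions for the example."""
    if events_per_hour < 5:
        return "light"
    if events_per_hour < 15:
        return "mild"
    if events_per_hour < 30:
        return "moderate"
    return "severe"

def respond_to_sdb(intensity):
    """Choose a response (890), sketched with illustrative hooks only."""
    if intensity in ("moderate", "severe"):
        return "adjust_bed_firmness"    # change an adjustable bed feature
    if intensity == "mild":
        return "play_reposition_sound"  # prompt a position or sleep-state change
    return "log_only"

print(respond_to_sdb(grade_sdb(18)))  # adjust_bed_firmness
```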
The processing pipeline 800 shown in FIG. 8 is illustrative and can include any, all, none or a combination of the components, blocks or modules shown in FIG. 8 . The processing order shown in FIG. 8 is illustrative and the processing order may vary without departing from the scope of the specification or claims.
FIG. 7 is a flowchart of a method 700 for determining weight from the MSMDA data. The method 700 includes: obtaining 710 the MSMDA data; calibrating 720 the MSMDA data; performing 730 superposition analysis on the calibrated MSMDA data; transforming 740 the MSMDA data to weight; finalizing 750 the weight; and outputting 760 the weight.
The method 700 includes obtaining 710 the MSMDA data. The MSMDA data is generated from the pre-processing pipeline 600 as described.
The method 700 includes calibrating 720 the MSMDA data. The calibration process compares the readings of the multiple sensors against an expected value or range. If the values differ, the MSMDA data is adjusted to calibrate to the expected value range. Calibration is implemented by turning off all other sources (i.e., setting them to zero) in order to determine the weight of the new object. For example, the weight of the bed, bedding, and pillow is determined before the new object is introduced. A baseline of the device is established, for example, prior to use. In an implementation, once a subject or object (collectively, an "item") is on the device, an item baseline is determined and saved. This is done so that data from a device holding multiple items can be correctly processed using the methods described herein.
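A minimal sketch of baseline capture and calibration follows, assuming readings arranged as (samples, sensors) arrays and kilogram units; both the layout and the expected range are assumptions for the example.

```python
import numpy as np

def establish_baseline(readings):
    """Per-sensor baseline (bed, bedding, pillow) captured while no new
    item is on the device; readings has shape (samples, sensors)."""
    return readings.mean(axis=0)

def calibrate(readings, baseline, expected_range=(0.0, 200.0)):
    """Zero out all other sources by subtracting the saved baseline,
    then clip to the expected value range (units assumed to be kg)."""
    return np.clip(readings - baseline, *expected_range)

# Toy usage: empty-bed capture, then a capture with an item present.
rng = np.random.default_rng(3)
empty = rng.normal(40.0, 0.2, size=(50, 4))  # bed + bedding, per sensor
occupied = empty + 20.0                      # the item adds load
print(calibrate(occupied, establish_baseline(empty)).mean(axis=0))  # ~[20 20 20 20]
```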
The method 700 includes performing 730 superposition analysis on the calibrated MSMDA data. Superposition analysis provides the sum of the readings caused by each independent sensor acting alone. The superposition analysis can be implemented as an algebraic sum, a weighted sum, or a nonlinear sum of the responses from all the sensors.
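An algebraic weighted-sum instance of the superposition analysis, combined with a linear transform of the summed readings to weight, might look like the sketch below; the per-sensor gains are assumed calibration outputs, and a nonlinear sum could be substituted.

```python
import numpy as np

def superposed_weight(calibrated_readings, sensor_gains):
    """Algebraic weighted-sum superposition over all sensors, followed
    by a linear transform to weight. Gains are assumed calibration
    outputs mapping raw readings to kilograms."""
    per_sensor = calibrated_readings.mean(axis=0)   # one reading per sensor
    return float(np.dot(per_sensor, sensor_gains))  # sum of independent responses

# Toy usage: four corner sensors with unit gains mapping readings to kg.
readings = np.random.default_rng(1).normal(20.0, 0.5, size=(100, 4))
print(superposed_weight(readings, np.ones(4)))  # approximately 80.0
```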
The method 700 includes transforming 740 the MSMDA data to weight. A variety of known or to be known techniques can be used to transform the sensor data, i.e. the MSMDA data, to weight.
The method 700 includes finalizing 750 the weight. In an implementation, finalizing the weight can include smoothing, checking against a range, checking against a dictionary, or checking against a past value. In an implementation, finalizing the weight can include adjustments due to other factors, such as bed type, bed size, location of the sleeper, position of the sleeper, orientation of the sleeper, and the like.
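A sketch of the finalizing step, with an assumed valid range, jump limit, and smoothing factor (none of which are fixed by the specification), follows.

```python
def finalize_weight(raw_weight, history, valid_range=(10.0, 300.0), max_step=5.0):
    """Finalizing sketch: range check, jump limit against the last
    accepted value, and exponential smoothing against past values.
    All thresholds are illustrative assumptions."""
    lo, hi = valid_range
    if not lo <= raw_weight <= hi:                  # reject out-of-range values
        return history[-1] if history else float("nan")
    if history and abs(raw_weight - history[-1]) > max_step:
        step = max_step if raw_weight > history[-1] else -max_step
        raw_weight = history[-1] + step             # limit sudden jumps
    smoothed = 0.8 * raw_weight + 0.2 * history[-1] if history else raw_weight
    history.append(smoothed)
    return smoothed

history = []
for reading in (82.0, 82.4, 99.9, 82.1):  # third reading is a spurious jump
    print(round(finalize_weight(reading, history), 2))
```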
The method 700 includes outputting 760 the weight. The weight is stored for use in the methods described herein.
Implementations of controller 200, controller 214, processor 422, and/or controller 414 (and the algorithms, methods, instructions, etc., stored thereon and/or executed thereby) can be realized in hardware, software, or any combination thereof. The hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors or any other suitable circuit. In the claims, the term “controller” should be understood as encompassing any of the foregoing hardware, either singly or in combination.
Further, in one aspect, for example, controller 200, controller 214, processor 422, and/or controller 414 can be implemented using a general purpose computer or general purpose processor with a computer program that, when executed, carries out any of the respective methods, algorithms and/or instructions described herein. In addition or alternatively, for example, a special purpose computer/processor can be utilized which can contain other hardware for carrying out any of the methods, algorithms, or instructions described herein.
Controller 200, controller 214, processor 422, and/or controller 414 can be one or multiple special purpose processors, digital signal processors, microprocessors, controllers, microcontrollers, application processors, central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays, any other type or combination of integrated circuits, state machines, or any combination thereof, in a distributed, centralized, and/or cloud-based architecture.
In general, a device includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject, an audio interface configured to communicate with the subject, and a processor in connection with the plurality of sensors and the audio interface. The processor is configured to determine biosignals from one or more of the acoustic signals and the force signals to monitor the subject's health status, and to detect the presence of speech in the acoustic signals. The audio interface is configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command in the detected speech.
In implementations, the processor is further configured to encrypt digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula. In implementations, the processor is further configured to compare the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determine the presence of the verbal command, identify a context of speech upon determination of the verbal command, and perform at least one of: initiating an interactive session, via the audio interface, with the at least one of the subject or another entity based on the verbal command and the context of speech, and determining a responsive action based on the verbal command and the context of speech. In implementations, the audio interface is further configured to recognize and respond to voice commands from designated individuals. In implementations, the processor is further configured to: compare the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determine the presence of the verbal command, analyze the acoustic signals to detect breathing disturbances upon failure to detect the verbal command, and determine a responsive action to detection of sleep disordered breathing (SDB). In implementations, the plurality of non-contact sensors are configured to capture force signals from subject actions with respect to the substrate, and the processor is further configured to perform at least one of cardiac analysis, respiratory analysis, and motion analysis based on the force signals to determine the subject's health status. In implementations, when performing breathing disturbances analysis to determine the subject's health status, the processor is further configured to: fuse the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals, detect sleep disordered breathing (SDB) using the fusion signals, the force signals, and the acoustic signals, and determine a responsive action to detection of the SDB. In implementations, the responsive action is one or more of: an audible tone, an audible message, a trigger for a home automation device, a trigger for a speech assistant device, a call to an entity or emergency services, marking data for future access, a database entry, and a health check-up. In implementations, the processor is further configured to determine an intensity, magnitude, duration, and type of the SDB.
In general, a system includes a speech capable device configured to communicate with at least one of a subject or an entity associated with the subject, and a device in communication with the speech capable device. The device includes a substrate configured to support the subject, a plurality of non-contact sensors configured to capture acoustic signals with respect to the subject and force signals from subject actions with respect to the substrate, and a processor in connection with the plurality of sensors and the audio interface. The processor is configured to: monitor the subject's health status based on the force signals and the acoustic signals, and detect a verbal command in the acoustic signals. The speech capable device is configured to interactively communicate with at least the subject or the entity based on at least one of a responsive action needed due to the subject's health status and detection of the verbal command.
In implementations, the processor is further configured to encrypt digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula. In implementations, the processor is further configured to compare the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determine the presence of the verbal command, identify a context of speech upon determination of the verbal command, and perform at least one of: initiating an interactive session, via the speech capable device, with the at least one of the subject or the entity based on the verbal command and the context of speech, and determining the responsive action based on the verbal command and the context of speech. In implementations, the speech capable device is further configured to recognize and respond to voice commands from designated individuals. In implementations, the processor is further configured to perform at least respiratory analysis based on the force signals, compare the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determine the presence of the verbal command, fuse the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals upon failure to detect the verbal command, detect sleep disordered breathing (SDB) using the fusion signals, the force signals, and the acoustic signals, and determine a responsive action to detection of the SDB. In implementations, the processor is further configured to determine an intensity, magnitude, duration, and type of the SDB. In implementations, the responsive action is one or more of: an audible tone, an audible message, a trigger for a home automation device, a trigger for a speech assistant device, a call to an entity or emergency services, marking data for future access, a database entry, and a health check-up.
In general, a method for determining item specific parameters includes capturing acoustic signals and force signals from a plurality of non-contact sensors placed relative to a subject on a substrate, determining at least biosignal information from the acoustic signals and the force signals, detecting a presence of speech in the acoustic signals, and interactively communicating with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command found in the detected speech.
In implementations, the method further includes encrypting digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula. In implementations, the method further includes comparing the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determining the presence of the verbal command, identifying a context of speech upon determination of the verbal command, and performing at least one of: initiating an interactive session, via the audio interface, with the at least one of the subject or another entity based on the verbal command and the context of speech, and determining a responsive action based on the verbal command and the context of speech. In implementations, the method further includes recognizing and responding to voice commands from designated individuals. In implementations, the method further includes comparing the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determining the presence of the verbal command, analyzing the acoustic signals to detect breathing disturbances upon failure to detect the verbal command, and determining a responsive action to detection of sleep disordered breathing (SDB). In implementations, the method further includes performing at least one of cardiac analysis, respiratory analysis, and motion analysis based on the force signals to determine the subject's health status. In implementations, the method further includes performing breathing disturbances analysis to determine the subject's health status, the performing further including fusing the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals, detecting sleep disordered breathing (SDB) using the fusion signals, the force signals, and the acoustic signals, and determining a responsive action to detection of the SDB. In implementations, the responsive action is one or more of: an audible tone, an audible message, a trigger for a home automation device, a trigger for a speech assistant device, a call to an entity or emergency services, marking data for future access, a database entry, and a health check-up. In implementations, the method further includes determining an intensity, magnitude, duration, and type of the SDB. In implementations, the method further includes performing at least respiratory analysis based on captured force signals, comparing the acoustic signals against a dictionary of electronic commands to discard unrelated conversations, determining the presence of the verbal command, fusing the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals upon failure to detect the verbal command, detecting sleep disordered breathing (SDB) using the fusion signals, the force signals, and the acoustic signals, and determining a responsive action to detection of the SDB.
In general, a device includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture force signals with respect to the subject, a processor in connection with the plurality of sensors, the processor configured to determine biosignals from the force signals to monitor the subject's health status, and an audio interface configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command received via a speech capable device.
In general, a device includes a substrate configured to support a subject, a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject, an audio interface configured to communicate with the subject, and a processor in connection with the plurality of sensors and the audio interface. The processor is configured to determine biosignals from one or more of the acoustic signals and the force signals to monitor the subject's health status, and to receive, from a speech detection entity, speech detected in the acoustic signals. The audio interface is configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the subject's health status and a verbal command in the detected speech.
The word “example,” “aspect,” or “embodiment” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as using one or more of these words is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word “example,” “aspect,” or “embodiment” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.

Claims (28)

What is claimed is:
1. A device comprising:
a substrate configured to support a subject;
a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject;
an audio interface; and
a processor in connection with the plurality of sensors and the audio interface, the processor configured to:
determine one or more biosignals from one or more of (i) the acoustic signals or (ii) the force signals to monitor a health status of the subject;
detect presence of speech in the acoustic signals; and
detect a verbal command in the speech,
wherein, in response to not detecting presence of the verbal command, the processor is configured to perform a sleep disordered breathing (SDB) analysis, and
wherein, in response to detecting the presence of the verbal command, the audio interface is configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the health status of the subject and the verbal command.
2. The device of claim 1, wherein the processor is further configured to: digitize the acoustic signals to obtain digitized acoustic signals, and encrypt the digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency than a frequency of the digitized acoustic signals, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula.
3. The device of claim 1, wherein the processor is further configured to:
identify a context of the speech upon detection of the verbal command; and
perform at least one of:
initiate an interactive session, via the audio interface, with the at least one of the subject or the entity associated with the subject based on the verbal command and the context of the speech; and
determine a responsive action based on the verbal command and the context of the speech.
4. The device of claim 1, wherein the audio interface is further configured to recognize and respond to the verbal command from a designated individual.
5. The device of claim 1, wherein the processor is further configured to:
compare the acoustic signals against a dictionary of electronic commands to identify unrelated conversations to discard.
6. The device of claim 1, wherein the plurality of non-contact sensors are configured to capture force signals from an action of the subject with respect to the substrate, and the processor is further configured to perform at least one of cardiac analysis, respiratory analysis, and motion analysis based on the force signals to determine the health status of the subject.
7. The device of claim 6, wherein the SDB analysis comprises breathing disturbances analysis, and wherein, when performing the breathing disturbances analysis to determine the health status of the subject, the processor is further configured to:
fuse the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals;
detect the SDB using at least one of: (iii) the fusion signals, (iv) the force signals, or (v) the acoustic signals; and
determine a responsive action to detection of the SDB.
8. The device of claim 7, wherein the responsive action is one or more of:
an audible tone;
an audible message;
a trigger for a home automation device;
a trigger for a speech assistant device;
a call to an entity or emergency services;
marking data for future access;
a database entry; and
a health check-up.
9. The device of claim 7, wherein the processor is further configured to determine an intensity, magnitude, duration, and type of the SDB.
10. A system comprising:
a speech capable device comprising an audio interface configured to communicate with at least one of a subject or an entity associated with the subject;
an apparatus in communication with the speech capable device, the apparatus comprising:
a substrate configured to support the subject;
a plurality of non-contact sensors configured to capture acoustic signals with respect to the subject and force signals from subject actions with respect to the substrate;
a processor in connection with the plurality of sensors and the audio interface, the processor configured to:
monitor a health status of the subject based on one or more of (i) the force signals; or (ii) the acoustic signals;
detect presence of speech in the acoustic signals; and
detect a verbal command in the speech,
wherein, in response to not detecting presence of the verbal command, the speech capable device is configured to perform a sleep disordered breathing (SDB) analysis, and
wherein, in response to detecting the presence of the verbal command, the speech capable device is configured to interactively communicate with at least the subject or the entity based on at least one of a responsive action needed due to the health status of the subject and the verbal command.
11. The system of claim 10, wherein the processor is further configured to: digitize the acoustic signals to obtain digitized acoustic signals, and encrypt the digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency than a frequency of the digitized acoustic signals, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula.
12. The system of claim 10, wherein the processor is further configured to:
compare the acoustic signals against a dictionary of electronic commands to identify unrelated conversations to discard;
determine the presence of the verbal command;
identify a context of the speech upon detection of the verbal command; and
perform at least one of:
initiate an interactive session, via the speech capable device, with the at least one of the subject or the entity based on the verbal command and the context of the speech; and
determine the responsive action based on the verbal command and the context of the speech.
13. The system of claim 10, wherein the speech capable device is further configured to recognize and respond to the verbal command from a designated individual.
14. The system of claim 10, wherein the processor is further configured to:
perform respiratory analysis based on the force signals;
compare the acoustic signals against a dictionary of electronic commands to identify unrelated conversations to discard;
fuse the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals in response to not detecting the presence of the verbal command;
detect the SDB using at least one of: (iii) the fusion signals, (iv) the force signals, or (v) the acoustic signals; and
determine a responsive action to detection of the SDB.
15. The system of claim 14, wherein the processor is further configured to determine an intensity, magnitude, duration, and type of the SDB.
16. The system of claim 14, wherein the responsive action is one or more of:
an audible tone;
an audible message;
a trigger for a home automation device;
a trigger for a speech assistant device;
a call to an entity or emergency services;
marking data for future access;
a database entry; and
a health check-up.
17. A method for determining item specific parameters, the method comprising:
capturing acoustic signals and force signals from a plurality of non-contact sensors placed relative to a subject on a substrate;
determining biosignal information from one or more of (i) the acoustic signals, or (ii) the force signals;
detecting presence of speech in the acoustic signals;
detecting a verbal command in the speech;
in response to not detecting presence of the verbal command, performing a sleep disordered breathing (SDB) analysis; and
in response to detecting the presence of the verbal command, interactively communicating with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to a health status of a subject and the verbal command.
18. The method of claim 17, the method further comprising:
digitizing the acoustic signals to obtain digitized acoustic signals; and
encrypting the digitized acoustic signals by at least one of: filtering the digitized acoustic signals to a lower and narrower frequency than a frequency of the digitized acoustic signals, masking the digitized acoustic signals using a mask template or an encryption key, and transforming the digitized acoustic signals using a mathematical formula.
19. The method of claim 17, the method further comprising:
identifying a context of the speech upon detection of the verbal command; and
performing at least one of:
initiating an interactive session, via an audio interface, with the at least one of the subject or the entity associated with the subject based on the verbal command and the context of the speech; and
determining a responsive action based on the verbal command and the context of the speech.
20. The method of claim 17, the method further comprising:
recognizing and responding to the verbal command from a designated individual.
21. The method of claim 17, the method further comprising:
comparing the acoustic signals against a dictionary of electronic commands to identify unrelated conversations to discard.
22. The method of claim 17, the method further comprising:
performing at least one of cardiac analysis, respiratory analysis, and motion analysis based on the force signals to determine the health status of the subject.
23. The method of claim 22, the method further comprising:
performing breathing disturbances analysis to determine the health status of the subject, the performing further comprising:
fusing the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals;
detecting the SDB using at least one of: (iii) the fusion signals, (iv) the force signals, or (v) the acoustic signals; and
determining a responsive action to detection of the SDB.
24. The method of claim 23, wherein the responsive action is one or more of:
an audible tone;
an audible message;
a trigger for a home automation device;
a trigger for a speech assistant device;
a call to an entity or emergency services;
marking data for future access;
a database entry; and
a health check-up.
25. The method of claim 23, the method further comprising:
determining an intensity, magnitude, duration, and type of the SDB.
26. The method of claim 17, the method further comprising:
performing respiratory analysis based on captured force signals;
comparing the acoustic signals against a dictionary of electronic commands to identify unrelated conversations to discard;
fusing the force signals and the acoustic signals based on one or more similarity metrics to generate fusion signals upon failure to detect the verbal command;
detecting the SDB using at least one of: (iii) the fusion signals, (iv) the force signals, or (v) the acoustic signals; and
determining a responsive action to detection of the SDB.
27. A device comprising:
a substrate configured to support a subject;
a plurality of non-contact sensors configured to capture force signals with respect to the subject;
a processor in connection with the plurality of sensors, the processor configured to:
determine biosignals from the force signals to monitor a health status of the subject;
detect presence of speech in acoustic signals; and
detect a verbal command in the speech; and
an audio interface,
wherein, in response to not detecting presence of the verbal command, the processor is configured to perform a sleep disordered breathing (SDB) analysis, and
wherein, in response to detecting the presence of the verbal command, the audio interface is configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the health status of the subject and the verbal command.
28. A device comprising:
a substrate configured to support a subject;
a plurality of non-contact sensors configured to capture acoustic signals and force signals with respect to the subject;
an audio interface;
a processor in connection with the plurality of sensors and the audio interface, the processor configured to:
determine biosignals from one or more of (i) the acoustic signals or (ii) the force signals to monitor a health status of the subject;
receive, from a speech detection entity, speech detected in the acoustic signals; and
detect a verbal command in the speech,
wherein, in response to not detecting presence of the verbal command, the processor is configured to perform a sleep disordered breathing (SDB) analysis, and
wherein, in response to detecting the presence of the verbal command, the audio interface is configured to interactively communicate with at least one of the subject or an entity associated with the subject based on at least one of an action needed due to the health status of the subject and the verbal command.
US17/112,177 2020-04-01 2020-12-04 Speech-controlled health monitoring systems and methods Active 2041-12-16 US11931168B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/112,177 US11931168B2 (en) 2020-04-01 2020-12-04 Speech-controlled health monitoring systems and methods

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063003551P 2020-04-01 2020-04-01
US17/112,177 US11931168B2 (en) 2020-04-01 2020-12-04 Speech-controlled health monitoring systems and methods

Publications (2)

Publication Number Publication Date
US20210307681A1 US20210307681A1 (en) 2021-10-07
US11931168B2 true US11931168B2 (en) 2024-03-19

Family

ID=77920651

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/112,177 Active 2041-12-16 US11931168B2 (en) 2020-04-01 2020-12-04 Speech-controlled health monitoring systems and methods
US17/112,074 Abandoned US20210307683A1 (en) 2020-04-01 2020-12-04 Systems and Methods for Remote Patient Screening and Triage

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/112,074 Abandoned US20210307683A1 (en) 2020-04-01 2020-12-04 Systems and Methods for Remote Patient Screening and Triage

Country Status (8)

Country Link
US (2) US11931168B2 (en)
EP (2) EP4125549A1 (en)
JP (2) JP2023532387A (en)
KR (2) KR20220162768A (en)
CN (2) CN115397310A (en)
AU (2) AU2020440233A1 (en)
CA (2) CA3173469A1 (en)
WO (2) WO2021201925A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021201925A1 (en) 2020-04-01 2021-10-07 UDP Labs, Inc. Speech-controlled health monitoring systems and methods

Citations (228)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4766628A (en) 1986-01-21 1988-08-30 Walker Robert A Air mattress with filler check valve and cap therefor
US4788729A (en) 1985-04-14 1988-12-06 Walker Robert A Air mattress with audible pressure relief valve
USD300194S (en) 1984-10-12 1989-03-14 Walker Robert A Air mattress
US4829616A (en) 1985-10-25 1989-05-16 Walker Robert A Air control system for air bed
US4897890A (en) 1983-01-05 1990-02-06 Walker Robert A Air control system for air bed
US4908895A (en) 1989-03-20 1990-03-20 Walker Robert A Air mattress
USD313973S (en) 1988-12-30 1991-01-22 Walker Robert A Hand-held control unit for the operation of an inflatable air mattress
US4991244A (en) 1990-01-05 1991-02-12 Walker Robert A Border for air bed
US5144706A (en) 1990-12-03 1992-09-08 Walker Robert A Bed foundation
US5170522A (en) 1991-12-16 1992-12-15 Select Comfort Corporation Air adjustable bed
US5430266A (en) 1993-02-03 1995-07-04 A-Dec, Inc. Control panel with sealed switch keypad
USD368475S (en) 1994-11-01 1996-04-02 Select Comfort Corporation Hand held remote control unit
US5509154A (en) 1994-11-01 1996-04-23 Select Comfort Corporation Air control system for an air bed
US5564140A (en) 1994-07-22 1996-10-15 Select Comfort Corporation Frame assembly for supporting a mattress
US5642546A (en) 1995-09-19 1997-07-01 Select Comfort Corporation Inflatable mattress with improved border support wall
US5904172A (en) 1997-07-28 1999-05-18 Select Comfort Corporation Valve enclosure assembly
CN2352936Y (en) 1998-04-14 1999-12-08 中国人民解放军北京军区总医院 Sectional weighing apparatus used for hospital bed
WO2000004828A1 (en) 1998-07-21 2000-02-03 Sensitive Technologies, Llc Respiration and movement monitoring system
CA2346207A1 (en) 1998-10-28 2000-05-04 Hill-Rom, Inc. Force optimization surface apparatus and method
US6108844A (en) 1998-03-11 2000-08-29 Sleeptec, Inc. Air mattress for a sleeper sofa
JP2001037729A (en) 1999-07-29 2001-02-13 Toshiba Corp Cardiac load test system
US6202239B1 (en) 1998-02-25 2001-03-20 Select Comfort Corp. Multi-zone support
JP2001178834A (en) 1999-12-27 2001-07-03 Mitsubishi Electric Corp Charged particle irradiation system
JP2001252253A (en) 2000-03-13 2001-09-18 Hitachi Ltd Equipment for measuring biological magnetic field
EP1180352A1 (en) 1999-03-25 2002-02-20 Matsushita Seiko Co.Ltd. Device for moving body
US6397419B1 (en) 1999-03-10 2002-06-04 Select Comfort Corporation System and method for sleep surface adjustment
US20030052787A1 (en) 2001-08-03 2003-03-20 Zerhusen Robert Mark Patient point-of-care computer system
US6686711B2 (en) 2000-11-15 2004-02-03 Comfortaire Corporation Air mattress control system and method
US6708357B2 (en) 2002-01-14 2004-03-23 Select Comfort Corporation Corner piece for a soft-sided mattress
US6763541B2 (en) 2001-06-07 2004-07-20 Select Comfort Corporation Interactive air bed
US6804848B1 (en) 2003-03-14 2004-10-19 Comfortaire Corporation High-profile mattress having an upper low-profile module with an air posturizing sleep surface
US6832397B2 (en) 2000-07-07 2004-12-21 Select Comfort Corporation Bed foundation
USD502929S1 (en) 2004-03-02 2005-03-15 Select Comfort Corporation Remote control
US6883191B2 (en) 2000-07-07 2005-04-26 Select Comfort Corporation Leg and bracket assembly for a bed foundation
US20060116589A1 (en) 2004-11-22 2006-06-01 Jawon Medical Co., Ltd. Weight scale having function of pulse rate meter or heartbeat rate meter
US7107095B2 (en) 2002-04-30 2006-09-12 Jan Manolas Device for and method of rapid noninvasive measurement of parameters of diastolic function of left ventricle and automated evaluation of the measured profile of left ventricular function at rest and with exercise
JP2007135863A (en) 2005-11-18 2007-06-07 Terumo Corp Monitoring system for monitoring condition of subject
US20070157385A1 (en) 2005-12-19 2007-07-12 Stryker Corporation Hospital bed
US20070164871A1 (en) 2005-02-23 2007-07-19 Stryker Canadian Management, Inc. Diagnostic and control system for a patient support
US20080005843A1 (en) 2004-04-30 2008-01-10 Tactex Controls Inc. Body Support Apparatus Having Automatic Pressure Control and Related Methods
US7343197B2 (en) 2000-05-30 2008-03-11 Vladimir Shusterman Multi-scale analysis and representation of physiological and health data
US20080077020A1 (en) 2006-09-22 2008-03-27 Bam Labs, Inc. Method and apparatus for monitoring vital signs remotely
US20080126122A1 (en) 2006-11-28 2008-05-29 General Electric Company Smart bed system and apparatus
US20080120784A1 (en) 2006-11-28 2008-05-29 General Electric Company Smart bed system and apparatus
US20080122616A1 (en) 2006-11-28 2008-05-29 General Electric Company Smart bed method
CN101201273A (en) 2006-12-11 2008-06-18 大隈株式会社 Method for detecting abnormality of temperature sensor in machine tool
US20080146890A1 (en) 2006-12-19 2008-06-19 Valencell, Inc. Telemetric apparatus for health and environmental monitoring
US20080235872A1 (en) 2007-03-30 2008-10-02 Newkirk David C User interface for hospital bed
US20090177495A1 (en) 2006-04-14 2009-07-09 Fuzzmed Inc. System, method, and device for personal medical care, intelligent analysis, and diagnosis
US7666151B2 (en) 2002-11-20 2010-02-23 Hoana Medical, Inc. Devices and methods for passive patient monitoring
USD611499S1 (en) 2009-01-26 2010-03-09 A-Dec, Inc. Controller for a bed or chair with symbol
USD611498S1 (en) 2009-01-26 2010-03-09 A-Dec, Inc. Controller for a bed or chair with symbol set
US20100170043A1 (en) 2009-01-06 2010-07-08 Bam Labs, Inc. Apparatus for monitoring vital signs
JP2010160783A (en) 2008-12-12 2010-07-22 Flower Robotics Inc Information providing system, portable information terminal, and information management device
US7865988B2 (en) 2004-03-16 2011-01-11 Select Comfort Corporation Sleeping surface having two longitudinally connected bladders with a support member
CN102080534A (en) 2009-11-30 2011-06-01 上海神开石油化工装备股份有限公司 Speed oil filling device for pulse generator of wireless inclinometer and using method thereof
US20110144455A1 (en) 2007-08-31 2011-06-16 Bam Labs, Inc. Systems and methods for monitoring a subject at rest
USD640280S1 (en) 2010-06-25 2011-06-21 Microsoft Corporation Display screen with user interface
US20110224510A1 (en) 2010-01-29 2011-09-15 Dreamwell, Ltd. Systems and Methods for Bedding with Sleep Diagnostics
US20110265003A1 (en) 2008-05-13 2011-10-27 Apple Inc. Pushing a user interface to a remote device
US20120089419A1 (en) 2010-10-08 2012-04-12 Huster Keith A Hospital bed with graphical user interface having advanced functionality
US8287452B2 (en) 2009-01-07 2012-10-16 Bam Labs, Inc. Apparatus for monitoring vital signs of an emergency victim
US20120265024A1 (en) 2010-10-05 2012-10-18 University Of Florida Research Foundation, Incorporated Systems and methods of screening for medical states using speech and other vocal behaviors
USD669499S1 (en) 2011-03-21 2012-10-23 Microsoft Corporation Display screen with animated user icon
US8336369B2 (en) 2007-05-24 2012-12-25 Select Comfort Corporation System and method for detecting a leak in an air bed
USD674400S1 (en) 2009-09-14 2013-01-15 Microsoft Corporation Display screen with user interface
USD678312S1 (en) 2008-05-20 2013-03-19 Apple Inc. Display screen or portion thereof with graphical user interface
US8444558B2 (en) 2009-01-07 2013-05-21 Bam Labs, Inc. Apparatus for monitoring vital signs having fluid bladder beneath padding
USD690723S1 (en) 2011-11-03 2013-10-01 Blackberry Limited Display screen with keyboard graphical user interface
USD691118S1 (en) 2013-03-14 2013-10-08 Select Comfort Corporation Remote control
US20130267791A1 (en) * 2008-05-12 2013-10-10 Earlysense Ltd. Monitoring, predicting and treating clinical episodes
JP2013215252A (en) 2012-04-05 2013-10-24 Tanita Corp Biological information measuring apparatus
CN103381123A (en) 2013-06-13 2013-11-06 厚福医疗装备有限公司 High-precision dynamic weighing sickbed system and automatic control method thereof
US20130332318A1 (en) 2012-06-10 2013-12-12 Apple Inc. User Interface for In-Browser Product Viewing and Purchasing
USD696268S1 (en) 2011-10-06 2013-12-24 Samsung Electronics Co., Ltd. Mobile phone displaying graphical user interface
USD696271S1 (en) 2011-10-06 2013-12-24 Samsung Electronics Co., Ltd. Mobile phone displaying graphical user interface
US20130340168A1 (en) 2012-06-21 2013-12-26 Hill-Rom Services, Inc. Patient support systems and methods of use
USD696677S1 (en) 2011-10-14 2013-12-31 Nest Labs, Inc. Display screen or portion thereof with a graphical user interface
USD697874S1 (en) 2013-03-15 2014-01-21 Select Comfort Corporation Remote control
USD698338S1 (en) 2013-03-14 2014-01-28 Select Comfort Corporation Remote control
US20140026322A1 (en) 2012-07-24 2014-01-30 Randall J. Bell Proxy caregiver interface
US20140066798A1 (en) 2012-08-30 2014-03-06 David E. Albert Cardiac performance monitoring system for use with mobile communications devices
US8672853B2 (en) 2010-06-15 2014-03-18 Bam Labs, Inc. Pressure sensor for monitoring a subject and pressure sensor with inflatable bladder
US8672842B2 (en) 2010-08-24 2014-03-18 Evacusled Inc. Smart mattress
USD701536S1 (en) 2013-07-26 2014-03-25 Select Comfort Corporation Air pump
US8769747B2 (en) 2008-04-04 2014-07-08 Select Comfort Corporation System and method for improved pressure adjustment
USD709909S1 (en) 2012-04-30 2014-07-29 Blackberry Limited Display screen with keyboard graphical user interface
US20140250597A1 (en) 2013-03-11 2014-09-11 Select Comfort Corporation Adjustable bed foundation system with built-in self-test
US20140277822A1 (en) 2013-03-14 2014-09-18 Rob Nunn Inflatable air mattress sleep environment adjustment and suggestions
US20140259418A1 (en) 2013-03-14 2014-09-18 Rob Nunn Inflatable air mattress with light and voice controls
US8892679B1 (en) 2013-09-13 2014-11-18 Box, Inc. Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform
US8893339B2 (en) 2013-03-14 2014-11-25 Select Comfort Corporation System and method for adjusting settings of a bed with a remote control
USD720762S1 (en) 2012-02-09 2015-01-06 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
US20150007393A1 (en) 2013-07-02 2015-01-08 Select Comfort Corporation Controller for multi-zone fluid chamber mattress system
US20150025327A1 (en) 2013-07-18 2015-01-22 Bam Labs, Inc. Device and Method of Monitoring a Position and Predicting an Exit of a Subject on or from a Substrate
US8966689B2 (en) 2012-11-19 2015-03-03 Select Comfort Corporation Multi-zone fluid chamber and mattress system
US8973183B1 (en) 2014-01-02 2015-03-10 Select Comfort Corporation Sheet for a split-top adjustable bed
US8984687B2 (en) 2013-03-14 2015-03-24 Select Comfort Corporation Partner snore feature for adjustable bed foundation
US20150097682A1 (en) 2013-10-07 2015-04-09 Google Inc. Mobile user interface for smart-home hazard detector configuration
US9005101B1 (en) 2014-01-04 2015-04-14 Julian Van Erlach Smart surface biological sensor and therapy administration
US20150169840A1 (en) 2011-08-05 2015-06-18 Alere San Diego, Inc. Methods and compositions for monitoring heart failure
US20150182418A1 (en) 2014-01-02 2015-07-02 Select Comfort Corporation Massage furniture item and method of operation
US20150182397A1 (en) 2014-01-02 2015-07-02 Select Comfort Corporation Adjustable bed system having split-head and joined foot configuration
US20150182399A1 (en) 2014-01-02 2015-07-02 Select Comfort Corporation Adjustable bed system with split head and split foot configuration
CN104822355A (en) 2012-07-20 2015-08-05 费诺-华盛顿公司 Automated systems for powered cots
USD737250S1 (en) 2013-03-14 2015-08-25 Select Comfort Corporation Remote control
US9131781B2 (en) 2012-12-27 2015-09-15 Select Comfort Corporation Distribution pad for a temperature control system
US20150277703A1 (en) 2014-02-25 2015-10-01 Stephen Rhett Davis Apparatus for digital signage alerts
USD743976S1 (en) 2014-01-10 2015-11-24 Aliphcom Display screen or portion thereof with graphical user interface
USD745884S1 (en) 2013-12-04 2015-12-22 Medtronic, Inc. Display screen or portion thereof with graphical user interface
US9271665B2 (en) 2011-05-20 2016-03-01 The Regents Of The University Of California Fabric-based pressure sensor arrays and methods for data analysis
US20160058337A1 (en) 2014-09-02 2016-03-03 Apple Inc. Physical activity and workout monitor
USD752624S1 (en) 2014-09-01 2016-03-29 Apple Inc. Display screen or portion thereof with graphical user interface
US20160100696A1 (en) 2014-10-10 2016-04-14 Select Comfort Corporation Bed having logic controller
US20160110986A1 (en) 2014-10-21 2016-04-21 Kenneth Lawrence Rosenblood Posture improvement device, system, and method
USD754672S1 (en) 2014-01-10 2016-04-26 Aliphcom Display screen or portion thereof with graphical user interface
USD755823S1 (en) 2014-09-02 2016-05-10 Apple Inc. Display screen or portion thereof with graphical user interface
US9370457B2 (en) 2013-03-14 2016-06-21 Select Comfort Corporation Inflatable air mattress snoring detection and response
US9375142B2 (en) 2012-03-15 2016-06-28 Siemens Aktiengesellschaft Learning patient monitoring and intervention system
USD761293S1 (en) 2014-10-17 2016-07-12 Robert Bosch Gmbh Display screen with graphical user interface
US9392879B2 (en) 2013-03-14 2016-07-19 Select Comfort Corporation Inflatable air mattress system architecture
USD762716S1 (en) 2014-02-21 2016-08-02 Huawei Device Co., Ltd. Display screen or portion thereof with animated graphical user interface
CN105877712A (en) 2016-06-19 2016-08-24 河北工业大学 Multifunctional intelligent bed system
US20160242562A1 (en) 2015-02-24 2016-08-25 Select Comfort Corporation Mattress with Adjustable Firmness
WO2016170005A1 (en) 2015-04-20 2016-10-27 Resmed Sensor Technologies Limited Detection and identification of a human from characteristic signals
USD771123S1 (en) 2014-09-01 2016-11-08 Apple Inc. Display screen or portion thereof with multi-state graphical user interface
USD772905S1 (en) 2014-11-14 2016-11-29 Volvo Car Corporation Display screen with graphical user interface
US9504416B2 (en) 2013-07-03 2016-11-29 Sleepiq Labs Inc. Smart seat monitoring system
US9510688B2 (en) 2013-03-14 2016-12-06 Select Comfort Corporation Inflatable air mattress system with detection techniques
US20160353996A1 (en) 2015-06-05 2016-12-08 The Arizona Board Of Regents On Behalf Of The University Of Arizona Systems and methods for real-time signal processing and fitting
USD774071S1 (en) 2012-09-07 2016-12-13 Bank Of America Corporation Communication device with graphical user interface
US20160367039A1 (en) 2015-06-16 2016-12-22 Sleepiq Labs Inc. Device and Method of Automated Substrate Control and Non-Intrusive Subject Monitoring
USD775631S1 (en) 2013-01-09 2017-01-03 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
US20170003666A1 (en) 2015-07-02 2017-01-05 Select Comfort Corporation Automation for improved sleep quality
USD778301S1 (en) 2015-03-27 2017-02-07 Showa Corporation Display screen with graphical user interface
US20170055896A1 (en) 2015-08-31 2017-03-02 Masimo Corporation Systems and methods to monitor repositioning of a patient
JP2017510390A (en) 2014-01-27 2017-04-13 リズム ダイアグノスティック システムズ,インク. System and method for monitoring health status
USD785003S1 (en) 2013-09-03 2017-04-25 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
WO2017068581A1 (en) 2015-10-20 2017-04-27 Healthymize Ltd System and method for monitoring and determining a medical condition of a user
US9635953B2 (en) 2013-03-14 2017-05-02 Sleepiq Labs Inc. Inflatable air mattress autofill and off bed pressure adjustment
USD785660S1 (en) 2015-12-23 2017-05-02 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD787533S1 (en) 2014-09-02 2017-05-23 Apple Inc. Display screen or portion thereof with graphical user interface
USD787551S1 (en) 2015-02-27 2017-05-23 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD789391S1 (en) 2014-12-31 2017-06-13 Dexcom, Inc. Display screen or portion thereof with graphical user interface and icons
USD789956S1 (en) 2015-04-16 2017-06-20 Honeywell International Inc. Display screen or portion thereof with graphical user interface
US20170191516A1 (en) 2015-12-31 2017-07-06 Select Comfort Corporation Foundation and frame for bed
WO2017122178A1 (en) 2016-01-14 2017-07-20 King Abdullah University Of Science And Technology Paper based electronics platform
USD792908S1 (en) 2014-08-27 2017-07-25 Janssen Pharmaceutica Nv Display screen or portion thereof with icon
US9730524B2 (en) 2013-03-11 2017-08-15 Select Comfort Corporation Switching means for an adjustable foundation system
US20170231545A1 (en) 2016-02-14 2017-08-17 Earlysense Ltd. Apparatus and methods for monitoring a subject
US20170255751A1 (en) 2014-09-15 2017-09-07 Geetha Sanmugalingham System and method for collection, storage and management of medical data
US9770114B2 (en) 2013-12-30 2017-09-26 Select Comfort Corporation Inflatable air mattress with integrated control
US20170281054A1 (en) * 2016-03-31 2017-10-05 Zoll Medical Corporation Systems and methods of tracking patient movement
USD799518S1 (en) 2016-06-11 2017-10-10 Apple Inc. Display screen or portion thereof with graphical user interface
USD800140S1 (en) 2013-09-03 2017-10-17 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD800162S1 (en) 2016-03-22 2017-10-17 Teletracking Technologies, Inc. Display screen with graphical user interface icon
US20170354268A1 (en) 2013-12-30 2017-12-14 Select Comfort Corporation Inflatable Air Mattress With Integrated Control
US20170374186A1 (en) 2016-06-24 2017-12-28 Sandisk Technologies Llc Mobile Device with Unified Media-Centric User Interface
USD809843S1 (en) 2016-11-09 2018-02-13 Sleep Number Corporation Bed foundation
USD812393S1 (en) 2016-09-15 2018-03-13 Sleep Number Corporation Bed
US9924813B1 (en) 2015-05-29 2018-03-27 Sleep Number Corporation Bed sheet system
US20180116415A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Bed with foot warming system
US20180116418A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Noise Reducing Plunger
US20180116420A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Air Manifold
US20180116419A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Air Controller With Vibration Isolators
US20180119686A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Pump With Vibration Isolators
US20180125260A1 (en) 2016-11-09 2018-05-10 Select Comfort Corporation Bed With Magnetic Couplers
US20180125259A1 (en) 2016-11-09 2018-05-10 Select Comfort Corporation Bed With Magnetic Couplers
US20180184920A1 (en) 2017-01-05 2018-07-05 Livemetric (Medical) S.A. System and method for providing user feeedback of blood pressure sensor placement and contact quality
USD822708S1 (en) 2016-02-26 2018-07-10 Ge Healthcare Uk Limited Display screen with a graphical user interface
CN207837242U (en) 2017-06-19 2018-09-11 佛山市南海区金龙恒家具有限公司 Intelligent digital sleep detection mattress
US10092242B2 (en) 2015-01-05 2018-10-09 Sleep Number Corporation Bed with user occupancy tracking
CN108697241A (en) 2015-12-30 2018-10-23 德沃特奥金有限公司 Sleep with sensor or dependence furniture
CN108784127A (en) 2018-06-14 2018-11-13 深圳市三分之睡眠科技有限公司 A kind of Automatic adjustment method and intelligent control bed for bed
USD834593S1 (en) 2015-10-21 2018-11-27 Manitou Bf (Societe Anonyme) Display screen or portion thereof with graphical user interface
US20180341448A1 (en) 2016-09-06 2018-11-29 Apple Inc. Devices, Methods, and Graphical User Interfaces for Wireless Pairing with Peripheral Devices and Displaying Status Information Concerning the Peripheral Devices
US10143312B2 (en) 2014-04-15 2018-12-04 Sleep Number Corporation Adjustable bed system
US10149549B2 (en) 2015-08-06 2018-12-11 Sleep Number Corporation Diagnostics of bed and bedroom environment
US20180353085A1 (en) 2017-06-09 2018-12-13 Anthony Olivero Portable biometric monitoring device and method for use thereof
US10182661B2 (en) 2013-03-14 2019-01-22 Sleep Number Corporation and Select Comfort Retail Corporation Inflatable air mattress alert and monitoring system
USD840428S1 (en) 2017-01-13 2019-02-12 Adp, Llc Display screen with a graphical user interface
US20190059603A1 (en) 2017-08-23 2019-02-28 Sleep Number Corporation Air system for a bed
WO2019081915A1 (en) 2017-10-24 2019-05-02 Cambridge Cognition Limited System and method for assessing physiological state
US10314407B1 (en) 2014-04-30 2019-06-11 Xsensor Technology Corporation Intelligent sleep ecosystem
US20190201268A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having snore detection feature
US20190201269A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having sleep stage detecting feature
US20190201266A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having rollover identifying feature
US20190200777A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having sensor features for determining snore and breathing parameters of two sleepers
US20190206416A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Home automation having user privacy protections
US20190201271A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Snore sensing bed
US20190201270A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having snore control based on partner response
US20190201267A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having sensor fusing features useful for determining snore and breathing parameters
US20190201265A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having presence detecting feature
US10342358B1 (en) 2014-10-16 2019-07-09 Sleep Number Corporation Bed with integrated components and features
US20190209405A1 (en) 2018-01-05 2019-07-11 Sleep Number Corporation Bed having physiological event detecting feature
US20190220511A1 (en) 2016-06-22 2019-07-18 Huawei Technologies Co., Ltd. Method and apparatus for displaying candidate word, and graphical user interface
US10360368B2 (en) 2013-12-27 2019-07-23 Abbott Diabetes Care Inc. Application interface and display control in an analyte monitoring environment
USD855643S1 (en) 2018-02-21 2019-08-06 Early Warning Services, Llc Display screen portion with graphical user interface for entry of mobile number data
US20190279745A1 (en) 2018-03-07 2019-09-12 Sleep Number Corporation Home based stress test
US20200075136A1 (en) 2016-11-10 2020-03-05 Sonde Health, Inc. System and method for activation and deactivation of cued health assessment
US20200110194A1 (en) 2018-10-08 2020-04-09 UDP Labs, Inc. Multidimensional Multivariate Multiple Sensor System
US20200146910A1 (en) 2018-11-14 2020-05-14 Sleep Number Corporation Using force sensors to determine sleep parameters
US20200163627A1 (en) 2018-10-08 2020-05-28 UDP Labs, Inc. Systems and Methods for Generating Synthetic Cardio-Respiratory Signals
US20200202120A1 (en) * 2018-12-20 2020-06-25 Koninklijke Philips N.V. System and method for providing sleep positional therapy and paced breathing
US20200205580A1 (en) 2018-12-31 2020-07-02 Sleep Number Corporation Home automation with features to improve sleep
US20200227160A1 (en) * 2019-01-15 2020-07-16 Youngblood Ip Holdings, Llc Health data exchange platform
USD890792S1 (en) 2017-08-10 2020-07-21 Jpmorgan Chase Bank, N.A. Display screen or portion thereof with a graphical user interface
US10729253B1 (en) 2016-11-09 2020-08-04 Sleep Number Corporation Adjustable foundation with service position
USD896266S1 (en) 2018-11-05 2020-09-15 Stryker Corporation Display screen or portion thereof with graphical user interface
US20200337470A1 (en) 2019-04-25 2020-10-29 Sleep Number Corporation Bed having features for improving a sleeper's body thermoregulation during sleep
USD902244S1 (en) 2019-02-25 2020-11-17 Juul Labs, Inc. Display screen or portion thereof with animated graphical user interface
USD903700S1 (en) 2017-08-10 2020-12-01 Jpmorgan Chase Bank, N.A. Display screen or portion thereof with a graphical user interface
US20210022667A1 (en) 2019-07-26 2021-01-28 Sleep Number Corporation Long term sensing of sleep phenomena
USD916745S1 (en) 2019-05-08 2021-04-20 Sleep Number Corporation Display screen or portion thereof with graphical user interface
US11001447B2 (en) 2018-09-05 2021-05-11 Sleep Number Corporation Lifting furniture
US20210150873A1 (en) * 2017-12-22 2021-05-20 Resmed Sensor Technologies Limited Apparatus, system, and method for motion sensing
US20210307683A1 (en) 2020-04-01 2021-10-07 UDP Labs, Inc. Systems and Methods for Remote Patient Screening and Triage
USD932808S1 (en) 2016-11-09 2021-10-12 Select Comfort Corporation Mattress
US20220007965A1 (en) * 2018-11-19 2022-01-13 Resmed Sensor Technologies Limited Methods and apparatus for detection of disordered breathing
US20220133164A1 (en) 2020-10-30 2022-05-05 Sleep Number Corporation Bed having controller for tracking sleeper heart rate variability
US20220175600A1 (en) 2020-12-04 2022-06-09 Sleep Number Corporation Bed having features for automatic sensing of illness states
US11399636B2 (en) 2019-04-08 2022-08-02 Sleep Number Corporation Bed having environmental sensing and control features
US11424646B2 (en) 2019-04-16 2022-08-23 Sleep Number Corporation Pillow with wireless charging
US20220265223A1 (en) 2021-02-16 2022-08-25 Sleep Number Corporation Bed having features for sensing sleeper pressure and generating estimates of brain activity
USD975121S1 (en) 2019-01-08 2023-01-10 Sleep Number Corporation Display screen or portion thereof with graphical user interface
US20230046169A1 (en) 2021-08-10 2023-02-16 Sleep Number Corporation Bed having features for controlling heating of a bed to reduce health risk of a sleeper
US20230190183A1 (en) 2021-12-16 2023-06-22 Sleep Number Corporation Sleep system with features for personalized daytime alertness quantification
US20230218225A1 (en) 2022-01-11 2023-07-13 Sleep Number Corporation Centralized hub device for determining and displaying health-related metrics

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3838138A3 (en) * 2015-08-26 2021-09-15 ResMed Sensor Technologies Limited Systems and methods for monitoring and management of chronic disease
KR101783183B1 (en) * 2016-02-18 2017-10-23 아주대학교 산학협력단 Method and apparatus for emotion classification of smart device user
KR102080534B1 (en) * 2018-12-05 2020-02-25 주식회사 헬스브릿지 Customized health care service system

Patent Citations (272)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4890344A (en) 1983-01-05 1990-01-02 Walker Robert A Air control system for air bed
US4897890A (en) 1983-01-05 1990-02-06 Walker Robert A Air control system for air bed
USD300194S (en) 1984-10-12 1989-03-14 Walker Robert A Air mattress
US4788729A (en) 1985-04-14 1988-12-06 Walker Robert A Air mattress with audible pressure relief valve
US4829616A (en) 1985-10-25 1989-05-16 Walker Robert A Air control system for air bed
US4766628A (en) 1986-01-21 1988-08-30 Walker Robert A Air mattress with filler check valve and cap therefor
USD313973S (en) 1988-12-30 1991-01-22 Walker Robert A Hand-held control unit for the operation of an inflatable air mattress
US4908895A (en) 1989-03-20 1990-03-20 Walker Robert A Air mattress
US4991244A (en) 1990-01-05 1991-02-12 Walker Robert A Border for air bed
US5144706A (en) 1990-12-03 1992-09-08 Walker Robert A Bed foundation
US5170522A (en) 1991-12-16 1992-12-15 Select Comfort Corporation Air adjustable bed
US5430266A (en) 1993-02-03 1995-07-04 A-Dec, Inc. Control panel with sealed switch keypad
US5564140A (en) 1994-07-22 1996-10-15 Select Comfort Corporation Frame assembly for supporting a mattress
US5509154A (en) 1994-11-01 1996-04-23 Select Comfort Corporation Air control system for an air bed
US6483264B1 (en) 1994-11-01 2002-11-19 Select Comfort Corporation Air control system for an air bed
US5652484A (en) 1994-11-01 1997-07-29 Select Comfort Corporation Air control system for an air bed
US5903941A (en) 1994-11-01 1999-05-18 Select Comfort Corporation Air control system for an air bed
USD368475S (en) 1994-11-01 1996-04-02 Select Comfort Corporation Hand held remote control unit
US6037723A (en) 1994-11-01 2000-03-14 Select Comfort Corporation Air control system for an air bed
US5642546A (en) 1995-09-19 1997-07-01 Select Comfort Corporation Inflatable mattress with improved border support wall
US5765246A (en) 1995-09-19 1998-06-16 Select Comfort Corporation Inflatable mattress with improved border support wall
US5904172A (en) 1997-07-28 1999-05-18 Select Comfort Corporation Valve enclosure assembly
US6202239B1 (en) 1998-02-25 2001-03-20 Select Comfort Corp. Multi-zone support
US6161231A (en) 1998-03-11 2000-12-19 Sleeptec, Inc. Sleeper sofa with an air mattress
US6108844A (en) 1998-03-11 2000-08-29 Sleeptec, Inc. Air mattress for a sleeper sofa
CN2352936Y (en) 1998-04-14 1999-12-08 中国人民解放军北京军区总医院 Sectional weighing apparatus used for hospital bed
WO2000004828A1 (en) 1998-07-21 2000-02-03 Sensitive Technologies, Llc Respiration and movement monitoring system
CA2346207A1 (en) 1998-10-28 2000-05-04 Hill-Rom, Inc. Force optimization surface apparatus and method
US6397419B1 (en) 1999-03-10 2002-06-04 Select Comfort Corporation System and method for sleep surface adjustment
EP1180352A1 (en) 1999-03-25 2002-02-20 Matsushita Seiko Co., Ltd. Device for moving body
JP2001037729A (en) 1999-07-29 2001-02-13 Toshiba Corp Cardiac load test system
JP2001178834A (en) 1999-12-27 2001-07-03 Mitsubishi Electric Corp Charged particle irradiation system
JP2001252253A (en) 2000-03-13 2001-09-18 Hitachi Ltd Equipment for measuring biological magnetic field
US7343197B2 (en) 2000-05-30 2008-03-11 Vladimir Shusterman Multi-scale analysis and representation of physiological and health data
US6883191B2 (en) 2000-07-07 2005-04-26 Select Comfort Corporation Leg and bracket assembly for a bed foundation
US6832397B2 (en) 2000-07-07 2004-12-21 Select Comfort Corporation Bed foundation
US6686711B2 (en) 2000-11-15 2004-02-03 Comfortaire Corporation Air mattress control system and method
US6763541B2 (en) 2001-06-07 2004-07-20 Select Comfort Corporation Interactive air bed
US20030052787A1 (en) 2001-08-03 2003-03-20 Zerhusen Robert Mark Patient point-of-care computer system
US6708357B2 (en) 2002-01-14 2004-03-23 Select Comfort Corporation Corner piece for a soft-sided mattress
US7107095B2 (en) 2002-04-30 2006-09-12 Jan Manolas Device for and method of rapid noninvasive measurement of parameters of diastolic function of left ventricle and automated evaluation of the measured profile of left ventricular function at rest and with exercise
US7666151B2 (en) 2002-11-20 2010-02-23 Hoana Medical, Inc. Devices and methods for passive patient monitoring
US6804848B1 (en) 2003-03-14 2004-10-19 Comfortaire Corporation High-profile mattress having an upper low-profile module with an air posturizing sleep surface
US7389554B1 (en) 2003-03-14 2008-06-24 Comfortaire Corporation Air sleep system with dual elevating air posturizing sleep surfaces
USD502929S1 (en) 2004-03-02 2005-03-15 Select Comfort Corporation Remote control
US7865988B2 (en) 2004-03-16 2011-01-11 Select Comfort Corporation Sleeping surface having two longitudinally connected bladders with a support member
US20080005843A1 (en) 2004-04-30 2008-01-10 Tactex Controls Inc. Body Support Apparatus Having Automatic Pressure Control and Related Methods
US20060116589A1 (en) 2004-11-22 2006-06-01 Jawon Medical Co., Ltd. Weight scale having function of pulse rate meter or heartbeat rate meter
US20070164871A1 (en) 2005-02-23 2007-07-19 Stryker Canadian Management, Inc. Diagnostic and control system for a patient support
JP2007135863A (en) 2005-11-18 2007-06-07 Terumo Corp Monitoring system for monitoring condition of subject
US20070157385A1 (en) 2005-12-19 2007-07-12 Stryker Corporation Hospital bed
US20090177495A1 (en) 2006-04-14 2009-07-09 Fuzzmed Inc. System, method, and device for personal medical care, intelligent analysis, and diagnosis
US20080077020A1 (en) 2006-09-22 2008-03-27 Bam Labs, Inc. Method and apparatus for monitoring vital signs remotely
US20080122616A1 (en) 2006-11-28 2008-05-29 General Electric Company Smart bed method
US20080120784A1 (en) 2006-11-28 2008-05-29 General Electric Company Smart bed system and apparatus
US20080126122A1 (en) 2006-11-28 2008-05-29 General Electric Company Smart bed system and apparatus
CN101201273A (en) 2006-12-11 2008-06-18 大隈株式会社 Method for detecting abnormality of temperature sensor in machine tool
US20080146890A1 (en) 2006-12-19 2008-06-19 Valencell, Inc. Telemetric apparatus for health and environmental monitoring
US20080235872A1 (en) 2007-03-30 2008-10-02 Newkirk David C User interface for hospital bed
US8931329B2 (en) 2007-05-24 2015-01-13 Select Comfort Corporation System and method for detecting a leak in an air bed
US8336369B2 (en) 2007-05-24 2012-12-25 Select Comfort Corporation System and method for detecting a leak in an air bed
US20110144455A1 (en) 2007-08-31 2011-06-16 Bam Labs, Inc. Systems and methods for monitoring a subject at rest
US20170318980A1 (en) 2008-04-04 2017-11-09 Select Comfort Corporation System and Method for Improved Pressure Adjustment
US8769747B2 (en) 2008-04-04 2014-07-08 Select Comfort Corporation System and method for improved pressure adjustment
US9737154B2 (en) 2008-04-04 2017-08-22 Select Comfort Corporation System and method for improved pressure adjustment
US20130267791A1 (en) * 2008-05-12 2013-10-10 Earlysense Ltd. Monitoring, predicting and treating clinical episodes
US20110265003A1 (en) 2008-05-13 2011-10-27 Apple Inc. Pushing a user interface to a remote device
USD678312S1 (en) 2008-05-20 2013-03-19 Apple Inc. Display screen or portion thereof with graphical user interface
JP2010160783A (en) 2008-12-12 2010-07-22 Flower Robotics Inc Information providing system, portable information terminal, and information management device
US20100170043A1 (en) 2009-01-06 2010-07-08 Bam Labs, Inc. Apparatus for monitoring vital signs
US8287452B2 (en) 2009-01-07 2012-10-16 Bam Labs, Inc. Apparatus for monitoring vital signs of an emergency victim
US8444558B2 (en) 2009-01-07 2013-05-21 Bam Labs, Inc. Apparatus for monitoring vital signs having fluid bladder beneath padding
USD611498S1 (en) 2009-01-26 2010-03-09 A-Dec, Inc. Controller for a bed or chair with symbol set
USD611499S1 (en) 2009-01-26 2010-03-09 A-Dec, Inc. Controller for a bed or chair with symbol
USD674400S1 (en) 2009-09-14 2013-01-15 Microsoft Corporation Display screen with user interface
CN102080534A (en) 2009-11-30 2011-06-01 上海神开石油化工装备股份有限公司 Rapid oil filling device for a pulse generator of a wireless inclinometer and method of use thereof
US20170095196A1 (en) 2010-01-29 2017-04-06 Dreamwell, Ltd. Systems and methods for bedding with sleep diagnostics
US9592005B2 (en) 2010-01-29 2017-03-14 Dreamwell, Ltd. Systems and methods for bedding with sleep diagnostics
US20110224510A1 (en) 2010-01-29 2011-09-15 Dreamwell, Ltd. Systems and Methods for Bedding with Sleep Diagnostics
US8672853B2 (en) 2010-06-15 2014-03-18 Bam Labs, Inc. Pressure sensor for monitoring a subject and pressure sensor with inflatable bladder
USD640280S1 (en) 2010-06-25 2011-06-21 Microsoft Corporation Display screen with user interface
US8672842B2 (en) 2010-08-24 2014-03-18 Evacusled Inc. Smart mattress
US20120265024A1 (en) 2010-10-05 2012-10-18 University Of Florida Research Foundation, Incorporated Systems and methods of screening for medical states using speech and other vocal behaviors
US20120089419A1 (en) 2010-10-08 2012-04-12 Huster Keith A Hospital bed with graphical user interface having advanced functionality
USD669499S1 (en) 2011-03-21 2012-10-23 Microsoft Corporation Display screen with animated user icon
US9271665B2 (en) 2011-05-20 2016-03-01 The Regents Of The University Of California Fabric-based pressure sensor arrays and methods for data analysis
US20150169840A1 (en) 2011-08-05 2015-06-18 Alere San Diego, Inc. Methods and compositions for monitoring heart failure
USD696268S1 (en) 2011-10-06 2013-12-24 Samsung Electronics Co., Ltd. Mobile phone displaying graphical user interface
USD696271S1 (en) 2011-10-06 2013-12-24 Samsung Electronics Co., Ltd. Mobile phone displaying graphical user interface
USD696677S1 (en) 2011-10-14 2013-12-31 Nest Labs, Inc. Display screen or portion thereof with a graphical user interface
USD690723S1 (en) 2011-11-03 2013-10-01 Blackberry Limited Display screen with keyboard graphical user interface
USD720762S1 (en) 2012-02-09 2015-01-06 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
US9375142B2 (en) 2012-03-15 2016-06-28 Siemens Aktiengesellschaft Learning patient monitoring and intervention system
JP2013215252A (en) 2012-04-05 2013-10-24 Tanita Corp Biological information measuring apparatus
USD709909S1 (en) 2012-04-30 2014-07-29 Blackberry Limited Display screen with keyboard graphical user interface
US20130332318A1 (en) 2012-06-10 2013-12-12 Apple Inc. User Interface for In-Browser Product Viewing and Purchasing
US10555850B2 (en) 2012-06-21 2020-02-11 Hill-Rom Services, Inc. Patient support systems and methods of use
US20130340168A1 (en) 2012-06-21 2013-12-26 Hill-Rom Services, Inc. Patient support systems and methods of use
US9228885B2 (en) 2012-06-21 2016-01-05 Hill-Rom Services, Inc. Patient support systems and methods of use
US20190336367A1 (en) 2012-06-21 2019-11-07 Hill-Rom Services, Inc. Patient support systems and methods of use
CN104822355A (en) 2012-07-20 2015-08-05 费诺-华盛顿公司 Automated systems for powered cots
US20140026322A1 (en) 2012-07-24 2014-01-30 Randall J. Bell Proxy caregiver interface
US20140066798A1 (en) 2012-08-30 2014-03-06 David E. Albert Cardiac performance monitoring system for use with mobile communications devices
USD774071S1 (en) 2012-09-07 2016-12-13 Bank Of America Corporation Communication device with graphical user interface
US8966689B2 (en) 2012-11-19 2015-03-03 Select Comfort Corporation Multi-zone fluid chamber and mattress system
US10194752B2 (en) 2012-12-27 2019-02-05 Sleep Number Corporation Distribution pad for a temperature control system
US9131781B2 (en) 2012-12-27 2015-09-15 Select Comfort Corporation Distribution pad for a temperature control system
USD775631S1 (en) 2013-01-09 2017-01-03 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
US9730524B2 (en) 2013-03-11 2017-08-15 Select Comfort Corporation Switching means for an adjustable foundation system
US20170303697A1 (en) 2013-03-11 2017-10-26 Select Comfort Corporation Switching Means for an Adjustable Foundation System
US20140250597A1 (en) 2013-03-11 2014-09-11 Select Comfort Corporation Adjustable bed foundation system with built-in self-test
US10182661B2 (en) 2013-03-14 2019-01-22 Sleep Number Corporation and Select Comfort Retail Corporation Inflatable air mattress alert and monitoring system
US20160338871A1 (en) 2013-03-14 2016-11-24 Select Comfort Corporation Inflatable Air Mattress Snoring Detection and Response
US20140277822A1 (en) 2013-03-14 2014-09-18 Rob Nunn Inflatable air mattress sleep environment adjustment and suggestions
USD737250S1 (en) 2013-03-14 2015-08-25 Select Comfort Corporation Remote control
US20140259418A1 (en) 2013-03-14 2014-09-18 Rob Nunn Inflatable air mattress with light and voice controls
US9844275B2 (en) 2013-03-14 2017-12-19 Select Comfort Corporation Inflatable air mattress with light and voice controls
US10058467B2 (en) 2013-03-14 2018-08-28 Sleep Number Corporation Partner snore feature for adjustable bed foundation
USD698338S1 (en) 2013-03-14 2014-01-28 Select Comfort Corporation Remote control
US9635953B2 (en) 2013-03-14 2017-05-02 Sleepiq Labs Inc. Inflatable air mattress autofill and off bed pressure adjustment
US8893339B2 (en) 2013-03-14 2014-11-25 Select Comfort Corporation System and method for adjusting settings of a bed with a remote control
US10194753B2 (en) 2013-03-14 2019-02-05 Sleep Number Corporation System and method for adjusting settings of a bed with a remote control
US20170049243A1 (en) 2013-03-14 2017-02-23 Select Comfort Corporation Inflatable Air Mattress System With Detection Techniques
USD691118S1 (en) 2013-03-14 2013-10-08 Select Comfort Corporation Remote control
US10201234B2 (en) 2013-03-14 2019-02-12 Sleep Number Corporation Inflatable air mattress system architecture
US10251490B2 (en) 2013-03-14 2019-04-09 Sleep Number Corporation Inflatable air mattress autofill and off bed pressure adjustment
US20190125097A1 (en) 2013-03-14 2019-05-02 Sleep Number Corporation Inflatable Air Mattress System Architecture
US9370457B2 (en) 2013-03-14 2016-06-21 Select Comfort Corporation Inflatable air mattress snoring detection and response
JP2016518159A (en) 2013-03-14 2016-06-23 Select Comfort Corporation Sleep environment adjustment and recommendations for inflatable air mattress
US8984687B2 (en) 2013-03-14 2015-03-24 Select Comfort Corporation Partner snore feature for adjustable bed foundation
US20190328147A1 (en) 2013-03-14 2019-10-31 Sleep Number Corporation Inflatable Air Mattress System With Detection Techniques
US9392879B2 (en) 2013-03-14 2016-07-19 Select Comfort Corporation Inflatable air mattress system architecture
US9510688B2 (en) 2013-03-14 2016-12-06 Select Comfort Corporation Inflatable air mattress system with detection techniques
US20190125095A1 (en) 2013-03-14 2019-05-02 Sleep Number Corporation Inflatable Air Mattress Alert and Monitoring System
USD697874S1 (en) 2013-03-15 2014-01-21 Select Comfort Corporation Remote control
CN103381123A (en) 2013-06-13 2013-11-06 厚福医疗装备有限公司 High-precision dynamic weighing hospital bed system and automatic control method thereof
US20150007393A1 (en) 2013-07-02 2015-01-08 Select Comfort Corporation Controller for multi-zone fluid chamber mattress system
US9504416B2 (en) 2013-07-03 2016-11-29 Sleepiq Labs Inc. Smart seat monitoring system
US9931085B2 (en) 2013-07-18 2018-04-03 Select Comfort Retail Corporation Device and method of monitoring a position and predicting an exit of a subject on or from a substrate
US9445751B2 (en) 2013-07-18 2016-09-20 Sleepiq Labs, Inc. Device and method of monitoring a position and predicting an exit of a subject on or from a substrate
US20150025327A1 (en) 2013-07-18 2015-01-22 Bam Labs, Inc. Device and Method of Monitoring a Position and Predicting an Exit of a Subject on or from a Substrate
USD701536S1 (en) 2013-07-26 2014-03-25 Select Comfort Corporation Air pump
USD785003S1 (en) 2013-09-03 2017-04-25 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD800140S1 (en) 2013-09-03 2017-10-17 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
US8892679B1 (en) 2013-09-13 2014-11-18 Box, Inc. Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform
US20150097682A1 (en) 2013-10-07 2015-04-09 Google Inc. Mobile user interface for smart-home hazard detector configuration
USD745884S1 (en) 2013-12-04 2015-12-22 Medtronic, Inc. Display screen or portion thereof with graphical user interface
US10360368B2 (en) 2013-12-27 2019-07-23 Abbott Diabetes Care Inc. Application interface and display control in an analyte monitoring environment
US20170354268A1 (en) 2013-12-30 2017-12-14 Select Comfort Corporation Inflatable Air Mattress With Integrated Control
US9770114B2 (en) 2013-12-30 2017-09-26 Select Comfort Corporation Inflatable air mattress with integrated control
US8973183B1 (en) 2014-01-02 2015-03-10 Select Comfort Corporation Sheet for a split-top adjustable bed
US20150182397A1 (en) 2014-01-02 2015-07-02 Select Comfort Corporation Adjustable bed system having split-head and joined foot configuration
US20150182418A1 (en) 2014-01-02 2015-07-02 Select Comfort Corporation Massage furniture item and method of operation
US20150182399A1 (en) 2014-01-02 2015-07-02 Select Comfort Corporation Adjustable bed system with split head and split foot configuration
US9005101B1 (en) 2014-01-04 2015-04-14 Julian Van Erlach Smart surface biological sensor and therapy administration
USD743976S1 (en) 2014-01-10 2015-11-24 Aliphcom Display screen or portion thereof with graphical user interface
USD754672S1 (en) 2014-01-10 2016-04-26 Aliphcom Display screen or portion thereof with graphical user interface
JP2017510390A (en) 2014-01-27 2017-04-13 Rhythm Diagnostic Systems, Inc. System and method for monitoring health status
USD762716S1 (en) 2014-02-21 2016-08-02 Huawei Device Co., Ltd. Display screen or portion thereof with animated graphical user interface
US20150277703A1 (en) 2014-02-25 2015-10-01 Stephen Rhett Davis Apparatus for digital signage alerts
US20190082855A1 (en) 2014-04-15 2019-03-21 Sleep Number Corporation Adjustable bed system
US10143312B2 (en) 2014-04-15 2018-12-04 Sleep Number Corporation Adjustable bed system
US10314407B1 (en) 2014-04-30 2019-06-11 Xsensor Technology Corporation Intelligent sleep ecosystem
USD792908S1 (en) 2014-08-27 2017-07-25 Janssen Pharmaceutica Nv Display screen or portion thereof with icon
USD771123S1 (en) 2014-09-01 2016-11-08 Apple Inc. Display screen or portion thereof with multi-state graphical user interface
USD752624S1 (en) 2014-09-01 2016-03-29 Apple Inc. Display screen or portion thereof with graphical user interface
USD755823S1 (en) 2014-09-02 2016-05-10 Apple Inc. Display screen or portion thereof with graphical user interface
US20160058337A1 (en) 2014-09-02 2016-03-03 Apple Inc. Physical activity and workout monitor
USD787533S1 (en) 2014-09-02 2017-05-23 Apple Inc. Display screen or portion thereof with graphical user interface
US20170255751A1 (en) 2014-09-15 2017-09-07 Geetha Sanmugalingham System and method for collection, storage and management of medical data
US10448749B2 (en) 2014-10-10 2019-10-22 Sleep Number Corporation Bed having logic controller
US20160100696A1 (en) 2014-10-10 2016-04-14 Select Comfort Corporation Bed having logic controller
US20190328146A1 (en) 2014-10-16 2019-10-31 Sleep Number Corporation Bed With Integrated Components and Features
US10342358B1 (en) 2014-10-16 2019-07-09 Sleep Number Corporation Bed with integrated components and features
USD761293S1 (en) 2014-10-17 2016-07-12 Robert Bosch Gmbh Display screen with graphical user interface
US20160110986A1 (en) 2014-10-21 2016-04-21 Kenneth Lawrence Rosenblood Posture improvement device, system, and method
USD772905S1 (en) 2014-11-14 2016-11-29 Volvo Car Corporation Display screen with graphical user interface
USD789391S1 (en) 2014-12-31 2017-06-13 Dexcom, Inc. Display screen or portion thereof with graphical user interface and icons
US20190029597A1 (en) 2015-01-05 2019-01-31 Sleep Number Corporation Bed with User Occupancy Tracking
US10092242B2 (en) 2015-01-05 2018-10-09 Sleep Number Corporation Bed with user occupancy tracking
US20200405240A1 (en) 2015-01-05 2020-12-31 Sleep Number Corporation Bed with User Occupancy Tracking
US20160242562A1 (en) 2015-02-24 2016-08-25 Select Comfort Corporation Mattress with Adjustable Firmness
USD800778S1 (en) 2015-02-27 2017-10-24 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD787551S1 (en) 2015-02-27 2017-05-23 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD778301S1 (en) 2015-03-27 2017-02-07 Showa Corporation Display screen with graphical user interface
USD789956S1 (en) 2015-04-16 2017-06-20 Honeywell International Inc. Display screen or portion thereof with graphical user interface
US20180106897A1 (en) * 2015-04-20 2018-04-19 Resmed Sensor Technologies Limited Detection and identification of a human from characteristic signals
WO2016170005A1 (en) 2015-04-20 2016-10-27 Resmed Sensor Technologies Limited Detection and identification of a human from characteristic signals
US9924813B1 (en) 2015-05-29 2018-03-27 Sleep Number Corporation Bed sheet system
US20160353996A1 (en) 2015-06-05 2016-12-08 The Arizona Board Of Regents On Behalf Of The University Of Arizona Systems and methods for real-time signal processing and fitting
US20160367039A1 (en) 2015-06-16 2016-12-22 Sleepiq Labs Inc. Device and Method of Automated Substrate Control and Non-Intrusive Subject Monitoring
US20220323001A1 (en) 2015-07-02 2022-10-13 Sleep Number Corporation Automation for improved sleep quality
US20170003666A1 (en) 2015-07-02 2017-01-05 Select Comfort Corporation Automation for improved sleep quality
US10149549B2 (en) 2015-08-06 2018-12-11 Sleep Number Corporation Diagnostics of bed and bedroom environment
US20190104858A1 (en) 2015-08-06 2019-04-11 Sleep Number Corporation Diagnostics of bed and bedroom environment
US20170055896A1 (en) 2015-08-31 2017-03-02 Masimo Corporation Systems and methods to monitor repositioning of a patient
WO2017068581A1 (en) 2015-10-20 2017-04-27 Healthymize Ltd System and method for monitoring and determining a medical condition of a user
USD834593S1 (en) 2015-10-21 2018-11-27 Manitou Bf (Societe Anonyme) Display screen or portion thereof with graphical user interface
USD785660S1 (en) 2015-12-23 2017-05-02 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
CN108697241A (en) 2015-12-30 2018-10-23 Sleeping or reclining furniture with a sensor
US20170191516A1 (en) 2015-12-31 2017-07-06 Select Comfort Corporation Foundation and frame for bed
WO2017122178A1 (en) 2016-01-14 2017-07-20 King Abdullah University Of Science And Technology Paper based electronics platform
US20170231545A1 (en) 2016-02-14 2017-08-17 Earlysense Ltd. Apparatus and methods for monitoring a subject
USD822708S1 (en) 2016-02-26 2018-07-10 Ge Healthcare Uk Limited Display screen with a graphical user interface
USD800162S1 (en) 2016-03-22 2017-10-17 Teletracking Technologies, Inc. Display screen with graphical user interface icon
US20170281054A1 (en) * 2016-03-31 2017-10-05 Zoll Medical Corporation Systems and methods of tracking patient movement
USD799518S1 (en) 2016-06-11 2017-10-10 Apple Inc. Display screen or portion thereof with graphical user interface
CN105877712A (en) 2016-06-19 2016-08-24 河北工业大学 Multifunctional intelligent bed system
US20190220511A1 (en) 2016-06-22 2019-07-18 Huawei Technologies Co., Ltd. Method and apparatus for displaying candidate word, and graphical user interface
US20170374186A1 (en) 2016-06-24 2017-12-28 Sandisk Technologies Llc Mobile Device with Unified Media-Centric User Interface
US20180341448A1 (en) 2016-09-06 2018-11-29 Apple Inc. Devices, Methods, and Graphical User Interfaces for Wireless Pairing with Peripheral Devices and Displaying Status Information Concerning the Peripheral Devices
USD812393S1 (en) 2016-09-15 2018-03-13 Sleep Number Corporation Bed
US20180116420A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Air Manifold
US20180119686A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Pump With Vibration Isolators
US20180116415A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Bed with foot warming system
US20180116418A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Noise Reducing Plunger
US20180116419A1 (en) 2016-10-28 2018-05-03 Select Comfort Corporation Air Controller With Vibration Isolators
US20180125259A1 (en) 2016-11-09 2018-05-10 Select Comfort Corporation Bed With Magnetic Couplers
US10729253B1 (en) 2016-11-09 2020-08-04 Sleep Number Corporation Adjustable foundation with service position
USD809843S1 (en) 2016-11-09 2018-02-13 Sleep Number Corporation Bed foundation
USD932808S1 (en) 2016-11-09 2021-10-12 Select Comfort Corporation Mattress
US20180125260A1 (en) 2016-11-09 2018-05-10 Select Comfort Corporation Bed With Magnetic Couplers
US20200075136A1 (en) 2016-11-10 2020-03-05 Sonde Health, Inc. System and method for activation and deactivation of cued health assessment
US20180184920A1 (en) 2017-01-05 2018-07-05 Livemetric (Medical) S.A. System and method for providing user feedback of blood pressure sensor placement and contact quality
USD840428S1 (en) 2017-01-13 2019-02-12 Adp, Llc Display screen with a graphical user interface
US20180353085A1 (en) 2017-06-09 2018-12-13 Anthony Olivero Portable biometric monitoring device and method for use thereof
CN207837242U (en) 2017-06-19 2018-09-11 佛山市南海区金龙恒家具有限公司 Intelligent digital sleep detection mattress
USD890792S1 (en) 2017-08-10 2020-07-21 Jpmorgan Chase Bank, N.A. Display screen or portion thereof with a graphical user interface
USD903700S1 (en) 2017-08-10 2020-12-01 Jpmorgan Chase Bank, N.A. Display screen or portion thereof with a graphical user interface
US20190059603A1 (en) 2017-08-23 2019-02-28 Sleep Number Corporation Air system for a bed
WO2019081915A1 (en) 2017-10-24 2019-05-02 Cambridge Cognition Limited System and method for assessing physiological state
US20210150873A1 (en) * 2017-12-22 2021-05-20 Resmed Sensor Technologies Limited Apparatus, system, and method for motion sensing
US20190206416A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Home automation having user privacy protections
US20190201271A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Snore sensing bed
US20190201265A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having presence detecting feature
US20190201267A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having sensor fusing features useful for determining snore and breathing parameters
US20190200777A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having sensor features for determining snore and breathing parameters of two sleepers
US20190201268A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having snore detection feature
US20190201269A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having sleep stage detecting feature
US20190201266A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having rollover identifying feature
US20190201270A1 (en) 2017-12-28 2019-07-04 Sleep Number Corporation Bed having snore control based on partner response
US20190209405A1 (en) 2018-01-05 2019-07-11 Sleep Number Corporation Bed having physiological event detecting feature
USD855643S1 (en) 2018-02-21 2019-08-06 Early Warning Services, Llc Display screen portion with graphical user interface for entry of mobile number data
US20190279745A1 (en) 2018-03-07 2019-09-12 Sleep Number Corporation Home based stress test
US11670404B2 (en) 2018-03-07 2023-06-06 Sleep Number Corporation Home based stress test
CN108784127A (en) 2018-06-14 2018-11-13 深圳市三分之睡眠科技有限公司 Automatic adjustment method for a bed and intelligently controlled bed
US11001447B2 (en) 2018-09-05 2021-05-11 Sleep Number Corporation Lifting furniture
US20200163627A1 (en) 2018-10-08 2020-05-28 UDP Labs, Inc. Systems and Methods for Generating Synthetic Cardio-Respiratory Signals
US20200110194A1 (en) 2018-10-08 2020-04-09 UDP Labs, Inc. Multidimensional Multivariate Multiple Sensor System
USD896266S1 (en) 2018-11-05 2020-09-15 Stryker Corporation Display screen or portion thereof with graphical user interface
US11376178B2 (en) 2018-11-14 2022-07-05 Sleep Number Corporation Using force sensors to determine sleep parameters
US20230115150A1 (en) 2018-11-14 2023-04-13 Sleep Number Corporation Using force sensors to determine sleep parameters
US20200146910A1 (en) 2018-11-14 2020-05-14 Sleep Number Corporation Using force sensors to determine sleep parameters
US20220007965A1 (en) * 2018-11-19 2022-01-13 Resmed Sensor Technologies Limited Methods and apparatus for detection of disordered breathing
US20200202120A1 (en) * 2018-12-20 2020-06-25 Koninklijke Philips N.V. System and method for providing sleep positional therapy and paced breathing
US20200205580A1 (en) 2018-12-31 2020-07-02 Sleep Number Corporation Home automation with features to improve sleep
USD975121S1 (en) 2019-01-08 2023-01-10 Sleep Number Corporation Display screen or portion thereof with graphical user interface
US20200227160A1 (en) * 2019-01-15 2020-07-16 Youngblood Ip Holdings, Llc Health data exchange platform
USD902244S1 (en) 2019-02-25 2020-11-17 Juul Labs, Inc. Display screen or portion thereof with animated graphical user interface
US11399636B2 (en) 2019-04-08 2022-08-02 Sleep Number Corporation Bed having environmental sensing and control features
US11424646B2 (en) 2019-04-16 2022-08-23 Sleep Number Corporation Pillow with wireless charging
US20200337470A1 (en) 2019-04-25 2020-10-29 Sleep Number Corporation Bed having features for improving a sleeper's body thermoregulation during sleep
USD954725S1 (en) 2019-05-08 2022-06-14 Sleep Number Corporation Display screen or portion thereof with graphical user interface
USD916745S1 (en) 2019-05-08 2021-04-20 Sleep Number Corporation Display screen or portion thereof with graphical user interface
US20210022667A1 (en) 2019-07-26 2021-01-28 Sleep Number Corporation Long term sensing of sleep phenomena
US20210307683A1 (en) 2020-04-01 2021-10-07 UDP Labs, Inc. Systems and Methods for Remote Patient Screening and Triage
US20220133164A1 (en) 2020-10-30 2022-05-05 Sleep Number Corporation Bed having controller for tracking sleeper heart rate variability
US20220175600A1 (en) 2020-12-04 2022-06-09 Sleep Number Corporation Bed having features for automatic sensing of illness states
US20220265223A1 (en) 2021-02-16 2022-08-25 Sleep Number Corporation Bed having features for sensing sleeper pressure and generating estimates of brain activity
US20230046169A1 (en) 2021-08-10 2023-02-16 Sleep Number Corporation Bed having features for controlling heating of a bed to reduce health risk of a sleeper
US20230190183A1 (en) 2021-12-16 2023-06-22 Sleep Number Corporation Sleep system with features for personalized daytime alertness quantification
US20230218225A1 (en) 2022-01-11 2023-07-13 Sleep Number Corporation Centralized hub device for determining and displaying health-related metrics

Non-Patent Citations (18)

* Cited by examiner, † Cited by third party
Title
[No Author Listed] [online], "Leesa vs. Casper Mattress Review—Best Memory Foam Mattress", Mar. 2019, retrieved on Dec. 10, 2020, retrieved from URL<https://webarchive.org/web/20190304041833/https//www.rizknows.com/buyerguides/leesa-vs-casper-mattress-review-best-memory-foam-mattress/>, 1 page.
Alves, "iPhone App Registration Flow," Dribbble, published Oct. 23, 2014, retrieved from the Internet Jan. 26, 2022, Internet URL<https://dribbble.com/shots/1778356-iPhone-App-Registration-Flow>, 3 pages.
Gold, "Sign Up/Log In modal for iOS app," Dribbble, published May 7, 2013, retrieved from the Internet Jan. 26, 2022, Internet URL<https://dribbble.com/shots/1061991-Sign-Up-Log-In-modal-for-i0S-app>, 3 pages.
International Search Report and Written Opinion in International Appln. No. PCT/US2020/063329, dated Apr. 5, 2021, 11 pages.
International Search Report and Written Opinion in International Appln. No. PCT/US2020/063338, dated Mar. 30, 2021, 11 pages.
Johns, "Sign Up," Dribbble, published Aug. 2, 2013, retrieved from the Internet Jan. 26, 2022, Internet URL: <https://dribbble.com/shots/1181529-Sign-Up>, 3 pages.
Matus, [online], "The Composition Sketch", Dribbble, Sep. 2017, retrieved on Dec. 10, 2020, retrieved from URL: <https://dribbble.com/shots/3821915-The-Composition-Sketch>, 2 pages.
U.S. Appl. No. 18/094,751, filed Jan. 9, 2023, Sayadi et al.
U.S. Appl. No. 18/104,634, filed Feb. 1, 2023, Rao et al.
U.S. Appl. No. 18/131,218, filed Apr. 5, 2023, Molina et al.
U.S. Appl. No. 18/139,066, filed Apr. 25, 2023, Sayadi et al.
U.S. Appl. No. 29/583,852, filed Nov. 9, 2016, Keeley.
U.S. Appl. No. 29/676,117, filed Jan. 8, 2019, Stusynski et al.
U.S. Appl. No. 29/690,492, filed May 8, 2019, Stusynski et al.
U.S. Appl. No. 62/742,613, filed Oct. 8, 2018, Young et al.
U.S. Appl. No. 62/804,623, filed Feb. 12, 2019, Young et al.
U.S. Appl. No. 63/003,551, filed Apr. 1, 2020, Young et al.
Zaytsev, "Finance iPhone App [Another Direction]," Dribbble, published Jul. 21, 2014, retrieved from the Internet Jan. 26, 2022, Internet URL: <https://dribbble.com/shots/1650374-Finance-iPhone-App-Another-Direction>, 3 pages.

Also Published As

Publication number Publication date
CA3173469A1 (en) 2021-10-07
CA3173464A1 (en) 2021-10-07
CN115397311A (en) 2022-11-25
US20210307683A1 (en) 2021-10-07
JP2023532387A (en) 2023-07-28
EP4125548A1 (en) 2023-02-08
CN115397310A (en) 2022-11-25
WO2021201924A1 (en) 2021-10-07
KR20220162767A (en) 2022-12-08
EP4125549A1 (en) 2023-02-08
WO2021201925A1 (en) 2021-10-07
KR20220162768A (en) 2022-12-08
AU2020440130A1 (en) 2022-10-27
US20210307681A1 (en) 2021-10-07
EP4125548A4 (en) 2024-04-17
AU2020440233A1 (en) 2022-10-27
JP2023532386A (en) 2023-07-28

Similar Documents

Publication Publication Date Title
US10977522B2 (en) Stimuli for symptom detection
US20200388287A1 (en) Intelligent health monitoring
EP3367908B1 (en) Programmable electronic stethoscope devices, algorithms, systems, and methods
Shi et al. Theory and application of audio-based assessment of cough
US11800996B2 (en) System and method of detecting falls of a subject using a wearable sensor
US20220007964A1 (en) Apparatus and method for detection of breathing abnormalities
Patil et al. The physiological microphone (PMIC): A competitive alternative for speaker assessment in stress detection and speaker verification
Chatterjee et al. Assessing severity of pulmonary obstruction from respiration phase-based wheeze-sensing using mobile sensors
US20240090778A1 (en) Cardiopulmonary health monitoring using thermal camera and audio sensor
Tabatabaei et al. Methods for adventitious respiratory sound analyzing applications based on smartphones: A survey
US11931168B2 (en) Speech-controlled health monitoring systems and methods
He et al. A novel snore detection and suppression method for a flexible patch with MEMS microphone and accelerometer
Christofferson et al. Sleep sound classification using ANC-enabled earbuds
Porieva et al. Investigation of lung sounds features for detection of bronchitis and COPD using machine learning methods
CN113040773A (en) Data acquisition and processing method
Nallanthighal et al. COVID-19 detection based on respiratory sensing from speech
GB2547457A (en) Communication apparatus, method and computer program
Gao et al. System Design of Detection and Intervention Methods for Apnea
Nathan et al. Assessing Severity of Pulmonary Obstruction from Respiration Phase-Based Wheeze Sensing Using Mobile Sensors
CN114010193A (en) Data acquisition and processing system
Coutinho et al. Estimating biosignals using the human voice
Coutinho et al. Automatic Estimation of Biosignals From the Human Voice

Legal Events

Date Code Title Description
AS Assignment

Owner name: UDP LABS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAYADI, OMID;YOUNG, STEVEN JAY;HEWITT, CARL;AND OTHERS;REEL/FRAME:054549/0331

Effective date: 20201202

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: SLEEP NUMBER CORPORATION, UNITED STATES

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:UDP LABS, INC.;HEWITT, CARL;YOUNG, STEVEN JAY;AND OTHERS;REEL/FRAME:062787/0247

Effective date: 20230222

AS Assignment

Owner name: SLEEP NUMBER CORPORATION, MINNESOTA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE CONVEYING PARTY NAMES AS LISTED ON THE ASSIGNMENT COVERSHEET PREVIOUSLY RECORDED AT REEL: 062787 FRAME: 0247. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:YOUNG, STEVEN JAY;HEWITT, CARL;UDP LABS, INC.;REEL/FRAME:062904/0001

Effective date: 20230222

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE