WO2019209894A1 - Wearable device - Google Patents

Wearable device

Info

Publication number
WO2019209894A1
WO2019209894A1 (PCT/US2019/028818)
Authority
WO
WIPO (PCT)
Prior art keywords
user
wearable device
data
layer
radio
Prior art date
Application number
PCT/US2019/028818
Other languages
French (fr)
Inventor
Joshua Ian COHEN
Lucas Kane THORESEN
Jason Lucas
Gavin Arthur JOHNSON
Original Assignee
SCRRD, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SCRRD, Inc. filed Critical SCRRD, Inc.
Publication of WO2019209894A1 publication Critical patent/WO2019209894A1/en
Priority to US16/897,893 priority Critical patent/US20200341543A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • Transducers used in medical imaging range into higher frequencies depending on the desired depth and type of tissue. In this case, the tissue depth of penetration is minimal, as the diameter of the neck is limited.
  • the device penetrates past the epidermal layer to measure the depth to the first layer of the platysma muscle, which wraps around the entire front and sides of the neck, connects directly to the underside of the skin, and plays an important role in facial expression.
  • the device is meant to reach deeper and may be able to reach multiple muscle groups in the area, including the muscles of the larynx, which are directly responsible for movement within the voice box.
  • This component emits inaudible tones at specific frequencies for the purpose of deep tissue imaging.
  • the transducer is triggered as the device detects that the user is speaking. In this case, the user may be speaking normally or subvocalizing to the device, which causes multiple muscles in the sides of the neck to contract.
  • the on-board accelerometer can be used to indicate that there is movement, especially when the mouth is open, and the user is engaged in self-talk.
  • although the sticker is a small, slim device that rests on the outside of the neck, it still has an embedded processor, memory, and a wireless chipset.
  • the proposed design has pull-tab functionality, with a medical-grade adhesive material used to affix it to the neck.
  • the user can apply a tiny amount of an ultrasound gel directly between the skin and the piezoelectric sensing array. Any excess ultrasound gel will escape through an inset escape channel, which ensures that there are no air pockets between the piezoelectric array and the surface of the skin.
  • the applications of the wearable device 12 and the larynx member 14 are numerous. By way of example and not by way of limitation, laryngeal and mental illness, hybrid gestures, instant purchases, telephony, and casual navigation will be presented along with a few other examples.
  • laryngeal and mental illnesses include disorders of the larynx, such as irritable larynx disorder, and psychiatric conditions like anxiety, post-traumatic stress disorder, and schizophrenia
  • the larynx device can help users recognize when they have lost focus or have begun unintentional self-talk that might be making their condition worse. If the person has an irritable larynx, or physical damage to the surrounding tissue, a doctor may have instructed them to avoid speaking in order to let the affected area heal.
  • users may be out of sync with reality, subvocalizing about their worries unintentionally.
  • the device can help users train themselves to focus on their surroundings.
  • since the wearable device 12 doesn't have a screen, it draws on its ability to determine the cardinal directions of nearby devices. Although these directions do not necessarily relate to true cardinal directions like North, South, East, and West, the device understands the bearings of nearby external devices in relation to itself. For example, the user might decide that they want to share a piece of content, and instead of choosing a destination device on a menu screen, the user would perform a swipe gesture in the direction of the destination device. The user might also point the device itself in the direction of the destination device and perform a gesture that would take some action, such as a file copy or initiating the pairing process.
  • users might decide to perform a gesture at a nearby wireless access point for the purpose of key-pairing with that access point.
  • This process might involve a protocol similar to WPS (Wi-Fi Protected Setup) for backward compatibility, or another wireless protocol.
  • WPS Wi-Fi Protected Setup
  • users might share individual wireless keys by performing gestures at one another, which is analogous to simply writing a WiFi key down on a piece of paper and handing it to the other person.
  • users can query the prices of items at retail outlets or of commercial goods, or perform silent transactions.
  • One potential usage of the system is to enable instant purchasing in stores. As the shopper looks through items on the store shelves, the shopper may consider buying an item by silently vocalizing a phrase like, "I want to buy these [item name]." The system will detect that pattern of text and select the named item in front of them. In order to complete the purchase, the user would perform a brief gesture, such as holding the wearable device for a few seconds, which begins a cancelation timer (a sketch of this cancelation flow appears at the end of this section). If the user should later decide that he did not actually intend to buy the item, the user can say an abort phrase such as, "I didn't want to buy that," which will revert the item to its unpurchased state. Other similar use cases might involve using the wearable device to order food from favorite restaurants, scheduling pickup or delivery. EUNA would be there to assist the user with purchasing and pricing and can help confirm the order. It can also help users perform financial transactions between one another.
  • EUNA The AI assistant inside the wearable device
  • EUNA also has a real-time automatic feature and, upon request, can offer advice pertaining to the user's calendar, location, and nearby devices.
  • EUNA Artificial Intelligence
  • EUNA would be aware of turn signal indicators as well as objects and the colors of those objects, mobile devices, IoT devices, vehicles, and the clothing of other wearable device users in proximity, along with other pertinent data for a more human-to-human casual navigation experience, which can be loaded from an external data set. Users can opt in to share information that improves the system. For example, the user might share their shirt color.
  • EUNA is a sentient AI that becomes the user's personalized virtual assistant.
  • the AI travels with the user, which means that it has situational awareness and understands more than just the question at hand. It understands its surroundings, including environmental features, man-made structures, buildings, stores, commercial environments, retail environments, recreational facilities, restaurants, offices, vehicles, and the colors of objects. Another example would be helping guide someone through a crowd of people by referencing the nearby objects and outfits in order to guide the user to the intended person.
  • the owners of a transportation network decide to install named wireless devices that can help users navigate through a sea of devices, as an electron would flow across a metal in a sea of electrons.
  • users can silently query the wearable device for information from a search engine, storing data in a cryptographically secure fashion that is tied to a unique device identifier.
  • the dynamic real-time location of the device that made the query is stored in a distributed data center, allowing the device to simply query the information from the distributed data center instead of repeatedly running the same searches over and over.
  • EUNA can also be configured to share pertinent data between devices in close proximity to one another when the wearable devices come within range.
  • EUNA can actively inspire individuals to talk to one another when both parties have opted in, have chosen to share their profile information, and are looking to meet people in the area.
  • the goal is to create a system which aids users in interacting with the world around them, recognizing danger, and remaining connected in an interconnected world.
  • This technology can help users automatically find their friends and peers. For example, there might be a hospital nurse who needs a doctor for a patient, but the doctor isn't in the radiology department where the user expected. It can also detect nearby obstacles with the radar chipsets, which can alert users that they are about to make a mistake. Users who wish to call for help can place emergency calls, but there should also be an audible / non-audible feedback mechanism on-board to let the user know that help is on the way.
  • the system is also useful for cryptographically secure authentication between IoT devices and can be used as an authentication badge with second-factor authentication (2FA) support built into the device.
  • 2FA second-factor authentication
  • the owner of the wearable device might draw their unlock code on the touch surface, or look at a door, and subvocalize an opening phrase like, "let me in", or a locking phrase like, "lock the door".
  • the phrases can be configured, but there should be sane defaults so that there are common opening and closing phrases.
  • Figure 5 conceptually illustrates the software architecture of an environmental control application 150 of some embodiments that may utilize the wearable device 12 and the larynx member 14.
  • the environmental control application 150 is a stand-alone application or is integrated into another application, while in other embodiments the application might be implemented within an operating system 190.
  • the environmental control application 150 is provided as part of a server-based solution or a cloud-based solution.
  • the application is provided via a thin client. That is, the application runs on a server while a user interacts with the application via a separate machine remote from the server.
  • the application is provided via a thick client. That is, the application is distributed from the server to the client machine and runs on the client machine.
  • the application is partially run on each of the wearable device 12 and the larynx member 14.
  • the environmental control application 150 includes a user interface (UI) interaction and generation module 152, user interface tools 154, authentication modules 156, wireless device-to-device awareness modules 158, contextual gestures modules 160, vocal modules 162, subvocal modules 164, interactive navigation modules 166, mind/body modules 168, retail modules 170, and telephony/video calls modules 172.
  • UI user interface
  • the storages 180, 182, and 184 are all stored in one physical storage.
  • the storages 180, 182, and 184 are in separate physical storages, or one of the storages is in one physical storage while the others are in a different physical storage.
  • the UI interaction and generation module 152 generates a user interface that allows the end user to utilize the wearable device 12 and the larynx member 14. During the use, various modules may be called to execute the functions described herein.
  • figure 5 also includes an operating system 190 that includes input device drivers 192 and output device drivers 194. In some embodiments, as illustrated, the input device drivers 192 and the output device drivers 194 are part of the operating system 190 even when the environmental control application 150 is an application separate from the operating system 190.
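For illustration, the following is a minimal sketch of the cancelation timer behind the instant-purchase example above. The class name and the 30-second window are hypothetical placeholders; the patent describes the flow (hold gesture begins a timer, an abort phrase reverts the item) but not an implementation.

```python
import threading

CANCEL_WINDOW_S = 30  # illustrative cancelation window; not specified in the patent

class InstantPurchase:
    """Purchase flow with a cancelation timer, mirroring the
    hold-gesture purchase and abort-phrase example above."""

    def __init__(self):
        self._pending = {}  # item name -> running cancelation timer

    def begin(self, item: str):
        """Called when the hold gesture confirms a subvocalized buy."""
        timer = threading.Timer(CANCEL_WINDOW_S, self._finalize, [item])
        self._pending[item] = timer
        timer.start()

    def abort(self, item: str):
        """Called when an abort phrase is recognized within the window."""
        timer = self._pending.pop(item, None)
        if timer:
            timer.cancel()
            print(f"Reverted {item} to its unpurchased state")

    def _finalize(self, item: str):
        """Runs when the window elapses without an abort phrase."""
        self._pending.pop(item, None)
        print(f"Purchase of {item} completed")
```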

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A wearable device is disclosed. In one embodiment, the wearable device includes a hardware layer, a touch layer having a capacitive touch surface that receives contact data, and a radio layer including a plurality of antennas that receive radio data. The wearable device processes the radio data and the contact data to at least one of increase internet-of-things awareness and execute a gesture command originating from the user. The wearable device also processes laryngeal data to execute a vocalization command originating from the user.

Description

WEARABLE DEVICE
TECHNICAL FIELD OF THE INVENTION
This invention relates, in general, to wearable devices and, in particular, to enhanced performance in wearable devices that provide context with an environment of a user.
BACKGROUND OF THE INVENTION
Wearable technology has a variety of applications, and that variety grows as the field itself expands. It appears prominently in consumer electronics with the popularization of the smartwatch and activity tracker. Apart from commercial uses, wearable technology is being incorporated into navigation systems, advanced textiles, healthcare, and an ever-increasing number of applications. As a result of growing needs and expanding consumer preferences, there is a need for more and improved wearable technology.
SUMMARY OF THE INVENTION
It would be advantageous to achieve new wearable technology that would improve upon existing limitations in functionality and increase the offering of wearable devices. It would be desirable to enable an electro-mechanical based solution leveraging hardware that would provide enhanced services. To better address one or more of these concerns, a wearable device is disclosed. In one embodiment, the wearable device includes a hardware layer, a touch layer having a capacitive touch surface that receives contact data, and a radio layer including a plurality of antennas that receive radio data. The wearable device processes the radio data and the contact data to at least one of increase internet-of-things awareness and execute a gesture command originating from the user. The wearable device also processes laryngeal data to execute a vocalization command originating from the user. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the features and advantages of the present invention, reference is now made to the detailed description of the invention along with the accompanying figures in which corresponding numerals in the different figures refer to corresponding parts and in which: Figure 1 is a schematic diagram of a user wearing one embodiment of a wearable device according to the teachings presented herein;
Figure 2 is a schematic diagram of the user depicted in figure 1 wearing the wearable device in additional detail;
Figure 3A, figure 3B, figure 3C, and figure 3D are each schematic diagrams of one embodiment of a portion of the wearable device;
Figure 4A, figure 4B, figure 4C, and figure 4D are each schematic diagrams of one embodiment of a portion of a larynx member; and
Figure 5 is a conceptual module diagram depicting a software architecture of an environmental control application.
DETAILED DESCRIPTION OF THE INVENTION
While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts, which can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the present invention.
Referring initially to figure 1 and figure 2, therein is depicted one embodiment of a system including wearable technology that is conceptually illustrated and generally designated 10. A user U has a torso T and an arm A as well as a neck N and a head H. The user U is wearing clothing C. Further, the user U is wearing a wearable device 12 on the clothes C and a larynx member 14 is affixed to the neck N. In general, the wearable device 12 processes radio data and contact data to at least one of increase internet-of-things (IoT) awareness and execute a gesture command originating from the user U. The wearable device 12 also processes the laryngeal data to execute a vocalization command originating from the user U.
With respect to figure 1, the wearable device 12 processes the radio data and the contact data to detect the arm A movement as shown by arrow MA. The wearable device 12 processes the laryngeal data received from the larynx member 14 to detect audible vocals VA and even sub-audible vocals VS. Based on the detected vocals, the wearable device 12 may execute a command or initiate a telephony application, including transmission of an audible vocalization or transmission of a sub-audible vocalization. Also, audible commands or sub-audible commands may be enabled. The wearable device 12 may also process the laryngeal data that includes movement of the head H as shown by MH, including the detection of biometrics that may indicate what the user is thinking or feeling as shown by element I.
With respect to figure 2, the wearable device 12 processes the radio data and the contact data to provide authentication relative to credentials, for example. Such authentication permits the user U to go through the entrance E. Additionally, as shown in figure 2, the user U by way of the processing of the radio data and the contact data has device-to-device awareness of the individual I1 having a wearable device and the individual I2 having a smart device. Using interactive navigation as shown by NAV and enabled by the processing of the radio data and the contact data, the user U is able to visit the purchasing area P and select a gift G for purchase, a purchase that may be enabled by the wearable device 12.
The wearable device 12 can be used for everyday computing, authentication, telephony, navigation, and as an entry point into augmented reality space. It is designed to excel in performance, device awareness, security, design, user interactions, and accessibility. The wearable device makes use of multiple antennas and wireless tracking technology in order to provide enhanced location awareness. This also means that the wearable device understands cardinal directions or bearings of nearby devices for use in software applications on the device and to support location-aware gestures.
The wearable device 12 works hand in hand with the larynx member 14 and, as will be discussed in further detail hereinbelow, includes a radar-enhanced capacitive touch surface for maximum accessibility for all users. The on-board radar chip understands precise hand movements and can detect objects that are directly in front of the device. When augmented with the capacitive touch surface and hands-free laryngeal interface, the system lets users interact with the world around them in new ways.
The wearable device 12 also lets users bring their desktop with them wherever they go. It should be appreciated that even though the wearable device 12 is depicted on the clothing C, the wearable device 12 may be on a necklace, for example, or otherwise associated with the user U. Even though the necklace wearable doesn't have a built-in screen, the device leverages a proximity-based wireless VNC protocol in order to display the graphical desktop on neighboring device displays. Once a display-bearing device (Laptop, Desktop, TV, or Smartphone) has been paired, bringing the wearable within range and performing the hold gesture will cause the two devices to form a VNC connection. Referring now to figure 3A, figure 3B, figure 3C, and figure 3D, the wearable device 12 includes an outer touch layer 20, an interior radio layer 22, an interior hardware layer 24, and an exterior electrical layer 26. The outer touch layer 20, the interior radio layer 22, the interior hardware layer 24, and the exterior electrical layer 26 are interconnected.
The outer touch layer 20 includes a capacitive touch surface 30. The interior radio layer 22 includes a substrate 40 securing a cell antenna 42, which may be a transceiver, and an induction coil 44 as well as, in one embodiment, spaced and segmented antennas 46, 48, 50, 52. The capacitive touch surface 30 in conjunction with the antennas 46, 48, 50, 52 can determine the direction of signals as indicated by element 54. The interior hardware layer 24 includes a substrate 56 having components 58 including Amb, LEDs, Mic, Ramdisk, Cell, Flash, WiFi, Mem, CPU, BT, Radar, Audio, Clock, Rocker, Accel, Charge Circuit, IR, USB C, and GPIO, for example. It should be appreciated that although a particular architecture of components is depicted, other architectures are within the teachings presented herein. Within the CPU, the memory is accessible to the processor and the memory includes processor-executable instructions that, when executed, cause the processor to process the radio data and the contact data to at least one of increase internet-of-things awareness and execute a gesture command originating from the user. Further, the processor-executable instructions cause the processor to process the laryngeal data to execute a vocalization command originating from the user. The exterior electrical layer includes a substrate 60 having a shielded battery 62 and a heatsink 64.
The wearable device 12 may come with one or more segmented antennas that can identify the points of origin of incoming radio signals by using low observable tracking techniques and angle-of-arrival techniques seen in phased-array radar systems. Segments are spaced at squared increments apart from each other, at one-eighth, one-quarter, and one-half wavelength distances, as is common in phased arrays. The wearable device 12 may also use the phased-array technology to steer wireless signals towards a destination access point or device. It might programmatically choose to steer signals back towards another device in the same direction that the signals were received in, adjusted for changes in the device's position, improving connectivity and providing some degree of wireless privacy.
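To make the angle-of-arrival idea concrete, here is a minimal sketch for the simplest case of two antenna segments spaced a fraction of a wavelength apart. The function name, the two-element reduction, and the broadside angle convention are illustrative assumptions, not the patent's method.

```python
import numpy as np

def angle_of_arrival(phase_delta_rad: float,
                     spacing_wavelengths: float = 0.5) -> float:
    """Estimate the bearing of an incoming signal from the phase
    difference measured between two antenna segments.

    For segments spaced d wavelengths apart, a plane wave arriving at
    angle theta from broadside produces a phase difference of
    2*pi*d*sin(theta), so theta = arcsin(delta_phi / (2*pi*d)).
    """
    sin_theta = phase_delta_rad / (2 * np.pi * spacing_wavelengths)
    sin_theta = np.clip(sin_theta, -1.0, 1.0)  # guard against noise pushing past +/-1
    return np.degrees(np.arcsin(sin_theta))

# A signal arriving 30 degrees off broadside at half-wavelength spacing
# produces a phase difference of 2*pi*0.5*sin(30 deg) = pi/2 radians.
print(angle_of_arrival(np.pi / 2))  # ~30.0
```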
It should be mentioned that phased arrays are not usually used in consumer products, especially portable devices, and that low observable techniques improve the relevance of phased arrays for mobile applications. However, directional awareness of neighboring devices allows for far more complex gestures and opens a world of possibilities for navigation and augmented reality technology, allowing the device to visualize the wireless space and all of those 'WiFi radar' apps to actually work.
Users might also want a chaintenna, that is, an antenna dipole that has been strung into the chain; it is relatively low-power and clips onto the sides of the wearable device. The antenna can be used for cellular, WiFi, or Bluetooth radios, and keeps the device facing forward. It is high-gain, as the dipole extends through the chain and is usually longer than the body of the wearable device. Depending on the material and conductivity status of the chain and the type of radio transmission, the dipole might also be insulated separately and capped at either end. The chaintenna might have some degree of resistivity between the conductive leads that scales appropriately to the frequency that the antenna is meant to operate on and is calculable with techniques known in the art of antennas.
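The dipole length implied by "usually longer than the body of the wearable device" follows from the standard half-wave relation; a rough sketch for a few common bands, ignoring the velocity factor of the chain material (often around 0.95 for bare wire):

```python
C = 299_792_458  # speed of light, m/s

def half_wave_dipole_length_m(freq_hz: float) -> float:
    """Approximate physical length of a half-wave dipole at freq_hz.

    Real chain materials shorten this by a velocity factor, which is
    omitted here for simplicity.
    """
    return C / (2 * freq_hz)

for band, f in [("Bluetooth/WiFi 2.4 GHz", 2.4e9),
                ("WiFi 5 GHz", 5.0e9),
                ("Cellular 850 MHz", 850e6)]:
    print(f"{band}: {half_wave_dipole_length_m(f) * 100:.1f} cm")
# 2.4 GHz -> ~6.2 cm; 850 MHz -> ~17.6 cm, hence a dipole longer
# than the body of a small pendant-sized wearable.
```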
The wearable device 12 may also have a capacitive radar-enhanced multi-touch surface. This involves a capacitive pad grid that measures the locations of multiple fingers or conductive objects on an X, Y plane from each of the corners of the pad. It is combined with a radar sensor underneath the pad that emits radio pulses outwards that bounce off of the user's hands and land back on the pad. The combination allows the pad to measure the distance to the user's hand, yielding a Z-coordinate above the two-dimensional touch plane and giving the pad three-dimensional awareness in a manner that is mechanically similar to Doppler-shift ultrasound techniques.
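A minimal sketch of how a capacitive (x, y) estimate of the hand's planar position and the radar range could be fused into a third coordinate, assuming the radar emitter sits at the pad center and measures straight-line distance; the geometry and names are illustrative, not the patent's implementation.

```python
import math

def fuse_touch_and_radar(x_mm: float, y_mm: float,
                         radar_range_mm: float) -> tuple:
    """Combine a capacitive (x, y) position with a radar range
    measurement to produce a 3D coordinate above the pad.

    The radar measures the straight-line distance from the emitter at
    the pad's center to the hand, so the height z is recovered from
    the Pythagorean relation range^2 = x^2 + y^2 + z^2 (coordinates
    taken relative to the pad center).
    """
    z_sq = radar_range_mm ** 2 - (x_mm ** 2 + y_mm ** 2)
    z_mm = math.sqrt(z_sq) if z_sq > 0 else 0.0  # hand on the surface
    return (x_mm, y_mm, z_mm)

# A hand above the point (30, 40) with a 130 mm radar range sits at a
# height of sqrt(130^2 - 30^2 - 40^2) = 120 mm.
print(fuse_touch_and_radar(30, 40, 130))  # (30, 40, 120.0)
```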
Furthermore, the radar sensing provides a positional offset from the center point where the signals were emitted and, likewise, another analog waveform that can be fed into machine-learning software. This lets users train the pad to recognize specific movements of the hands in front of the device and perform free-space gestures that are invokable by the user through normal operation of the wearable device 12. Since there does not need to be a direct visualization of what is on the display, the touchpad gives users the ability to convey what should be selected, playing, or otherwise happening. The device is said to be contextual in nature, as the device has a certain degree of environmental awareness and, depending on what happens around the time a gesture is made, the device will perform the appropriate action.
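As a sketch of the train-and-recognize loop described above, the following toy nearest-centroid recognizer reduces a radar waveform to a few summary features. A real system would use a far richer model; all names here are hypothetical.

```python
import numpy as np

class GestureRecognizer:
    """Toy nearest-centroid recognizer for free-space gestures.

    Each training sample is a radar amplitude waveform; we reduce it
    to a small feature vector and average the vectors per gesture
    label. The train/recognize flow mirrors the description above.
    """

    def __init__(self):
        self.centroids = {}  # label -> mean feature vector

    @staticmethod
    def _features(waveform: np.ndarray) -> np.ndarray:
        diffs = np.diff(waveform)
        return np.array([waveform.mean(), waveform.std(),
                         np.abs(diffs).mean(), waveform.max()])

    def train(self, label: str, samples: list):
        """Average the feature vectors of the user's training samples."""
        feats = np.stack([self._features(s) for s in samples])
        self.centroids[label] = feats.mean(axis=0)

    def recognize(self, waveform: np.ndarray) -> str:
        """Return the trained gesture label closest to this waveform."""
        f = self._features(waveform)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(f - self.centroids[lbl]))
```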
The embedded form-factor of the wearable device is available to developers for the purpose of providing a modular development platform that can be used inside existing products or as a badge, for example, that is releasably securable to the clothing C of the user U. The embedded version may expose development endpoints such as the GPIO serial connector, wired and/or wireless network interface modules, USB, and a software API (application programming interface). It lets developers and hobbyists experiment with the platform and connect different modules appropriate to the projects they are working on. Businesses can embed the platform and deploy wearable device-compatible consumer products and IoT devices with their own functionality, purpose, and branding. The platform can be locked down for mass deployment by removing unneeded development modules, leaving only the required components for deployment inside products at scale. Embedding the wearable device ensures that the software protocol implementation is consistent among many different kinds of devices, including doors and toasters, which makes for a secure and open firmware platform from which all users of the Internet-of-Things can benefit.
The wearable device may be utilized in different contexts. In a pocketed context, users might put their wearable device in a closed space, or in their pocket. Similar to on-face detection seen in smartphones, the wearable device uses a light sensor to determine whether there is something directly in front of or behind it. If both sensors return a closed value of less than a few centimeters, then the device might enter pocketed context. In pocketed context, the wearable device will not respond to any gestures that the user would not reasonably perform in their pocket. This is both a safety mechanism to protect the device from accidental input and a benefit to the user, who might use pocketed interactions such as triple tapping on the device to silence a notification.
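The pocketed-context rule reduces to a simple threshold check on the two proximity readings; a sketch, with the threshold and the permitted-gesture set as illustrative placeholders since the patent gives no numbers.

```python
POCKET_THRESHOLD_CM = 3.0  # "less than a few centimeters"

def detect_context(front_cm: float, rear_cm: float) -> str:
    """Return the device context from the front- and rear-facing
    light/proximity sensors.

    If both sensors report something within a few centimeters, the
    device is assumed to be enclosed (pocket, bag, or other closed
    space) and enters pocketed context.
    """
    if front_cm < POCKET_THRESHOLD_CM and rear_cm < POCKET_THRESHOLD_CM:
        return "pocketed"
    return "normal"

# Gestures a user would reasonably perform in a pocket, e.g. a triple
# tap to silence a notification; everything else is ignored.
POCKETED_GESTURES = {"triple_tap"}

def should_handle(gesture: str, context: str) -> bool:
    return context != "pocketed" or gesture in POCKETED_GESTURES
```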
If the wearable device is an embedded device, the software might stay in embedded context because the wearable device software has detected that it is running on a device that is embedded inside another product. That product may or may not have a touch surface or offer any direct physical user interaction. However, in this mode, network connectivity and the ability to interact with the system remotely are supported and streamlined with the API and software development kit.
In another context, the wearable device may be utilized to carry data, analogously to bringing a desktop workspace along on-the-go. Even though the necklace wearable doesn't have a built-in screen, the device leverages a proximity-based wireless VNC protocol that it uses to display a graphical desktop on neighboring devices that are running the software. Once a display-bearing device (Laptop, Desktop, TV, or Smartphone) comes within range, the wearable device pairs to it and either shares a roaming workspace or displays a graphical shell on neighboring devices via a VNC protocol. Simply bringing the wearable within range and performing the hold gesture will cause the two devices to form a VNC connection. To effectuate many of these functions, named items are available to the user as a sort of vocal shortcut for physical and virtual objects. When a user names an object, the aspects of the object that make it unique are stored in a searchable mapping of unique identifiers or hashes, as they relate to specific data structures and types, such as file or device. Users might choose to create places where they can store files, and spaces where they can reference objects.
There might be a place where the user keeps their music, and when they are searching for a song, if they can name the place, the user can narrow results faster. If the user wanted to copy a device, they might say something like, "to my phone", and as expected, the lookup would result in the user's phone, but might have a completely different machine identifier. This is something that is common to voice assistants but is especially important for devices without a screen.
In another sense, if the user can describe what something looks like as they name it, or its properties, it can make the process of finding it again much easier and more organic than trying to remember the exact name of the object or trying to walk through the filesystem directory by directory.
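A minimal sketch of the searchable mapping of unique identifiers or hashes described above, assuming a SHA-256 fingerprint over the identifying attributes; the class and attribute names are hypothetical.

```python
import hashlib

class NamedItems:
    """Searchable mapping from spoken names to object fingerprints.

    Each object is keyed by a hash of the aspects that make it unique
    (its type plus identifying attributes), so a later lookup by name
    can recover the underlying file or device record.
    """

    def __init__(self):
        self._by_name = {}   # "my phone" -> fingerprint
        self._records = {}   # fingerprint -> {"type": ..., "attrs": ...}

    def name_object(self, name: str, obj_type: str, attrs: dict):
        fingerprint = hashlib.sha256(
            repr((obj_type, sorted(attrs.items()))).encode()).hexdigest()
        self._by_name[name.lower()] = fingerprint
        self._records[fingerprint] = {"type": obj_type, "attrs": attrs}

    def lookup(self, name: str):
        fp = self._by_name.get(name.lower())
        return self._records.get(fp)

items = NamedItems()
items.name_object("my phone", "device", {"mac": "aa:bb:cc:dd:ee:ff"})
print(items.lookup("my phone")["type"])  # device
```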
If a named object happens to be a device, then the wearable device should save the device's cryptographic signature and wireless profile. It may be the case that the device does not support pairing at all but speaks a common language such as WiFi or Bluetooth. Identifying the commonalities of wireless frames (building a profile) and saving the MAC address (a typically unique, but not guaranteed unique, identifier) can be used to find overlapping traits between devices. Say, for example, the user holds an ordinary cell phone in front of the wearable device and names it "Mobile Phone". Later on, when the wearable device does wireless discovery and identifies a device that has a different MAC address but is emitting frames with a similar wireless profile, the wearable device might ask the user, "Is what you're holding a Mobile Phone?"
This makes it easier to name categories of Internet-of-Things devices, so when the user goes to their friend's house and encounters a similar device, the wearable device already knows how to interact with it, as it matches something else the user has used before.
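As a sketch of that profile-matching idea, a simple overlap score over observed frame traits could drive the "Is what you're holding a Mobile Phone?" prompt. The trait names and the threshold are illustrative assumptions, not the patent's algorithm.

```python
def profile_similarity(profile_a: dict, profile_b: dict) -> float:
    """Fraction of wireless-frame traits two devices share.

    A profile here is a dict of observed traits (e.g. beacon interval,
    supported rates, vendor information elements). MAC addresses are
    deliberately excluded: they are typically unique but not
    guaranteed, so similar profiles with different MACs may still be
    the same kind of device, as in the Mobile Phone example above.
    """
    keys = set(profile_a) | set(profile_b)
    if not keys:
        return 0.0
    matches = sum(1 for k in keys if profile_a.get(k) == profile_b.get(k))
    return matches / len(keys)

known_phone = {"beacon_ms": 100, "rates": "1,2,5.5,11", "vendor_ie": "x"}
unknown = {"beacon_ms": 100, "rates": "1,2,5.5,11", "vendor_ie": "x"}

if profile_similarity(known_phone, unknown) > 0.8:  # illustrative threshold
    print("Is what you're holding a Mobile Phone?")
```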
Referring now to figure 4A, figure 4B, figure 4C, and figure 4D, in one embodiment, the larynx member 14 includes multiple layers, a layer 70, a layer 72, a layer 74, and a layer 76. With respect to the layer 70, a substrate 90 supports a charging interface 92, a battery 94, and a USB C interface 96. A power button 98 is provided as is a power LED 100. With respect to the layer 72, a substrate 110 supports an ACL 112, an OS 114, a CPU 116, and a BT 118, as well as a piezo array 120. A microphone 122 and a resistivity sensor 124 are also provided. With respect to the layer 74 and the layer 76, a piezoelectric sensing array is provided with an ultrasound gel being applied to the layer 74. A gel escape channel 130 provides communication to the exterior from the layer 74. A medical grade adhesive may be applied to the exterior. It should be appreciated that although a particular architecture of components is depicted, other architectures are within the teachings presented herein. Within the CPU, the memory is accessible to the processor and the memory includes processor-executable instructions that, when executed, cause the processor to process the piezoelectric data and sound data. Further, the processor-executable instructions cause the processor to apply machine learning to train and recognize meanings associated with the piezoelectric data and sound data.
The larynx member 14 provides an interface device that is also a portable computer, complete with a processor, memory, and a wireless chipset, that rests on the outside of the neck. The wireless sticker version of the larynx member 14 has a replaceable adhesive material and is small enough that it does not become a distraction. Users slide an inexpensive replaceable medical-grade adhesive sticker onto the bottom of the device and apply a small amount of an ultrasound gel directly on top of a piezoelectric sensing array. Any excess ultrasound gel will escape through an inset escape channel, which ensures that there are no air pockets between the piezoelectric array and the surface of the skin.
The medical grade adhesive holds the device securely on the outside of the neck and can be positioned so that it is facing the larynx, near to the laryngeal nerve, underneath the jaw, on the spot on the outside of the neck that moves with the larynx, mouth and tongue. As the user vocalizes, subvocalizes, or whispers to themselves (silent self-talk), the analog waveforms representing the movements of the larynx muscles and throat are captured by the ultrasound piezoelectric array and accelerometer. Any audible sound will be captured by one or more throat microphones that provide another analog data point for combined processing.
Since the device may be in a sticker form, there are resistivity leads for detecting perspiration on the outside of the skin that may weaken the medical adhesive bond. This makes the user of the device aware of when the adhesive sticker or patch needs to be replaced. For medical use, this can signify that the user is becoming anxious or reacting negatively to a stressor. This information is useful for early detection of psychoemotional states like anxiety or excitement. Doctors might find this information useful in gauging the severity of an anxiety disorder, or for measuring the frequency of panic attacks as seen in panic disorder. The larynx recognition technology is derived from several prior works in government and the medical industry, where the movements of the larynx that help to form human speech were captured as analog waveforms and conveyed to an external device using radio frequency identification (RFID) technology.
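Returning to the resistivity leads described at the start of the preceding paragraph, the replace-the-sticker alert reduces to a threshold check on skin-surface resistance; a sketch with illustrative baseline values, since the patent specifies no numbers.

```python
def adhesive_alert(resistance_ohms: float,
                   dry_baseline_ohms: float = 1_000_000,
                   wet_ratio: float = 0.2) -> bool:
    """Flag when skin-surface resistivity drops enough to suggest
    perspiration that may weaken the adhesive bond.

    Moist skin conducts better, so resistance falling well below the
    user's dry baseline is treated as a replace-the-sticker signal.
    The baseline and ratio here are illustrative placeholders.
    """
    return resistance_ohms < dry_baseline_ohms * wet_ratio

print(adhesive_alert(150_000))  # True -- sticker likely needs replacing
```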
The larynx sticker also measures muscular movement, accounting for the movement of the muscles in the throat that move with the tongue, and works on the outside of the neck to reconstruct silent speech through machine learning. As the user's tongue and larynx muscles move, the side of the neck moves, and the device is able to recognize silent speech patterns, or "subvocalizations," that the person produces during speech with a hybrid sensor machine-learning approach. This makes the larynx interface a non-invasive technology that can aid users with speech and may work for medical patients who have lost their ability to speak. It is also useful for normal users who wish to interface with their electronics silently, without the need for any audible speech.
The larynx member 14 is capable of providing audio from the microphones and raw data from the on-board sensors, but it can also pre-process these waveforms. It also yields processed ultrasound imagery from the piezoelectric array representing muscular movement in the larynx and muscles in the surrounding area. Muscular data is also generated as the tongue moves in order to form speech, even when the user is speaking silently. The raw waveforms are processed using a machine learning algorithm that can be trained to recognize specific words, phrases, and sounds. Ultrasound imagery from the piezoelectric array is converted into a matrix of reflected distances to individual parts of the muscle, similar to pixels on a computer monitor. These waveforms and distance matrices are run through machine learning in order to identify specific patterns that represent known words and phrases (even if they are of a non-language).
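The distance-matrix conversion follows from echo timing: a reflected distance is half the round-trip time multiplied by the speed of sound in tissue. A minimal sketch, assuming a conventional soft-tissue sound speed and a per-element array of round-trip times:

```python
import numpy as np

SPEED_OF_SOUND_TISSUE_M_S = 1540.0  # commonly cited average for soft tissue

def echoes_to_distance_matrix(round_trip_times_s: np.ndarray) -> np.ndarray:
    """Convert per-element echo round-trip times (rows x cols, one entry per
    transducer element, analogous to pixels) into one-way reflected distances."""
    return SPEED_OF_SOUND_TISSUE_M_S * round_trip_times_s / 2.0

# Example: echoes returning after ~13 microseconds correspond to
# reflectors roughly 1 cm deep.
print(echoes_to_distance_matrix(np.full((4, 4), 13e-6)))
```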
In one embodiment, the machine learning algorithm can be trained with a software training routine that asks the user to say phrases in their own language. As the device captures the waveform signatures for each word or phrase, the machine learning algorithm will produce numeric training vectors. As is common with machine learning, this process can occur in multiple iterations, and the training vectors improve over some period of time.
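A minimal sketch of such a training routine, assuming a simple spectral feature extractor (the actual features and vector format are not specified by the design):

```python
import numpy as np

def extract_features(waveform: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Assumed feature extractor: a coarse magnitude spectrum of one capture."""
    spectrum = np.abs(np.fft.rfft(waveform))
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

def training_vector(captures: list) -> np.ndarray:
    """Average the features of repeated captures of the same phrase; running
    the routine over more iterations refines the stored vector."""
    return np.stack([extract_features(w) for w in captures]).mean(axis=0)
```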
These vectors can be stored on an external device running the training software, or with the laryngeal interface, for use with other devices. The training vectors are used during normal operation to discern between known words based on waveform inputs. The device is not required to analyze the imagery from the ultrasound array visually, as the matrix of distances represents a depth bump map, or topographical view, of the larynx and throat muscles in action. Individual snapshots are taken at intervals over time and can be triggered when the accelerometer indicates that the user is speaking.
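Recognition against the stored vectors can then be as simple as a nearest-neighbor match; the similarity measure here is an assumption chosen for illustration:

```python
import numpy as np

def recognize(input_vector: np.ndarray, stored: dict) -> str:
    """Return the known phrase whose training vector is most similar
    (cosine similarity) to the incoming waveform features."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(stored, key=lambda phrase: cosine(input_vector, stored[phrase]))
```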
Raw waveforms or processed input can be returned to an external device, such as a wearable computer, that implements the same wireless protocol. For example, the larynx input device can be paired with an external computer over Bluetooth. The user would press a button on the device that causes it to enter pairing mode, and then the device can be paired with another computer running the recognition software. As previously mentioned, the training vectors can be stored on the larynx device so that the recognition is consistent across multiple associated Bluetooth devices.
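A sketch of the external-computer side of such a Bluetooth link, using the Python `bleak` BLE library; the advertised device name and the characteristic UUID are hypothetical, as the actual GATT layout is not specified:

```python
import asyncio
from bleak import BleakScanner, BleakClient

PROCESSED_INPUT_UUID = "0000beef-0000-1000-8000-00805f9b34fb"  # hypothetical characteristic

async def receive_processed_input():
    devices = await BleakScanner.discover()
    target = next(d for d in devices if d.name == "LarynxSticker")  # hypothetical name

    def on_notify(_, data: bytearray):
        print(f"received {len(data)} bytes of processed input")

    async with BleakClient(target) as client:
        await client.start_notify(PROCESSED_INPUT_UUID, on_notify)
        await asyncio.sleep(30.0)  # stream for 30 seconds

asyncio.run(receive_processed_input())
```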
In one embodiment, the subvocalization sticker hardware of the larynx member 14 consists of a low-energy ultrasonic piezoelectric array, or a singular piezoelectric transducer. It rests on the outside of the neck and has a medical grade adhesive that holds the device securely in place on the outside of the neck. It should be positioned so that it faces the larynx, near the laryngeal nerve bundle, underneath the jaw, on the spot on the outside of the neck where trained physicians and athletes are instructed to check their pulse rate. This area is ideal because there is a good view of the muscle tissue, data about the user's pulse rate is available, and the user can still turn their head side-to-side without significantly flexing the device out of place.
Typical frequency ranges for these transducers fall outside of human hearing ranges, above 20 kHz, and more specifically between 1 MHz and 10 MHz for this application. Transducers used in medical imaging range into higher frequencies depending on the desired depth and type of tissue. In this case, the tissue depth of penetration is minimal, as the diameter of the neck is limited. The device penetrates past the epidermal layer to measure the depth to the first layer of the platysma muscle, which wraps around the entire front and sides of the neck, connects directly to the underside of the skin, and plays an important role in facial expression. The device is meant to reach deeper and may be able to reach multiple muscle groups in the area, including the muscles of the larynx, which are directly responsible for movement within the voice box.
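The suitability of the 1–10 MHz band for shallow imaging can be checked with two standard relations, wavelength λ = c/f and round-trip time t = 2d/c, using the commonly cited soft-tissue sound speed of roughly 1540 m/s:

```python
SPEED_OF_SOUND_TISSUE_M_S = 1540.0  # commonly cited soft-tissue average

def wavelength_m(frequency_hz: float) -> float:
    return SPEED_OF_SOUND_TISSUE_M_S / frequency_hz

def round_trip_time_s(depth_m: float) -> float:
    return 2.0 * depth_m / SPEED_OF_SOUND_TISSUE_M_S

print(wavelength_m(5e6))        # ~0.31 mm at 5 MHz: fine detail at shallow depth
print(round_trip_time_s(0.01))  # ~13 microseconds to a reflector 1 cm deep
```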
This component emits inaudible tones at specific frequencies for the purpose of deep tissue imaging. The transducer is triggered as the device detects that the user is speaking. In this case, the user may be speaking normally or subvocalizing to the device, which causes multiple muscles in the sides of the neck to contract. The on-board accelerometer can be used to indicate that there is movement, especially when the mouth is open and the user is engaged in self-talk. Even though the sticker is a small, slim device that rests on the outside of the neck, it still has an embedded processor, memory, and a wireless chipset. The proposed design has pull-tab functionality, with a medical-grade adhesive material used to affix it to the neck. For situations where the sticker does not make full contact or is not flush with the binding site, the user can apply a tiny amount of an ultrasound gel directly between the skin and the piezoelectric sensing array. Any excess ultrasound gel escapes through an inset escape channel, which ensures that there are no air pockets between the piezoelectric array and the surface of the skin.
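A minimal sketch of the accelerometer trigger, assuming a windowed sample buffer and an illustrative energy threshold (neither is specified by the design):

```python
import numpy as np

ACCEL_RMS_THRESHOLD = 0.05  # assumed threshold, in g; tuned per device

def speaking_detected(accel_window: np.ndarray) -> bool:
    """Trigger the transducer when accelerometer energy, after crude gravity
    removal, suggests laryngeal or jaw movement."""
    centered = accel_window - accel_window.mean(axis=0)
    rms = float(np.sqrt((centered ** 2).mean()))
    return rms > ACCEL_RMS_THRESHOLD
```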
The applications of the wearable device 12 and the larynx member 14 are numerous. By way of example and not by way of limitation, laryngeal and mental illness, hybrid gestures, instant purchases, telephony, and casual navigation will be presented, along with a few other examples.
With respect to laryngeal and mental illness, disorders of the larynx, such as irritable larynx disorder, and psychiatric conditions, like anxiety, post-traumatic stress disorder, and schizophrenia, may cause unintended muscular movements resulting in partial subvocalization. The larynx device can help users recognize when they have lost focus or have begun unintentional self-talk that might be making their condition worse. If the person has an irritable larynx, or physical damage to the surrounding tissue, a doctor may have instructed them to avoid speaking in order to let the affected area heal. In the case of an anxiety disorder, users may be out of sync with reality, subvocalizing about their worries unintentionally. By bringing unintentional laryngeal movements to the user's attention, the device can help users train themselves to focus on their surroundings.
With respect to hybrid gestures, since the wearable device 12 does not have a screen, it draws on its ability to determine the cardinal directions of nearby devices. Although these directions do not necessarily relate to true cardinal directions like North, South, East, and West, the device understands the bearings of nearby external devices in relation to itself. For example, the user might decide that they want to share a piece of content, and instead of choosing a destination device on a menu screen, the user would perform a swipe gesture in the direction of the destination device. The user might also point the device itself in the direction of the destination device and perform a gesture that would take some action, such as a file copy or initiating the pairing process.
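A minimal sketch of matching a swipe to a destination: given the bearings of nearby devices relative to the wearable device, pick the device whose bearing is angularly closest to the swipe direction (device names and bearing values here are illustrative):

```python
def pick_target(swipe_bearing_deg: float, device_bearings: dict) -> str:
    """Select the nearby device whose bearing, relative to the wearable
    device, is closest to the swipe direction."""
    def angular_difference(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return min(device_bearings,
               key=lambda name: angular_difference(swipe_bearing_deg, device_bearings[name]))

print(pick_target(92.0, {"living-room-tv": 88.0, "kitchen-speaker": 201.0}))  # living-room-tv
```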
On that note, users might decide to perform a gesture at a nearby wireless access point for the purpose of key-pairing with that access point. This process might involve a protocol similar to WPS (Wi-Fi Protected Setup) for backward compatibility, or another wireless protocol. Additionally, users might share individual wireless keys by performing gestures at one another, which is analogous to simply writing a WiFi key down on a piece of paper and handing it to the other person.
With respect to instant purchases, users can query the prices of items at retail outlets or of commercial goods, or perform silent transactions. One potential usage of the system is to enable instant purchasing in stores. As the shopper looks through items on the store shelves, the shopper can consider buying an item by silently vocalizing a phrase like, "I want to buy these [item name]." The system will detect that pattern of text and select the named item in front of them. In order to complete the purchase, the user would perform a brief gesture, such as holding the wearable device for a few seconds, which begins a cancelation timer. If the user should later decide that he did not actually intend to buy the item, the user can say an abort phrase such as, "I didn't want to buy that," which will revert the item to its unpurchased state. Other similar use cases might involve using the wearable device to order food from favorite restaurants, scheduling pickup or delivery. EUNA, the AI assistant inside the wearable device discussed below, would be there to assist the user with purchasing and pricing and can help confirm the order. It can also help users perform financial transactions between one another.
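A sketch of the cancelation-timer flow; the 30-second window and the finalize/revert actions are assumptions for illustration:

```python
import threading

class PendingPurchase:
    """The hold gesture starts a cancelation timer; recognizing the abort
    phrase before it fires reverts the item to its unpurchased state."""

    def __init__(self, item: str, timeout_s: float = 30.0):
        self.item = item
        self._timer = threading.Timer(timeout_s, self._finalize)
        self._timer.start()

    def _finalize(self):
        print(f"Purchase completed: {self.item}")

    def abort(self):
        # Called when a phrase like "I didn't want to buy that." is recognized.
        self._timer.cancel()
        print(f"Purchase reverted: {self.item}")
```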
With respect to telephony, another use case involves instant messaging (SMS, email, and IM) and other similar methods of communication. Users should be able to conduct silent phone calls or interface with EUNA (the AI assistant inside the wearable device) silently to draft grammatically correct and/or spell-checked messages and send messages. EUNA also has a real-time automatic feature and, upon request, the ability to offer advice pertaining to the user's calendar, location, and nearby devices.
With respect to casual navigation, instead of inputting information into the device and receiving responses back, users silently communicate with an artificial intelligence (EUNA) that responds in a more casual and human-like way. This means someone could silently use the wearable device to navigate the city in real time. EUNA would be aware that the user has anxiety and would respond to casual requests when the user just wants to get from one meeting location to another:
"Josh, breathe, nothing to worry about. Do you see that red fire hydrant directly in front of you, about 10 more steps/paces ahead of you? Walk directly past it and make a left at the next street corner. Then proceed towards the green umbrella at the end of the block."
"You are about to make a left turn onto the opposite side of the street. The green umbrella is in front of the Starbucks." As the user approaches the Starbucks:
"Josh, do you see the green car, I believe a Prius, coming up on your left?"
EUNA would be aware of the turn signal indicators as well as the objects and colors of the objects, mobile devices, IoT devices, vehicles, and other wearable device users' clothing in proximity to the user, and other pertinent data for a more human-to-human casual navigation experience, which can be loaded from an external data set. Users can opt in to share information that improves the system. For example, the user might share their shirt color.
EUNA is a sentient AI that becomes the user's personalized virtual assistant. The AI travels with the user, which means that it has situational awareness and understands more than just the question at hand. It understands its surroundings, including environmental features, man-made structures, buildings, stores, commercial environments, retail environments, recreational facilities, restaurants, offices, vehicles, and the colors of objects. Another example would be helping guide someone through a crowd of people by referencing the nearby objects and outfits in order to guide the user to the intended person. Consider also that the owners of a transportation network decide to install named wireless devices that can help users navigate through a sea of devices, as an electron would flow across a metal in a sea of electrons.
With respect to a distributed search, users can silently query the wearable device for information from a search engine, storing data in a cryptographically secure fashion that is tied to a unique device identifier. The dynamic real-time location of the device that made the query is stored in a distributed data store, allowing the device to simply query the information from the distributed data center instead of repeatedly executing the same searches over and over. EUNA can also be configured to share pertinent data between devices in close proximity to one another when the wearable devices come within range.
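A minimal sketch of the cache-by-device-and-query idea, keying entries under a digest of the unique device identifier and the query text; the in-memory dictionary here is a stand-in for the distributed data center, and the TTL is an assumption:

```python
import hashlib
import time

class DistributedSearchCache:
    """Cache search results under a digest of the unique device identifier
    and the query, so repeated queries are served without re-searching."""

    def __init__(self, ttl_s: float = 3600.0):
        self.ttl_s = ttl_s
        self.store = {}  # stand-in for the distributed data center

    def _key(self, device_id: str, query: str) -> str:
        return hashlib.sha256(f"{device_id}:{query}".encode()).hexdigest()

    def get_or_fetch(self, device_id: str, query: str, fetch):
        key = self._key(device_id, query)
        cached = self.store.get(key)
        if cached and time.time() - cached[0] < self.ttl_s:
            return cached[1]              # served from the cache
        result = fetch(query)             # fall back to the search engine
        self.store[key] = (time.time(), result)
        return result
```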
With respect to socialization and dating, an example would be the user silently subvocalizing that the individual in front of him or her is attractive and wishing that the individual would talk to him. EUNA can actively inspire individuals to talk to one another when both parties have opted in, have chosen to share their profile information, and are looking to meet people in the area:
"The woman in front of you with the colorful backpack is attractive; she is a cyber security engineer at Amazon and works 1 block away from you. You should go speak to her, ask her about [user interest]." Users can query, and/or enable an automatic feature of the wearable device that listens to the user's subvocalizations and detects when the user thinks someone is attractive; if that individual also has the wearable device on and thinks that the user is attractive, then an automatic date request goes out to both parties alerting them to the mutual interest.
With respect to accessibility, the goal is to create a system which aids users in interacting with the world around them, recognizing danger, and remaining connected in an interconnected world. This technology can help users automatically find their friends and peers. For example, there might be a hospital nurse who needs a doctor for a patient, but the doctor is not in the radiology department where the user expected. The system can also detect nearby obstacles with the radar chipsets, which can alert users that they are about to make a mistake. Users who wish to call for help can place emergency calls, but there should also be an audible/non-audible feedback mechanism on board to let the user know that help is on the way.
With respect to security and authentication, users should be able to use gestures in combination with the laryngeal interface in order to unlock electronic doors, garages, and gates. The system is also useful for cryptographically secure authentication between IoT devices and can be used as an authentication badge with secondary-factor authentication (2FA) support built into the device. As an example of this, the owner of the wearable device might draw their unlock code on the touch surface, or look at a door, and subvocalize an opening phrase like, "let me in," or a locking phrase like, "lock the door." The phrases can be configured, but there should be sane defaults so that there are common opening and closing phrases.
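A minimal sketch of the two-factor check, requiring both the drawn unlock code and the subvocalized opening phrase; the enrolled gesture label and the phrase set are hypothetical:

```python
import hashlib
import hmac

STORED_GESTURE_DIGEST = hashlib.sha256(b"Z-swipe").hexdigest()  # hypothetical enrolled code
OPENING_PHRASES = {"let me in"}  # configurable, with sane defaults

def authorize_unlock(gesture: str, subvocal_phrase: str) -> bool:
    """Both factors must match: the drawn unlock code and the subvocalized
    opening phrase; the gesture check uses a constant-time comparison."""
    gesture_ok = hmac.compare_digest(
        hashlib.sha256(gesture.encode()).hexdigest(), STORED_GESTURE_DIGEST)
    return gesture_ok and subvocal_phrase.strip().lower() in OPENING_PHRASES
```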
Figure 5 conceptually illustrates the software architecture of an environmental control application 150 of some embodiments that may utilize the wearable device 12 and the larynx member 14. In some embodiments, the environmental control application 150 is a stand-alone application or is integrated into another application, while in other embodiments the application might be implemented within an operating system 190. Furthermore, in some embodiments, the environmental control application 150 is provided as part of a server-based solution or a cloud-based solution. In some such embodiments, the application is provided via a thin client. That is, the application runs on a server while a user interacts with the application via a separate machine remote from the server. In other such embodiments, the application is provided via a thick client. That is, the application is distributed from the server to the client machine and runs on the client machine. In other embodiments, the application is partially run on each of the wearable device 12 and the larynx member 14. The environmental control application 150 includes a user interface (UI) interaction and generation module 152, user interface tools 154, authentication modules 156, wireless device-to-device awareness modules 158, contextual gestures modules 160, vocal modules 162, subvocal modules 164, interactive navigation modules 166, mind/body modules 168, retail modules 170, and telephony/video calls modules 172. In some embodiments, storages 180, 182, and 184 are all stored in one physical storage. In other embodiments, the storages 180, 182, and 184 are in separate physical storages, or one of the storages is in one physical storage while another is in a different physical storage.
The UI interaction and generation module 152 generates a user interface that allows the end user to utilize the wearable device 12 and the larynx member 14. During use, various modules may be called to execute the functions described herein. In the illustrated embodiment, figure 5 also includes an operating system 190 that includes input device drivers 192 and output device drivers 194. In some embodiments, as illustrated, the input device drivers 192 and the output device drivers 194 are part of the operating system 190 even when the environmental control application 150 is an application separate from the operating system 190.
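One way such a module could route user input to the modules enumerated above is a simple handler registry; the event types and handler bodies here are purely illustrative, not taken from the design:

```python
HANDLERS = {
    "gesture": lambda event: print("contextual gestures modules 160 handle", event),
    "subvocal": lambda event: print("subvocal modules 164 handle", event),
    "telephony": lambda event: print("telephony/video calls modules 172 handle", event),
}

def dispatch(event_type: str, event: dict):
    handler = HANDLERS.get(event_type)
    if handler is None:
        raise ValueError(f"no module registered for {event_type}")
    handler(event)

dispatch("subvocal", {"phrase": "lock the door"})
```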
The order of execution or performance of the methods and data flows illustrated and described herein is not essential, unless otherwise specified. That is, elements of the methods and data flows may be performed in any order, unless otherwise specified, and the methods may include more or fewer elements than those disclosed herein. For example, it is contemplated that executing or performing a particular element before, contemporaneously with, or after another element are all possible sequences of execution.
While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is, therefore, intended that the appended claims encompass any such modifications or embodiments.

Claims

What is claimed is:
1. A wearable device for a user, the wearable device comprising:
a hardware layer having a processor, memory, and transceiver, the processor, memory, and transceiver being interconnected by a busing architecture, the transceiver receiving laryngeal data relative to the user;
a touch layer located in communication with the hardware layer, the touch layer having a capacitive touch surface that receives contact data;
a radio layer located in communication with the hardware layer, the radio layer having an antenna array, the radio layer including a plurality of antennas that receive radio data, the radio data identifying the points of origin of incoming radio signals; and the memory accessible to the processor, the memory including processor-executable instructions that, when executed, cause the processor to:
process the radio data and the contact data to at least one of increase internet-of-things awareness and execute a gesture command originating from the user, and
process the laryngeal data to execute a vocalization command originating from the user.
2. The wearable device as recited in claim 1, further comprising a form factor of a badge, the badge being releasably securable to clothing of the user.
3. The wearable device as recited in claim 1, further comprising an electrical layer in communication with the hardware layer, the electrical layer comprising a power source and a heatsink.
4. The wearable device as recited in claim 1, wherein the internet-of-things awareness further comprises authentication relative to credentials.
5. The wearable device as recited in claim 1, wherein the internet-of-things awareness further comprises wireless device-to-device awareness.
6. The wearable device as recited in claim 1, wherein the internet-of-things awareness further comprises retail activity.
7. The wearable device as recited in claim 1, wherein the internet-of-things awareness further comprises a telephony application.
8. The wearable device as recited in claim 1, wherein the internet-of-things awareness further comprises interactive navigation.
9. A wearable device for a user, the wearable device comprising:
a hardware layer having a processor, memory, and transceiver, the processor, memory, and transceiver being interconnected by a busing architecture, the transceiver receiving laryngeal data relative to the user from a larynx member;
a touch layer located in communication with the hardware layer, the touch layer having a capacitive touch surface that receives contact data;
a radio layer located in communication with the hardware layer, the radio layer having an antenna array, the radio layer including a plurality of antennas that receive radio data, the radio data identifying the points of origin of incoming radio signals; the memory accessible to the processor, the memory including processor-executable instructions that, when executed, cause the processor to:
process the radio data and the contact data to at least one of increase internet-of-things awareness and execute a gesture command originating from the user, and
process the laryngeal data to execute a vocalization command originating from the user; and
the larynx member including an ultrasonic piezoelectric member to measure laryngeal data.
10. A wearable device for a user, the wearable device comprising:
a hardware layer having a processor, memory, and transceiver, the processor, memory, and transceiver being interconnected by a busing architecture, the transceiver receiving laryngeal data relative to the user;
a touch layer located in communication with the hardware layer, the touch layer having a capacitive touch surface that receives contact data;
a radio layer located in communication with the hardware layer, the radio layer having an antenna array, the radio layer including a plurality of antennas that receive radio data, the radio data identifying the points of origin of incoming radio signals; and the memory accessible to the processor, the memory including processor-executable instructions that, when executed, cause the processor to:
process the laryngeal data to execute a vocalization command originating from the user.
PCT/US2019/028818 2018-04-23 2019-04-23 Wearable device WO2019209894A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/897,893 US20200341543A1 (en) 2018-04-23 2020-06-10 Wearable device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862661573P 2018-04-23 2018-04-23
US62/661,573 2018-04-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/897,893 Continuation-In-Part US20200341543A1 (en) 2018-04-23 2020-06-10 Wearable device

Publications (1)

Publication Number Publication Date
WO2019209894A1 true WO2019209894A1 (en) 2019-10-31

Family

ID=68294248

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/028818 WO2019209894A1 (en) 2018-04-23 2019-04-23 Wearable device

Country Status (1)

Country Link
WO (1) WO2019209894A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060267773A1 (en) * 2005-05-24 2006-11-30 V.H. Blackinton & Co., Inc. Badge verification device
US20130185077A1 (en) * 2012-01-12 2013-07-18 Inha-Industry Partnership Institute Device for supplementing voice and method for controlling the same
US20150029661A1 (en) * 2014-10-15 2015-01-29 AzTrong Inc. Wearable portable electronic device with heat conducting path
US20170076272A1 (en) * 2002-10-01 2017-03-16 Andrew H. B. Zhou Systems and methods for mobile application, wearable application, transactional messaging, calling, digital multimedia capture and payment transactions
WO2017099828A1 (en) * 2015-12-07 2017-06-15 Intel IP Corporation Devices and methods of mobility enhancement and wearable device path selection
US20170192743A1 (en) * 2016-01-06 2017-07-06 Samsung Electronics Co., Ltd. Ear wearable type wireless device and system supporting the same
WO2017140812A1 (en) * 2016-02-18 2017-08-24 Koninklijke Philips N.V. Device, system and method for detection and monitoring of dysphagia of a subject
WO2018053493A1 (en) * 2016-09-19 2018-03-22 Wisconsin Alumni Research Foundation System and method for monitoring airflow in a trachea with ultrasound

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19793421

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19793421

Country of ref document: EP

Kind code of ref document: A1