WO2019209894A1 - Wearable device - Google Patents

Wearable device

Info

Publication number
WO2019209894A1
WO2019209894A1 (PCT/US2019/028818)
Authority
WO
WIPO (PCT)
Prior art keywords
user
wearable device
data
layer
radio
Prior art date
Application number
PCT/US2019/028818
Other languages
English (en)
Inventor
Joshua Ian COHEN
Lucas Kane THORESEN
Jason Lucas
Gavin Arthur JOHNSON
Original Assignee
SCRRD, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SCRRD, Inc. filed Critical SCRRD, Inc.
Publication of WO2019209894A1 publication Critical patent/WO2019209894A1/fr
Priority to US16/897,893 priority Critical patent/US20200341543A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • This invention relates, in general, to wearable devices and, in particular, to enhanced performance in wearable devices that provide context within a user's environment.
  • Wearable technology has a variety of applications that grows as the field itself expands. It appears prominently in consumer electronics with the popularization of the smartwatch and activity tracker. Apart from commercial uses, wearable technology is being incorporated into navigation systems, advanced textiles, healthcare, and an ever-increasing number of applications. As a result of growing needs and expanding consumer preference, there is a need for more and improved wearable technology.
  • the wearable device includes a hardware layer, a touch layer having a capacitive touch surface that receives contact data, and a radio layer including a plurality of antennas that receive radio data.
  • the wearable device processes the radio data and the contact data to at least one of increase internet-of-things awareness and execute a gesture command originating from the user.
  • the wearable device also processes the laryngeal data to execute a vocalization command originating from the user.
  • Figure 1 is a schematic diagram of a user wearing one embodiment of a wearable device according to the teachings presented herein;
  • Figure 2 is a schematic diagram of the user depicted in figure 1 wearing the wearable device in additional detail;
  • Figure 3A, figure 3B, figure 3C, and figure 3D are each schematic diagrams of one embodiment of a portion of the wearable device;
  • Figure 4A, figure 4B, figure 4C, and figure 4D are each schematic diagrams of one embodiment of a portion of a larynx member;
  • Figure 5 is a conceptual module diagram depicting a software architecture of an environmental control application.
  • Referring to figure 1 and figure 2, therein is depicted one embodiment of a system including wearable technology that is conceptually illustrated and generally designated 10.
  • a user U has a torso T and an arm A as well as a neck N and a head H.
  • the user U is wearing clothing C.
  • the user U is wearing a wearable device 12 on the clothing C and a larynx member 14 is affixed to the neck N.
  • the wearable device 12 processes radio data and contact data to at least one of increase internet-of-things (IOT) awareness and execute a gesture command originating from the user U.
  • the wearable device 12 also processes the laryngeal data to execute a vocalization command originating from the user U.
  • the wearable device 12 processes the radio data and the contact data to detect the arm A movement as shown by arrow MA.
  • the wearable device 12 processes the laryngeal data received from the larynx member 14 to detect audible vocals VA and even sub-audible vocals VS. Based on the detected vocals, the wearable device 12 may execute a command or initiate a telephony application, including transmission of an audible vocalization or transmission of a sub-audible vocalization. Also, audible commands or sub-audible commands may be enabled.
  • the wearable device 12 may also process the laryngeal data that includes movement of the head H as shown by MH, including the detection of biometrics that may indicate what the user is thinking or feeling as shown by element I.
  • the wearable device 12 processes the radio data and the contact data to provide authentication relative to credentials, for example. Such authentication permits the user U to go through the entrance E. Additionally, as shown in figure 2, the user U, by way of the processing of the radio data and the contact data, has device-to-device awareness of the individual I1 having a wearable device and the individual I2 having a smart device. Using interactive navigation as shown by NAV and enabled by the processing of the radio data and the contact data, the user U is able to visit the purchasing area P and select a gift G for purchase, a purchase that may be enabled by the wearable device 12.
  • the wearable device 12 can be used for everyday computing, authentication, telephony, navigation, and as an entry point into augmented reality space. It is designed to excel in performance, device awareness, security, design, user interactions, and accessibility. The wearable device makes use of multiple antennae and wireless tracking technology in order to provide enhanced location awareness. This means that the wearable device understands cardinal directions or bearings of nearby devices for use in software applications on the device, and to support location-aware gestures.
  • the wearable device 12 works hand in hand with the larynx member 14 and, as will be discussed in further detail hereinbelow, includes a radar-enhanced capacitive touch surface for maximum accessibility for all users.
  • the on-board radar chip understands precise hand movements and can detect objects that are directly in front of the device.
  • the system lets users interact with the world around them in new ways.
  • the wearable device 12 also lets users bring their desktop with them wherever they go. It should be appreciated that even though a wearable device 12 is depicted on the clothing C, the wearable device 12 may be on a necklace, for example, or otherwise associated with the user U. Even though the necklace wearable doesn't have a built-in screen, the device leverages a proximity-based wireless VNC protocol in order to display the graphical desktop on neighboring device displays. Once a display-bearing device (Laptop, Desktop, TV, or Smartphone) has been paired, bringing the wearable within range and performing the hold gesture will cause the two devices to form a VNC connection.
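  • The hand-off logic can be pictured as a short sketch. The following is a minimal, hypothetical illustration of the hold-gesture VNC pairing described above; the threshold, timing, and method names (start_vnc_session, for example) are assumptions, not the actual protocol.

```python
# Hypothetical sketch of the proximity-based VNC hand-off; names and
# thresholds are illustrative assumptions, not the patent's protocol.

RSSI_THRESHOLD_DBM = -55   # "within range" ~ a strong short-range signal
HOLD_SECONDS = 2.0         # assumed duration of the hold gesture

def maybe_connect(display_device, rssi_dbm, hold_duration_s, paired_ids):
    """Form a VNC connection when a previously paired display is near
    and the user performs the hold gesture on the touch surface."""
    if display_device.id not in paired_ids:
        return None                      # only previously paired displays
    if rssi_dbm < RSSI_THRESHOLD_DBM:
        return None                      # not close enough yet
    if hold_duration_s < HOLD_SECONDS:
        return None                      # gesture not completed
    return display_device.start_vnc_session()  # desktop appears remotely
```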
  • the wearable device 12 includes an outer touch layer 20, an interior radio layer 22, an interior hardware layer 24, and an exterior electrical layer 26.
  • the outer touch layer 20, the interior radio layer 22, the interior hardware layer 24, and the exterior electrical layer 26 are interconnected.
  • the outer touch layer 20 includes a capacitive touch surface 30.
  • the interior radio layer 22 includes a substrate 40 securing a cell antenna 42, which may be a transceiver, and an induction coil 44 as well as, in one embodiment, spaced and segmented antennas 46, 48, 50, 52.
  • the capacitive touch surface 30 in conjunction with the antennas 46, 48, 50, 52 can determine the direction of signals as indicated by element 54.
  • the interior hardware layer 24 includes a substrate 56 having components 58 including Amb, LEDs, Mic, Ramdisk, Cell, Flash, WiFi, Mem, CPU, BT, Radar, Audio, Clock, Rocker, Accel, Charge Circuit, IR, USB C, and GPIO, for example.
  • the memory is accessible to the processor and the memory includes processor-executable instructions that, when executed, cause the processor to process the radio data and the contact data to at least one of increase internet-of-things awareness and execute a gesture command originating from the user. Further, the processor-executable instructions cause the processor to process the laryngeal data to execute a vocalization command originating from the user.
  • the exterior electrical layer includes a substrate 60 having a shielded battery 62 and a heatsink 64.
  • the wearable device 12 may come with one or more segmented antennas that can identify the points of origin of incoming radio signals by using low observable tracking techniques and angle-of-arrival techniques seen in phased-array radar systems. Segments are spaced at squared increments apart from each other at one-eighth, one-quarter, and one-half wavelength distances, as is common in phased arrays.
  • the wearable device 12 may also use the phased-array technology to steer wireless signals towards a destination access-point or device. It might programmatically choose to steer signals back towards another device in the same direction that the signals were received in, adjusted for changes in the device's position, improving connectivity and providing some degree of wireless privacy.
  • phased-arrays are not usually used in consumer products, especially portable devices, and that low observable techniques improve the relevance of phased-arrays for mobile applications.
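  • As a concrete illustration of the angle-of-arrival technique mentioned above, the bearing of an incoming signal can be recovered from the phase difference between two antenna segments. This is a minimal two-element sketch under ideal far-field assumptions; a real phased array uses many segments and calibration.

```python
import numpy as np

def angle_of_arrival(phase_delta_rad, spacing_wavelengths=0.5):
    """Bearing (degrees off boresight) of an incoming plane wave from the
    phase difference measured between two segments spaced a fraction of a
    wavelength apart, per sin(theta) = delta_phi * lambda / (2*pi*d)."""
    sin_theta = phase_delta_rad / (2 * np.pi * spacing_wavelengths)
    return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))

# A quarter-cycle phase lead across a half-wavelength baseline:
print(angle_of_arrival(np.pi / 2))  # ~30 degrees off boresight
```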
  • directional awareness of neighboring devices allows for far more complex gestures, and opens a world of possibilities for navigation and augmented reality technology, allowing the device to visualize the wireless space, and all of those 'WiFi radar' apps to actually work.
  • a chaintenna, that is, an antenna dipole that has been strung into the chain, that is relatively low-power and clips onto the sides of the wearable device.
  • the antenna can be used for cellular, WiFi, or Bluetooth radios, and keeps the device facing forward. It is high-gain, as the dipole extends through the chain, and is usually longer than the body of the wearable device.
  • the dipole might also be insulated separately and capped at either end.
  • the chaintenna might have some degree of resistivity between the conductive leads that scales appropriately to the frequency that the antenna is meant to operate on and is calculable with techniques known in the art of antennas.
  • the wearable device 12 may also have a capacitive radar-enhanced multi-touch surface. This involves a capacitive pad grid that measures the locations of multiple fingers or conductive objects on an X, Y plane from each of the corners of the pad. It is combined with a radar sensor underneath the pad that emits radio pulses outwards that bounce off of the user's hands and land back on the pad. The combination allows the pad to measure the distance to the user's hand, yielding a Z-coordinate orthogonal to the two-dimensional touch plane, giving the pad three-dimensional awareness in a manner that is mechanically similar to Doppler-shift ultrasound techniques.
  • the radar sensing provides a positional offset from the center point where the signals were emitted, and likewise, another analog waveform that can be fed into machine-learning software.
  • This lets users train the pad to recognize specific movements of the hands in front of the device and perform free-space gestures that are invokable by the user through normal operation of the wearable device 12. Since there does not need to be a direct visualization of what is on the display, the touchpad gives users the ability to convey what should be selected, playing, or otherwise happening.
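  • Conceptually, the fusion described above pairs the pad's in-plane coordinates with the radar's out-of-plane range. This is a deliberately simplified time-of-flight sketch; an actual short-range radar chip would use FMCW or Doppler processing rather than a raw round-trip measurement, and the helper names are assumptions.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def touch_point_3d(pad_xy, radar_round_trip_s):
    """Fuse the capacitive pad's contact point with the radar range.
    The pulse travels out and back, so the one-way distance is c*t/2."""
    x, y = pad_xy
    z = SPEED_OF_LIGHT_M_S * radar_round_trip_s / 2.0
    return (x, y, z)   # three-dimensional touch coordinate
```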
  • the device is said to be contextual in nature, as the device has a certain degree of environmental awareness and depending on what happens around the time a gesture is made, the device will perform the appropriate action.
  • the embedded form-factor of the wearable device is available to developers for the purpose of providing a modular development platform that can be used inside existing products or as a badge, for example, that is releasably securable to the clothing C of the user U.
  • the embedded version may expose development endpoints such as the GPIO serial connector, wired and/or wireless network interface modules, USB, and a software API (application programming interface). It lets developers and hobbyists experiment with the platform, and connect different modules appropriate to the projects they are working on. Businesses can embed the platform and deploy wearable device-compatible consumer products and IoT devices with their own functionality, purpose, and branding. The platform can be locked down for mass deployment by removing unneeded development modules, leaving only the required components for deployment inside products at scale. Embedding the wearable device ensures that the software protocol implementation is consistent among many different kinds of devices, including doors and toasters, which makes for a secure and open firmware platform from which all users of the Internet-of-Things can benefit.
  • the wearable device may be utilized in different contexts.
  • In a pocketed context, users might put their wearable device in a closed space, or in their pocket. Similar to on-face detection seen in smartphones, the wearable device uses a light sensor to determine whether there is something directly in front of or behind it. If both sensors return a closed value of less than a few centimeters, then the device might enter pocketed context. In pocketed context, the wearable device will not respond to any gestures that the user would not reasonably perform in their pocket. This is both a safety mechanism to protect the device from accidental input and works to the benefit of the user, who might use pocketed interactions such as triple-tapping on the device to silence a notification.
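  • A minimal sketch of the pocketed-context decision, assuming hypothetical front and back proximity readings in centimeters:

```python
POCKET_DISTANCE_CM = 3.0   # "less than a few centimeters" (assumed value)

def infer_context(front_cm, back_cm):
    """Enter pocketed context only when both sensors read closed values."""
    if front_cm < POCKET_DISTANCE_CM and back_cm < POCKET_DISTANCE_CM:
        return "pocketed"   # ignore gestures implausible inside a pocket
    return "worn"
```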
  • the software might stay in embedded context because the wearable device software has detected that it's running on a device that is embedded inside another product. That product may or may not have a touch surface or offer any direct physical user interaction.
  • network connectivity, and the ability to interact with the system remotely is supported and streamlined with the API and software development kit.
  • the wearable device may be utilized to carry data, analogously to bringing your desktop workspace with you on-the-go. Even though the necklace wearable doesn't have a built-in screen, the device leverages a proximity-based wireless VNC protocol that it uses to display a graphical desktop on neighboring devices that are running the software. Once a display-bearing device (Laptop, Desktop, TV, or Smartphone) comes within range, the wearable device pairs with it and either shares a roaming workspace or displays a graphical shell on neighboring devices via a VNC protocol. Simply bringing the wearable within range and performing the hold gesture will cause the two devices to form a VNC connection.
  • named items are available to the user as a sort of vocal shortcut for physical and virtual objects.
  • a user names an object
  • the aspects of the object that make it unique are stored in a searchable mapping of unique identifiers or hashes, as they relate to specific data structures and types, such as file or device. Users might choose to create places where they can store files, and spaces where they can reference objects.
  • the wearable device should save the device's cryptographic signature and wireless profile. It may be the case that the device does not support pairing at all but speaks a common language such as WiFi or Bluetooth. Identifying the commonalities of wireless frames (building a profile) and saving the MAC address (a typically unique, though not guaranteed unique, identifier) can be used to find overlapping traits between devices. Say, for example, the user holds an ordinary cell phone in front of the wearable device and names it "Mobile Phone". Later on, when the wearable device does wireless discovery and identifies a device that has a different MAC address, but is emitting frames with a similar wireless profile, the wearable device might ask the user, "Is what you're holding a Mobile Phone?"
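  • One way to picture the profile-matching step is as a set-overlap comparison between saved and newly observed frame traits. The similarity measure and the 0.8 threshold below are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class WirelessProfile:
    name: str               # the user's chosen name, e.g. "Mobile Phone"
    mac: str                # typically unique, but not guaranteed
    traits: set = field(default_factory=set)   # observed frame traits

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two sets of observed frame traits."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def maybe_prompt(saved: WirelessProfile, seen_mac: str, seen_traits: set):
    if seen_mac == saved.mac:
        return f"Recognized {saved.name}."
    if similarity(saved.traits, seen_traits) > 0.8:   # assumed threshold
        return f"Is what you're holding a {saved.name}?"
    return None
```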
  • the larynx member 14 includes multiple layers, a layer 70, a layer 72, a layer 74, and a layer 76.
  • a substrate 90 supports a charging interface 92, a battery 94, and a USB C interface 96.
  • a power button 98 is provided as is a power LED 100.
  • a substrate 110 supports an ACL 112, an OS 114, a CPU 116, and a BT 118, as well as a piezo array 120.
  • a microphone 122 and a resistivity sensor 124 are also provided.
  • a piezoelectric sensing array is provided with an ultrasound gel being applied to layer 74.
  • a gel escape channel 130 provides communication to the exterior from the layer 74.
  • a medical grade adhesive may be applied to the exterior. It should be appreciated that although a particular architecture of components is depicted, other architectures are within the teachings presented herein.
  • the memory is accessible to the processor and the memory includes processor-executable instructions that, when executed, cause the processor to process the piezoelectric data and sound data. Further, the processor-executable instructions cause the processor to apply machine learning to train and recognize meanings associated with the piezoelectric data and sound data.
  • the larynx member 14 provides an interface device that is also a portable computer, complete with a processor, memory, and a wireless chipset, that rests on the outside of the neck.
  • the wireless sticker version of the larynx member 14 has a replaceable adhesive material and is small enough that it does not become a distraction. Users slide an inexpensive replaceable medical-grade adhesive sticker onto the bottom of the device and apply a small amount of an ultrasound gel directly on top of a piezoelectric sensing array. Any excess ultrasound gel will escape through an inset escape channel, which ensures that there are no air pockets between the piezoelectric array and the surface of the skin.
  • the medical grade adhesive holds the device securely on the outside of the neck and can be positioned so that it is facing the larynx, near to the laryngeal nerve, underneath the jaw, on the spot on the outside of the neck that moves with the larynx, mouth and tongue.
  • the analog waveforms representing the movements of the larynx muscles and throat are captured by the ultrasound piezoelectric array and accelerometer. Any audible sound will be captured by one or more throat microphones that provide another analog data point for combined processing.
  • Since the device may be in a sticker form, there are resistivity leads for detecting perspiration on the outside of the skin that may weaken the medical adhesive bond. This makes the user of the device aware of when the adhesive sticker or patch needs to be replaced. For medical use, this can signify that the user is becoming anxious or reacting negatively to a stressor. This information is useful for early detection of psychoemotional states like anxiety or excitement. Doctors might find this information useful in gauging the severity of an anxiety disorder, or for measuring the frequency of panic attacks as seen in panic disorder.
  • the larynx recognition technology is derived from several prior works in government and the medical industry, where the movements of the larynx that help to form human speech were captured as analog waveforms and conveyed to an external device using radio-frequency identification (RFID) technology.
  • the larynx sticker also measures muscular movement, except that it accounts, in terms of machine learning, for the movement of the muscles in the throat that move with the tongue, and works from the outside of the neck to reconstruct silent speech.
  • the side of the neck moves, and the device is able to recognize silent speech patterns or 'subvocalizations' that the person will produce during speech, with a hybrid sensor machine-learning approach.
  • the larynx member 14 is capable of providing audio from the microphones and raw data from the on-board sensors, but it can also pre-process these waveforms. It also yields processed ultrasound imagery from the piezoelectric array representing muscular movement in the larynx and muscles in the surrounding area. Muscular data is also generated as the tongue moves in order to form speech, even when the user is speaking silently.
  • the raw waveforms are processed using a machine learning algorithm that can be trained to recognize specific words, phrases, and sounds. Ultrasound imagery from the piezoelectric array is converted into a matrix of reflected distances to individual parts of the muscle, similar to pixels on a computer monitor. These waveforms and distance matrices are run through machine learning in order to identify specific patterns that represent known words and phrases (even if they are of a non-language).
  • the machine learning algorithm can be trained with a software training routine that asks the user to say phrases in their own language. As the device captures the waveform signatures for each word or phrase, the machine learning algorithm will produce numeric training vectors. As is common with machine learning, this process can occur in multiple iterations, and the training vectors improve over some period of time.
  • These vectors can be stored on an external device running the training software, or with the laryngeal interface, for use with other devices. These training vectors are used during normal operation to discern between known words based on waveform inputs.
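  • The training-vector idea can be sketched as a nearest-centroid lookup: each phrase is represented by the mean of its captured feature vectors, and recognition returns the closest stored vector. The real system would use a full machine-learning pipeline; this only shows the data flow.

```python
import numpy as np

def train(examples):
    """examples: {"phrase": [feature_vector, ...]} -> centroid per phrase."""
    return {phrase: np.mean(np.stack(vectors), axis=0)
            for phrase, vectors in examples.items()}

def recognize(feature_vector, centroids):
    """Return the known phrase whose training vector is closest."""
    return min(centroids,
               key=lambda p: np.linalg.norm(centroids[p] - feature_vector))
```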
  • the device is not required to analyze the imagery from the ultrasound array visually, as the matrix of distances represents a depth bump map or topographical view of the larynx and throat muscles in action. Individual snapshots are taken at intervals over time and can be triggered when the accelerometer indicates that the user is speaking.
  • Raw waveforms or processed input can be returned to an external device, such as a wearable computer, that implements the same wireless protocol.
  • the larynx input device can be paired with an external computer over Bluetooth. The user would press a button on the device that causes it to enter pairing mode, and then the device can be paired with another computer running the recognition software.
  • the training vectors can be stored on the larynx device so that the recognition is consistent across multiple associated Bluetooth devices.
  • the subvocalization sticker hardware of the larynx member 14 consists of a low-energy ultrasonic piezoelectric array, or singular piezoelectric transducer. It rests on the outside of the neck and has a medical grade adhesive that holds the device securely in place on the outside of the neck. It should be positioned so that it is facing the larynx, near to the laryngeal nerve bundle, underneath the jaw, on the spot on the outside of the neck where trained physicians and athletes are instructed to check their pulse rate. This area is ideal because there is a good view of the muscle tissue, data about the user's pulse rate is available, and the user can still turn their head side-to-side without significantly flexing the device out of place.
  • Transducers used in medical imaging range into higher frequencies depending on the desired depth and type of tissue. In this case, the tissue depth of penetration is minimal, as the diameter of the neck is limited.
  • the device penetrates past the epidermal layer to measure the depth to the first layer of the platysma muscle, which wraps around the entire front and sides of the neck, connects directly to the underside of the skin, and plays an important role in facial expression.
  • the device is meant to reach deeper and may be able to reach multiple muscle groups in the area, including the muscles of the larynx, which are directly responsible for movement within the voice box.
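  • The shallow-imaging geometry reduces to simple pulse-echo arithmetic. Assuming the standard soft-tissue sound speed of roughly 1540 m/s (a textbook value, not one stated in the text), the depth to a reflecting muscle layer follows from the round-trip time:

```python
TISSUE_SOUND_SPEED_M_S = 1540.0   # typical soft-tissue value (assumed)

def echo_depth_mm(round_trip_s):
    """One-way depth to a reflecting layer from a pulse-echo round trip."""
    return TISSUE_SOUND_SPEED_M_S * round_trip_s / 2.0 * 1000.0

# Example: a 13 microsecond round trip corresponds to about 10 mm,
# comfortably past the epidermis and into the platysma region.
print(echo_depth_mm(13e-6))  # ~10.0
```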
  • This component emits inaudible tones at specific frequencies for the purpose of deep tissue imaging.
  • the transducer is triggered as the device detects that the user is speaking. In this case, the user may be speaking normally or subvocalizing to the device, which causes multiple muscles in the sides of the neck to contract.
  • the on-board accelerometer can be used to indicate that there is movement, especially when the mouth is open, and the user is engaged in self-talk.
  • Although the sticker is a small, slim device that rests on the outside of the neck, it still has an embedded processor, memory, and a wireless chipset.
  • the proposed design has pull-tab functionality, with a medical-grade adhesive material used to affix it to the neck.
  • the user can apply a tiny amount of an ultrasound gel directly between the skin and the piezoelectric sensing array. Any excess ultrasound gel will escape through an inset escape channel, which ensures that there are no air pockets between the piezoelectric array and the surface of the skin.
  • the applications of the wearable device 12 and the larynx member 14 are numerous. By way of example and not by way of limitation, laryngeal and mental illness, hybrid gestures, instant purchases, telephony, and casual navigation will be presented with a few other examples.
  • laryngeal and mental illnesses include disorders of the larynx, such as irritable larynx disorder, and psychiatric conditions like anxiety, post-traumatic stress disorder, and schizophrenia.
  • The larynx device can help users recognize when they have lost focus or have begun unintentional self-talk that might be making their condition worse. If the person has an irritable larynx, or physical damage to the surrounding tissue, a doctor may have instructed them to avoid speaking in order to let the affected area heal.
  • users may be out of sync with reality, subvocalizing about their worries unintentionally.
  • the device can help users train themselves to focus on their surroundings.
  • Since the wearable device 12 doesn't have a screen, it draws on its ability to determine the cardinal directions of nearby devices. Although these directions do not necessarily relate to true cardinal directions like North, South, East, and West, the device understands the bearings of nearby external devices in relation to itself. For example, the user might decide that they want to share a piece of content, and instead of choosing a destination device on a menu screen, users would perform a swipe gesture in the direction of the destination device. The user might also point the device itself in the direction of the destination device, and perform a gesture that would take some action, such as a file copy or initiating the pairing process.
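  • Resolving a swipe against device bearings amounts to picking the neighbor with the smallest angular gap. The 20-degree tolerance and data shapes below are assumptions for illustration.

```python
def angular_gap(a_deg, b_deg):
    """Smallest absolute difference between two bearings, in degrees."""
    return abs((a_deg - b_deg + 180.0) % 360.0 - 180.0)

def pick_target(swipe_bearing_deg, neighbors, tolerance_deg=20.0):
    """neighbors: {"device name": bearing relative to the wearable}."""
    name = min(neighbors,
               key=lambda n: angular_gap(swipe_bearing_deg, neighbors[n]))
    if angular_gap(swipe_bearing_deg, neighbors[name]) <= tolerance_deg:
        return name
    return None

print(pick_target(92.0, {"Laptop": 95.0, "TV": 180.0}))  # Laptop
```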
  • users might decide to perform a gesture at a nearby wireless access point for the purpose of key-pairing with that access point.
  • This process might involve a protocol similar to WPS (Wi-Fi Protected Setup) for backward compatibility, or another wireless protocol.
  • users might share individual wireless keys by performing gestures at one another, which is analogous to simply writing a WiFi key down on a piece of paper and handing it to the other person.
  • users can query the prices of items at retail outlets or of commercial goods, or perform silent transactions.
  • One potential usage of the system is to enable instant purchasing in stores. As the shopper looks through items on the store shelves, they may consider buying an item by silently vocalizing a phrase like, "I want to buy these [item name]." The system will detect that pattern of text and select the named item in front of them. In order to complete the purchase, the user would perform a brief gesture, such as holding the wearable device for a few seconds, which begins a cancelation timer. If the user should later decide that he did not actually intend to buy the item, the user can say an abort phrase such as, "I didn't want to buy that.", which will revert the item to its unpurchased state. Other similar use cases might involve using the wearable device to order food from your favorite restaurants, scheduling pickup or delivery. EUNA would be there to assist the user with purchasing and pricing and can help confirm the order. It can also help users perform financial transactions between one another.
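  • The purchase flow can be modeled as a small state machine around the cancelation timer. The 30-second window and names below are assumptions; the text specifies only the gesture, the timer, and the abort phrase.

```python
import time

CANCEL_WINDOW_S = 30.0   # assumed length of the cancelation window

class PendingPurchase:
    def __init__(self, item):
        self.item = item
        self.started = time.monotonic()
        self.aborted = False

    def abort(self):
        """Triggered by the abort phrase, e.g. "I didn't want to buy that."
        Only effective while the cancelation window is still open."""
        if time.monotonic() - self.started < CANCEL_WINDOW_S:
            self.aborted = True
        return self.aborted

    def committed(self):
        """The purchase finalizes once the window lapses un-aborted."""
        return (not self.aborted and
                time.monotonic() - self.started >= CANCEL_WINDOW_S)
```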
  • EUNA is the AI assistant inside the wearable device.
  • EUNA also has a real-time automatic feature and, upon request, the ability to offer advice pertaining to the user's calendar, location, and nearby devices.
  • EUNA would be aware of turn signal indicators as well as objects and their colors, mobile devices, IoT devices, vehicles, and other wearable device users' clothing in proximity to the user, and other pertinent data for a more human-to-human casual navigation experience, which can be loaded from an external data set. Users can opt in to share information that improves the system. For example, the user might share their shirt color.
  • EUNA is a sentient AI that becomes the user's personalized virtual assistant.
  • the AI travels with the user, which means that it has situational awareness and understands more than just the question at hand. It understands its surroundings, including environmental features, man-made structures, buildings, stores, commercial environments, retail environments, recreational facilities, restaurants, offices, vehicles, and the colors of objects. Another example would be helping guide someone through a crowd of people, by referencing the nearby objects and outfits in order to guide the user to the intended person.
  • the owners of a transportation network decide to install named wireless devices that can help users navigate through a sea of devices, as an electron would flow across a metal in a sea of electrons.
  • users can silently query the wearable device for information from a search engine, storing data in a cryptographically secure fashion that is tied to a unique device identifier.
  • the dynamic real-time location of the device that made the query is stored in a distributed data center, allowing the device to simply query the information from the distributed data center, instead of repeatedly running the same searches over and over.
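  • A minimal sketch of that shared query cache: results are keyed by a hash of the query, stored alongside the originating device's identifier, and reused instead of re-running identical searches. The storage layer itself is assumed.

```python
import hashlib

def lookup_or_search(query, device_id, cache, search_fn):
    """Return a cached search result, querying the engine only once."""
    key = hashlib.sha256(query.encode()).hexdigest()
    if key not in cache:
        # store the result tied to the querying device's identifier
        cache[key] = {"result": search_fn(query), "origin": device_id}
    return cache[key]["result"]
```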
  • EUNA can also be configured to share pertinent data between devices in close proximity to one another when the wearable devices come within range.
  • EUNA can actively inspire individuals to talk to one another when both parties have opted in, have chosen to share their profile information, and are looking to meet people in the area.
  • the goal is to create a system which aids users in interacting with the world around them, recognizing danger, and remaining connected in an interconnected world.
  • This technology can help users automatically find their friends and peers. For example, there might be a hospital nurse who needs a doctor for a patient, but the doctor isn't in the radiology department where the user expected. It can also detect nearby obstacles with the radar chipsets, which can alert users that they are about to make a mistake. Users who wish to call for help can place emergency calls, but there should also be an audible / non-audible feedback mechanism on-board to let the user know that help is on the way.
  • the system is also useful for cryptographically secure authentication between IoT devices and can be used as an authentication badge with two-factor authentication (2FA) support built into the device.
  • the owner of the wearable device might draw their unlock code on the touch surface, or look at a door and subvocalize an opening phrase like, "let me in", or a locking phrase like, "lock the door".
  • the phrases can be configured, but there should be sane defaults so that there are common opening and closing phrases.
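  • A minimal sketch of such a phrase table with sane defaults, where user-configured phrases override the built-ins. Only the two quoted phrases come from the text; everything else is illustrative.

```python
DEFAULT_PHRASES = {
    "let me in": "unlock",      # common opening phrase
    "lock the door": "lock",    # common locking phrase
}

def phrase_action(subvocalized, user_phrases=None):
    """Look up a recognized phrase, preferring user overrides."""
    table = {**DEFAULT_PHRASES, **(user_phrases or {})}
    return table.get(subvocalized.strip().lower())

print(phrase_action("Let me in"))  # unlock
```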
  • Figure 5 conceptually illustrates the software architecture of an environmental control application 150 of some embodiments that may utilize the wearable device 12 and the larynx member 14.
  • the environmental control application 150 is a stand-alone application or is integrated into another application, while in other embodiments the application might be implemented within an operating system 190.
  • the environmental control application 150 is provided as part of a server-based solution or a cloud-based solution.
  • the application is provided via a thin client. That is, the application runs on a server while a user interacts with the application via a separate machine remote from the server.
  • the application is provided via a thick client. That is, the application is distributed from the server to the client machine and runs on the client machine.
  • the application is partially run on each of the wearable device 12 and the larynx member 14.
  • the environmental control application 150 includes a user interface (UI) interaction and generation module 152, user interface tools 154, authentication modules 156, wireless device-to-device awareness modules 158, contextual gestures modules 160, vocal modules 162, subvocal modules 164, interactive navigation modules 166, mind/body modules 168, retail modules 170, and telephony/video calls modules 172.
  • In some embodiments, the storages 180, 182, and 184 are all stored in one physical storage.
  • In other embodiments, the storages 180, 182, and 184 are in separate physical storages, or one of the storages is in one physical storage while another is in a different physical storage.
  • the UI interaction and generation module 152 generates a user interface that allows the end user to utilize the wearable device 12 and the larynx member 14. During the use, various modules may be called to execute the functions described herein.
  • Figure 5 also includes an operating system 190 that includes input device drivers 192 and output device drivers 194. In some embodiments, as illustrated, the input device drivers 192 and the output device drivers 194 are part of the operating system 190 even when the environmental control application 150 is an application separate from the operating system 190.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A wearable device is disclosed. In one embodiment, the wearable device includes a hardware layer, a touch member having a capacitive touch surface that receives contact data, and a radio layer including a plurality of antennas that receive radio data. The wearable device processes the radio data and the contact data to increase internet-of-things awareness and/or to execute a gesture command originating from the user. The wearable device also processes laryngeal data to execute a vocalization command originating from the user.
PCT/US2019/028818 2018-04-23 2019-04-23 Wearable device WO2019209894A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/897,893 US20200341543A1 (en) 2018-04-23 2020-06-10 Wearable device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862661573P 2018-04-23 2018-04-23
US62/661,573 2018-04-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/897,893 Continuation-In-Part US20200341543A1 (en) 2018-04-23 2020-06-10 Wearable device

Publications (1)

Publication Number Publication Date
WO2019209894A1 true WO2019209894A1 (fr) 2019-10-31

Family

ID=68294248

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/028818 WO2019209894A1 (fr) 2019-04-23 Wearable device

Country Status (1)

Country Link
WO (1) WO2019209894A1 (fr)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170076272A1 (en) * 2002-10-01 2017-03-16 Andrew H. B. Zhou Systems and methods for mobile application, wearable application, transactional messaging, calling, digital multimedia capture and payment transactions
US20060267773A1 (en) * 2005-05-24 2006-11-30 V.H. Blackinton & Co., Inc. Badge verification device
US20130185077A1 (en) * 2012-01-12 2013-07-18 Inha-Industry Partnership Institute Device for supplementing voice and method for controlling the same
US20150029661A1 (en) * 2014-10-15 2015-01-29 AzTrong Inc. Wearable portable electronic device with heat conducting path
WO2017099828A1 (fr) * 2015-12-07 2017-06-15 Intel IP Corporation Dispositifs et procédés d'amélioration de la mobilité et choix d'un chemin de dispositif vestimentaire
US20170192743A1 (en) * 2016-01-06 2017-07-06 Samsung Electronics Co., Ltd. Ear wearable type wireless device and system supporting the same
WO2017140812A1 (fr) * 2016-02-18 2017-08-24 Koninklijke Philips N.V. Dispositif, système et procédé de détection et de surveillance de dysphagie chez un sujet
WO2018053493A1 (fr) * 2016-09-19 2018-03-22 Wisconsin Alumni Research Foundation Système et procédé de surveillance d'un flux d'air dans une trachée avec des ultrasons

Similar Documents

Publication Publication Date Title
CN105976813B (zh) Speech recognition system and speech recognition method thereof
US10389873B2 (en) Electronic device for outputting message and method for controlling the same
US10755695B2 (en) Methods in electronic devices with voice-synthesis and acoustic watermark capabilities
KR102585228B1 (ko) Speech recognition system and method
KR102558437B1 (ko) Method for processing question-and-answer and electronic device supporting the same
KR102498451B1 (ko) Electronic device and method for providing information in the electronic device
KR102498364B1 (ko) Electronic device and method for providing information in the electronic device
US10217349B2 (en) Electronic device and method for controlling the electronic device
KR102561572B1 (ko) Method for utilizing sensors and electronic device implementing the same
KR102246742B1 (ko) Electronic device and method for identifying at least one pairing target in the electronic device
Bai et al. Acoustic-based sensing and applications: A survey
EP3396666A1 (fr) Electronic device for providing a speech recognition service and method therefor
US20170278480A1 (en) Intelligent electronic device and method of operating the same
US11360791B2 (en) Electronic device and screen control method for processing user input by using same
KR20180016866A (ko) Watch-type terminal
KR20160142128A (ko) Watch-type terminal and control method thereof
KR20170052976A (ko) Electronic device performing a motion and control method thereof
US11755111B2 (en) Spatially aware computing hub and environment
US10496225B2 (en) Electronic device and operating method thereof
KR20160036921A (ko) Mobile terminal and control method thereof
KR20150130854A (ko) Audio signal recognition method and electronic device providing the same
US20230185364A1 (en) Spatially Aware Computing Hub and Environment
CN110113659A (zh) Method, apparatus, electronic device, and medium for generating video
Wang et al. Sensing beyond itself: Multi-functional use of ubiquitous signals towards wearable applications
WO2016206646A1 (fr) Method and system for pushing a machine device to generate an action

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19793421

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19793421

Country of ref document: EP

Kind code of ref document: A1