CN117707333A - Wearable electronic device for collaborative use - Google Patents

Wearable electronic device for collaborative use

Info

Publication number
CN117707333A
Authority
CN
China
Prior art keywords
head-mountable device
view
headset
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311192108.0A
Other languages
Chinese (zh)
Inventor
P. X. Wang
J. C. Franklin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 18/232,296 (published as US 2024/0094804 A1)
Application filed by Apple Inc
Publication of CN117707333A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to wearable electronic devices for collaborative use. The system of the present disclosure may provide head-mountable devices with different input and output capabilities. When operating in a shared environment, such differences may result in the head-mountable devices providing slightly different experiences for their respective users. However, the output provided by one head-mountable device may be indicated on another head-mountable device so that the users are aware of the nature of each other's experience. Where different head-mountable devices provide different sensing capabilities, the sensors of one head-mountable device may facilitate detections on behalf of another head-mountable device to provide more accurate and detailed outputs, such as object recognition, avatar generation, hand and body tracking, and the like.

Description

Wearable electronic device for collaborative use
Cross Reference to Related Applications
The present application claims the benefit of U.S. provisional application No. 63/407,122, entitled "WEARABLE ELECTRONIC DEVICES FOR COOPERATIVE USE," filed September 15, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present description relates generally to head-mountable devices and, more particularly, to collaborative use of head-mountable devices having different features.
Background
A user may wear a head-mountable device to display visual information within the user's field of view. The head-mountable device may be used as a Virtual Reality (VR) system, an Augmented Reality (AR) system, and/or a Mixed Reality (MR) system. The user may observe outputs provided by the head-mountable device, such as visual information provided on a display. The display may optionally allow a user to view the environment external to the head-mountable device. Other outputs provided by the head-mountable device may include speaker output and/or haptic feedback. The user may further interact with the head-mountable device by providing input for processing by one or more components of the head-mountable device. For example, a user may provide tactile input, voice commands, and other inputs while the device is mounted to the user's head.
Drawings
Some features of the subject technology are set forth in the following claims. However, for purposes of explanation, several embodiments of the subject technology are set forth in the following figures.
Fig. 1 illustrates a top view of a first headset device according to some embodiments of the present disclosure.
Fig. 2 illustrates a top view of a second headset device according to some embodiments of the present disclosure.
Fig. 3 illustrates a top view of a head-mountable device on a user according to some embodiments of the present disclosure.
Fig. 4 illustrates a second headset displaying an exemplary graphical user interface, according to some embodiments of the present disclosure.
Fig. 5 illustrates a first headset displaying an exemplary graphical user interface with a second headset-based indicator, according to some embodiments of the present disclosure.
Fig. 6 illustrates a first headset displaying an exemplary graphical user interface having a view based on the second headset of fig. 4, in accordance with some embodiments of the present disclosure.
Fig. 7 illustrates a flowchart of a process having operations performed by a second headset device, according to some embodiments of the present disclosure.
Fig. 8 illustrates a flowchart of a process having operations performed by a first headset device, according to some embodiments of the present disclosure.
Fig. 9 illustrates a front view of a first head-mountable device on a first user according to some embodiments of the present disclosure.
Fig. 10 illustrates a second headset displaying an exemplary graphical user interface including an avatar of a first user, according to some embodiments of the present disclosure.
Fig. 11 illustrates a flowchart of a process having operations performed by a first headset device, according to some embodiments of the present disclosure.
Fig. 12 illustrates a flowchart of a process having operations performed by a second headset device, according to some embodiments of the present disclosure.
Fig. 13 illustrates a front view of a second head-mountable device on a second user and a first head-mountable device according to some embodiments of the present disclosure.
Fig. 14 illustrates a first headset displaying an exemplary graphical user interface including an avatar of a second user, according to some embodiments of the present disclosure.
Fig. 15 illustrates a flowchart of a process having operations performed by a first headset device, according to some embodiments of the present disclosure.
Fig. 16 illustrates a side view of a head-mountable device on a user according to some embodiments of the present disclosure.
Fig. 17 illustrates a flowchart of a process having operations performed by a second headset device, according to some embodiments of the present disclosure.
Fig. 18 illustrates a flowchart of a process having operations performed by a first headset device, according to some embodiments of the present disclosure.
Fig. 19 illustrates a block diagram of a head-mountable device according to some embodiments of the present disclosure.
Detailed Description
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The accompanying drawings are incorporated in and constitute a part of this specification. The specific embodiments include specific details for the purpose of providing a thorough understanding of the subject technology. However, it will be clear and apparent to one skilled in the art that the subject technology is not limited to the specific details shown herein and may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
A wearable device, such as a wearable display, headset, goggles, smart glasses, head-up display, etc., may perform a range of functions determined by the components (e.g., sensors, circuitry, and other hardware) included in the device as manufactured. However, space, cost, and other considerations may limit the ability to provide every component that could deliver a desired functionality. For example, different users may wear and operate different head-mountable devices that provide different components and functions. Nevertheless, users of different types of devices may jointly participate in shared, cooperative, and/or collaborative activities.
In view of the variety of components and functions desired on different head-mountable devices, it would be beneficial to provide functionality that helps users understand each other's experience. This may allow users to have a more similar experience when operating in a shared environment.
It would also be beneficial to allow multiple head-mountable devices to interoperate, leveraging their combined sensory input and computing capabilities, as well as those of other external devices, to improve sensory perception, mapping capability, accuracy, and/or processing workload. For example, sharing sensory input among multiple head-mountable devices may supplement and enhance individual units by interpreting and reconstructing objects, surfaces, and/or the external environment with perceptible data from multiple angles and locations, which also reduces occlusion and inaccuracy. As more detailed information is available at a given moment, the speed and accuracy of object recognition, hand and body tracking, surface mapping, and/or digital reconstruction may be improved. By way of further example, such collaboration may provide more efficient and more effective mapping of spaces, surfaces, objects, gestures, and users.
The system of the present disclosure may provide head-mountable devices with different input and output capabilities. When operating in a shared environment, such differences may result in the head-mountable devices providing slightly different experiences for their respective users. However, the output provided by one head-mountable device may be indicated on another head-mountable device so that the users are aware of the nature of each other's experience. Where different head-mountable devices provide different sensing capabilities, the sensors of one head-mountable device may facilitate detections on behalf of another head-mountable device to provide more accurate and detailed outputs, such as object recognition, avatar generation, hand and body tracking, and the like.
These and other embodiments are discussed below with reference to fig. 1-19. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting.
According to some embodiments, for example as shown in fig. 1, the first head-mountable device 100 comprises a frame 110. The frame 110 may be worn on the head of the user. The frame 110 may be positioned in front of the user's eyes to provide information within the field of view of the user. The frame 110 may provide a nose pad and/or other portion to rest on the nose, forehead, cheeks, and/or other facial features of the user.
The frame 110 may provide structure about its peripheral region to support any internal components of the frame 110 in their assembled position. For example, the frame 110 may encapsulate and support various internal components (including, for example, integrated circuit chips, processors, memory devices, and other circuitry) to provide computing and functional operations for the first headset 100, as discussed further herein. Although several components are shown within the frame 110, it should be understood that some or all of these components may be located anywhere within or on the first head-mountable device 100. For example, one or more of these components may be positioned within the head adapter 120 and/or the frame 110 of the first headset 100.
The frame 110 may optionally be supported on the head of a user with a head adapter 120. As depicted in fig. 1, the head adapter 120 may optionally wrap around or extend along opposite sides of the user's head. It should be appreciated that other configurations may be applied to secure the first headset 100 to the head of a user. For example, one or more straps, bands, belts, covers, caps, or other components may be used in addition to or instead of the illustrated components of the first head-mountable device 100.
The frame 110 may include and/or support one or more cameras 130. The camera 130 may be positioned on or near the outside 112 of the frame 110 to capture images of views external to the first headset 100. As used herein, the outside of a portion of a head-mountable device is the side facing away from the user and/or toward the external environment. The captured image may be available for display to a user or stored for any other purpose.
The first headset 100 may include one or more external sensors 132 for tracking characteristics of or in the external environment. For example, the first head-mountable device 100 may include an image sensor, a depth sensor, a thermal (e.g., infrared) sensor, and so forth. By way of further example, the depth sensor may be configured to measure a distance (e.g., range) to the object via stereo triangulation, structured light, time of flight, interferometry, and the like. Additionally or alternatively, the external sensor 132 may include or operate in conjunction with the camera 130 to capture and/or process images based on one or more of hue space, brightness, color space, luminosity, and the like.
The first headset 100 may include one or more internal sensors 170 for tracking characteristics of a user wearing the first headset 100. For example, the internal sensor 170 may be a user sensor to perform facial feature detection, facial motion detection, facial recognition, eye tracking, user emotion detection, voice detection, and the like. By way of further example, the internal sensor may be a biosensor for tracking biometric characteristics, such as health and activity metrics.
The first headset device 100 may include a display 140 that provides visual output for viewing by a user wearing the first headset device 100. One or more displays 140 may be positioned on or near the inner side 114 of the frame 110. As used herein, the inside of a portion of a head-mountable device is the side facing the user and/or facing away from the external environment.
According to some embodiments, another head-mountable device with different components, features and/or functions may be provided to the user by another user, for example as shown in fig. 2. In some embodiments, the second headset device 200 includes a frame 210. The frame 210 may be worn on the head of the user. The frame 210 may be positioned in front of the user's eyes to provide information within the field of view of the user. The frame 210 may provide a nose pad and/or other portion to rest on the nose, forehead, cheeks, and/or other facial features of the user.
The frame 210 may provide structure about its peripheral region to support any internal components of the frame 210 in their assembled position. For example, the frame 210 may encapsulate and support various internal components (including, for example, integrated circuit chips, processors, memory devices, and other circuitry) to provide computing and functional operations for the second headset device 200, as discussed further herein. Although several components are shown within the frame 210, it should be understood that some or all of these components may be located anywhere within or on the second headset 200. For example, one or more of these components may be positioned within the head adapter 220 and/or the frame 210 of the second headset 200.
The frame 210 may optionally be supported on the head of a user with a head adapter 220. As depicted in fig. 2, the head adapter 220 optionally includes headphones for wrapping around or otherwise engaging or resting on the user's ear. It should be appreciated that other configurations may be applied to secure the second head-mountable device 200 to the head of the user. For example, one or more straps, bands, belts, covers, caps, or other components may be used in addition to or instead of the illustrated components of the second headset 200.
The frame 210 may include and/or support one or more cameras 230. The camera 230 may be positioned on or near the outside 212 of the frame 210 to capture images of views external to the second headset 200. The captured image may be available for display to a user or stored for any other purpose.
The second headset device 200 may include a display 240 that provides visual output for viewing by a user wearing the second headset device 200. One or more displays 240 may be positioned on or near the inner side 214 of the frame 210.
Referring now to both fig. 1 and 2, first and second headset devices 100 and 200 may provide different features, components, and/or functions. For example, one of the first and second headset devices 100, 200 may provide components that the other of the first and second headset devices 100, 200 does not provide. By way of further example, while the first headset 100 may provide one or more external sensors 132, the second headset 200 may omit such external sensors. By way of further example, while the first headset 100 may provide one or more internal sensors 170, the second headset 200 may omit an internal sensor. Thus, one of the first and second headset devices 100, 200 may provide greater sensing capability than the other.
In some implementations, components common to two head-mountable devices may differ in one or more features, capabilities, and/or characteristics. For example, the camera 130 of the first headset 100 may have a greater resolution, field of view, image quality, and/or low light performance than the camera 230 of the second headset 200.
By way of further example, the display 140 of the first headset 100 may have a greater resolution, field of view, and/or image quality than the display 240 of the second headset 200. In some embodiments, the displays 140 and 240 may be of different types, for example one being an opaque display and the other a transparent or translucent display.
For example, the display 140 of the first head-mountable device 100 may be an opaque display, and the camera 130 may capture an image or video of the physical environment as a representation of the physical environment. The first head-mountable device 100 combines the image or video with virtual objects and presents the composition on the opaque display 140. A person using the system indirectly views the physical environment via the image or video of the physical environment and perceives the virtual objects (where applicable) superimposed over the physical environment. As used herein, video of a physical environment displayed on an opaque display is referred to as "pass-through video," meaning that the system captures images of the physical environment using one or more image sensors and may use those images when rendering an Augmented Reality (AR) environment on the opaque display. An Augmented Reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment or a representation thereof.
In some implementations, the second head-mountable device 200 may have a transparent or translucent display 240 instead of an opaque display (e.g., display 140). The transparent or translucent display 240 may have a medium through which light representing an image is directed to a person's eyes. The display 240 may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light sources, or any combination of these technologies. The medium may be an optical waveguide, a holographic medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to selectively become opaque. Projection-based systems may employ retinal projection techniques that project a graphical image onto a person's retina. Projection systems may also be configured to project virtual objects into the physical environment, for example as a hologram or onto a physical surface. For example, the second headset 200 presenting an Augmented Reality (AR) environment may have a transparent or translucent display 240 through which a person may directly view the physical environment. The head-mountable device 200 may be configured to present virtual objects on the transparent or translucent display 240 such that a person perceives the virtual objects superimposed over the physical environment.
Additionally or alternatively, other types of head-mountable devices may be used with the first headset device 100 and/or the second headset device 200, or as one of the first headset device 100 and/or the second headset device 200. Such electronic systems enable a person to sense and/or interact with various computer-generated reality environments. Examples include head-mounted systems, projection-based systems, head-up displays (HUDs), vehicle windshields integrated with display capabilities, windows integrated with display capabilities, displays formed as lenses designed for placement on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head-mounted system may have an integrated opaque display and one or more speakers. Alternatively, a head-mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head-mounted system may incorporate one or more imaging sensors for capturing images or video of the physical environment, and/or one or more microphones for capturing audio of the physical environment.
The physical environment refers to the physical world that people (such as users of head-mountable devices) can sense and/or interact with without necessarily requiring the assistance of electronic devices (such as head-mountable devices). A computer-generated reality environment refers to a partially or fully simulated environment that people sense and/or interact with via electronic devices (such as head-mountable devices). Computer-generated reality may include, for example, mixed reality and virtual reality. Mixed reality may include, for example, augmented reality and augmented virtuality. Electronic devices that enable a person to sense and/or interact with various computer-generated reality environments may include, for example, head-mountable devices, projection-based devices, head-up displays (HUDs), vehicle windshields with integrated display capabilities, windows with integrated display capabilities, displays formed as lenses (e.g., similar to contact lenses) designed to be placed on a person's eyes, headphones/earphones, speaker arrays, input devices (e.g., wearable or handheld controllers with or without haptic feedback), tablet computers, smartphones, and desktop/laptop computers. The head-mountable device may have an integrated opaque display, have a transparent or translucent display, or be configured to accept an external opaque display from another device, such as a smartphone.
Referring now to fig. 3, a plurality of users may each wear a corresponding head-mountable device having a field of view. As shown in fig. 3, a first user 10 may wear a first headset 100 and a second user 20 may wear a second headset 200. It should be appreciated that the system may include any number of users and corresponding head-mountable devices. The head-mountable devices may be provided with a communication link between any pair of head-mountable devices and/or other devices (e.g., external devices) for sharing data.
The first headset 100 may have a first field of view 190 (e.g., from camera 130) and the second headset 200 may have a second field of view 290 (e.g., from camera 230). The fields of view may at least partially overlap such that the object (e.g., virtual object 90 and/or physical object 92) is located within the field of view of more than one of the head-mountable devices. It should be appreciated that the virtual object (e.g., virtual object 90) need not be captured by a camera, but may be within an output field of view (e.g., from display 140 and/or 240) based on images captured by the corresponding camera. The first and second head-mountable devices 100, 200 may each be arranged to capture objects from different perspectives such that different portions, surfaces, sides, and/or features of the virtual object 90 and/or the physical object 92 may be observed and/or displayed by different head-mountable devices.
Referring now to fig. 4-6, the head-mountable devices can provide corresponding outputs reflecting their respective perspectives, and optionally provide information about the experiences provided by other head-mountable devices. Such information may be provided within a graphical user interface. However, not all depicted graphical elements may be used in all implementations with respect to the graphical user interfaces described herein, and one or more implementations may include additional or different graphical elements than those shown in the figures. Variations in the arrangement and type of these graphical elements may be made without departing from the spirit or scope of the claims set forth herein. Additional components, different components, or fewer components may be provided.
As shown in fig. 4, the head-mountable device 200 may operate its display 240 to provide a graphical user interface 242. As described herein, the display 240 may be a translucent or transparent display that allows light from the external environment to pass through to the user. Thus, the physical object 92 may be visible through the display 240, whether inside or outside the graphical user interface 242. The graphical user interface 242 may further provide a view of the virtual object 90. In some embodiments, the virtual object 90 may be visible only through the graphical user interface 242, presented as if it were located in the external environment, and not through portions of the display 240 outside the graphical user interface 242. Where the graphical user interface 242 occupies less space (e.g., has a smaller field of view) than the display 240, the user's perception of the virtual object 90 may be limited. As further shown in fig. 4, the view of the virtual object 90 and/or the physical object 92 may be based on the perspective of the head-mountable device 200. Accordingly, the display 240 and/or the graphical user interface 242 may provide views of particular sides (e.g., side 90b) and/or portions of the virtual object 90 and/or the physical object 92.
As shown in fig. 5, the head-mountable device 100 may operate its display 140 to provide a graphical user interface 142. As described herein, the display 140 may be an opaque display that generates images based on views and/or other information captured by a camera, such that items appear as if located within the external environment. Thus, both the physical object 92 and the virtual object 90 may be visible within the display 140 as part of the graphical user interface 142. The graphical user interface 142 may occupy a substantial portion (e.g., up to all) of the display 140. Thus, it may provide a field of view that is wider than the field of view of the graphical user interface 242. As further shown in fig. 5, the view of the virtual object 90 and/or the physical object 92 may be based on the perspective of the head-mountable device 100. Thus, the display 140 and/or the graphical user interface 142 may provide views of particular sides and/or portions of the virtual object 90 and/or the physical object 92.
In some implementations, as shown in fig. 5, the graphical user interface 142 may also include one or more indicators to help the user identify the perspective experienced by another user wearing the second headset 200. Such indicators 144 may be displayed concurrently with the view of the virtual object 90 and/or the physical object 92. For example, the indicators 144 may show which sides and/or portions of the virtual object 90 and/or the physical object 92 (e.g., side 90b) are being observed by the user wearing the second headset 200. For example, an indicator 144 may be disposed at an outer surface of the virtual object 90 and/or the physical object 92. The indicators 144 may include highlighting, lighting, shading, reflections, contours, borders, text, icons, symbols, emphasis, duplication, halos, and/or animations disposed near the sides (e.g., side 90b) and/or portions of the virtual object 90 and/or the physical object 92 that are within the field of view provided by the second headset 200. Other sides (e.g., side 90a) and/or portions of the virtual object 90 and/or the physical object 92 may omit such indicators 144. By viewing the indicators 144 within the graphical user interface 142, a user wearing the head-mountable device 100 may identify any difference between the user's own perspective and another user's perspective.
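By way of further illustration only, the following sketch shows one way a head-mountable device could decide where to place such indicators 144, given another device's pose and field of view. The Swift types and names (e.g., RemotePerspective, sideObservedByRemote) are hypothetical and are not defined by the disclosure; the geometry is a simplified assumption (a single viewing cone and per-side normals).

```swift
import Foundation

// Minimal 3-D vector helper so the sketch stays self-contained.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
    var length: Double { dot(self).squareRoot() }
    var normalized: Vec3 { Vec3(x: x / length, y: y / length, z: z / length) }
}

// Hypothetical description of the second head-mountable device's viewing
// perspective, expressed in a shared world coordinate space.
struct RemotePerspective {
    var position: Vec3          // position of the remote device
    var forward: Vec3           // unit vector along its viewing direction
    var halfFieldOfView: Double // half of its field of view, in radians
}

// One side of a virtual or physical object, approximated by a center
// point and an outward-facing normal.
struct ObjectSide {
    var center: Vec3
    var normal: Vec3
}

// Returns true when the side is both inside the remote device's field of
// view and oriented toward it, so the local graphical user interface can
// place an indicator (highlight, outline, halo, etc.) on that side.
func sideObservedByRemote(_ side: ObjectSide, from remote: RemotePerspective) -> Bool {
    let toSide = side.center - remote.position
    guard toSide.length > 0 else { return false }
    let direction = toSide.normalized

    // Within the remote device's viewing cone?
    let cosAngle = min(max(direction.dot(remote.forward.normalized), -1), 1)
    guard acos(cosAngle) <= remote.halfFieldOfView else { return false }

    // Facing the remote device rather than facing away from it?
    return side.normal.dot(Vec3(x: -direction.x, y: -direction.y, z: -direction.z)) > 0
}
```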
In some implementations, as shown in fig. 6, the graphical user interface 142 may also include one or more window views to help the user identify a perspective experienced by another user wearing the second headset. Such window views 146 may be displayed concurrently with views of virtual object 90 and/or physical object 92. For example, window view 146 may show a copy of the user interface of the second headset (see graphical user interface 242 of fig. 4) and/or another output of its display. Thus, a user wearing the headset 100 may directly observe the output provided by the second headset to other users by observing the graphical user interface 142.
Fig. 7 shows a flow chart for operating a head-mountable device. For purposes of explanation, the process 700 is described herein primarily with reference to the head-mountable device 200 of fig. 2-4. However, process 700 is not limited to the head-mountable device 200 of fig. 2-4, and one or more blocks (or operations) of process 700 may be performed by a different head-mountable device and/or one or more other devices. For further explanation purposes, the blocks of process 700 are described herein as occurring sequentially or linearly. However, multiple blocks of process 700 may occur in parallel. Furthermore, the blocks of process 700 need not be performed in the order shown, and/or one or more blocks of process 700 need not be performed and/or may be replaced by other operations.
In operation 702, a (e.g., second) head-mountable device can capture second view data corresponding to a viewing perspective of the second head-mountable device. For example, the second view data may include information related to one or more images captured by a camera of the second headset. In some embodiments, the second view data may be received from another device that may be used to determine a position and/or orientation of the second headset within the space. Thus, the second view data may include information regarding a position and/or orientation of the second headset relative to the physical object and/or virtual object to be presented. The second view data may also include information related to one or more physical objects observed by the second headset.
In operation 704, the second headset may provide an output on its display. For example, the display may output a view of one or more virtual objects and/or physical objects as presented on the display and/or within a graphical user interface, such as shown in fig. 4. The output provided on the display may be based at least in part on the second view data captured by the second head-mountable device; the second view data may thereby determine which sides and/or portions of the virtual objects and/or physical objects are observable in the output provided by the display.
In operation 706, the second view data may be transmitted to the first headset. In this regard, the second view data may include the data used by the second headset to provide the output on its display in operation 704. Additionally or alternatively, the second view data may include information, images, and/or other data generated based on the original second view data. For example, the transmitted second view data may include a direct feed of the output provided on the display.
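By way of illustration only, the following Swift sketch summarizes operations 702-706 as a simple data flow. The ViewData fields and the HeadMountableLink protocol are assumptions introduced for the example; they do not correspond to any format or interface defined by the disclosure.

```swift
import Foundation

// Hypothetical payload for the "second view data" of process 700. The
// fields are assumptions about what a pose-plus-observations record
// might carry; they are not a format defined by the disclosure.
struct ViewData: Codable {
    var devicePosition: [Double]      // x, y, z in the shared coordinate space
    var deviceOrientation: [Double]   // e.g., a quaternion
    var observedObjectIDs: [String]   // physical/virtual objects currently in view
    var displayFeed: Data?            // optional direct feed of the rendered output
}

// Abstract communication link between head-mountable devices.
protocol HeadMountableLink {
    func send(_ data: Data) throws
}

// Operations 702-706: capture the second view data, drive the local
// display, then share the same data with the first head-mountable device.
func runProcess700(capture: () -> ViewData,
                   render: (ViewData) -> Void,
                   link: HeadMountableLink) throws {
    let secondViewData = capture()                        // operation 702
    render(secondViewData)                                // operation 704
    let payload = try JSONEncoder().encode(secondViewData)
    try link.send(payload)                                // operation 706
}
```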
Fig. 8 shows a flow chart for operating a head-mountable device. For purposes of explanation, the process 800 is described herein primarily with reference to the head-mountable device 100 of fig. 1, 3, and 5-6. However, process 800 is not limited to head-mountable device 100 of fig. 1, 3, and 5-6, and one or more blocks (or operations) of process 800 may be performed by different head-mountable devices and/or one or more other devices. For further explanation purposes, the blocks of process 800 are described herein as occurring sequentially or linearly. However, multiple blocks of process 800 may occur in parallel. Furthermore, the blocks of process 800 need not be performed in the order shown, and/or one or more blocks of process 800 need not be performed and/or may be replaced by other operations.
In operation 802, another (e.g., a first) headset may capture first view data corresponding to a viewing perspective of the first headset. For example, the first view data may include information related to one or more images captured by a camera of the first headset. In some embodiments, the first view data may be received from another device that may be used to determine a position and/or orientation of the first head-mountable device within the space. Thus, the first view data may include information regarding a position and/or orientation of the first headset relative to the physical object and/or virtual object to be presented. The first view data may also include information related to one or more physical objects observed by the first headset.
In operation 804, second view data may be received from a second headset. The second view data may be used, for example, by the first headset along with the first view data to determine a position and/or orientation of the second headset relative to the first headset and/or the virtual object or physical object. The second view data may be further used to determine information about a perspective of the second headset. For example, the perspective of the second headset may be determined to further determine the sides and/or portions of the physical and/or virtual objects observed and/or output by the second headset to a user wearing the second headset.
In operation 806, the first headset may provide an output on its display. For example, the display may output a view of one or more virtual objects and/or physical objects as presented on the display and/or within a graphical user interface, such as shown in fig. 5 or 6. The output provided on the display may be based at least in part on the first view data captured by the first head-mountable device. The output may also include information related to the second headset, such as the indicators and/or window views described herein. Such additional information may be determined based on the second view data, such that the first headset determines which sides and/or portions of virtual objects and/or physical objects may be observed and/or output by the second headset.
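Continuing the illustration, a corresponding sketch of operations 802-806 on the first head-mountable device might look as follows, reusing the hypothetical ViewData type from the previous example; the closures stand in for the device's actual camera and display paths.

```swift
import Foundation

// Sketch of operations 802-806 on the first head-mountable device.
func runProcess800(captureLocal: () -> ViewData,
                   receiveRemote: () throws -> Data,
                   renderScene: (ViewData) -> Void,
                   renderRemoteIndicators: (ViewData) -> Void) throws {
    let firstViewData = captureLocal()                                    // operation 802
    let secondViewData = try JSONDecoder().decode(ViewData.self,
                                                  from: receiveRemote())  // operation 804

    // Operation 806: output the first device's own perspective, then add
    // indicators and/or a window view derived from the second device's
    // perspective (figs. 5 and 6).
    renderScene(firstViewData)
    renderRemoteIndicators(secondViewData)
}
```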
Referring now to fig. 9, a sensor of a head-mountable device may be used to detect facial features of a person wearing the head-mountable device. Such detection may be used to determine how an avatar representing a person should be generated for output to other users.
As shown in fig. 9, the head-mountable device 100 may include one or more internal sensors 170, each configured to detect characteristics of the user's face. For example, the internal sensors 170 may include one or more eye sensors to capture and/or process images (not shown in fig. 9) of the eye 18 and perform analysis based on one or more of hue space, brightness, color space, luminosity, and the like. By way of further example, the internal sensors 170 may include one or more capacitive sensors 172 configured to detect the nose 14 of the user 10. The capacitive sensor 172 may detect contact, proximity, and/or distance to the nose of the user 10. By way of further example, the internal sensors 170 may include one or more temperature sensors (e.g., infrared sensors, thermometers, thermocouples, etc.) 174 configured to detect the temperature of the user's face. By way of further example, the internal sensor 170 may include an eyebrow camera configured to detect the user's eyebrows 12 and/or process images of the eyebrows and perform analysis based on one or more of hue space, brightness, color space, luminosity, and the like. By way of further example, the internal sensors 170 may include one or more depth sensors 178 configured to detect the shape of the face (e.g., cheek 16) of the user 10. It should be appreciated that the internal sensors 170 may include sensors disposed at the exterior of the head-mountable device 100 to detect facial features of the user. These and/or other sensors may be positioned to detect features described herein with respect to a user's mouth, cheek, jaw, chin, ear, temple, forehead, etc. Such information may be used (e.g., by another head-mountable device) to generate an avatar having the detected feature. By way of further example, any number of other sensors may be provided to perform facial feature detection, facial motion detection, facial recognition, eye tracking, user emotion detection, user gestures, voice detection, and the like. The sensors may include force sensors, contact sensors, capacitance sensors, strain gauges, resistive touch sensors, piezoelectric sensors, cameras, pressure sensors, photodiodes, and/or other sensors.
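By way of illustration only, the detections described above might be grouped into a record such as the following; the Swift type and field names are hypothetical and merely mirror the sensors listed above.

```swift
import Foundation

// Hypothetical container for readings from the internal sensors 170.
struct FacialDetectionData: Codable {
    var timestamp: TimeInterval
    var eyeGazeDirection: [Double]?   // from the eye sensors
    var noseProximity: Double?        // from the capacitive sensor 172
    var faceTemperature: Double?      // from the temperature sensor 174
    var eyebrowPositions: [Double]?   // from the eyebrow camera
    var cheekDepthMap: [Double]?      // from the depth sensor 178
}
```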
Referring now to fig. 10, another head-mountable device could output an avatar based on the detected features. Fig. 10 illustrates a rear view of a second headset operable by a user, the headset providing a user interface 242, according to some embodiments of the present disclosure. The display 240 may provide a user interface 242. However, not all depicted graphical elements may be used in all implementations, and one or more implementations may include additional or different graphical elements than those shown in the figures. Variations in the arrangement and type of these graphical elements may be made without departing from the spirit or scope of the claims set forth herein. Additional components, different components, or fewer components may be provided.
The graphical user interface 242 provided by the display 240 may include an avatar 50 representing the user 10 wearing the first headset 100. It should be appreciated that the avatar 50 need not include a representation of the first head-mountable device 100 being worn by the user 10. Thus, each user may observe an avatar including facial features that would otherwise be covered by a head-mountable device, despite the head-mountable device being worn. The avatar 50 may be a virtual yet realistic representation of a person based on detections made by the head-mountable device worn by that person. Such detections may be made with respect to characteristics of the person (e.g., the user's eyebrows 12, nose 14, cheeks 16, and/or eyes 18). One or more of the features of the avatar 50 may be based on detections performed by the first headset worn by that person. Additionally or alternatively, one or more of the features of the avatar 50 may be based on selections made by the person. For example, before or while the avatar 50 is output, the person represented by the avatar 50 may select and/or modify one or more of the features. For example, a person may select a hair color that does not correspond to their actual hair color. Some features may be static, such as hair color, eye color, ear shape, and the like. One or more features may be dynamic, such as eye gaze direction, eyebrow position, mouth shape, and the like. In some embodiments, the detected information about facial features (e.g., dynamic features) may be mapped onto the static features in real time to generate and display the avatar 50. In some cases, the term "real-time" indicates that the extraction, mapping, and rendering are performed in response to each movement of the person and may be presented substantially immediately. When looking at the avatar 50, an observer may feel as if they were looking at the person.
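By way of illustration only, the real-time mapping of dynamic detections onto persistent static features might be sketched as follows, assuming the hypothetical FacialDetectionData record from the previous example; the feature names are illustrative.

```swift
import Foundation

// Static features, which the person may select, persist across frames;
// dynamic features are refreshed from each new detection.
struct StaticAvatarFeatures {
    var hairColor: String   // may be user-selected rather than detected
    var eyeColor: String
}

struct AvatarFrame {
    var statics: StaticAvatarFeatures
    var gazeDirection: [Double]
    var eyebrowRaise: Double
}

// Produce an updated avatar frame from the latest detection while
// keeping the static features unchanged.
func avatarFrame(from detection: FacialDetectionData,
                 statics: StaticAvatarFeatures) -> AvatarFrame {
    AvatarFrame(statics: statics,
                gazeDirection: detection.eyeGazeDirection ?? [0, 0, 1],
                eyebrowRaise: detection.eyebrowPositions?.first ?? 0)
}
```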
Fig. 11 shows a flowchart for operating a head-mountable device. For purposes of explanation, the process 1100 is described herein primarily with reference to the headset 100 of fig. 9. However, process 1100 is not limited to head-mountable device 100 of fig. 9, and one or more blocks (or operations) of process 1100 may be performed by a different head-mountable device and/or one or more other devices. For further explanation purposes, the blocks of process 1100 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1100 may occur in parallel. Furthermore, the blocks of process 1100 need not be performed in the order shown, and/or one or more blocks of process 1100 need not be performed and/or may be replaced by other operations.
In operation 1102, a (e.g., first) head-mountable device may detect a feature of a face of a user wearing the first head-mountable device. In some implementations, the detection performed by the first headset may be sufficient to generate an avatar corresponding to the user.
In operation 1104, detection data captured by one or more sensors of the first headset may be transmitted to another headset. The detection data may be used to generate an avatar to be output to a user wearing the second headset.
Fig. 12 shows a flowchart for operating a head-mountable device. For purposes of explanation, the process 1200 is described herein primarily with reference to the head-mountable device 200 of fig. 10. However, process 1200 is not limited to head-mountable device 200 of fig. 10, and one or more blocks (or operations) of process 1200 may be performed by a different head-mountable device and/or one or more other devices. For further explanation purposes, the blocks of process 1200 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1200 may occur in parallel. Moreover, the blocks of process 1200 need not be performed in the order shown, and/or one or more blocks of process 1200 need not be performed and/or may be replaced by other operations.
In operation 1202, another (e.g., a second) headset may receive detection data from the first headset. In some embodiments, the detection data may be raw data generated by one or more sensors of the first headset, such that the second headset must process the detection data to generate the avatar. In some embodiments, the detection data may be processed data based on the raw data generated by the one or more sensors. Such processed data may include information that is readily usable to generate the avatar. Thus, the processing may be performed by either the first or the second headset.
In operation 1204, the second head-mountable device may display the avatar within a graphical user interface on its display. The avatar may be updated based on additional detections performed by the first head-mountable device and/or additional detection data received from the first head-mountable device.
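By way of illustration only, the distinction between raw and processed detection data in operations 1202-1204 might be sketched as follows, reusing the hypothetical FacialDetectionData, AvatarFrame, StaticAvatarFeatures, and avatarFrame definitions from the earlier examples.

```swift
// Hypothetical wrapper for the two cases described in operation 1202:
// raw sensor output that the receiving device must still process, or
// data already processed into avatar-ready form.
enum DetectionPayload {
    case raw(FacialDetectionData)   // receiving device performs the processing
    case processed(AvatarFrame)     // ready for display as received
}

// Operations 1202-1204 on the second head-mountable device.
func displayAvatar(from payload: DetectionPayload,
                   statics: StaticAvatarFeatures,
                   render: (AvatarFrame) -> Void) {
    switch payload {
    case .raw(let detection):
        render(avatarFrame(from: detection, statics: statics))
    case .processed(let frame):
        render(frame)
    }
}
```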
Referring now to fig. 13, a sensor of a head-mountable device can be used to detect facial features of a person wearing a different head-mountable device. Such detection may be used to determine how an avatar representing the person should be generated for output to other users. Such collaborative detection may be useful when at least one of the head-mountable devices has lesser sensing capabilities. Thus, one head-mountable device can help generate the avatar by providing new or additional detections for use by any of the head-mountable devices.
As shown in fig. 13, the head-mountable devices may be worn and operated by different individuals who may then participate in a shared environment. Within the environment, each user may observe avatars representing the other individuals participating in the shared environment. As further shown in fig. 13, the first headset 100 may face in the direction of the second headset 200. The camera 130 and/or other external sensors 132 of the first headset 100 may capture a view of the user 20 and/or the second headset 200 and detect facial features. For example, the camera 130 and/or other external sensors 132 of the first headset 100 may operate with respect to the user 20 in the manner of the sensors described with respect to the user 10 in fig. 9. Such detections may be transmitted to the second headset 200 and/or used by the first headset 100. It should be appreciated that the transmitted detections may be any information that may be used to generate the avatar, including raw data regarding the detections and/or processed data including instructions on how to generate the avatar. A head-mountable device receiving the detections may output an avatar based on the received information. As further described herein, the output of the avatar itself may be further affected by detections received by the head-mountable device.
Referring now to fig. 14, the head-mountable device 100 can output an avatar based on the detected features. Fig. 14 illustrates a rear view of a first headset that can be operated by a user, which provides a user interface 142, according to some embodiments of the present disclosure. The display 140 may provide the user interface 142. However, not all depicted graphical elements may be used in all implementations, and one or more implementations may include additional or different graphical elements than those shown in the figures. Variations in the arrangement and type of these graphical elements may be made without departing from the spirit or scope of the claims set forth herein. Additional components, different components, or fewer components may be provided.
The graphical user interface 142 provided by the display 140 may include an avatar 60 representing the user 20 wearing the second headset 200. It should be appreciated that the avatar 60 need not include a representation of the second headset 200 being worn by the user 20. The avatar 60 may be a virtual yet realistic representation of another person based on detections of that person. Such detections may be made with respect to characteristics of the person (e.g., the person's eyebrows 22, nose 24, cheeks 26, and/or eyes 28). One or more of the features of the avatar 60 may be based on detections performed by the first headset 100 worn by another user, particularly if the sensing capabilities of the second headset 200 are deemed insufficient for avatar generation.
Fig. 15 shows a flowchart for operating the head-mountable device. For purposes of explanation, the process 1500 is described herein primarily with reference to the head-mountable device 100 of fig. 13 and 14. However, process 1500 is not limited to head-mountable device 100 of fig. 13 and 14, and one or more blocks (or operations) of process 1500 may be performed by a different head-mountable device and/or one or more other devices. For further explanation purposes, the blocks of process 1500 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1500 may occur in parallel. Moreover, the blocks of process 1500 need not be performed in the order shown, and/or one or more blocks of process 1500 need not be performed and/or may be replaced by other operations.
In operation 1502, the together operating head-mountable devices may identify themselves to each other. For example, each head-mountable device may transmit its own identity and each head-mountable device may receive the identity of another head-mountable device. The identification may include the make, model, and/or other specifications of each of the head-mountable devices. For example, the identification may indicate whether a given head-mountable device has or lacks certain components, features, and/or functionality. By way of further example, the identification may indicate the detection capabilities of a given head-mountable device.
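By way of illustration only, the identification exchanged in operation 1502 might be represented by a record such as the following; the Swift type and capability names are hypothetical.

```swift
import Foundation

// Hypothetical identification record exchanged between head-mountable
// devices; the fields mirror the make/model/capability information
// described above, and the capability names are placeholders.
struct DeviceIdentification: Codable {
    var make: String
    var model: String
    var detectionCapabilities: Set<String>   // e.g., "eyeTracking", "depthSensing"
    var hasInternalSensors: Bool
    var hasExternalSensors: Bool
}
```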
In operation 1504, the first headset may receive a request for detection. Additionally or alternatively, the first headset may determine the detection capabilities of another (e.g., the second) headset. Based on the request or the determined detection capabilities, the first headset may determine that it should perform detections to assist in avatar generation. In some implementations, the second headset may lack the sensors needed to detect facial features of the user wearing the second headset. In some embodiments, the second head-mountable device may request detections regardless of whether it has its own detection capabilities.
In operation 1506, the first headset may select detections to perform. The selection may be based on the request for detection. For example, the request for detection may indicate a region of the face to be detected, and the first headset may select detections corresponding to the request. Additionally or alternatively, the selection may be based on the determined detection capabilities of the second headset. For example, the first headset may determine that the second headset is unable to detect certain facial features (e.g., due to insufficient sensing capabilities, features outside its field of view, and/or features occluded from view). In such cases, the first headset may select detections corresponding to the undetected facial features.
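By way of illustration only, the selection of operation 1506 might be sketched as follows; the region names and function signature are hypothetical.

```swift
import Foundation

// Choose which facial regions the first device should detect, either
// from an explicit request or from the gap between the detections needed
// for the avatar and the second device's reported capabilities.
func selectDetections(requested: Set<String>?,
                      requiredForAvatar: Set<String>,
                      remoteCapabilities: Set<String>) -> Set<String> {
    if let requested = requested {
        return requested                                      // honor an explicit request
    }
    return requiredForAvatar.subtracting(remoteCapabilities)  // cover what the second device cannot detect
}

// Example: if the avatar needs eyes, eyebrows, and cheeks and the second
// device can only track eyes, the first device selects eyebrows and cheeks.
let selected = selectDetections(requested: nil,
                                requiredForAvatar: ["eyes", "eyebrows", "cheeks"],
                                remoteCapabilities: ["eyes"])
```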
In operation 1508, the first headset may detect a feature of a face of a user wearing the second headset. In some implementations, the detection performed by the first headset may be sufficient to generate an avatar corresponding to the user.
In operation 1510, the first headset may receive additional detection data from the second headset. It will be appreciated that the receipt of such additional detection data is optional, particularly in the event that the second head-mountable device has insufficient or missing detection capabilities. In some embodiments, additional detection data is received along with a request for detection, where the request for detection corresponds to facial features not represented in the additional detection data.
In operation 1512, the first head-mountable device may display the avatar within a graphical user interface on its display. The avatar may be updated based on additional detections performed by the first head-mountable device and/or detection data received from the second head-mountable device.
Thus, both the first and second head-mountable devices may provide output including an avatar of the other user. Such avatars may be generated even when one of the head-mountable devices lacks the capability to perform its own complete set of detections. Thus, the capabilities of one head-mountable device may be sufficient to provide enough data for both head-mountable devices to generate avatars.
Referring now to fig. 16, the head-mountable devices can cooperatively operate to perform a greater range of detections than is possible for only one of the head-mountable devices. In some embodiments, data related to a user and a head-mountable device may also be captured, processed, and/or generated by any one or more of the head-mountable devices. It should be appreciated that a user may be within the field of view of one of the head-mountable devices and outside the field of view of another of the head-mountable devices (including the head-mountable device worn by that user). In such cases, data related to any given user may be more effectively captured by a head-mountable device worn by a user other than the given user. For example, at least a portion of the second user 20 and/or the second headset 200 may be within the first field of view 190 of the first headset 100. Accordingly, the first headset 100 may capture, process, and/or generate data regarding the second user 20 and/or the second headset 200, and transmit such data to one or more other head-mountable devices (e.g., the second headset 200). Where such data relating to a user can be used by a head-mountable device that does not have the user within its field of view, the data may be shared with that head-mountable device.
As shown in fig. 16, the first and second head-mountable devices 100 and 200 may each be arranged to detect objects from different perspectives. In some embodiments, the object may include other portions of one of the users, such as a limb 70 (e.g., arm, hand, finger, leg, foot, toe, etc.) of the second user 20. In some embodiments, different portions, surfaces, sides, and/or features of the limb 70 of the second user 20 may be observed by the first headset 100. In some embodiments, the limb 70 of the second user 20 may be observable only by the first headset 100, such as when the limb 70 is outside the field of view 290 of the second headset 200. In some embodiments, the limb 70 of the second user 20 may be observable only by the first headset 100 even when the limb 70 is within the field of view 290 of the second headset 200, such as when the limb 70 is occluded from the view of the second headset 200. Thus, the first head-mountable device 100 may be operated to detect characteristics of the limb 70 of the second user 20.
In some implementations, the head-mountable devices may operate cooperatively to perform gesture recognition. For example, data may be captured, processed, and generated by one or more of the head-mountable devices, where the data includes a captured view of the user. Gesture recognition may involve detection of the position, orientation, and/or movement of a user (e.g., of a limb, hand, finger, etc.). Such detection may be enhanced when based on views captured from multiple perspectives. Such perspectives may include views from separate head-mountable devices, including head-mountable devices worn by users other than the user making the gesture. Data based on these views may be shared between or among the head-mountable devices and/or an external device for processing and gesture recognition. Any processed data may be shared with the head-mountable device worn by the user making the gesture, and corresponding actions may be performed.
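By way of illustration only, combining limb observations from multiple perspectives might be sketched as follows; the types, fields, and the confidence-based fusion rule are assumptions for the example (a fuller implementation might triangulate joint positions across views instead).

```swift
import Foundation

// One device's observation of a limb, tagged with a confidence value
// that may reflect occlusion and distance.
struct LimbObservation {
    var deviceID: String
    var jointPositions: [[Double]]   // e.g., fingertip and knuckle positions
    var confidence: Double           // 0...1, lower when occluded or far away
}

// A minimal fusion step: keep the most confident view of the limb for a
// given moment.
func fuseObservations(_ observations: [LimbObservation]) -> LimbObservation? {
    observations.max(by: { $0.confidence < $1.confidence })
}
```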
In some implementations, the head-mountable devices can interoperate to perform object recognition. For example, one or more of the head-mountable devices can capture, process, and/or generate data to determine characteristics of an object. The characteristics may include an identity, name, type, fiducial, color, size, shape, make, model, or other characteristic that one or more of the head-mountable devices may detect. Once determined, the characteristics may be shared, and one or more of the head-mountable devices may optionally provide a representation of the object to the corresponding user via its display. Such representations may include any information related to the characteristics, such as labels, text indications, graphical features, and/or other information. Additionally or alternatively, the representation may include a virtual object displayed on the display as an alternative to the physical object. Thus, identified objects from the physical environment may be replaced and/or enhanced with virtual objects.
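By way of illustration only, the shared object-recognition results might be represented by a record such as the following; the field names are hypothetical and merely mirror the characteristics listed above.

```swift
import Foundation

// Hypothetical record of shared object-recognition results.
struct RecognizedObjectCharacteristics: Codable {
    var identity: String
    var type: String?
    var color: String?
    var sizeMeters: [Double]?                 // e.g., width, height, depth
    var makeAndModel: String?
    var replacementVirtualObjectID: String?   // optional virtual stand-in for display
}
```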
In some embodiments, the head-mountable devices may operate cooperatively to perform environment mapping. For example, data may be captured, processed, and generated by one or more of the head-mountable devices to map the contours of the environment. Each head-mountable device may capture multiple views from different positions and orientations relative to the environment. The combined data may include more views than any one of the head-mountable devices captures alone.
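By way of illustration only, merging environment data captured by several devices might be sketched as follows; the SurfaceSample type is hypothetical, and alignment of the devices' coordinate frames is assumed to have been performed beforehand.

```swift
import Foundation

// A surface point sampled by one head-mountable device, expressed in the
// shared coordinate space.
struct SurfaceSample: Hashable {
    var x: Double
    var y: Double
    var z: Double
}

// Combine per-device samples into one environment map; the union covers
// more views than any single device captures alone.
func mergeEnvironmentMaps(_ perDeviceSamples: [[SurfaceSample]]) -> Set<SurfaceSample> {
    var combined = Set<SurfaceSample>()
    for samples in perDeviceSamples {
        combined.formUnion(samples)
    }
    return combined
}
```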
Fig. 17 shows a flowchart for operating the head-mountable device. For purposes of explanation, the process 1700 is described herein primarily with reference to the head-mountable device 200 of fig. 16. However, the process 1700 is not limited to the head-mountable device 200 of fig. 16, and one or more blocks (or operations) of the process 1700 may be performed by a different head-mountable device and/or one or more other devices. For further explanation purposes, the blocks of process 1700 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1700 may occur in parallel. Moreover, the blocks of process 1700 need not be performed in the order shown, and/or one or more blocks of process 1700 need not be performed and/or may be replaced by other operations.
In operation 1702, the head-mountable devices operating together may identify themselves to each other. For example, each head-mountable device may transmit its own identification and each head-mountable device may receive the identification of another head-mountable device. The identification may include the make, model, and/or other specifications of each of the head-mountable devices. For example, the identification may indicate whether a given head-mountable device has or lacks certain components, features, and/or functionality. By way of further example, the identification may indicate the detection capabilities of a given head-mountable device.
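Operation 1702 can be pictured as a simple capability-exchange message. The sketch below is an assumption about what such an identification payload might contain (device identifier, model, and a list of detection capabilities); the disclosure does not specify a message format.

```python
import json

def build_identification(device_id: str, model: str,
                         detection_capabilities: list) -> bytes:
    """Build the identification message a device transmits to its peer."""
    return json.dumps({
        "device_id": device_id,
        "model": model,
        "detection_capabilities": detection_capabilities,  # e.g., sensors it has
    }).encode("utf-8")

def parse_identification(message: bytes) -> dict:
    """Parse the identification message received from the other device."""
    return json.loads(message.decode("utf-8"))

# Device 100 advertises richer sensing than device 200.
sent = build_identification("HMD-100", "model-A", ["camera", "depth", "hand_tracking"])
peer = parse_identification(sent)
print("hand_tracking" in peer["detection_capabilities"])  # -> True
```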
In operation 1704, the second head-mountable device may request detection data from another (e.g., the first) head-mountable device. Such a request may be determined based on known detection capabilities of the second head-mountable device and/or known detection capabilities of the first head-mountable device. Such a request may be made, for example, in the event that the limb to be detected is outside the field of view of the second head-mountable device and/or the second head-mountable device lacks a sensor for detecting the limb. In some embodiments, the second head-mountable device determines whether the first head-mountable device has the detection capabilities and/or a position and orientation suitable to detect the limb, and makes the request accordingly.
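One way to picture operation 1704 is a small decision helper on the second device that requests remote detection only when it cannot perform the detection itself and the peer can. The capability names and criteria below are illustrative assumptions.

```python
def should_request_detection(own_capabilities: set,
                             peer_capabilities: set,
                             limb_in_own_view: bool,
                             required: str = "hand_tracking") -> bool:
    """Request detection data from the peer when this device cannot detect the limb
    (missing sensor or limb out of view) and the peer is able to."""
    cannot_detect_locally = (required not in own_capabilities) or (not limb_in_own_view)
    peer_can_detect = required in peer_capabilities
    return cannot_detect_locally and peer_can_detect

# The second device lacks a hand-tracking sensor, so it asks the first device.
print(should_request_detection(own_capabilities={"camera"},
                               peer_capabilities={"camera", "depth", "hand_tracking"},
                               limb_in_own_view=False))  # -> True
```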
In operation 1706, the second head-mountable device may receive detection data from the first head-mountable device. In some embodiments, the detection data may be raw data generated by one or more sensors of the first head-mountable device, such that the second head-mountable device must process the detection data to determine the action to perform. In some embodiments, the detection data may be processed data based on the raw data generated by the one or more sensors. Such processed data may include information that is readily usable for determining the action to be performed. Thus, the processing may be performed by either the first or the second head-mountable device.
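Operation 1706 distinguishes raw sensor output from already-processed results. The sketch below, with assumed field names and a deliberately trivial stand-in for recognition, shows how a receiving device might branch on the two cases.

```python
import json

def interpret_detection(payload: bytes) -> str:
    """Return a recognized gesture from either processed or raw detection data."""
    data = json.loads(payload.decode("utf-8"))
    if data["kind"] == "processed":
        # The sending device already ran recognition; use its result directly.
        return data["gesture"]
    # Raw case: the receiving device must run its own (here, trivial) recognition
    # over the transmitted keypoints before it can act on them.
    keypoints = data["keypoints"]
    return "pinch" if len(keypoints) >= 2 and keypoints[0] == keypoints[1] else "unknown"

processed = json.dumps({"kind": "processed", "gesture": "swipe"}).encode()
raw = json.dumps({"kind": "raw", "keypoints": [[0.1, 0.2], [0.1, 0.2]]}).encode()
print(interpret_detection(processed), interpret_detection(raw))  # -> swipe pinch
```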
In operation 1708, the second head-mountable device may determine an action to perform and/or perform the action. The determination and/or the action itself may be based on the detection data received from the first head-mountable device. For example, where the first head-mountable device detects a gesture from the limb that corresponds to a user input (e.g., a user instruction or user command), the second head-mountable device may perform an action that corresponds to the user input.
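Operation 1708 can be summarized as mapping a recognized user input to an action. The table of gestures and actions below is invented purely for illustration; none of these bindings come from the disclosure.

```python
# Hypothetical mapping from gestures detected by the first device to actions
# the second device performs.
GESTURE_ACTIONS = {
    "pinch": "select_object",
    "swipe": "dismiss_window",
    "thumbs_up": "confirm",
}

def perform_action(detected_gesture: str) -> str:
    """Resolve the gesture reported in the detection data to an action and run it."""
    action = GESTURE_ACTIONS.get(detected_gesture, "no_op")
    print(f"performing action: {action}")
    return action

perform_action("pinch")  # the gesture came from the other device's detection data
```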
Fig. 18 shows a flowchart for operating the head-mountable device. For purposes of explanation, the process 1800 is described herein primarily with reference to the head-mountable device 100 of fig. 16. However, process 1800 is not limited to head-mountable device 100 of fig. 16, and one or more blocks (or operations) of process 1800 may be performed by a different head-mountable device and/or one or more other devices. For further explanation purposes, the blocks of process 1800 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1800 may occur in parallel. Moreover, the blocks of process 1800 need not be performed in the order shown, and/or one or more blocks of process 1800 need not be performed and/or may be replaced by other operations.
In operation 1802, the head-mountable devices operating together may identify themselves to each other. For example, each head-mountable device may transmit its own identification and each head-mountable device may receive the identification of another head-mountable device. The identification may include the make, model, and/or other specifications of each of the head-mountable devices. For example, the identification may indicate whether a given head-mountable device has or lacks certain components, features, and/or functionality. By way of further example, the identification may indicate the detection capabilities of a given head-mountable device.
In operation 1804, the first head-mountable device may receive a request for detection. Additionally or alternatively, the first head-mountable device may determine the detection capabilities of another (e.g., the second) head-mountable device. Based on the request or the determined detection capabilities, the first head-mountable device may determine that it can perform a detection to assist in the action determination. In some implementations, the second head-mountable device may lack the sensors needed to detect gestures of the user (e.g., of a limb) wearing the second head-mountable device. In some embodiments, the second head-mountable device may request the detection whether or not it includes its own detection capabilities.
In operation 1806, the first head-mountable device may select a detection to be performed. The selection may be based on a request for detection. For example, the request for detection may indicate a limb to detect, and the first head-mountable device may select a detection corresponding to the request. Additionally or alternatively, the selection may be based on the determined detection capabilities of the second head-mountable device. For example, the first head-mountable device may determine that the second head-mountable device is unable to detect the limb (e.g., based on insufficient sensing capabilities, the limb being outside of the field of view, and/or the limb being occluded from view). In such cases, the first head-mountable device may select a detection corresponding to the undetected limb.
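Operation 1806, as described, combines an explicit request with three reasons the second device may be unable to detect the limb. A hedged sketch of that selection logic follows; the parameter names and return values are assumptions made for illustration.

```python
from typing import Optional

def select_detection(request: Optional[dict],
                     peer_has_hand_tracking: bool,
                     limb_in_peer_view: bool,
                     limb_occluded_in_peer_view: bool) -> Optional[str]:
    """Decide which detection the first device should run on the peer's behalf."""
    if request is not None:
        return request.get("target")   # honor an explicit request, e.g. "left_hand"
    if (not peer_has_hand_tracking) or (not limb_in_peer_view) or limb_occluded_in_peer_view:
        return "limb"                  # detect what the peer cannot
    return None                        # the peer can handle the detection itself

print(select_detection(request=None,
                       peer_has_hand_tracking=True,
                       limb_in_peer_view=True,
                       limb_occluded_in_peer_view=True))  # -> "limb"
```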
In operation 1808, the first head-mountable device may detect a feature of the limb of the user wearing the second head-mountable device. In some implementations, the detection performed by the first head-mountable device may be sufficient to determine an action to be performed by the second head-mountable device.
In operation 1810, the first head-mountable device may transmit the detection data to the second head-mountable device (i.e., to be received in operation 1706 of process 1700).
Referring now to fig. 19, components of the head-mountable devices can be operably connected to provide the capabilities described herein. Fig. 19 shows a simplified block diagram of exemplary head-mountable devices 100 and 200, according to one embodiment of the present disclosure. It should be understood that additional components, different components, or fewer components than those shown may be utilized within the scope of the subject disclosure.
As shown in fig. 19, the first head-mountable device 100 may include a processor 150 (e.g., control circuitry) within or coupled to the frame 110, the processor having one or more processing units that include or are configured to access a memory 152 having instructions stored thereon. The instructions or computer program may be configured to perform one or more of the operations or functions described with respect to the first head-mountable device 100. The processor 150 may be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions. For example, the processor 150 may include one or more of the following: a microprocessor, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), or a combination of such devices. As described herein, the term "processor" is intended to encompass a single processor or processing unit, multiple processors, multiple processing units, or one or more other suitably configured computing elements.
The memory 152 may store electronic data usable by the first head-mountable device 100. For example, the memory 152 may store electronic data or content such as audio and video files, documents and applications, device settings and user preferences, timing and control signals or data for various modules, data structures or databases, and the like. The memory 152 may be configured as any type of memory. By way of example only, the memory 152 may be implemented as random access memory, read only memory, flash memory, removable memory, or other types of storage elements, or combinations of such devices.
The first head-mountable device 100 may also include a display 140 for displaying visual information to the user. The display 140 may provide visual (e.g., image or video) output, as further described herein. The first head-mountable device 100 may also include a camera 130 for capturing a view of the external environment, as described herein. The view captured by the camera 130 may be presented on the display 140 or otherwise analyzed to provide a basis for output on the display 140.
The first head-mountable device 100 may include an input component 186 and/or an output component 184, which may include any suitable components for receiving user input, providing output to a user, and/or connecting the head-mountable device 100 to other devices. The input component 186 may include buttons, keys, or another feature that may act as a keyboard for user operation. Other suitable components may include, for example, audio/video jacks, data connectors, or any additional or alternative input/output components.
The first head-mountable device 100 may include a microphone 188. The microphone 188 may be operatively connected to the processor 150 for detection of sound levels and communication of the detection for further processing.
The first head-mountable device 100 may include a speaker 194. The speaker 194 is operatively connected to the processor 150 to control speaker output, including sound levels.
The first head-mountable device 100 may include a communication interface 192 for communicating with one or more servers or other devices using any suitable communication protocol. For example, the communication interface 192 may support Wi-Fi (e.g., 802.11 protocols), Ethernet, Bluetooth, high frequency systems (e.g., 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, TCP/IP (e.g., any of the protocols used in each of the TCP/IP layers), HTTP, BitTorrent, FTP, RTP, RTSP, SSH, any other communication protocol, or any combination thereof. The communication interface 192 may also include an antenna for transmitting and receiving electromagnetic signals.
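As a protocol-agnostic illustration of two devices exchanging detection data over any of the transports listed above, the following Python sketch uses a local socket pair from the standard library in place of a real wireless link, and a JSON payload whose contents are hypothetical.

```python
import json
import socket

# A socket pair stands in for whatever transport (Wi-Fi, Bluetooth, etc.) links the devices.
first_device, second_device = socket.socketpair()

detection = {"source": "HMD-100", "gesture": "pinch", "confidence": 0.91}
first_device.sendall(json.dumps(detection).encode("utf-8"))

received = json.loads(second_device.recv(4096).decode("utf-8"))
print(received["gesture"])  # -> "pinch"

first_device.close()
second_device.close()
```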
The first head-mountable device 100 may include one or more other sensors, such as an internal sensor 170 and/or an external sensor 132. Such sensors may be configured to sense substantially any type of characteristic, such as, but not limited to, image, pressure, light, touch, force, temperature, position, motion, and the like. For example, the sensor may be a photodetector, a temperature sensor, a light or optical sensor, an atmospheric pressure sensor, a humidity sensor, a magnet, a gyroscope, an accelerometer, a chemical sensor, an ozone sensor, a particle count sensor, or the like. By way of further example, the sensor may be a biosensor for tracking biometric characteristics such as health and activity metrics. Other user sensors may perform facial feature detection, facial motion detection, facial recognition, eye tracking, user emotion detection, voice detection, and the like. The sensors may include the camera 130, which may capture image-based content of the outside world.
The first head-mountable device 100 may include a battery 160 that may charge and/or power the components of the first head-mountable device 100. The battery 160 may also charge and/or power components connected to the first head-mountable device 100.
As further shown in fig. 19, the second head-mountable device 200 may include a processor 250 (e.g., control circuitry) within or coupled to the frame 210, the processor having one or more processing units that include or are configured to access a memory 252 having instructions stored thereon. The instructions or computer program may be configured to perform one or more of the operations or functions described with respect to the second head-mountable device 200. The processor 250 may be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions. For example, the processor 250 may include one or more of the following: a microprocessor, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), or a combination of such devices. As described herein, the term "processor" is intended to encompass a single processor or processing unit, multiple processors, multiple processing units, or one or more other suitably configured computing elements.
The memory 252 may store electronic data usable by the second head-mountable device 200. For example, the memory 252 may store electronic data or content such as audio and video files, documents and applications, device settings and user preferences, timing and control signals or data for various modules, data structures or databases, and the like. The memory 252 may be configured as any type of memory. By way of example only, the memory 252 may be implemented as random access memory, read only memory, flash memory, removable memory, or other types of storage elements, or combinations of such devices.
The second head-mountable device 200 may also include a display 240 for displaying visual information to the user. The display 240 may provide visual (e.g., image or video) output, as further described herein. The second head-mountable device 200 may also include a camera 230 for capturing a view of the external environment, as described herein. The view captured by the camera 230 may be presented on the display 240 or otherwise analyzed to provide a basis for output on the display 240.
The second head-mountable device 200 may include an input component 286 and/or an output component 284, which may include any suitable components for receiving user input, providing output to a user, and/or connecting the head-mountable device 200 to other devices. The input component 286 may include buttons, keys, or another feature that may act as a keyboard for user operation. Other suitable components may include, for example, audio/video jacks, data connectors, or any additional or alternative input/output components.
The second head-mountable device 200 may include a microphone 288. The microphone 288 may be operatively connected to the processor 250 for detection of sound levels and communication of the detection for further processing.
The second head-mountable device 200 may include a speaker 294. The speaker 294 is operatively connected to the processor 250 to control speaker output, including sound levels.
The second head-mountable device 200 may include a communication interface 292 for communicating with the first head-mountable device 100 (e.g., via the communication interface 192) and/or one or more servers or other devices using any suitable communication protocol. For example, the communication interface 292 may support Wi-Fi (e.g., 802.11 protocols), Ethernet, Bluetooth, high frequency systems (e.g., 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, TCP/IP (e.g., any of the protocols used in each of the TCP/IP layers), HTTP, BitTorrent, FTP, RTP, RTSP, SSH, any other communication protocol, or any combination thereof. The communication interface 292 may also include an antenna for transmitting and receiving electromagnetic signals.
The second head-mountable device 200 may include one or more other sensors, such as an internal sensor 270 and/or an external sensor 232. Such sensors may be configured to sense substantially any type of characteristic, such as, but not limited to, image, pressure, light, touch, force, temperature, position, motion, and the like. For example, the sensor may be a photodetector, a temperature sensor, a light or optical sensor, an atmospheric pressure sensor, a humidity sensor, a magnet, a gyroscope, an accelerometer, a chemical sensor, an ozone sensor, a particle count sensor, or the like. By way of further example, the sensor may be a biosensor for tracking biometric characteristics such as health and activity metrics. Other user sensors may perform facial feature detection, facial motion detection, facial recognition, eye tracking, user emotion detection, voice detection, and the like. The sensors may include the camera 230, which may capture image-based content of the outside world.
The second head-mountable device 200 may include a battery 260 that may charge and/or power the components of the second head-mountable device 200. The battery 260 may also charge and/or power components connected to the second head-mountable device 200.
Thus, embodiments of the present disclosure include head-mountable devices with different input and output capabilities. Although operating in a shared environment, such differences may result in the head-mountable devices providing somewhat different experiences for their corresponding users. However, the output provided by one head-mountable device may be indicated on another head-mountable device so that the users are aware of the nature of each other's experience. Where different head-mountable devices provide different sensing capabilities, the sensors of one head-mountable device may facilitate detections on behalf of another head-mountable device to provide more accurate and detailed outputs, such as object recognition, avatar generation, hand and body tracking, and the like.
For convenience, various examples of aspects of the disclosure are described below as clauses. These are provided by way of example and do not limit the subject technology.
Clause a: a head-mountable device, comprising: a first camera configured to capture first view data; a first display for providing a first graphical user interface comprising a first view of an object, the first view being based on the first view data; and a communication interface configured to receive second view data from an additional head-mountable device, the additional head-mountable device comprising a second display for providing a second graphical user interface showing a second view of the object, the second view data indicating features of the second view of the object, wherein the first graphical user interface further comprises an indicator located at the object and based on the second view data.
Clause B: a head-mountable device, comprising: a communication interface configured to receive an identification of an additional headable device from the additional headable device; a processor configured to: determining a detection capability of the additional head-mountable device; and selecting a detection to be performed based on the detection capability; an external sensor configured to perform the selected detection with respect to a portion of the face; and a display configured to output an avatar based on the detection of the face.
Clause C: a head-mountable device, comprising: a first camera configured to capture a first view; a communication interface configured to receive second view data from an additional head-mountable device indicating a second view captured by a second camera of the additional head-mountable device; and a processor configured to: determining when a limb is inside the first view and outside the second view; and operating the first camera to detect a feature of the limb when the limb is inside the first view and outside the second view, wherein the communication interface is further configured to transmit detection data based on the detected feature of the limb to the additional head-mountable device.
Clause D: a head-mountable device, comprising: a communication interface configured to receive an identification of an additional headable device from the additional headable device; and a processor configured to: determining a detection capability of the additional head-mountable device based on an identification of the additional head-mountable device; and selecting a detection to request based on the detection capability, wherein the communication interface is further configured to: transmitting a request for detection data to the additional head-mountable device; and receiving the detection data from the additional head-mountable device.
Clause E: a head-mountable device, comprising: a first camera configured to capture a first view; a processor configured to: determining when a limb is not within the first view based on the first view; and determining when an additional head-mountable device comprising a second camera is arranged to capture a second view of the limb; and a communication interface configured to: transmitting a request to the additional head-mountable device for detection data based on the second view of the limb; and receiving the detection data from the additional head-mountable device.
One or more of the above clauses may include one or more of the following features. It should be noted that any of the following clauses may be combined with each other in any combination and placed into the corresponding independent clauses, e.g., clauses A, B, C, D or E.
Clause 1: the first display is an opaque display; and the second display is a translucent display providing a view of the physical environment.
Clause 2: the additional head-mountable device further includes a second camera, wherein the first camera has a resolution that is greater than a resolution of the second camera.
Clause 3: the additional head-mountable device further includes a second camera, wherein the first camera has a field of view that is larger than a field of view of the second camera.
Clause 4: the first display has a first size; and the second display has a second size smaller than the first size.
Clause 5: the first graphical user interface has a first size; and the second graphical user interface has a second size that is smaller than the first size.
Clause 6: the second view shows a second side of the object; and the first view shows at least a portion of a first side of the object and a second side of the object, wherein the indicator is applied to a portion of the second side of the object in the first view.
Clause 7: the indicator includes at least one of highlighting, lighting, shading, back-lighting, outline, border, text, icon, symbol, emphasis, duplication, halo, or animation.
Clause 8: the object is a virtual object.
Clause 9: the object is a physical object in a physical environment.
Clause 10: the external sensor is a camera.
Clause 11: the external sensor is a depth sensor, wherein the additional head-mountable device does not comprise a depth sensor.
Clause 12: the communication interface is further configured to receive detection data from the additional head-mountable device, the detection data based on additional detection of the face performed by the additional head-mountable device, wherein the avatar is further based on the detection data.
Clause 13: the detection capability includes an indication of whether a portion of the face is within a field of view of a sensor of the additional head-mountable device.
Clause 14: determining when the limb is inside the first view and outside the second view is based on the detected position and orientation of the additional head-mountable device within the first view and the detected position of the limb within the first view.
Clause 15: Determining when the limb is inside the first view and outside the second view is based on view data received from the additional head-mountable device.
Clause 16: the communication interface is further configured to: transmitting an identification of the head-mountable device to the additional head-mountable device; and receiving a request for detection data from the additional head-mountable device.
Clause 17: the detection data includes instructions for the additional head-mountable device to perform an action in response to a gesture made by the limb and detected by the first camera.
As described herein, aspects of the present technology may include collecting and using certain data. In some cases, the collected data may include personal information or other data that may uniquely identify, or be used to locate or contact, a particular person. It is contemplated that the entities responsible for collecting, storing, analyzing, disclosing, transmitting, or otherwise using such personal information or other data will comply with established privacy practices and/or privacy policies. The present disclosure also contemplates embodiments in which a user may selectively block the use of, or access to, personal information or other data, which may be managed so as to minimize the risk of inadvertent or unauthorized access or use.
Elements referred to in the singular are not intended to mean "one and only one" unless specifically stated, but rather "one or more." The terms "a," "an," and "the" do not exclude the presence of additional identical elements, absent further limitation.
Headings and subheadings, if any, are for convenience only and do not limit the invention. The term "exemplary" is used to mean serving as an example or illustration. To the extent that the terms "includes," "having," and the like are used, such terms are intended to be inclusive in a manner similar to the term "comprising" as the term "comprising" is interpreted when employed as a transitional word in a claim. Relational terms such as "first" and "second", and the like may be used to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Phrases such as an aspect, this aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, subject technology, disclosure, the present disclosure, other variations, and the like are all for convenience and do not imply that disclosure involving such one or more phrases is essential to the subject technology, or that such disclosure applies to all configurations of the subject technology. The disclosure relating to such one or more phrases may apply to all configurations or one or more configurations. The disclosure relating to such one or more phrases may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other previously described phrases.
The phrase "at least one" preceding a series of items, with the term "and" or "or" separating any of the items, modifies the list as a whole rather than each member of the list. The phrase "at least one" does not require the selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, each of the phrases "at least one of A, B, and C" or "at least one of A, B, or C" refers to A alone, B alone, or C alone; any combination of A, B, and C; and/or at least one of each of A, B, and C.
It is to be understood that the specific order or hierarchy of steps, operations or processes disclosed is an illustration of exemplary approaches. Unless explicitly stated otherwise, it is understood that the particular order or hierarchy of steps, operations or processes may be performed in a different order. Some of the steps, operations, or processes may be performed simultaneously. The accompanying method claims, if any, present elements of the various steps, operations, or processes in a sample order, and are not meant to be limited to the specific order or hierarchy presented. These may be performed in serial, linear, parallel, or a different order. It should be understood that the described instructions, operations, and systems may be generally integrated together in a single software/hardware product or packaged into multiple software/hardware products.
In one aspect, the term "coupled" or the like may refer to a direct coupling. On the other hand, the term "coupled" or the like may refer to indirect coupling.
Terms such as top, bottom, front, rear, side, horizontal, vertical, etc. refer to any frame of reference and not to the usual gravitational frame of reference. Thus, such terms may extend upwardly, downwardly, diagonally or horizontally in a gravitational frame of reference.
The present disclosure is provided to enable one of ordinary skill in the art to practice the various aspects described herein. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology. The present disclosure provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the principles described herein may be applied to other aspects.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Furthermore, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. §112 unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for".
The headings, background, brief description of the drawings, abstract, and drawings are hereby incorporated into this disclosure and are provided as illustrative examples of the disclosure, not as limiting descriptions. They are not to be taken as limiting the scope or meaning of the claims. Furthermore, it can be seen from the detailed description that the description provides illustrative examples, and that various features are grouped together in various implementations for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The claims are hereby incorporated into the detailed description, with each claim standing on its own as separately claimed subject matter.
The claims are not intended to be limited to the aspects described herein but are to be accorded the full scope consistent with the language of the claims and to encompass all legal equivalents. Notwithstanding, none of the claims is intended to embrace subject matter that fails to satisfy the requirements of the applicable patent law, nor should any claim be interpreted in such a manner.

Claims (20)

1. A head-mountable device, comprising:
a first camera configured to capture first view data;
a first display configured to provide a first graphical user interface showing a first view of an object, the first view being based on the first view data; and
a communication interface configured to receive second view data from an additional head-mountable device, the additional head-mountable device comprising a second display configured to provide a second graphical user interface showing a second view of the object, the second view data indicating characteristics of the second view of the object,
wherein the first display is further configured to provide the first graphical user interface showing an indicator located at the object and based on the second view data.
2. The head-mountable device of claim 1, wherein:
the first display is an opaque display; and
the second display is a translucent display providing a view of the physical environment.
3. The head-mountable device of claim 1, wherein the additional head-mountable device further comprises a second camera, wherein the first camera has a resolution that is greater than a resolution of the second camera.
4. The head-mountable device of claim 1, wherein the additional head-mountable device further comprises a second camera, wherein the first camera has a field of view that is larger than a field of view of the second camera.
5. The head-mountable device of claim 1, wherein:
the first display has a first size; and
the second display has a second size that is smaller than the first size.
6. The head-mountable device of claim 1, wherein:
the first graphical user interface has a first size; and is also provided with
The second graphical user interface has a second size that is smaller than the first size.
7. The head-mountable device of claim 1, wherein:
the second view shows a second side of the object; and
the first view shows at least a portion of a first side of the object and the second side of the object, wherein the indicator is applied to the portion of the second side of the object in the first view.
8. The head-mountable device of claim 1, wherein the indicator comprises at least one of highlighting, lighting, shading, back-lighting, outline, border, text, icon, symbol, emphasis, duplication, halo, or animation.
9. The head-mountable device of claim 1, wherein the object is a virtual object.
10. The head-mountable device of claim 1, wherein the object is a physical object in a physical environment.
11. A head-mountable device, comprising:
a communication interface configured to receive an identification of an additional headable device from the additional headable device;
a processor configured to:
determining a detection capability of the additional head-mountable device; and
selecting a detection to be performed based on the detection capability;
an external sensor configured to perform the selected detection with respect to a portion of the face; and
a display configured to output an avatar based on the detection of the face.
12. The head-mountable device of claim 11, wherein the external sensor is a camera.
13. The head-mountable device of claim 11, wherein the external sensor is a depth sensor, wherein the additional head-mountable device does not include a depth sensor.
14. The head-mountable device of claim 11, wherein the communication interface is further configured to receive detection data from the additional head-mountable device, the detection data based on additional detection of the face performed by the additional head-mountable device, wherein the avatar is further based on the detection data.
15. The head-mountable device of claim 11, wherein the detection capability comprises an indication of whether the portion of the face is within a field of view of a sensor of the additional head-mountable device.
16. A head-mountable device, comprising:
a first camera configured to capture a first view;
a communication interface configured to receive second view data from an additional head-mountable device indicating a second view captured by a second camera of the additional head-mountable device; and
a processor configured to:
determining when a limb is inside the first view and outside the second view; and
operating the first camera to detect a feature of the limb when the limb is inside the first view and outside the second view,
wherein the communication interface is further configured to transmit detection data based on the detected feature of the limb to the additional head-mountable device.
17. The head-mountable device of claim 16, wherein determining when the limb is inside the first view and outside the second view is based on a detected position and orientation of the additional head-mountable device within the first view and a detected position of the limb within the first view.
18. The head-mountable device of claim 16, wherein determining when the limb is inside the first view and outside the second view is based on view data received from the additional head-mountable device.
19. The head-mountable device of claim 16, wherein the communication interface is further configured to:
transmitting an identification of the head-mountable device to the additional head-mountable device; and
a request for the detection data is received from the additional head-mountable device.
20. The head-mountable device of claim 16, wherein the detection data includes instructions for the additional head-mountable device to perform an action in response to a gesture made by the limb and detected by the first camera.