CN116848507A - Application program screen projection - Google Patents

Application casting

Info

Publication number
CN116848507A
Authority
CN
China
Prior art keywords
user interface
application
version
anchor
electronic device
Prior art date
Legal status
Pending
Application number
CN202180092980.0A
Other languages
Chinese (zh)
Inventor
J·J·泰勒
P·P·陈
M·E·布尔利
N·K·维穆里
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Priority claimed from US 17/541,207 (US20220244903A1)
Application filed by Apple Inc
Priority claimed from PCT/US2021/062450 (WO2022169506A1)
Publication of CN116848507A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides application casting, in which an application running on an electronic device is cast onto another electronic device that does not have access to the application. The application is cast by providing sufficient information for rendering a user interface of the application at the device that does not have access to the application, along with modifications of the user interface for the device that does not have access to the application, such as modifications based on user preferences at that device.

Description

Application casting
Cross Reference to Related Applications
The present application claims the benefit of priority from U.S. provisional patent application No. 63/145,952, entitled "Application Casting," filed on February 4, 2021, the disclosure of which is hereby incorporated herein by reference in its entirety.
Technical Field
The present description relates generally to multi-user environments in computing platforms.
Background
Users of electronic devices typically use applications running on the electronic device to view and/or interact with data. Typically, users exchange data, such as by sending files via email, and each user can then view and manipulate the data locally using an application running on their own electronic device. Each user may then re-share the updated data with the other users, if desired. To increase the efficiency of sharing data, some applications allow collaborative viewing and/or manipulation of data by multiple users of a common application installed and running locally on multiple devices. However, in scenarios where one of the users does not have the application installed or running on their device, sharing of application data is often limited or unavailable.
Drawings
Certain features of the subject technology are set forth in the appended claims. However, for purposes of illustration, several implementations of the subject technology are set forth in the following drawings.
FIG. 1 illustrates an exemplary system architecture including various electronic devices that can implement the subject system in accordance with one or more implementations.
FIG. 2 illustrates an exemplary computing device in which aspects of the subject technology may be implemented.
FIG. 3 illustrates another example of a computing device in which aspects of the subject technology may be implemented.
FIG. 4 illustrates an example of an environment of a first electronic device in accordance with aspects of the subject technology.
FIG. 5 illustrates an example of an environment of a second electronic device in accordance with aspects of the subject technology.
FIG. 6 illustrates an example of a physical anchor object for a user interface of a local application in a physical environment of a first electronic device in accordance with aspects of the subject technology.
FIG. 7 illustrates an example of a version of a user interface received from a remote device and anchored to a virtual anchor point in an environment of a second electronic device in accordance with aspects of the subject technology.
FIG. 8 illustrates an example of a version of a user interface received from a remote device and anchored to a virtual anchor object in the physical environment of a second electronic device in accordance with aspects of the subject technology.
FIG. 9 illustrates a flow chart of an exemplary process for receiving a cast application in accordance with aspects of the subject technology.
FIG. 10 illustrates a flow chart of an exemplary process for rendering a received version of a user interface of a cast application in accordance with aspects of the subject technology.
FIG. 11 illustrates a flow chart of an exemplary process for application casting in accordance with aspects of the subject technology.
FIG. 12 illustrates a flow chart of an exemplary process for receiving a cast application for three-dimensional display with anchoring in accordance with aspects of the subject technology.
FIG. 13 illustrates a flow chart of an exemplary process for casting an application with anchoring in accordance with aspects of the subject technology.
FIG. 14 illustrates an exemplary computing device that may be used to implement aspects of the subject technology.
Detailed Description
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configuration in which the subject technology may be practiced. The accompanying drawings are incorporated in and constitute a part of this specification. The specific embodiments include specific details for the purpose of providing a thorough understanding of the subject technology. The subject technology is not limited to the specific details described herein, however, and may be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
A physical environment refers to a physical world that people can sense and/or interact with without the assistance of electronic devices. The physical environment may include physical features, such as physical surfaces or physical objects. For example, the physical environment corresponds to a physical park that includes physical trees, physical buildings, and physical people. People can directly sense and/or interact with a physical environment, such as through sight, touch, hearing, taste, and smell. In contrast, an extended reality (XR) environment refers to a fully or partially simulated environment in which people sense and/or interact via electronic devices. For example, the XR environment may include Augmented Reality (AR) content, Mixed Reality (MR) content, Virtual Reality (VR) content, and the like. With an XR system, a subset of the physical movements of a person, or a representation thereof, is tracked, and in response one or more characteristics of one or more virtual objects simulated in the XR environment are adjusted in a manner consistent with at least one physical law. As one example, the XR system may detect head movements and, in response, adjust the graphical content and sound field presented to the person in a manner similar to the manner in which such views and sounds would change in the physical environment. As another example, the XR system may detect movement of an electronic device (e.g., mobile phone, tablet, laptop, etc.) presenting the XR environment, and in response, adjust the graphical content and sound field presented to the person in a manner similar to how such views and sounds would change in the physical environment. In some cases (e.g., for accessibility reasons), the XR system may adjust characteristics of graphical content in the XR environment in response to representations of physical movements (e.g., voice commands).
There are many different types of electronic systems that enable a person to sense and/or interact with various XR environments. Examples include head-mounted systems, projection-based systems, head-up displays (HUDs), vehicle windshields integrated with display capabilities, windows integrated with display capabilities, displays formed as lenses designed for placement on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. The head-mounted system may have an integrated opaque display and one or more speakers. Alternatively, the head-mounted system may be configured to accept an external opaque display (e.g., a smart phone). The head-mounted system may incorporate one or more imaging sensors for capturing images or video of the physical environment, and/or one or more microphones for capturing audio of the physical environment. The head-mounted system may have a transparent or translucent display instead of an opaque display. The transparent or translucent display may have a medium through which light representing an image is directed to the eyes of a person. The display may utilize digital light projection, OLED, LED, uLED, liquid crystal on silicon, laser scanning light sources, or any combination of these techniques. The medium may be an optical waveguide, a holographic medium, an optical combiner, an optical reflector, or any combination thereof. In some implementations, the transparent or translucent display may be configured to selectively become opaque. Projection-based systems may employ retinal projection techniques that project a graphical image onto a person's retina. The projection system may also be configured to project the virtual object into the physical environment, for example as a hologram or on a physical surface.
Implementations of the subject technology described herein provide for sharing of applications between electronic devices, particularly when one of the electronic devices does not have access to the application (e.g., because the application or an updated version of the application is not installed and/or is not running at the device). In some scenarios, all of the application data for displaying a User Interface (UI) of an application at a device running the application may be continuously transmitted to another electronic device such that full functionality of the application is provided at the other electronic device. However, sending the entire application data in this manner may consume a significant amount of bandwidth and processing power, which may be unavailable or impractical in various scenarios.
In other scenarios, a planar image of the UI may be sent from a device running the application to a device that does not have the application, so that a user of the other device may view the UI as displayed at the device running the application (e.g., in a screen sharing mode of a video conferencing application). However, providing only a planar image to the other device may prevent the other device from displaying or modifying the UI according to a preferred format, another preference, or the location or orientation of the user of the other device. This can be particularly problematic when the UI of the application is shared for display in a three-dimensional environment, such as an XR environment.
Aspects of the subject technology facilitate collaborative use of an application by first and second users of first and second devices when the application is installed at only one of the devices. In one or more implementations, an application-agnostic framework (e.g., a system-level framework on both devices) allows a first device running an application to cast a version of the application's User Interface (UI), as displayed at the first device, to a second device on which the application is not installed.
The version of the UI may be a non-interactive version of the UI displayed at the first device, but may include sufficient state information to allow the second device to render the UI using one or more user or device preferences stored at the second device and/or using features of the environment of the second device. In other implementations, the state information may include enough information to allow interaction with the version of the UI displayed at the second device, such interaction including moving, resizing, rotating, or re-coloring the UI independently of the UI displayed at the first device. In still other implementations, the state information may include enough information to allow interactions (e.g., user inputs to the application) that are captured by the second device and sent back to the first device as input to the application running on the first device (e.g., and re-cast back to the second device).
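Purely as an illustration of the three levels of interactivity described in the preceding paragraph, the following Swift sketch models them as an enum that a receiving device could consult when handling input. The type and function names (CastInteractivity, handleDrag) are hypothetical assumptions and are not taken from this disclosure.

```swift
import Foundation

// Illustrative sketch only: the interactivity levels of a cast user interface,
// modeled as an enum the receiving device might use when deciding how to treat input.
enum CastInteractivity {
    /// The received UI is display-only; no local manipulation is possible.
    case nonInteractive
    /// The receiver may move, resize, rotate, or re-color its version of the UI
    /// without affecting the UI displayed at the sending device.
    case locallyAdjustable
    /// User input captured at the receiver is forwarded to the sending device
    /// as input to the application running there (and re-cast back).
    case forwardedToHost
}

// Example: decide how to handle a drag gesture based on the advertised level.
func handleDrag(level: CastInteractivity, forward: () -> Void, applyLocally: () -> Void) {
    switch level {
    case .nonInteractive: break                // ignore the gesture
    case .locallyAdjustable: applyLocally()    // reposition only the local version
    case .forwardedToHost: forward()           // send the input back to the host application
    }
}
```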
In one or more implementations, the first device running the application casts a version of the application's user interface, as displayed at the first device, to a second device on which the application is not installed, along with anchoring information that allows the first device and the second device to display their respective versions of the UI at coordinated locations in the respective three-dimensional (e.g., XR) environment of each device. The anchoring information may be provided to ensure that the UI is displayed by both devices at a common location relative to a shared origin in the environment of each device. This may be useful when multiple UI elements and/or other (e.g., shared) applications are displayed simultaneously, to allow content to be transferred between elements or applications by one of the devices, and/or to allow element and/or application interactions to be properly displayed at both devices. For example, this may be used to adjust the size or orientation of the version of the UI at the second device to account for the location of the user of the second device relative to the displayed version of the UI.
In one or more implementations, the version of the UI displayed by the second device can be moved to a new location in the physical environment of the second device, with or without affecting the location of the UI displayed by the first device. In one or more implementations, sufficient state information for the UI may be provided from the first device to the second device to allow partial or complete user interaction with the version of the user interface displayed by the second device to control the application running at the first device.
FIG. 1 illustrates an exemplary system architecture 100 including various electronic devices that can implement the subject system in accordance with one or more implementations. However, not all of the depicted components may be used in all implementations, and one or more implementations may include additional or different components than those shown in the figures. Variations in the arrangement and type of these components may be made without departing from the spirit or scope of the claims set forth herein. Additional components, different components, or fewer components may be provided.
The system architecture 100 includes an electronic device 105, an electronic device 110, an electronic device 115, and a server 120. For purposes of explanation, the system architecture 100 is shown in fig. 1 as including an electronic device 105, an electronic device 110, an electronic device 115, and a server 120; however, the system architecture 100 may include any number of electronic devices and any number of servers or data centers including multiple servers.
The electronic device 105 may be a smartphone, tablet device, or a wearable device (such as a head-mountable portable system) that includes a display system capable of presenting a visualization of the augmented reality environment to the user 101. The electronic device 105 may be powered by a battery and/or any other power source. In one example, the display system of the electronic device 105 provides a stereoscopic presentation of the augmented reality environment to the user, enabling a three-dimensional visual display of a particular scene rendering. In one or more implementations, instead of or in addition to utilizing the electronic device 105 to access an augmented reality environment, a user may use a handheld electronic device 104, such as a tablet, watch, mobile device, or the like.
The electronic device 105 may include one or more cameras, such as a camera 150 (e.g., a visible light camera, an infrared camera, etc.). Further, the electronic device 105 may include various sensors 152 including, but not limited to, cameras, image sensors, touch sensors, microphones, Inertial Measurement Units (IMUs), heart rate sensors, temperature sensors, lidar sensors, radar sensors, sonar sensors, GPS sensors, Wi-Fi sensors, near field communication sensors, and the like. Further, the electronic device 105 may include hardware elements, such as hardware buttons or switches, that may receive user input. User inputs detected by such sensors and/or hardware elements correspond to various input modalities for initiating recording within a given augmented reality environment. For example, such input modalities may include, but are not limited to, face tracking, eye tracking (e.g., gaze direction), hand tracking, gesture tracking, biometric readings (e.g., heart rate, pulse, pupil dilation, respiration, temperature, electroencephalogram, smell), recognition of speech or audio (e.g., specific hot words), activation of buttons or switches, and the like. The electronic device 105 may also detect and/or classify physical objects in the physical environment of the electronic device 105.
The electronic device 105 may be communicatively coupled to a base device, such as the electronic device 110 and/or the electronic device 115. Generally, such base devices may include more computing resources and/or available power than electronic device 105. In one example, the electronic device 105 may operate in various modes. For example, the electronic device 105 may operate in a stand-alone mode independent of any base device. When the electronic device 105 operates in a stand-alone mode, the number of input modalities may be constrained by power limitations of the electronic device 105 (such as available battery power of the device). In response to the power limitation, the electronic device 105 may deactivate certain sensors within the device itself to maintain battery power.
The electronic device 105 may also operate in a wireless tethered mode (e.g., connected with the base device via a wireless connection) to work in conjunction with a given base device. The electronic device 105 may also operate in a connected mode in which the electronic device 105 is physically connected to the base device (e.g., via a cable or some other physical connector), and may utilize power resources provided by the base device (e.g., where the base device charges the electronic device 105 while physically connected).
When the electronic device 105 is operating in the wireless tethered mode or the connected mode, processing user input and/or rendering at least a portion of the augmented reality environment may be offloaded to the base device, thereby reducing the processing burden on the electronic device 105. For example, in one implementation, the electronic device 105 works in conjunction with the electronic device 110 or the electronic device 115 to generate an augmented reality environment that includes physical and/or virtual objects and that enables different forms of interaction (e.g., visual, auditory, and/or physical or tactile interactions) between a user and the augmented reality environment in a real-time manner. In one example, the electronic device 105 provides a rendering of a scene corresponding to an augmented reality environment that can be perceived by a user and interacted with in real time. Additionally, as part of presenting the rendered scene, the electronic device 105 may provide sound and/or haptic or tactile feedback to the user. The content of a given rendered scene may depend on available processing power, network availability and capacity, available battery power, and current system workload.
The electronic device 105 may also detect events that have occurred within the scene of the augmented reality environment. Examples of such events include detecting the presence of a particular person, entity, or object in the scene. The detected physical objects may be classified by the electronic device 105, the electronic device 110, and/or the electronic device 115, and the location, position, size, dimensions, shape, and/or other characteristics of the physical objects may be used to provide physical anchor objects to an XR application for generating virtual content (such as a UI of the application) for display within the XR environment.
It should also be appreciated that electronic device 110 and/or electronic device 115 may also operate in conjunction with electronic device 105 or generate such an augmented reality environment independent of electronic device 105.
Network 106 may communicatively couple (directly or indirectly) electronic device 105, electronic device 110, and/or electronic device 115, for example, with server 120 and/or one or more electronic devices of one or more other users. In one or more implementations, the network 106 may be an interconnection network that may include the internet or devices communicatively coupled to the internet.
The electronic device 110 may include a touch screen and may be, for example, a smart phone including a touch screen, a portable computing device, such as a laptop computer including a touch screen, a peripheral device including a touch screen (e.g., a digital camera, an earphone), a tablet device including a touch screen, a wearable device including a touch screen (such as a watch, a wristband, etc.), any other suitable device including, for example, a touch screen, or any electronic device having a touch pad. In one or more implementations, the electronic device 110 may not include a touch screen, but may support touch screen-like gestures, such as in an augmented reality environment. In one or more implementations, the electronic device 110 may include a touch pad. In fig. 1, by way of example, electronic device 110 is depicted as a mobile smart phone device having a touch screen. In one or more implementations, the electronic device 110, the handheld electronic device 104, and/or the electronic device 105 may be and/or may include all or part of the electronic system discussed below with respect to fig. 14. In one or more implementations, the electronic device 110 may be another device, such as an Internet Protocol (IP) camera, a tablet computer, or a peripheral device such as an electronic stylus, or the like.
The electronic device 115 may be, for example, a desktop computer, a portable computing device such as a laptop computer, a smart phone, a peripheral device (e.g., digital camera, headset), a tablet device, a wearable device such as a watch, wristband, etc. In fig. 1, by way of example, the electronic device 115 is depicted as a desktop computer. The electronic device 115 may be and/or may include all or part of an electronic system discussed below with respect to fig. 14.
The servers 120 may form all or part of a computer network or server farm 130, such as in a cloud computing or data center implementation. For example, server 120 stores data and software, and includes specific hardware (e.g., processors, graphics processors, and other special purpose or custom processors) for rendering and generating content of an augmented reality environment, such as graphics, images, video, audio, and multimedia files. In one implementation, server 120 may function as a cloud storage server that stores any of the aforementioned augmented reality content generated by the above-described devices and/or server 120.
Fig. 2 illustrates an exemplary architecture that may be implemented by the electronic device 105 and another electronic device 205 (e.g., the handheld electronic device 104, the electronic device 110, the electronic device 115, or another electronic device 105) in accordance with one or more implementations of the subject technology. For purposes of explanation, portions of the architecture of fig. 2 are described as being implemented by the electronic device 105 of fig. 1, such as by a processor and/or memory of the electronic device; however, appropriate portions of the architecture may be implemented by any other electronic device, including the electronic device 110, the electronic device 115, and/or the server 120. However, not all of the depicted components may be used in all implementations, and one or more implementations may include additional or different components than those shown in the figure. Variations in the arrangement and type of these components may be made without departing from the spirit or scope of the claims set forth herein. Additional components, different components, or fewer components may be provided.
Portions of the architecture of fig. 2 may be implemented in software or hardware, including by one or more processors and memory devices containing instructions that, when executed by a processor, cause the processor to perform the operations described herein. In the example of fig. 2, an application (such as application 202) provides application data for rendering a UI of the application to a rendering engine 223. The application data may include application-generated content (e.g., windows, buttons, tools, etc.) and/or user-generated content (e.g., text, images, etc.) and information for rendering the content in the UI. The rendering engine 223 renders the UI for display by a display, such as display 225 of the electronic device 105.
As shown in fig. 2, another electronic device 205 in communication with the electronic device 105 does not have the application 202 installed or available. In the example of fig. 2, the electronic device 105 casts an application (e.g., a UI of the application) to the other electronic device 205 by providing remote UI information to the other electronic device 205 (e.g., to the rendering engine 263 of the other electronic device 205). In the example of fig. 2, the rendering engine 223 is shown as providing the remote UI information to the other electronic device 205. However, this is merely illustrative, and the remote UI information may be provided by the application 202 and/or other system processes running at the electronic device 105, including system processes for providing the UI of the application 202 to the display 225, which may be implemented before or after the rendering engine 223 in the pipeline.
The remote UI information provided from the electronic device 105 to the electronic device 205 may include one or more images, one or more video streams, and/or one or more other assets associated with one or more elements of the user interface; one or more layer trees describing the layout and/or appearance of the one or more elements of the user interface; and metadata of the user interface. In one or more implementations, the metadata can include the one or more layer trees. For example, when rendering engine 223 generates rendered display frames for displaying a UI by display 225, rendering engine 223 may also generate one or more display frames for portions of the UI (e.g., elements of the UI such as text fields, buttons, tools, windows, dynamic content, images, embedded video, etc.). The display frames of the portions of the UI may be provided as images of static elements and/or may form a corresponding video stream that may be provided to another electronic device 205 for rendering of a version of the UI by the rendering engine 263. In one or more implementations, the electronic device 105 can combine one or more of the elements of the UI into a combined video stream. For example, the electronic device 105 may determine that two or more elements of the UI are coplanar and partially overlapping and generate a video stream representing a current view of the two or more coplanar and partially overlapping elements.
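As a rough sketch of the kind of payload described above (assets for individual UI elements, one or more layer trees, and metadata carrying timing information), the following Swift types are illustrative only; the names (RemoteUIPayload, UIAsset, UIMetadata) and fields are assumptions, not a format defined by this disclosure.

```swift
import Foundation

// Illustrative container for the remote UI information sent to the receiving device.
struct RemoteUIPayload: Codable {
    var assets: [UIAsset]              // images and/or references to video streams for UI elements
    var serializedLayerTrees: [Data]   // opaque, serialized layer trees (a possible node structure is sketched later)
    var metadata: UIMetadata           // timing information for synchronizing trees and assets
}

// One asset backing a UI element: either encoded image bytes or a video stream handle.
struct UIAsset: Codable {
    enum Kind: String, Codable { case image, videoStream }
    var id: UUID
    var kind: Kind
    var imageData: Data?               // encoded image bytes for a static element, if applicable
    var streamIdentifier: String?      // handle for a separately transported video stream, if applicable
}

// Metadata used to keep layer trees and assets in sync at the receiver.
struct UIMetadata: Codable {
    var presentationTime: TimeInterval // which layer-tree revision the assets correspond to
    var sequenceNumber: UInt64         // ordering for incremental (delta) layer-tree updates
}
```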
The rendering engine 223, application 202, and/or other processes at the electronic device 105 may also generate one or more layer trees based on the application data that describe (e.g., in hierarchical form) how the one or more images, video streams, and/or state information associated with the elements of the UI may be combined to form a version of the UI for the display 225. The rendering engine 223, application 202, and/or other processes at the electronic device 105 may also generate metadata that includes timing information for synchronizing the layer tree with images, video streams, and/or state information (e.g., primitives) of various portions of the UI for generating a version of the UI at another electronic device 205. In one or more implementations, when the UI displayed at the electronic device 105 changes (e.g., due to user input at the electronic device 105 and/or due to application activity of the application 202), one or more new layer trees or one or more incremental (difference) layer trees may be sent to another electronic device 205 to update the version of the UI at the other device.
At the other electronic device 205, the rendering engine 263 can render a version of the user interface using the one or more layer trees and the one or more video streams by, for example, applying preferences of the second device to at least one of the one or more layer trees and synchronizing the one or more video streams and the one or more layer trees using the metadata received from the first device. The other electronic device 205 may then display the rendered version of the user interface using the display 265 (e.g., display the one or more video streams and/or one or more other assets according to instructions in the one or more layer trees). In some use cases, a layer tree may arrive at the electronic device 205 before a corresponding asset for a layer of the layer tree. In such use cases, the electronic device 205 can display a placeholder at the location of the asset until the asset arrives from the electronic device 105.
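The following is a minimal sketch, under assumed type names (ReceivedLayer, ReceiverPreferences), of the receiver-side behavior described above: local preferences are applied to a received layer description, and a placeholder is used when a referenced asset has not yet arrived.

```swift
import Foundation

// A simplified description of one received layer, as the receiver might see it.
struct ReceivedLayer {
    var assetID: UUID?        // asset backing this layer, if any
    var pointSize: Double     // nominal size from the sender's layer tree
    var colorName: String     // nominal color from the sender's layer tree
}

// Preferences stored at the second device (values are illustrative).
struct ReceiverPreferences {
    var textScale: Double = 1.25       // e.g., the second device prefers larger text
    var preferredColor: String? = "blue"
}

// Returns the layer as it should actually be rendered at the second device,
// plus a flag indicating whether a placeholder is needed for a missing asset.
func resolve(_ layer: ReceivedLayer,
             preferences: ReceiverPreferences,
             availableAssets: Set<UUID>) -> (layer: ReceivedLayer, usePlaceholder: Bool) {
    var adjusted = layer
    adjusted.pointSize *= preferences.textScale       // apply local size preference
    if let color = preferences.preferredColor {
        adjusted.colorName = color                    // apply local color preference
    }
    // If the layer tree arrived before its asset, render a placeholder until it does.
    let missing = layer.assetID.map { !availableAssets.contains($0) } ?? false
    return (adjusted, missing)
}
```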
In one or more implementations, the layer tree can provide a hierarchical structure, a tree or graph structure (e.g., a view tree and/or a scene graph), and/or any other declarative form describing the UI. For example, the layer tree may include and/or may be associated with a layer hierarchy that describes each layer of the UI for display. For example, the UI may include content that incorporates a background, which may utilize one or more blur layers and/or other filter layers. Thus, a tree may include nodes and/or subtrees that contain one or more attributes describing the blur layer, such as depth, size, placement, and the like. In one or more implementations, the rendering engine 263 can parse the layer tree to manage rendering of portions of the UI, such as portions corresponding to respective video streams for the UI. In one or more implementations, the electronic device 105 (e.g., the rendering engine 223 or a system process at the electronic device 105) can serialize the layer tree and/or one or more assets (such as images, video streams, etc.) for transmission to the electronic device 205.
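A layer tree of the kind described above could, for illustration, be modeled as a small recursive, serializable node type. The sketch below is an assumption for explanatory purposes; an actual system-level framework would define its own declarative format and serialization.

```swift
import Foundation

// One node of an illustrative layer tree. Attribute names are assumptions only.
struct LayerNode: Codable {
    var name: String
    var depth: Double              // convertible to a z-coordinate in a 3D scene
    var size: [Double]             // width, height
    var placement: [Double]        // x, y offset within the parent layer
    var blurRadius: Double?        // present for blur/filter layers
    var assetID: UUID?             // image or video stream backing this layer, if any
    var children: [LayerNode] = [] // sublayers / subtrees
}

// Serialize a tree for transmission to the receiving device, and parse it back there.
func encodeTree(_ root: LayerNode) throws -> Data {
    try JSONEncoder().encode(root)
}

func decodeTree(_ data: Data) throws -> LayerNode {
    try JSONDecoder().decode(LayerNode.self, from: data)
}
```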
In some use cases, the electronic device 105 may determine that the version of the UI sent for rendering at the electronic device 205 can no longer be sent as a combination of assets, layer trees, and/or metadata. For example, the electronic device 105 may determine that the bandwidth and/or quality of the connection between the electronic device 105 and the electronic device 205 is insufficient, that the computing resources (e.g., power, memory, and/or processing resources) of the receiving device are insufficient, and/or that compatibility issues between the operating system of the electronic device 105 and the operating system of the electronic device 205 may result in and/or may have resulted in an invalid state of the UI or a portion thereof at the electronic device 205. In a use case in which the electronic device 105 determines that the version of the UI sent for rendering at the electronic device 205 can no longer be sent as a combination of assets, layer trees, and/or metadata (e.g., due to reduced capabilities of the other electronic device 205), the electronic device 105 may switch to a fallback mode in which the UI (e.g., the entire UI) is encoded into a video stream and sent to the electronic device 205 (e.g., without metadata for reconstructing and/or rendering the UI at the electronic device 205). In one or more implementations, the electronic device 105 can monitor the connection and/or the computing resources of the electronic device 205 and switch back to sending the version of the UI as a combination of assets, layer trees, and/or metadata when the connection and/or computing resources are sufficient to avoid an invalid state at the electronic device 205.
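The fallback decision described above can be illustrated with a simple selection function. The thresholds and field names below are assumptions chosen only to make the logic concrete.

```swift
import Foundation

// Illustrative snapshot of link quality and receiver resources.
struct LinkAndDeviceStatus {
    var bandwidthMbps: Double
    var receiverBatteryLevel: Double      // 0.0 ... 1.0
    var receiverFreeMemoryMB: Double
    var operatingSystemsCompatible: Bool
}

enum CastingMode {
    case structured     // assets + layer trees + metadata
    case videoFallback  // the whole UI encoded as one video stream
}

// When the structured form risks an invalid state at the receiver, fall back to video.
func selectCastingMode(_ status: LinkAndDeviceStatus) -> CastingMode {
    let linkOK = status.bandwidthMbps >= 5.0                             // illustrative threshold
    let resourcesOK = status.receiverBatteryLevel > 0.1
        && status.receiverFreeMemoryMB > 200                             // illustrative thresholds
    if linkOK && resourcesOK && status.operatingSystemsCompatible {
        return .structured
    }
    return .videoFallback
}
```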
In one or more implementations, the layers of the layer tree may be associated with depth information that may be converted to z-coordinates (and/or z-planes) in a three-dimensional coordinate system. In the example of fig. 2, the application 202 is cast for display of the UI of the application 202, for example, in a two-dimensional scene (such as on a display of a smartphone, a tablet device, a computer monitor, or a television).
Fig. 3 illustrates another implementation in which additional information is provided for displaying the UI of the application 202 in a three-dimensional (e.g., XR) scene. In the example of fig. 3, the sensors 152 provide environmental information (e.g., depth information from one or more depth sensors) to an Operating System (OS) service, such as OS service 200. In one or more implementations, the OS service 200 may be a service provided by an operating system of the electronic device 105 that performs operations for generating an XR environment. The camera 150 may also provide images of the physical environment to the OS service 200. The OS service 200 may use the environmental information (e.g., depth information and/or images) from the sensors 152 and the camera 150 to generate three-dimensional scene information, such as a three-dimensional map, of some or all of the physical environment of the electronic device 105.
As shown in fig. 3, an application 202 may request an anchor, such as a physical object anchor, from an OS service 200 in an anchor request. The application 202 may be a game application, a media player application, a content editor application, a training application, a simulator application, or generally any application that provides a UI that is displayed at a location that depends on the physical environment, such as by anchoring the UI to a physical object anchor.
The physical object anchor may be a general physical object such as a horizontal planar surface (e.g., a surface of a floor or desktop), a vertical planar surface (e.g., a surface of a wall), or a specific physical object (e.g., a table, wall, television cabinet, sofa, refrigerator, desk, chair, etc.). The application 202 may include code that, when executed by one or more processors of the electronic device 105, generates application data for displaying a UI of the application on, near, attached to, or otherwise associated with the physical object anchor.
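As an illustration of an anchor request of the kind described above, the sketch below shows an application asking an OS-level anchor provider for a class of physical anchor and receiving a transform it can attach its UI to. The protocol and type names are hypothetical, not an API from this disclosure.

```swift
import Foundation

// Illustrative classes of physical anchors an application might request.
enum PhysicalAnchorQuery {
    case horizontalPlane          // e.g., a floor or desktop surface
    case verticalPlane            // e.g., a wall
    case object(label: String)    // e.g., "table", "television cabinet", "sofa"
}

// Illustrative anchor returned by the OS service: an identifier plus a pose.
struct Anchor {
    var id: UUID
    var transformFromOrigin: [Double]  // 4x4 transform, row-major, relative to the device origin
}

// Hypothetical interface an OS service could expose to applications.
protocol AnchorProviding {
    // Returns nil if no matching physical object is found in the environment.
    func requestAnchor(matching query: PhysicalAnchorQuery) -> Anchor?
}

// An application would then associate its UI with the returned anchor, e.g.:
func placeUI(using provider: AnchorProviding) -> Anchor? {
    provider.requestAnchor(matching: .object(label: "table"))
}
```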
Once the application data has been generated, the application data may be provided to OS service 200 and/or rendering engine 223, as shown in FIG. 3. Environmental information such as a depth map of the physical environment and/or object information of objects detected in the physical environment may also be provided to the rendering engine 223. The rendering engine 223 may then render application data from the application 202 for display by the display 225 of the electronic device 105. The UI of the application 202 is rendered for display at an appropriate location on the display 225 to appear associated with a physical anchor object or other anchor provided by the OS service 200. The display 225 may be, for example, an opaque display, and the camera 150 may be configured to provide a pass-through video feed to the opaque display. The UI may be rendered for display on the display at a location corresponding to a display location of the physical anchor object in the pass-through video. As another example, the display 225 may be a transparent or translucent display. The UI may be rendered for display on the display at a location corresponding to a direct view of the physical anchor object through the transparent or translucent display.
As shown, the electronic device 105 may also include a composition engine 227 that composites video images of the physical environment, based on the images from the camera 150, for display together with the rendered UI from the rendering engine 223. For example, the composition engine 227 may be provided in an electronic device 105 that includes an opaque display, to provide pass-through video to the display. In an electronic device 105 implemented with a transparent or translucent display that allows the user to directly view the physical environment, the composition engine 227 may be omitted or unused in some cases, or may be incorporated into the rendering engine 223. Although the example of fig. 3 shows the rendering engine 223 as separate from the OS service 200, it should be understood that the OS service 200 and the rendering engine 223 may form a common service and/or that rendering operations for rendering content for display may be performed by the OS service 200. Although the example of fig. 3 shows the rendering engine 223 as separate from the application 202, it should be appreciated that, in some implementations, the application 202 may render content for display by the display 225 without using a separate rendering engine.
The electronic device 105 may allow the application 202 to request and obtain anchor information from the OS service 200 (e.g., via an application programming interface, or API), as shown in fig. 3, which may facilitate efficient development, implementation, and/or runtime execution of the application 202 (e.g., because each application 202 does not have to perform its own object detection, scene mapping, etc.). As shown in fig. 3, when the application 202 is cast for three-dimensional display by another electronic device, such as the other electronic device 205, the OS service 200 (e.g., or the application 202 or the rendering engine 223) may provide anchor information (e.g., remote anchor information) for the UI of the application 202 to the other electronic device 205 (e.g., to an Operating System (OS) service, such as the OS service 260, and/or to the rendering engine 263 at the other electronic device). In one or more implementations, the OS service 260 may be a service provided by an operating system of the electronic device 205 that performs operations for generating an XR environment. In one or more implementations, the anchor information can be serialized for transmission from the electronic device 105 to the electronic device 205 along with one or more assets (e.g., images, video streams, etc.), one or more layer trees, and/or other metadata (e.g., including timing information). Although the example of fig. 3 shows the rendering engine 263 as separate from the OS service 260, it should be understood that the OS service 260 and the rendering engine 263 may form a common service and/or that rendering operations for rendering content for display may be performed by the OS service 260.
As shown in fig. 3, even if the application 202 is not available at another electronic device 205, the OS service 260 (e.g., and/or rendering engine 263) may use the remote UI information and the remote anchor information (e.g., in conjunction with environmental information obtained by the sensors 152 and/or camera 150 of the other electronic device 205) to generate scene information. As shown, the other electronic device 205 may also include a composition engine 267 that composes video (e.g., from the camera 150 at the other electronic device 205) for display by the display 265.
The anchor information (e.g., remote anchor information) provided from the electronic device 105 to the other electronic device 205 may include information indicating a location at which the other electronic device 205 should render a UI corresponding to the application 202 in the environment (e.g., physical environment, mixed reality environment, or virtual environment) of the other electronic device 205. In one example, the anchor information may include a transformation such that, as the UI is positioned by the electronic device 105 relative to an origin in the environment of the electronic device 105, an anchor point location of the UI for the other electronic device 205 is similarly positioned relative to the origin in the environment of the other electronic device 205. In an operational scenario in which the electronic device 105 and the other electronic device 205 are co-located (e.g., in a common or overlapping physical environment), the origin of the electronic device 105 and the origin of the other electronic device 205 may be at the same location. In an operational scenario in which the electronic device 105 and the other electronic device 205 are not co-located (e.g., the electronic device 105 and the other device are in a remote physical environment), the origin of the electronic device 105 and the origin of the other electronic device 205 may be at different locations.
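The origin-relative placement described above can be illustrated with plain 3D offsets (a real implementation would use full rigid-body transforms): the sender expresses the UI's anchor as an offset from a shared or per-device origin, and the receiver applies the same offset from its own origin. The types below are assumptions for illustration.

```swift
import Foundation

// Minimal 3D vector with just the arithmetic needed for this sketch.
struct Vector3 {
    var x, y, z: Double
    static func + (a: Vector3, b: Vector3) -> Vector3 {
        Vector3(x: a.x + b.x, y: a.y + b.y, z: a.z + b.z)
    }
    static func - (a: Vector3, b: Vector3) -> Vector3 {
        Vector3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
    }
}

// Sender: compute the anchor information to transmit (the UI's offset from its origin).
func anchorOffset(uiPosition: Vector3, senderOrigin: Vector3) -> Vector3 {
    uiPosition - senderOrigin
}

// Receiver: place the version of the UI at the same offset from its own origin.
// When both devices are co-located, the two origins coincide and the version
// appears at the same place in the shared physical environment.
func placementForVersion(offset: Vector3, receiverOrigin: Vector3) -> Vector3 {
    receiverOrigin + offset
}
```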
In one or more implementations, the electronic device 105 displays a user interface of an application running on the device (such as application 202) at an anchor point location in the physical environment of the electronic device. Fig. 4 illustrates an example in which a user interface 304 (e.g., of an application 202) is displayed by the electronic device 105 as appearing at a location 305 in an environment, such as the physical environment 300 of the electronic device 105. In the example of fig. 4, UI 304 includes a plurality of windows 308, each of which may include one or more elements 306. Element 306 may include text input fields, buttons, selectable tools, scroll bars, menus, drop down menus, links, plug-ins, image viewers, media players, slider bars, and the like. In the example of fig. 4, a UI 315 of another application is also displayed. In one or more implementations, the application corresponding to the UI 315 may be a shared application running on the electronic device 105 and one or more other electronic devices (such as the other electronic devices 205 of fig. 2 and 3).
In the example of fig. 4, both UI 304 and UI 315 are displayed in the viewable area 307 of the display of the electronic device 105 so as to appear, in the three-dimensional environment of the electronic device 105, as if they were on a physical wall 301 in the physical environment 300. In this example, a physical table 312 is also present in the physical environment 300. Displaying the UI 304 as if on the physical wall 301 may be accomplished in part by defining an anchor point location for the UI 304 at a location 305 on the physical wall. The anchor point location may be defined by detecting the physical wall and/or with respect to an origin 310 of the electronic device 105 in the physical environment 300. For example, the electronic device 105 may generate and/or store a transformation between the origin 310 and the anchor point location at the location 305. In this way, if the electronic device 105 moves within the physical environment 300, the displayed UI 304 remains at its anchor point location on the physical wall 301. In one or more implementations, when the electronic device 105 is communicatively coupled with one or more other electronic devices (such as the other electronic device 205 of figs. 2 and 3), the electronic device 105 may share origin information with the other electronic devices.
In one or more implementations, a user of the electronic device 105 may desire to share the UI 304 of the application 202 with another user of another device (e.g., another device located in the same physical environment 300 or in a remote, separate physical environment). In one or more implementations, the electronic device 105 can determine that another device (e.g., the other electronic device 205 of figs. 2 and 3) with which the device is communicating (e.g., and with which the user has indicated a desire to share the UI 304) does not have the application installed. In response to determining that the application is not installed at the other device, the electronic device 105 can provide information associated with the user interface 304 of the application 202 to the other device on which the application is not installed. As discussed herein, the information associated with the user interface 304 may include visual display information (e.g., remote UI information as described in connection with fig. 2 and/or fig. 3) and anchor information (e.g., remote anchor information as described in connection with fig. 3) for the user interface 304. For example, the anchor information may define the anchor point location at location 305 relative to the origin 310 in the physical environment 300 of the electronic device 105.
Fig. 5 illustrates an example of a physical environment 400 of the other electronic device 205. In this example, the physical environment 400 is separate and apart from the physical environment 300 of the electronic device 105. In the example of fig. 5, the other electronic device 205 has received and is displaying a version 404 of the UI 304 displayed by the electronic device 105 running the application 202. In this example, the version 404 of the UI 304 includes versions 408 of the plurality of windows 308 of the UI 304 of fig. 4 and versions 406 of the elements 306 of the UI 304 of fig. 4. The versions 406 of the elements 306 may be generated based on state information for the elements, images of the elements, and/or video streams of the elements as provided by the electronic device 105. The arrangement of the elements reconstructed from the state information, of the images corresponding to the elements, and/or of the video streams corresponding to the elements may be determined using the one or more layer trees and corresponding metadata provided from the electronic device 105, so that together they form the versions 408 of the windows 308 and the version 404 of the UI 304. For example, the OS service 260, the rendering engine 263, and/or the composition engine 267 can parse the one or more layer trees to manage rendering of the various elements 306 of the UI.
As shown in fig. 5, some of the versions 406 of the elements 306 appear different in the version 404 of the UI 304 than the corresponding elements 306 appear in the UI 304 displayed by the electronic device 105. In this example, two of the versions 406 of the elements 306 are larger in size than the corresponding elements 306 of the UI 304, and one of the versions 406 of the elements 306 has the same size as (but a different color than) the corresponding UI element 306 of the UI 304. These differences between the UI 304 and parts (but not all) of the version 404 may result from one or more preferences (e.g., user preferences) of the other electronic device 205 being applied to the one or more layer trees included in the visual display information received from the electronic device 105. Other differences between the elements 306 and windows 308 and the corresponding versions 406 and 408 may include differences in color, font size, theme, orientation relative to the user, and so forth.
In one or more implementations, another electronic device 205 on which the application 202 is not installed receives information associated with the user interface 304 of the application 202 from the electronic device 105 running the application 202, such as when the user interface 304 is displayed by the electronic device 105 at a first anchor location (e.g., location 305) in a first environment (e.g., physical environment 300) of the electronic device 105. As shown in fig. 5, the electronic device 205 (e.g., a second electronic device) may render a version 404 of the user interface 304 using visual display information from the electronic device 105 and may display the version 404 of the user interface 304 anchored to a second anchor location (e.g., at location 405) defined relative to a second origin 410 in a second environment (e.g., physical environment 400) of the second device using the anchor information.
As shown in fig. 5, the version 404 of the UI 304 displayed by the other electronic device 205 may appear to be at a different location within the viewable area 407 corresponding to the display of the other electronic device 205 than the location of the UI 304 within the viewable area 307 of the electronic device 105 (e.g., due to the current location, orientation, etc. of the other electronic device 205 and/or the user of the other electronic device 205). However, version 404 of UI 304 may be displayed at the same relative position with respect to origin 410 as relative position 305 of UI 304 with respect to origin 310 using the received anchor information (see fig. 4). In the example of fig. 5, the physical environment 400 does not include a physical wall 301 and includes different physical tables 415 at different physical locations.
As shown in the example of fig. 5, because the physical wall 301 of the physical environment 300 is not present in the physical environment 400, the version 404 of the UI 304 displayed by the other electronic device 205 may be displayed as a UI that appears to be floating.
In the examples of figs. 4 and 5, a first environment (e.g., the physical environment 300) of a first device (e.g., the electronic device 105) is remote from a second environment (e.g., the physical environment 400) of a second device (e.g., the other electronic device 205), a first origin (e.g., origin 310) is local to the first environment, and a second origin (e.g., origin 410) is local to the second environment. In this example, the anchor information provided from the first device to the second device may include, for example, a transformation that causes the second anchor location of the version 404 to be positioned relative to the second origin similarly to how the first anchor location of the UI 304 is positioned relative to the first origin. However, it should also be appreciated that in some scenarios, the first environment of the first device is the same as the second environment of the second device (e.g., the same physical environment), and the first origin and the second origin are a common origin at a single location.
As discussed herein, the version 404 of the UI 304 displayed by the other electronic device 205 on which the application 202 is not installed may be a non-interactive version of the UI. However, in one or more other implementations, the user of another electronic device 205 may be provided with interactivity with the version 404, such as the ability to move the version 404 to a new location, or to resize the version 404 or rotate the version. Such interactivity at the other electronic device 205 may be independent of the display of the UI 304 at the electronic device 105 and/or information associated with such interactivity may be sent to the electronic device 105 to cause corresponding movement, resizing, rotation, etc. of the UI 304 displayed by the electronic device 105.
For example, in one or more implementations, another electronic device 205 can receive user input of a version 404 of a user interface displayed at the other electronic device 205. In response to the user input, the other electronic device 205 may de-anchor the version 404 of the user interface displayed at the other electronic device 205 from the corresponding anchor location (e.g., at location 405) and move (and/or resize and/or rotate) the version of the user interface displayed at the second device to the new anchor location in the physical environment 400. In one or more implementations, moving the version 404 of the user interface displayed at the other electronic device 205 is independent of the display of the user interface 304 at the electronic device 105. In one or more other implementations, moving the version 404 of the user interface displayed at the other electronic device 205 causes a corresponding movement of the user interface 304 displayed at the electronic device 105 (e.g., using information describing movement and/or user input provided from the other electronic device 205 to the electronic device 105).
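The re-anchoring behavior described above might be sketched as follows, with assumed type names: a move gesture at the receiving device detaches the local version from its current anchor and re-anchors it elsewhere, and, depending on configuration, the move is either purely local or also reported back to the sending device.

```swift
import Foundation

// Simplified anchor location used only for this sketch.
struct AnchorLocation { var x, y, z: Double }

// Illustrative model of the locally displayed version of a cast UI.
final class CastUIVersion {
    private(set) var anchor: AnchorLocation
    let mirrorsMovesToHost: Bool                       // whether moves are reported back
    var notifyHost: ((AnchorLocation) -> Void)?        // callback to the sending device

    init(anchor: AnchorLocation, mirrorsMovesToHost: Bool) {
        self.anchor = anchor
        self.mirrorsMovesToHost = mirrorsMovesToHost
    }

    func move(to newAnchor: AnchorLocation) {
        anchor = newAnchor                 // de-anchor and re-anchor locally
        if mirrorsMovesToHost {
            notifyHost?(newAnchor)         // optionally drive a matching move of the UI at the sender
        }
    }
}
```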
As shown in the example of fig. 5, when the other electronic device 205 has the same application installed as the application running on the electronic device 105 and that application is a shared application, both the electronic device 105 and the other electronic device 205 may display the same UI 315 of the shared application. This is because the same application running on the two devices can interpret the same application data in the same way to generate a local UI of the shared application at each device, in contrast to the application 202, which is installed on only one of the devices.
Although the example of fig. 4 shows the UI 304 anchored to the physical wall 301, this is merely illustrative, and the UI 304 may be initially displayed at other locations and/or moved to other locations by a user of the electronic device 105. For example, fig. 6 shows an example in which UI 304 is displayed on (e.g., anchored to) a physical table 312 in physical environment 300 of electronic device 105. As shown in fig. 7, because the physical table 312 is not present in the physical environment 400 of the other electronic device 205, the version 404 of the UI 304 displayed by the other electronic device 205 may be displayed at the same relative position with respect to the origin 410 as the UI 304 is displayed with respect to the origin 310, appearing as a floating UI.
In the examples of fig. 6 and 7, the first anchor location of UI 304 corresponds to a physical anchor object (e.g., physical table 312) in a physical environment (e.g., physical environment 300) of electronic device 105, and the second anchor location of version 404 corresponds to a virtual anchor in a second environment (e.g., physical environment 400) of another electronic device 205. As shown in fig. 8, in some scenarios, when a physical anchor object (e.g., physical table 312 in this example) is not available in the physical environment 400 of another electronic device, a virtual anchor object (such as virtual table 812) may be generated for anchoring version 404 displayed by another electronic device 205. In the example of fig. 8, the virtual anchor object has been rendered for display at the virtual anchor for version 404. In this example, the virtual anchor object has a form (e.g., the form of virtual table 812) that corresponds to the form of the physical anchor object.
As discussed above in connection with fig. 4 and 5, in one or more implementations, the user of another electronic device 205 may be provided with interactivity with the version 404, such as the ability to move the version 404 to a new location, or to resize the version 404 or rotate the version. Such interactivity at the other electronic device 205 may be independent of the display of the UI 304 at the electronic device 105 and/or information associated with such interactivity may be sent to the electronic device 105 to cause corresponding movement, resizing, rotation, etc. of the UI 304 displayed by the electronic device 105.
For example, in one illustrative use case, UI 304 of fig. 6 may be or include a representation of a chess board of a chess application running at electronic device 105 anchored to physical table 312. In one or more implementations, the anchor information provided by the electronic device 105 to the electronic device 205 may be used by the electronic device 205 to orient the checkerboard in the same orientation as the checkerboard displayed by the electronic device 105. In this example, even when the user of electronic device 105 and the user of electronic device 205 are in remote locations, the user of electronic device 205 may walk or otherwise move around the displayed version 404 of the checkerboard to position themselves opposite the user of electronic device 105.
In one or more other implementations, the version 404 of the chess board displayed by the electronic device 205 may be oriented differently than the chess board displayed by the electronic device 105. For example, the version 404 of the chess board displayed by the electronic device 205 may be positioned relative to the origin 410 similarly to the positioning of the UI 304 relative to the origin 310 (e.g., using the anchor information received from the electronic device 105), but rotated (e.g., based on preferences at the electronic device 205) such that the side of the chess board opposite the side facing the user of the electronic device 105 faces the user of the electronic device 205.
In an exemplary use case of the chess board UI, the other electronic device 205 may receive user input to the version 404 of the chess board displayed at the other electronic device 205. For example, the user input may be a gesture input corresponding to lifting the version 404 of the chess board displayed by the electronic device 205 from the virtual anchor location and placing it on the physical table 415. In response to the user input, the other electronic device 205 may de-anchor the version 404 of the chess board displayed at the other electronic device 205 from the corresponding anchor location (e.g., at the location 405) and move the version of the chess board displayed at the electronic device 205 to a new anchor location associated with the physical table 415. As another example, the user input may be a gesture input corresponding to rotating the version 404 of the chess board displayed by the electronic device 205 (e.g., such that a desired side of the chess board faces the user of the electronic device 205). In response to the user input, the other electronic device 205 may rotate the version 404 of the chess board displayed at the other electronic device 205 according to the rotation gesture. In one or more implementations, moving and/or rotating the version 404 of the chess board displayed at the other electronic device 205 is independent of the display of the chess board at the electronic device 105. In one or more other implementations, moving and/or rotating the version 404 of the chess board displayed at the other electronic device 205 causes a corresponding movement and/or rotation of the chess board displayed at the electronic device 105 (e.g., using information describing the movement and/or the user input, provided from the other electronic device 205 to the electronic device 105).
In one or more implementations, some user inputs at the electronic device 205 may cause a change to the version 404 of the chess board displayed by the electronic device 205 without affecting the display of the chess board at the electronic device 105, and other user inputs to the version 404 of the chess board displayed by the electronic device 205 may cause a change to the display of the chess board at the electronic device 105. For example, a user of the electronic device 205 may be provided with the ability to rotate the chess board without affecting the rotation of the chess board displayed by the electronic device 105, and the ability to move a chess piece on the version 404 of the chess board displayed by the electronic device 205 to cause a corresponding movement of the same chess piece on the chess board displayed by the electronic device 105.
For example, the electronic device 205 may determine (e.g., based on metadata provided with the version of the UI 304 transmitted from the electronic device 105) not to transmit gesture inputs for rotating the chess board back to the electronic device 105, and to transmit gesture inputs corresponding to moving a chess piece on the chess board back to the electronic device 105. In various implementations, gesture input information corresponding to the movement of the chess piece may be applied locally by the electronic device 205 to the version 404 of the chess board displayed by the electronic device 205 and sent to the electronic device 105 to be applied to the chess board displayed by the electronic device 105 (e.g., and to the underlying application for game management), or the gesture input information may be sent to the electronic device 105 to be used to affect the chess board displayed by the electronic device 105 (e.g., and to update the underlying application for game management), after which the version 404 of the chess board displayed by the electronic device 205 may be updated based on updated UI information and/or anchor information generated by the electronic device 105 in response to receiving the gesture input provided at the electronic device 205 (e.g., updated information sent from the electronic device 105 to the electronic device 205).
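One hedged way to picture this routing is a per-interaction policy table carried in the casting metadata, which the receiving device consults before deciding whether to handle a gesture locally, forward it, or both. The `InputPolicy` enumeration and field names below are illustrative assumptions, not the described implementation.

```swift
import Foundation

// Hypothetical per-interaction policy carried in the casting metadata.
enum InputPolicy: String, Codable {
    case applyLocally        // e.g., rotating the board for local viewing comfort
    case sendToCaster        // e.g., moving a piece, which must update the game state
    case applyLocallyAndSend // apply immediately, then reconcile with the caster's update
}

struct GestureEvent {
    var elementID: String
    var kind: String         // e.g., "rotate", "movePiece"
}

// Decide what to do with a gesture, given a policy table from the metadata.
func route(_ gesture: GestureEvent,
           policies: [String: InputPolicy],
           applyLocally: (GestureEvent) -> Void,
           sendToCaster: (GestureEvent) -> Void) {
    switch policies[gesture.kind] ?? .sendToCaster {
    case .applyLocally:
        applyLocally(gesture)
    case .sendToCaster:
        sendToCaster(gesture)
    case .applyLocallyAndSend:
        applyLocally(gesture)
        sendToCaster(gesture)
    }
}
```

Under such a scheme, a "rotate" gesture could be mapped to applyLocally while a "movePiece" gesture is mapped to sendToCaster, matching the chess example above.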
FIG. 9 illustrates a flow chart of an exemplary process for receiving a screen-cast application in accordance with aspects of the subject technology. The blocks of process 900 are described herein as occurring sequentially or linearly. However, multiple blocks of process 900 may occur in parallel. Moreover, the blocks of process 900 need not be performed in the order shown, and/or one or more blocks of process 900 need not be performed and/or may be replaced by other operations.
In the example of fig. 9, at block 902, a version of a user interface of an application is received, at a second device on which the application is not installed, from a first device running the application. In one or more implementations, the version of the user interface includes one or more video streams associated with one or more elements of the user interface. In one or more implementations, the version of the user interface further includes one or more layer trees associated with the one or more elements of the user interface, as well as metadata of the user interface. For example, the first device and the second device may communicate via a secure wireless connection. In various implementations, the first device and the second device (and/or one or more additional devices) may be participant devices in a videoconferencing session or a copresence session.
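For concreteness, the received version of the user interface could be modeled by a payload along these lines; the structure, type names, and fields are assumptions made only to illustrate the three kinds of information described at block 902.

```swift
import Foundation

// Hypothetical payload for a cast user interface: per-element video streams,
// layer trees describing structure and layout, and metadata for synchronization.
struct CastedUIVersion: Codable {
    var videoStreams: [ElementVideoStream]
    var layerTrees: [LayerTreeNode]
    var metadata: UIMetadata
}

struct ElementVideoStream: Codable {
    var elementID: String        // UI element rendered by this stream (e.g., a field or a button)
    var streamID: UUID           // handle for locating the decoded frames of this stream
}

// Layer trees are recursive: each node carries layout for one layer and its children.
struct LayerTreeNode: Codable {
    var elementID: String
    var x: Double
    var y: Double
    var width: Double
    var height: Double
    var children: [LayerTreeNode]
}

struct UIMetadata: Codable {
    var layerTreeTimestamp: TimeInterval     // time the current layer tree describes
    var frameTimes: [String: TimeInterval]   // elementID -> video frame time to display
}
```

A real implementation would also carry codec parameters and stream transport details, which are omitted from this sketch.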
In one or more implementations, the one or more elements of the user interface include a data editing field and a button. The one or more video streams may include a first video stream corresponding to the data editing field and a second video stream corresponding to the button.
At block 904, the version of the user interface is rendered, with the second device, using the one or more layer trees and the one or more video streams. Further details of the rendering of the version of the user interface are described below in connection with, for example, FIG. 10.
At block 906, the rendered version of the user interface is displayed with the second device. In one or more implementations, user input to the version of the user interface displayed by the second device may be received, such as by the second device. The version of the user interface displayed at the second device may be modified in accordance with the user input, with or without modification of the user interface displayed at the first device.
For example, modifying the version of the user interface displayed at the second device may include moving, resizing, rotating, or re-coloring the version of the user interface displayed at the second device independently of the user interface displayed at the first device (e.g., without changing the position, size, rotation, or color of the user interface displayed at the first device).
As another example, the second device may send information associated with the user input to the first device to cause a corresponding modification of the user interface displayed at the first device. The information associated with the user input may include location, motion, direction, depth, or other information describing the user input relative to one or more elements of the version of the user interface displayed at the second device.
FIG. 10 illustrates a flow chart of an exemplary process for rendering a version of a user interface received from another device in accordance with aspects of the subject technology. The blocks of process 1000 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1000 may occur in parallel. Moreover, the blocks of process 1000 need not be performed in the order shown, and/or one or more blocks of process 1000 need not be performed and/or may be replaced by other operations.
In the example of fig. 10, at block 1002, a preference of the second device is applied to at least one of the one or more layer trees. For example, the preferences stored at the second device may include a text size, font, color, or theme that the second device uses for displaying user interfaces of other applications installed on that device. Because the version of the user interface provided by the first device includes the one or more layer trees in addition to the one or more video streams, the preferences of the second device may be applied so that the version of the user interface is rendered at the second device differently from the UI displayed at the first device.
At block 1004, the one or more video streams and the one or more layer trees are synchronized using metadata received from the first device. Synchronizing the one or more video streams and the one or more layer trees using metadata received from the first device may include synchronizing the one or more video streams and the one or more layer trees in time using metadata received from the first device. For example, the metadata may include time information indicating which frames of each of the one or more video streams are to be included in the version of the user interface described by the one or more layer trees at any given time (the one or more layer trees themselves may change over time according to changes in the user interface displayed at the first device).
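As a sketch of how the time information might be consumed, the renderer could pick, for each element, the newest decoded frame whose timestamp does not exceed the frame time the metadata associates with the current layer tree. The types below are assumptions for illustration, not the described synchronization mechanism.

```swift
import Foundation

struct DecodedFrame {
    var presentationTime: TimeInterval
    var imageData: Data            // placeholder for the decoded pixels
}

// For one element, choose the most recent decoded frame whose presentation time
// does not exceed the frame time the metadata specifies for the current layer tree.
func frameToComposite(elementID: String,
                      decodedFrames: [String: [DecodedFrame]],
                      frameTimes: [String: TimeInterval]) -> DecodedFrame? {
    guard let targetTime = frameTimes[elementID],
          let frames = decodedFrames[elementID] else { return nil }
    return frames
        .filter { $0.presentationTime <= targetTime }
        .max(by: { $0.presentationTime < $1.presentationTime })
}
```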
In one or more implementations, at least one element of the one or more elements in the user interface displayed at the first device appears different from the at least one element of the one or more elements in the version of the user interface displayed at the second device because the preference is applied to at least one of the one or more layer trees. For example, applying the preference of the second device to at least one of the one or more layer trees includes modifying a size, shape, or color indicated by a portion of one of the one or more layer trees, the portion corresponding to one or more of the elements of the user interface, according to the preference. For example, in one exemplary operational scenario, buttons or other interactive tools displayed at a second device using a video stream of buttons from a first device may be enlarged at the second device (e.g., relative to other elements of a user interface) based on text size preferences of the second device. As another example operational scenario, the button may have a substantially orange color when displayed by the first device and may be modified to have a substantially blue color when displayed by the second device according to the theme or color preference of the second device.
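A preference of this kind might be applied by walking the layer tree and adjusting only the affected nodes before compositing, for example as in the following sketch; the node fields, `ViewerPreferences` structure, and scaling rule are illustrative assumptions.

```swift
// A minimal layer node carrying only the attributes a preference might touch.
struct StyledLayerNode {
    var elementID: String
    var width: Double
    var height: Double
    var colorRGBA: (Double, Double, Double, Double)?   // nil if the node has no tint of its own
    var children: [StyledLayerNode] = []
}

struct ViewerPreferences {
    var textScale: Double                               // e.g., 1.25 for larger text
    var accentRGBA: (Double, Double, Double, Double)?   // theme color override, if any
    var scaledElementIDs: Set<String>                   // elements that should honor the text scale
}

// Recursively apply the viewer's preferences to a copy of the tree.
func applying(_ prefs: ViewerPreferences, to node: StyledLayerNode) -> StyledLayerNode {
    var result = node
    if prefs.scaledElementIDs.contains(node.elementID) {
        result.width *= prefs.textScale
        result.height *= prefs.textScale
    }
    if let accent = prefs.accentRGBA, result.colorRGBA != nil {
        result.colorRGBA = accent
    }
    result.children = node.children.map { applying(prefs, to: $0) }
    return result
}
```

Because only the local copy of the tree is modified, the UI 304 displayed at the first device is unaffected, consistent with the behavior described above.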
FIG. 11 illustrates a flow chart of an exemplary process for casting an application in accordance with aspects of the subject technology. The blocks of process 1100 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1100 may occur in parallel. Furthermore, the blocks of process 1100 need not be performed in the order shown, and/or one or more blocks of process 1100 need not be performed and/or may be replaced by other operations.
In the example of fig. 11, at block 1102, the device displays a user interface for an application running on the device. The user interface may include one or more elements, such as element 306 of user interface 304 of fig. 4.
At block 1104, the device may determine that another device in communication with the device does not have the application installed. Determining that the other device does not have the application installed may include: sending a query from the device to the other device or to a server associated with the device and the other device; and receiving, in response to the query, an indication from the other device or the server that the application is not available or installed at the other device. The determination that the other device does not have the application installed may be made when the application is launched at the device, in response to a request for application sharing from a user of the application running on the device, or prior to launching the application (e.g., during a handshake operation and/or during establishment of a communication session between the device and the other device). The determination that the other device does not have the application installed may be performed by the application and/or by one or more background or system processes running on the device and in communication with the application.
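The installation check might be expressed as a small query/response exchange over whatever channel already connects the two devices (or the device and a server). The `PeerChannel` protocol and message shapes below are assumptions for illustration only.

```swift
import Foundation

// Messages for asking whether a peer (or a server that knows the peer) has an app installed.
struct AppAvailabilityQuery: Codable {
    var appIdentifier: String
}

struct AppAvailabilityResponse: Codable {
    var appIdentifier: String
    var isInstalled: Bool
}

// Abstract transport: the devices are assumed to already share a secure channel.
protocol PeerChannel {
    func send(_ data: Data, completion: @escaping (Data?) -> Void)
}

// Ask the peer whether it has the application; fall back to "not installed" if
// the query cannot be answered, so that casting can still be offered.
func checkInstallation(of appIdentifier: String,
                       over channel: PeerChannel,
                       completion: @escaping (Bool) -> Void) {
    let query = AppAvailabilityQuery(appIdentifier: appIdentifier)
    guard let payload = try? JSONEncoder().encode(query) else {
        completion(false)
        return
    }
    channel.send(payload) { reply in
        guard let reply = reply,
              let response = try? JSONDecoder().decode(AppAvailabilityResponse.self, from: reply),
              response.appIdentifier == appIdentifier else {
            completion(false)
            return
        }
        completion(response.isInstalled)
    }
}
```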
At block 1106, the device provides a version of the user interface that the application displays at the device to another device on which the application is not installed. The version of the user interface provided from the device to the other device may include one or more video streams associated with one or more elements of the user interface, one or more layer trees associated with the one or more elements of the user interface, and metadata of the user interface. For example, the one or more video streams include a plurality of video streams each corresponding to one of a plurality of elements of the user interface. The metadata may include time information for synchronization of the one or more video streams and the one or more layer trees.
In one or more implementations, a device receives user input of a user interface displayed at the device. As examples, the user input may include text input to a text input field of the UI, clicking on a display button of the UI, or resizing an element of the UI (e.g., a child window). The device may modify a user interface displayed at the device (e.g., to display typed text in text input fields, perform actions corresponding to buttons, or resize elements of the user interface) according to user input. The device may modify at least one of the one or more video streams and the one or more layer trees in accordance with the user input such that the other device may render a corresponding modification to a version of the user interface displayed at the other device. For example, modifying at least one of the one or more video streams and the one or more layer trees according to user input may include modifying one video stream of the plurality of video streams independently of another video stream of the plurality of video streams (e.g., a video stream corresponding to a text input field or a video stream corresponding to a button).
As described herein, the version of the user interface displayed at the other device may be a non-interactive version of the user interface. However, in some implementations, some interactivity with the version of the user interface may be provided by another device. For example, in one or more implementations, a device may receive an indication of user input from another device to a version of a user interface displayed at the other device and modify the user interface displayed at the device according to the user input to the version of the user interface displayed at the other device.
In one or more implementations, the process 1100 may further include determining, by the device, a reduced capability (e.g., reduced bandwidth and/or reduced computing capability) of the other device and, responsive to the determination, ceasing to provide the other device with the version of the user interface that includes the one or more video streams, the one or more layer trees, and the metadata, and instead providing, from the device to the other device, a single video stream representing the entire user interface displayed by the application at the device.
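One way to express this fallback is as a casting-mode decision driven by the peer's reported capabilities, as in the sketch below; the threshold value and type names are arbitrary assumptions.

```swift
// How the user interface is delivered to the other device.
enum CastingMode {
    case perElementStreams   // video streams + layer trees + metadata
    case singleStream        // one video stream of the entire user interface
}

struct PeerCapabilities {
    var availableBandwidthMbps: Double
    var canCompositeLayerTrees: Bool
}

// Drop to a single full-UI stream when the peer reports reduced bandwidth or
// lacks the compute to composite per-element streams. The 5 Mbps threshold is
// an arbitrary illustrative value.
func selectCastingMode(for peer: PeerCapabilities) -> CastingMode {
    if !peer.canCompositeLayerTrees || peer.availableBandwidthMbps < 5.0 {
        return .singleStream
    }
    return .perElementStreams
}
```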
In one or more implementations, the user interface displayed by the device may be displayed in a three-dimensional environment (such as a mixed reality or virtual reality environment). In order to allow the other device to display a version of the UI similar to the UI displayed by the device, anchoring information may also be provided from the device to the other device. For example, displaying the user interface of the application running on the device may include displaying the user interface anchored to a physical anchor in the physical environment of the device, and anchor information for the version of the user interface may be provided to the other device. The anchor information may include, for example, a transformation between an origin in the physical environment of the first device and the UI displayed by the first device.
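For example, the anchor information could be carried as a rigid transform from the chosen origin to the UI's anchor pose, computed on the sending device roughly as follows; the matrix-based representation and names here are a sketch under that assumption, not the described format.

```swift
import simd

// Hypothetical anchor payload: where the UI sits, expressed relative to a shared
// notion of "origin" rather than in the sender's world coordinates.
struct AnchorInfo {
    var originToUI: simd_float4x4   // transform from the origin to the UI anchor pose
}

// Sender side: given the origin's pose and the UI's pose in the sender's world
// frame, compute the origin-relative transform that gets shipped to the peer.
func makeAnchorInfo(worldFromOrigin: simd_float4x4,
                    worldFromUI: simd_float4x4) -> AnchorInfo {
    let originFromWorld = worldFromOrigin.inverse
    return AnchorInfo(originToUI: originFromWorld * worldFromUI)
}
```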
For example, if a user of a device verbally describes the location of a user interface in its environment or the location of the content of an element of the user interface relative to other elements of the user interface, and a user of another device (e.g., at a remote location) desires to view the UI or element thereof being described, it may be helpful to provide anchoring information to the other device.
Further features of implementations in which application screen casting is provided for three-dimensional display are described in connection with figs. 12 and 13.
FIG. 12 illustrates a flow chart of an exemplary process for receiving a screen-cast application for three-dimensional display in accordance with aspects of the subject technology. The blocks of process 1200 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1200 may occur in parallel. Moreover, the blocks of process 1200 need not be performed in the order shown, and/or one or more blocks of process 1200 need not be performed and/or may be replaced by other operations.
In the example of fig. 12, at block 1202, a second device on which an application is not installed receives, from a first device running the application, information associated with a user interface of the application displayed by the first device at a first anchor location in a first environment of the first device. For example, the information associated with the user interface may include visual display information and anchor information of the user interface. The anchor information may define a first anchor location relative to a first origin in a first environment of the first device.
At block 1204, the second device renders a version of the user interface using visual display information (e.g., remote UI information). Rendering the version of the user interface may include synchronizing one or more video streams in the visual display information with one or more layer trees in the visual display information using metadata included in the visual display information (e.g., by performing one or more of the operations described above in connection with fig. 10).
At block 1206, the second device displays a version of the user interface anchored to a second anchor location defined relative to a second origin in a second environment of the second device using the anchor information. For example, the anchor information may include a transformation between the first origin and a first anchor location of the user interface. Displaying the version of the user interface anchored to the second anchor location may include determining the second anchor location by applying the same transformation relative to the second origin.
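On the receiving side, that same origin-relative transform can be re-applied against the second device's own origin to obtain the second anchor location, as in this sketch (which assumes the origin-to-UI matrix from the hypothetical anchor payload above).

```swift
import simd

// Receiver side: place the version of the UI at the same pose relative to the
// local origin as the sender's UI has relative to the sender's origin.
func secondAnchorPose(localWorldFromOrigin: simd_float4x4,
                      originToUI: simd_float4x4) -> simd_float4x4 {
    return localWorldFromOrigin * originToUI
}

// If both devices share one physical environment (a common origin), the result is
// the same world pose; if they are remote, it is the same offset from each origin.
```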
In one or more implementations, the first anchor location corresponds to a physical anchor object in a physical environment of the first device and the second anchor location corresponds to a virtual anchor in a second environment of the second device (e.g., as described above in connection with fig. 7). In one or more implementations, the second device may render and/or display the virtual anchor object at the virtual anchor location (e.g., as described above in connection with fig. 8). For example, the virtual anchor object may have a form that corresponds to the form of the physical anchor object.
In one or more implementations, the first environment of the first device is the same as the second environment of the second device (e.g., the same physical environment), and the first origin and the second origin are a common origin at a single location. In other implementations, the first environment of the first device is remote from the second environment of the second device, the first origin is local to the first environment and the second origin is local to the second environment, and the anchor information includes a transformation such that the second anchor location is similarly positioned relative to the second origin as the first anchor location is positioned relative to the first origin.
As described herein, the version of the user interface displayed at the other device may be a non-interactive version of the user interface. However, in some implementations, some interactivity with the version of the user interface may be provided by the other device. For example, in one or more implementations, the second device may receive user input (e.g., a gesture corresponding to grabbing and moving the user interface) to the version of the user interface displayed at the second device. In response to the user input, the second device may de-anchor the version of the user interface displayed at the second device from the second anchor location and move the version of the user interface displayed at the second device to a new anchor location in the second environment. In one or more implementations, moving the version of the user interface displayed at the second device is independent of the display of the user interface at the first device. In one or more other implementations, moving the version of the user interface displayed at the second device causes a corresponding movement of the user interface displayed at the first device. For example, an indication of the user input may be sent from the second device to the first device so that the first device may perform a corresponding movement of the user interface displayed at the first device.
FIG. 13 illustrates a flow chart of an exemplary process for casting an application for three-dimensional display in accordance with aspects of the subject technology. The blocks of process 1300 are described herein as occurring sequentially or linearly. However, multiple blocks of process 1300 may occur in parallel. Furthermore, the blocks of process 1300 need not be performed in the order shown, and/or one or more blocks of process 1300 need not be performed and/or may be replaced by other operations.
In the example of fig. 13, at block 1302, a user interface of an application running on a device is displayed at an anchor point location in an environment of the device.
At block 1304, the device may determine that another device in communication with the device does not have the application installed. Determining that the other device does not have the application installed may include: sending a query from the device to the other device or to a server associated with the device and the other device; and receiving, in response to the query, an indication from the other device or the server that the application is not available or installed at the other device. The determination that the other device does not have the application installed may be made when the application is launched at the device, in response to a request for application sharing from a user of the application running on the device, or prior to launching the application (e.g., during a handshake operation and/or during establishment of a communication session between the device and the other device). The determination that the other device does not have the application installed may be performed by the application and/or by one or more background processes running on the device and in communication with the application.
At block 1306, the device may provide information associated with a user interface of the application to another device on which the application is not installed. The information associated with the user interface may include visual display information and anchor information for the user interface. The anchor information may define an anchor point location relative to an origin in the environment of the device. For example, the anchor information may include a transformation between an origin and an anchor point location of the user interface. The visual display information (e.g., remote UI information) may include multiple video streams that each correspond to an element of the user interface. The visual display information may also include metadata including time information for rendering a version of the user interface at another device using the plurality of video streams.
In one or more implementations, an anchor location in an environment of a device corresponds to a physical anchor object in a physical environment of the device, and the physical anchor object is not available in another physical environment of another device (e.g., as described above in connection with fig. 7). In one or more implementations, the device may identify the physical anchor object in response to a request from an application for the physical anchor object (e.g., a request for a particular physical object (such as a table or wall) or for a more general physical object (such as a vertical plane or horizontal plane)).
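The application's anchor request might name either a specific object class or a more general plane orientation, and the device could satisfy it with a simple matcher over detected planes, for example as sketched below; the request and plane types are assumptions for illustration.

```swift
import Foundation

// What the application asks for when it wants a physical anchor.
enum AnchorRequest {
    case objectClass(String)        // e.g., "table" or "wall"
    case horizontalPlane
    case verticalPlane
}

struct ScenePlane {
    var id: UUID
    var objectClass: String?        // e.g., "table", if classification is available
    var isHorizontal: Bool
}

// Return the first detected plane that satisfies the application's request.
func findPhysicalAnchor(for request: AnchorRequest,
                        in planes: [ScenePlane]) -> ScenePlane? {
    switch request {
    case .objectClass(let name):
        return planes.first { $0.objectClass == name }
    case .horizontalPlane:
        return planes.first { $0.isHorizontal }
    case .verticalPlane:
        return planes.first { !$0.isHorizontal }
    }
}
```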
As described above, aspects of the subject technology may include the collection and transfer of data from an application to a computing device of another user. The present disclosure contemplates that in some instances, the collected data may include personal information data that uniquely identifies or may be used to identify a particular person. Such personal information data may include demographic data, location-based data, online identifiers, telephone numbers, email addresses, home addresses, data or records related to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, personal information data may be used in a collaborative setting with a plurality of users. In addition, the present disclosure contemplates other uses for personal information data that are beneficial to the user. For example, health and fitness data may be used according to user preferences to provide insight into their overall health condition, or may be used as positive feedback to individuals who use technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will adhere to established privacy policies and/or privacy practices. In particular, it would be desirable for such entities to implement and consistently apply privacy practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining user privacy. Such information regarding the use of personal data should be prominently and easily accessible to users and should be updated as the collection and/or use of data changes. Personal information from users should be collected only for legitimate uses. Further, such collection/sharing should occur only after receiving the consent of the users or another legitimate basis specified in applicable law. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations that may serve to impose higher standards. For instance, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly.
Regardless of the foregoing, the present disclosure also contemplates implementations in which a user selectively blocks the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware elements and/or software elements may be provided to prevent or block access to such personal information data. For example, in the case of sharing information from a particular application, the present technology may be configured to allow a user to choose to "opt in" or "opt out" of participation in the collection of personal information data at any time during or after registration with a service. In addition to providing "opt in" and "opt out" options, the present disclosure also contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an application that their personal information data will be accessed, and then reminded again just before the personal information data is accessed by the application.
Further, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes the risk of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting the data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing identifiers, controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods such as differential privacy.
Therefore, although the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments can be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
FIG. 14 illustrates an exemplary computing device that can be used to implement aspects of the subject technology in accordance with one or more implementations. Computing device 1400 may be and/or be part of any computing device or server for generating the features and processes described above, including but not limited to a laptop computer, a smart phone, a tablet device, a wearable device, such as goggles or glasses, etc. Computing device 1400 may include various types of computer-readable media and interfaces for various other types of computer-readable media. Computing device 1400 includes persistent storage 1402, system memory 1404 (and/or buffers), input device interface 1406, output device interface 1408, bus 1410, ROM 1412, one or more processing units 1414, one or more network interfaces 1416, and/or subsets and variations thereof.
Bus 1410 generally represents all of the system, peripherals, and chipset buses that communicatively connect many of the internal devices of computing device 1400. In one or more implementations, the bus 1410 communicatively connects the one or more processing units 1414 with the ROM 1412, the system memory 1404, and the persistent storage device 1402. One or more processing units 1414 retrieve instructions to be executed and data to be processed from these various memory units in order to perform the processes of the subject disclosure. In different implementations, one or more of the processing units 1414 may be a single processor or a multi-core processor.
ROM 1412 stores static data and instructions that are needed by the one or more processing units 1414 and other modules of the computing device 1400. Persistent storage 1402, on the other hand, may be a read-and-write memory device. Persistent storage 1402 may be a non-volatile memory unit that stores instructions and data even when the computing device 1400 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as persistent storage 1402.
In one or more implementations, removable storage devices (such as floppy disks, flash memory drives, and their corresponding disk drives) may be used as the permanent storage device 1402. Like persistent storage 1402, system memory 1404 may be a read-write memory device. However, unlike persistent storage 1402, system memory 1404 may be a volatile read-write memory, such as random access memory. The system memory 1404 may store any of the instructions and data that may be needed by the one or more processing units 1414 at runtime. In one or more implementations, the processes of the subject disclosure are stored in system memory 1404, persistent storage 1402, and/or ROM 1412. One or more processing units 1414 retrieve instructions to be executed and data to be processed from these various memory units in order to perform one or more embodied processes.
The bus 1410 is also connected to an input device interface 1406 and an output device interface 1408. The input device interface 1406 enables a user to communicate information as well as select commands to the computing device 1400. Input devices that may be used with the input device interface 1406 may include, for example, an alphanumeric keyboard and a pointing device (also referred to as a "cursor control device"). The output device interface 1408 may, for example, enable display of images generated by the computing device 1400. Output devices that may be used with the output device interface 1408 may include, for example, printers and display devices, such as liquid crystal displays (LCDs), light emitting diode (LED) displays, organic light emitting diode (OLED) displays, flexible displays, flat panel displays, solid state displays, projectors, or any other device for outputting information.
One or more implementations may include a device that serves as both an input device and an output device, such as a touch screen. In these implementations, the feedback provided to the user may be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in fig. 14, bus 1410 also couples computing device 1400 to one or more networks and/or to one or more network nodes through one or more network interfaces 1416. In this manner, computing device 1400 may be part of a computer network (such as a local area network ("LAN"), a wide area network ("WAN"), or an intranet), or may be part of a network of networks (such as the Internet). Any or all of the components of computing device 1400 may be used with the subject disclosure.
Implementations within the scope of the present disclosure may be partially or fully implemented using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) in which one or more instructions are written. The tangible computer readable storage medium may also be non-transitory in nature.
A computer readable storage medium may be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device including any processing electronics and/or processing circuitry capable of executing the instructions. By way of example, and not limitation, computer readable media can comprise any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer readable medium may also include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash memory, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Furthermore, the computer-readable storage medium may include any non-semiconductor memory, such as optical disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium may be directly coupled to the computing device, while in other implementations, the tangible computer-readable storage medium may be indirectly coupled to the computing device, for example, via one or more wired connections, one or more wireless connections, or any combination thereof.
The instructions may be directly executable or may be used to develop executable instructions. For example, the instructions may be implemented as executable or non-executable machine code, or may be implemented as high-level language instructions that may be compiled to produce executable or non-executable machine code. Further, the instructions may also be implemented as data, or may include data. Computer-executable instructions may also be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, and the like. As will be appreciated by one of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions may vary significantly without altering the underlying logic, functionality, processing, and output.
While the above discussion primarily refers to a microprocessor or multi-core processor executing software, one or more implementations are performed by one or more integrated circuits, such as an ASIC or FPGA. In one or more implementations, such integrated circuits execute instructions stored on the circuits themselves.
Those of skill in the art will appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. The various components and blocks may be arranged differently (e.g., arranged in a different order, or divided in a different manner) without departing from the scope of the subject technology.
It should be understood that the specific order or hierarchy of blocks in the processes disclosed herein is an illustration of exemplary approaches. Based upon design preferences, it should be understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that all illustrated blocks may be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components (e.g., computer program products) and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this patent application, the terms "base station," "receiver," "computer," "server," "processor," and "memory" refer to an electronic or other technical device. These terms exclude a person or group of people. For purposes of this specification, the term "display" or "displaying" means displaying on an electronic device.
As used herein, the phrase "at least one of" preceding a series of items, with the term "and" or "or" used to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase "at least one of" does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases "at least one of A, B, and C" or "at least one of A, B, or C" each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicates "configured to", "operable to", and "programmed to" do not mean any particular tangible or intangible modification to a subject but are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control operations or components may also mean that the processor is programmed to monitor and control operations or that the processor is operable to monitor and control operations. Likewise, a processor configured to execute code may be interpreted as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, this aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, other configurations, some configurations, one or more configurations, subject technology, disclosure, the present disclosure, other variations thereof, and the like are all for convenience and do not imply that disclosure involving such one or more phrases is essential to the subject technology nor that such disclosure applies to all configurations of the subject technology. The disclosure relating to such one or more phrases may apply to all configurations or one or more configurations. The disclosure relating to such one or more phrases may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other previously described phrases.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration. Any embodiment described herein as "exemplary" or as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Furthermore, to the extent that the terms "includes," "has," and the like are used in either the description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for".
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.

Claims (41)

1. A method, comprising:
at a second device on which an application is not installed, receiving a version of a user interface for the application displayed at a first device running the application from the first device,
wherein the version of the user interface comprises:
one or more video streams associated with one or more elements of the user interface,
one or more layer trees associated with the one or more elements of the user interface, and
metadata of the user interface;
using the one or more layer trees and the one or more video streams, rendering the version of the user interface with the second device by:
applying the preferences of the second device to at least one of the one or more layer trees, and
synchronizing the one or more video streams and the one or more layer trees using the metadata received from the first device; and
displaying, with the second device, the rendered version of the user interface.
2. The method of claim 1, wherein at least one of the one or more elements in the user interface displayed at the first device appears different from the at least one of the one or more elements in the version of the user interface displayed at the second device due to the preference applied to the at least one of the one or more layer trees.
3. The method of claim 1, wherein the one or more elements of the user interface comprise a data editing field and a button.
4. The method of claim 3, wherein the one or more video streams comprise a first video stream corresponding to the data editing field and a second video stream corresponding to the button.
5. The method of claim 1, wherein the preferences stored at the second device include text size, color, or theme.
6. The method of claim 1, further comprising receiving user input of the version of the user interface displayed with the second device.
7. The method of claim 6, further comprising modifying the version of the user interface displayed at the second device in accordance with the user input without causing modification to the user interface displayed at the first device.
8. The method of claim 7, wherein modifying the version of the user interface displayed at the second device comprises: moving, resizing, rotating, or re-coloring the version of the user interface displayed at the second device independently of the user interface displayed at the first device.
9. The method of claim 6, further comprising:
modifying the version of the user interface displayed at the second device in accordance with the user input; and
sending information associated with the user input to the first device to cause a corresponding modification of the user interface displayed at the first device.
10. The method of claim 1, wherein synchronizing the one or more video streams and the one or more layer trees using the metadata received from the first device comprises: synchronizing the one or more video streams and the one or more layer trees in time using the metadata received from the first device.
11. The method of claim 1, wherein applying the preference of the second device to at least one of the one or more layer trees comprises: modifying, according to the preference, a size, shape, or color indicated by a portion of one of the one or more layer trees, the portion corresponding to one or more of the elements of the user interface.
12. A computer program product comprising code stored in a tangible computer readable storage medium, the code comprising:
code for receiving, at a second device on which an application is not installed, a version of a user interface for the application displayed at a first device running the application from the first device,
wherein the version of the user interface comprises:
one or more video streams associated with one or more elements of the user interface,
one or more layer trees associated with the one or more elements of the user interface, and
metadata of the user interface;
code for rendering, with the second device, the version of the user interface using the one or more layer trees and the one or more video streams, by:
applying the preferences of the second device to at least one of the one or more layer trees, and
synchronizing the one or more video streams and the one or more layer trees using the metadata received from the first device; and
code for displaying, with the second device, the rendered version of the user interface.
13. An apparatus, comprising:
a processor; and
a memory including instructions that, when executed by the processor, cause the processor to:
display a user interface of an application running on the device;
determine that another device in communication with the device does not have the application installed; and
provide, from the device to the other device on which the application is not installed, a version of a user interface for the application displayed at the device,
wherein the version of the user interface comprises:
one or more video streams associated with one or more elements of the user interface,
one or more layer trees associated with the one or more elements of the user interface, and
metadata of the user interface.
14. A method, comprising:
displaying, by a device, a user interface of an application running on the device;
determining, by the device, that another device in communication with the device is not installed with the application; and
providing a version of a user interface for the application displayed at the device from the device to the other device on which the application is not installed,
wherein the version of the user interface comprises:
one or more video streams associated with one or more elements of the user interface,
one or more layer trees associated with the one or more elements of the user interface, and
metadata of the user interface.
15. The method of claim 14, further comprising:
receiving user input of the user interface displayed at the device;
modifying the user interface displayed at the device in accordance with the user input; and
modifying at least one of the one or more video streams and the one or more layer trees in accordance with the user input.
16. The method of claim 15, wherein the one or more video streams comprise a plurality of video streams each corresponding to one of a plurality of elements of the user interface.
17. The method of claim 16, wherein modifying at least one of the one or more video streams and the one or more layer trees in accordance with the user input comprises: modifying one of the plurality of video streams independently of another one of the plurality of video streams.
18. The method of claim 14, wherein the metadata comprises time information for synchronization of the one or more video streams and the one or more layer trees.
19. The method of claim 14, further comprising:
receiving, at the device, an indication of user input from the other device for the version of the user interface displayed at the other device; and
modifying the user interface displayed at the device according to the user input of the version of the user interface displayed at the other device.
20. The method of claim 14, wherein displaying the user interface of the application running on the device comprises displaying the user interface anchored to a physical anchor in a physical environment of the device, the method further comprising providing anchor information for the version of the user interface to the other device.
21. The method of claim 14, further comprising:
determining, by the device, a reduced capability of the other device;
stopping providing the version of the user interface including the one or more video streams, the one or more layer trees, and the metadata to the other device; and
providing, from the device to the other device, a single video stream representing an entire user interface for the application displayed at the device.
22. A method, comprising:
receiving, at a second device on which an application is not installed, information associated with a user interface of the application from a first device running the application, wherein the user interface is displayed by the first device at a first anchor location in a first environment of the first device,
wherein the information associated with the user interface includes visual display information and anchor information for the user interface, and
wherein the anchor information defines the first anchor location relative to a first origin in the first environment of the first device;
rendering, by the second device, a version of the user interface using the visual display information; and
displaying, by the second device, the version of the user interface anchored to a second anchor location defined relative to a second origin in a second environment of the second device using the anchor information.
23. The method of claim 22, wherein the first anchor location corresponds to a physical anchor object in a physical environment of the first device, and wherein the second anchor location corresponds to a virtual anchor in the second environment of the second device.
24. The method of claim 23, further comprising rendering, by the second device, a virtual anchor object at the virtual anchor.
25. The method of claim 24, wherein the virtual anchor object has a form corresponding to a form of the physical anchor object.
26. The method of claim 22, wherein the first environment of the first device is the same as the second environment of the second device, and wherein the first origin and the second origin are a common origin at a single location.
27. The method of claim 22, wherein the first environment of the first device is remote from the second environment of the second device, wherein the first origin is local to the first environment and the second origin is local to the second environment, and wherein the anchor information comprises a transformation that causes the second anchor location to be similarly positioned relative to the second origin as the first anchor location is positioned relative to the first origin.
28. The method of claim 27, further comprising:
receiving user input of the version of the user interface displayed at the second device; and
In response to the user input:
de-anchoring the version of the user interface displayed at the second device from the second anchor location; and
moving the version of the user interface displayed at the second device to a new anchor location in the second environment.
29. The method of claim 28, wherein moving the version of the user interface displayed at the second device is independent of a display of the user interface at the first device.
30. The method of claim 28, wherein moving the version of the user interface displayed at the second device causes a corresponding movement of the user interface displayed at the first device.
31. An apparatus, comprising:
a processor; and
a memory including instructions that, when executed by the processor, cause the processor to:
display a user interface of an application running on the device at an anchor point location in an environment of the device;
determine that another device in communication with the device does not have the application installed; and
provide, from the device to the other device on which the application is not installed, information associated with the user interface of the application,
Wherein the information associated with the user interface includes visual display information and anchor information for the user interface, and
wherein the anchor information defines the anchor point location relative to an origin in the environment of the device.
32. The device of claim 31, wherein the visual display information comprises a plurality of video streams each corresponding to an element of the user interface.
33. The device of claim 32, wherein the visual display information further comprises metadata including time information for rendering a version of the user interface at the other device using the plurality of video streams.
34. The device of claim 31, wherein the anchor location in the environment of the device corresponds to a physical anchor object in a physical environment of the device.
35. The device of claim 34, wherein the physical anchor object is not available in another physical environment of the other device.
36. The device of claim 35, wherein the processor is configured to identify the physical anchor object in response to a request from the application for the physical anchor object.
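Claims 31-36 describe the sending side: the device running the application determines that a peer does not have the application installed, then provides visual display information (per-element video streams plus timing metadata) together with anchor information defined relative to its own origin. The Swift sketch below is only an illustration under those assumptions; the struct names, the wire format, and the `castIfNeeded` helper are hypothetical and not taken from the patent.

```swift
import Foundation

// Illustrative sender-side sketch (claims 31-36); all names are hypothetical.

/// One video stream per element of the user interface (claim 32).
struct ElementStream {
    var elementID: String
    var frames: [Data]
}

/// Metadata with time information the receiver uses to compose the streams
/// into a version of the user interface (claim 33).
struct CastMetadata {
    var presentationTimes: [String: TimeInterval]   // keyed by elementID
}

/// Anchor information: the anchor location as an offset from an origin in the
/// sending device's environment (claims 31 and 34).
struct CastAnchorInfo {
    var offsetFromOrigin: SIMD3<Float>
    var physicalAnchorForm: String?   // e.g. "table"; may be unavailable remotely (claim 35)
}

/// The information provided to a device on which the application is not installed.
struct CastPayload {
    var streams: [ElementStream]
    var metadata: CastMetadata
    var anchor: CastAnchorInfo
}

/// Cast only when the peer does not have the application installed (claim 31).
func castIfNeeded(peerHasApplication: Bool,
                  payload: CastPayload,
                  send: (CastPayload) -> Void) {
    guard !peerHasApplication else { return }   // peer can run the application natively
    send(payload)
}
```

Claim 36 additionally has the sender identify the physical anchor object in response to a request from the application; a receiver-side counterpart of that lookup is sketched after claim 41 below.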
37. A method, comprising:
displaying a user interface of an application running on a device at an anchor location in an environment of the device;
determining that another device in communication with the device does not have the application installed; and
providing information associated with the user interface of the application from the device to the other device on which the application is not installed,
wherein the information associated with the user interface includes visual display information and anchor information for the user interface, and
wherein the anchor information defines the anchor location relative to an origin in the environment of the device.
38. The method of claim 37, wherein the visual display information comprises a plurality of video streams each corresponding to an element of the user interface.
39. The method of claim 38, wherein the visual display information further comprises metadata including time information for rendering a version of the user interface at the other device using the plurality of video streams.
40. The method of claim 39, wherein:
the anchor location in the environment of the device corresponds to a physical anchor object in a physical environment of the device; and
the physical anchor object is not available in another physical environment of the other device.
41. The method of claim 40, further comprising identifying the physical anchor object in response to a request from the application for the physical anchor object.
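Claims 23-25, 34-36, and 40-41 together cover the case where an anchor refers to a physical object that exists in one environment but not the other: the receiving side falls back to a virtual anchor and renders a virtual anchor object of corresponding form. The sketch below shows one way such a fallback could look; the enum, the detector closure, and the fallback policy are hypothetical assumptions, not the patent's implementation.

```swift
// Illustrative anchor-resolution sketch (claims 23-25, 34-36, 40-41).
// All names and the fallback policy are hypothetical assumptions.

/// Forms a physical anchor object might take; a virtual anchor object of the
/// same form is used when the physical object is unavailable (claims 24-25).
enum AnchorObjectForm: String {
    case table, wall, floor
}

struct ResolvedAnchor {
    var location: SIMD3<Float>
    var isVirtual: Bool
    var form: AnchorObjectForm
}

/// Resolve an anchor requested by the application on the receiving device.
/// - Parameters:
///   - requestedForm: the form of the physical anchor object the application asked for
///   - detectPhysical: hypothetical detector returning a location if a matching
///     physical object exists in the local environment, or nil otherwise
///   - fallbackLocation: where to place a virtual anchor object when nothing matches
func resolveAnchor(requestedForm: AnchorObjectForm,
                   detectPhysical: (AnchorObjectForm) -> SIMD3<Float>?,
                   fallbackLocation: SIMD3<Float>) -> ResolvedAnchor {
    if let location = detectPhysical(requestedForm) {
        // Claims 36 and 41: a matching physical anchor object was identified in
        // response to the application's request, so anchor to it directly.
        return ResolvedAnchor(location: location, isVirtual: false, form: requestedForm)
    }
    // Claims 23-25: no matching physical object is available here, so use a
    // virtual anchor and render a virtual anchor object of corresponding form.
    return ResolvedAnchor(location: fallbackLocation, isVirtual: true, form: requestedForm)
}
```

For example, a user interface anchored to a physical table on the first device could be re-anchored to a rendered virtual table on the second device, preserving the spatial relationship described in claims 22 and 27.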
CN202180092980.0A 2021-02-04 2021-12-08 Application program screen projection Pending CN116848507A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/145,952 2021-02-04
US17/541,207 US20220244903A1 (en) 2021-02-04 2021-12-02 Application casting
US17/541,207 2021-12-02
PCT/US2021/062450 WO2022169506A1 (en) 2021-02-04 2021-12-08 Application casting

Publications (1)

Publication Number Publication Date
CN116848507A (en) 2023-10-03

Family

ID=88162110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180092980.0A Pending CN116848507A (en) 2021-02-04 2021-12-08 Application program screen projection

Country Status (1)

Country Link
CN (1) CN116848507A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination