US11514839B2 - Optimized display image rendering

Optimized display image rendering

Info

Publication number
US11514839B2
Authority
US
United States
Prior art keywords
user
image
time
head
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/561,661
Other versions
US20220122516A1 (en)
Inventor
Atsuo Kuwahara
Deepak S. Vembar
Paul S. Diefenbaugh
Vallabhajosyula S. Somayazulu
Kofi C. Whitney
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US17/561,661 (US11514839B2)
Publication of US20220122516A1
Priority to US17/993,614 (US11721275B2)
Application granted
Publication of US11514839B2
Priority to US18/334,197 (US20230410720A1)
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/2092 Details of a display terminals using a flat panel, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G3/2096 Details of the interface to the display terminal specific for a flat panel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147 Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G09G2320/0252 Improving the response speed
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G09G2320/0261 Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G09G2320/028 Improving the quality of display appearance by changing the viewing angle properties, e.g. widening the viewing angle, adapting the viewing angle to the view direction
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/10 Special adaptations of display systems for operation with variable images
    • G09G2320/106 Determination of movement vectors or equivalent parameters within the image
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00 Aspects of interface with display user
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/12 Frame memory handling
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00 Aspects of data communication
    • G09G2370/02 Networking aspects
    • G09G2370/022 Centralised management of display operation, e.g. in a server instead of locally
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00 Aspects of data communication
    • G09G2370/16 Use of wireless transmission of display information

Definitions

  • This disclosure relates generally to rendering display images.
  • HMD head mounted display
  • VR Virtual Reality
  • AR Augmented Reality
  • This latency can be due, for example, to movement of the user's head between the image rendering and the actual display on the HMD.
  • FIG. 1 illustrates a user wearing a head mounted display
  • FIG. 2 illustrates display rendering
  • FIG. 3 illustrates display rendering
  • FIG. 4 illustrates a head mounted display system
  • FIG. 5 illustrates a head mounted display system
  • FIG. 6 illustrates display rendering
  • FIG. 7 illustrates display rendering
  • FIG. 8 illustrates display rendering
  • FIG. 9 illustrates display rendering
  • FIG. 10 illustrates display rendering
  • FIG. 11 illustrates a computing device
  • FIG. 12 illustrates one or more processors and one or more tangible, non-transitory computer-readable media.
  • numbers in the 100 series refer to features originally found in FIG. 1 ; numbers in the 200 series refer to features originally found in FIG. 2 ; and so on.
  • Head mounted displays are becoming more affordable and available to users (for example, in mainstream personal computer form-factors).
  • an optimal and differentiated user experience for users wearing HMDs is made available.
  • Some embodiments relate to optimization of display image rendering, predictive display rendering, and/or predictive image rendering, etc.
  • head mounted display systems minimize latencies such as motion latencies.
  • Some embodiments relate to optimizing time warping for Head-Mounted Displays (HMDs).
  • Time warping is a method in which a large rendered image target is prepared and content to be displayed on the HMD is adjusted to account for the delta (or difference) in the field of view (FOV) due to head movement of a user of the HMD between the time that the target image is rendered and when it is actually displayed on the HMD.
  • excess image data can be generated for the rendering, and then not actually rendered in the displayed image due to the head movement.
  • the extra generated image data that is not actually rendered can represent wasted power and memory resources. Therefore, in some embodiments, a system can limit extra generated image data while still providing enough image data to make sure that rendered image data is available for display on the HMD.
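  • As a rough worked example of that tradeoff (the field of view, per-eye resolution, head speed, and latency figures in the sketch below are assumptions chosen for illustration, not values from this disclosure), the following Python sketch estimates how large an overscan margin a given head speed and latency imply, and how much of the render target goes unused when the head barely moves:

        import math

        # Assumed example parameters (not taken from this disclosure).
        fov_deg = 110.0          # horizontal field of view shown on the HMD
        eye_res_x = 2160         # displayed horizontal resolution per eye, in pixels
        head_speed_dps = 200.0   # assumed peak head angular velocity, degrees/second
        latency_s = 0.030        # assumed motion-to-photon latency, 30 ms

        # Worst-case angular shift between rendering and display.
        shift_deg = head_speed_dps * latency_s
        px_per_deg = eye_res_x / fov_deg
        margin_px = math.ceil(shift_deg * px_per_deg)   # extra pixels per side

        overscan_x = eye_res_x + 2 * margin_px
        extra_fraction = (overscan_x / eye_res_x) ** 2 - 1.0  # both axes overscanned

        print(f"margin per side: {margin_px} px, render target width: {overscan_x} px")
        print(f"extra pixels rendered: {extra_fraction:.0%} of the displayed image")
        # When the head is nearly still, most of that extra area is rendered but
        # never displayed, which is the wasted power and memory referred to above.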
  • FIG. 1 illustrates an example 100 of a user 102 wearing a head mounted display (HMD) 104 .
  • the user 102 is in a current position looking forward.
  • the same user 102 is shown wearing the head mounted display 104 , now looking toward a display position to the right of the user 102 (which appears to the left when viewing user 102 in FIG. 1 ), a short time after that same user 102 was looking forward.
  • a head mounted display system will have some latency for graphics rendering between the time the user 102 is looking forward (for example, at a time of image rendering) and the time that same user 102 is looking slightly to the right at a short time later (for example, at a time of display of a rendered image on the head mounted display 104 ).
  • FIG. 1 illustrates the concept of a shifted view rendered on the head mounted display 104 , based on motion of the head of the user 102 in a short timeframe between when the image is initially rendered and when that image is able to be displayed on the head mounted display 104 for the user 102 to view.
  • a needed adjustment could be very large. For example, if a large latency is introduced in the graphics pipe (for example, due to the complexity of the content being rendered), additional large latency can be introduced into the graphics pipe.
  • the processor may take a longer time to decode a very high resolution video than in a situation where a simple video is rendered at low resolution. Any extra time taken to place the resulting frame in the render target adds to the overall latency.
  • the overall latency is further increased. For example, if a host device and an HMD are wirelessly connected, and/or if the image render processing is implemented in the cloud, additional interface latencies can occur.
  • FIG. 2 illustrates a graphic block diagram 200 illustrating display rendering.
  • block 202 shows a frame in which a large rendered target is shown in block 204 .
  • the rendered target 204 can be an image rendering for display on a head mounted display (HMD), for example.
  • block 212 shows a frame with an actual rendering 214 on the head mounted display.
  • actual rendered image 214 can be adjusted (for example, using a time warp implementation). The adjustment is made from the initial rendering 204 , which is also shown as initial rendering 216 in dotted lines in FIG. 2 for comparison purposes.
  • a problem can occur where a large head motion or long latencies in the image display path can result in a lack of information required to display the contents based on the new field of view (FOV). This is illustrated, for example, by the portion of rendered image 214 that extends beyond the frame 212 .
  • Such a large motion or long latency in the graphics pipeline can result in the HMD not having all of the correct image data to display on the HMD.
  • these types of issues can be avoided.
  • the types of problems described herein may be overcome by optimizing a rendered image target by estimating the expected FOV at the time of display on the HMD. This is based on understanding the latency associated with displaying the rendered target image and estimating a head pose (and/or a head position) based on sensor data relating to the HMD.
  • head mounted display sensors and/or peripheral sensors may be used to detect head movement at one or more times near the time of rendering and/or the time of display.
  • Time warping is a method by which a large rendered target is prepared and content is displayed on a head mounted display in a manner such that it is adjusted to account for the change in the field of view (FOV).
  • This change in the FOV is due to head movement between the time that the rendered target was prepared and the time that it is actually displayed on the head mounted display, for example.
  • a larger image than necessary for the current image can be rendered at block 204 (for example 1.4 times what is necessary in order to render the current image).
  • This larger size rendering allows the correct image data to be available to be rendered on the head mounted display at a later time based on a potential change in the head motion of the user of the head mounted display (that is, in order to account for the latency).
  • a larger image can be rendered at block 204 .
  • the larger rendered image 204 can be transmitted to the head mounted display, with some latency associated with the head movement and/or additional latency associated with transmission of the rendered image to the head mounted display.
  • the user may have moved their head. Therefore, the user is then looking at an image that is slightly skewed from what was initially rendered at 204 . Since a bigger frame buffer can be rendered at 204 and transmitted to the head mounted display, additional image information can then be available at the time of display on the head mounted display.
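  • As a minimal sketch of that kind of adjustment (a pure-yaw, two-dimensional shift; a real time warp also corrects pitch, roll, and lens distortion, and the 1.4x factor, resolutions, and pixels-per-degree value below are only illustrative assumptions):

        import numpy as np

        def timewarp_crop(render_target, render_yaw_deg, display_yaw_deg,
                          display_w, display_h, px_per_deg):
            """Crop an overscanned render target to match the newest head yaw.

            render_target: H x W x C image rendered around render_yaw_deg.
            Returns the display_w x display_h window shifted by the yaw delta.
            """
            h, w = render_target.shape[:2]
            # Pixel shift implied by the head motion since the frame was rendered.
            shift_px = int(round((display_yaw_deg - render_yaw_deg) * px_per_deg))
            # Centered crop offset by the shift, clamped to stay inside the target.
            x0 = (w - display_w) // 2 + shift_px
            y0 = (h - display_h) // 2
            x0 = max(0, min(x0, w - display_w))
            return render_target[y0:y0 + display_h, x0:x0 + display_w]

        # Example: a 1.4x overscanned target cropped after a 3-degree head turn.
        target = np.zeros((round(1080 * 1.4), round(1200 * 1.4), 3), dtype=np.uint8)
        frame = timewarp_crop(target, render_yaw_deg=0.0, display_yaw_deg=3.0,
                              display_w=1200, display_h=1080, px_per_deg=12.0)
        print(frame.shape)   # (1080, 1200, 3)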
  • an HMD can be used which includes integrated motion sensors to track the movement of the HMD device.
  • an HMD can include inertia motion sensors.
  • external cameras facing toward the device may be used to track positional information (including position and orientation of the HMD, for example).
  • data from these sensors can be used to determine the field of view (FOV) of the user and render the appropriate content on the display of the HMD.
  • Data from these types of sensors can be used in some embodiments to predict and effectively make use of an image rendering buffer area to ensure that proper data to be displayed is available in the render target memory at the HMD. Some embodiments may result in a better user experience, optimized memory usage, and/or better power efficiency.
  • available head motion information can be used to predict future head poses (and/or head positions) and to adjust the render target accordingly.
  • time warping can be used to render more data into a render target buffer than what is actually necessary for display on the HMD. This can be implemented in a manner similar to digital stabilization used for cameras.
  • an image rendering target buffer is efficiently used to minimize the risk of not having available the proper content to display due to heavy motion and/or latencies in the rendering pipeline.
  • prediction (or projection) of the image position and/or orientation at a point when an image will be displayed on the HMD allows a reduction in the necessary amount of data that is rendered but not displayed, which can allow better power efficiency.
  • FIG. 3 is a graphic block diagram 300 illustrating display rendering.
  • block 302 shows a frame in which a larger render target image 304 to be displayed on a head mounted display (HMD) is initially rendered.
  • frame 312 shows an actual image rendering 314 on the head mounted display that is to be adjusted in a time warp implementation from the initial rendering 316 shown in dotted lines in FIG. 3 .
  • FIG. 3 can illustrate a situation where the user's head is turned very quickly and/or there is a long graphics latency associated with the image rendering or transmission. In such a situation, the rendered image 314 may adjust too far, such that the correct data is not available for displaying the appropriate image at that time.
  • the desired image is not actually rendered, so there is tearing, visual artifacts, etc.
  • it is desirable to take into account the direction in which the head mounted display user is looking (for example, the direction of movement of the user's head).
  • the rendering may be optimized based on the direction of motion, the speed of motion, and/or the latency, and some of the work done to render a larger frame buffer image may also be minimized.
  • a frame buffer image rendering may be done in an advantageous manner in order to account for latency in the pipeline, latency due to transmission of the rendered image to the head mounted display, latency in the sampling, and/or latency in actually rendering out the image to show the correct head mounted display image.
  • head motion of a user is predicted (for example, based on sampling). For example, in a head mounted display that is running at 90 Hertz rather than 30 Hertz, the head mounted display can sample head motion of a user once every 10 milliseconds. This sampling can be used in order to more accurately predict where the head is going to be at a desired display time (for example, in 10 milliseconds).
  • time warp may be used in addition to prediction of where the head will be in the future (for example in 10 ms) in order to save power, save memory, and make sure that the entire image is available to be rendered properly at the right time.
  • the prediction may occur within the head mounted display.
  • the prediction may occur somewhere else (for example, in a host system, a cloud, etc.). In some embodiments, the prediction may occur in a combination of the head mounted display and somewhere else (such as in a host system, a cloud, etc.)
  • FIGS. 1, 2 and 3 can illustrate head motion of a user in a particular direction such as a horizontal direction.
  • head motion can occur in a variety of directions, motions, etc.
  • the drawings and description herein should not be limited to illustrate head motion changes in only a horizontal direction.
  • head motion can occur in a vertical direction or in a combination of horizontal and vertical directions, for example.
  • head motion of a user is predicted in any direction and is not limited to prediction of head motion in a horizontal manner. That is, even though time warp can occur due to a motion of the user in a horizontal direction, it is noted that in some embodiments, time warp can occur when motion of a user occurs in a variety of directions.
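  • A minimal sketch of that kind of prediction, assuming constant angular velocity over the prediction horizon and treating yaw and pitch independently (the 10 ms sample spacing and 30 ms horizon are example figures only, not requirements of this disclosure):

        def predict_pose(samples, horizon_s):
            """Constant-velocity (dead-reckoning) prediction of (yaw, pitch) in degrees.

            samples: list of (timestamp_s, yaw_deg, pitch_deg) tuples, oldest first.
            horizon_s: how far beyond the newest sample to extrapolate.
            """
            (t0, yaw0, pitch0), (t1, yaw1, pitch1) = samples[-2], samples[-1]
            dt = t1 - t0
            yaw_rate = (yaw1 - yaw0) / dt
            pitch_rate = (pitch1 - pitch0) / dt
            return yaw1 + yaw_rate * horizon_s, pitch1 + pitch_rate * horizon_s

        # Two IMU samples 10 ms apart; predict the pose 30 ms after the newest one.
        samples = [(0.000, 10.0, -2.0), (0.010, 12.0, -2.5)]
        print(predict_pose(samples, horizon_s=0.030))   # (18.0, -4.0)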
  • FIG. 4 illustrates a system 400 including a host system 402 (for example, a Virtual Reality ready desktop system) and a head mounted display (HMD) 404 .
  • System 400 additionally can include a gaming add-on 406 (for example, an Intel® WiGig Gaming Add-on device), a wireless transceiver 408 that includes a wireless sink 410 (for example, an Intel® Wireless Gigabit Sink) and a battery 412 , as well as headphones 416 that can be used by a user in conjunction with the head mounted display 404 .
  • Gaming Add-on 406 and wireless transceiver 408 allow the host system 402 and the head mounted display 404 to communicate with each other via a wireless connection 414 .
  • audio and/or video as well as motion data may be transmitted via the gaming add-on 406 to and from the host 402 .
  • video, power, and/or motion data may be transmitted to and from the head mounted display 404 via the wireless transceiver 408 .
  • the wireless transceiver 408 may also be used to transmit audio data to headphones 416 .
  • FIG. 5 illustrates a system 500 including a host system 502 and a head mounted display (HMD) 504 .
  • Host system 502 includes a processor (for example, a CPU) that can implement a get pose (and/or get head position) operation 512 , a graphics processor that can implement a graphics rendering pipeline 514 , and a transmitter 516 .
  • transmitter 516 is a transceiver, allowing transmit and receive operations to and from the host 502 .
  • HMD 504 includes an IMU (and/or Inertial Magnetic Unit and/or Inertial Measurement Unit) 522 , a display 524 , a processor 526 , and a receiver 528 .
  • receiver 528 is a transceiver, allowing transmit and receive operations to and from the HMD 504 .
  • IMU 522 can be a sensor, an Inertial Magnetic Unit, and/or an Inertial Measurement Unit used to obtain information about the head mounted display (for example head position and/or head orientation).
  • IMU 522 is an inertia motion sensor.
  • IMU 522 includes one or more accelerometers, one or more gyroscopes, etc.
  • FIG. 5 is used to illustrate how, in some embodiments, latencies in the graphics rendering pipeline 514 and/or latencies in the interface including transmitter 516 and receiver 528 , for example, can affect the overall time from the initial estimated pose (and/or estimated head position) to when an adjustment is made by processor 526 (for example, using time warp technology and/or predictive pose and/or predictive head position) and displayed at the display 524 .
  • get pose (and/or get head position) block 512 can typically be implemented in a processor such as a central processor or CPU, and can work to obtain where the user is in space, and what direction the user is looking. This can be passed along with all the 3-D geometry data to the graphics rendering pipeline 514 .
  • the graphics rendering pipeline 514 takes all the graphics, the models, the texture, the lighting, etc. and generates a 3-D image scene.
  • This 3-D scene is generated based on the particular head position and view direction obtained by the get pose (and/or get head position) block 512 via the IMU 522 .
  • Processor 526 can implement a time warp and/or prediction of head position and/or view information sampled from IMU 522 .
  • processor 526 is used to implement adjustment for time warp and/or a predictive projected position of the rendered display from the graphics rendering pipeline 514 , based on additional information from the IMU 522 predicting how the user has moved their head since the original pose (and/or head position) was taken by the host processor at 512 .
  • the processor 526 of the head mounted display 504 is used to provide prediction and/or time warp processing.
  • the processor 526 samples the IMU 522 .
  • the host system 502 samples the IMU 522 .
  • the prediction could occur in one or more processors in the host system (for example, in one or more processors that include get pose (and/or head position) 512 and/or graphics rendering pipeline 514 ).
  • the sampled information from IMU 522 is used by a processor in the host system 502 to implement the image rendering.
  • the rendering may occur in the host system 502 , and in some embodiments the rendering may occur in the head mounted display 504 .
  • the rendering may occur across both the host system 502 and the head mounted display 504 .
  • predictive tracking is implemented to save power and efficiency.
  • one or more processors in the host system 502 (for example, a graphics processor performing the graphics rendering 514 ) are preempted in order to provide the predictive tracking. While the graphics rendering pipeline 514 within a processor in the host system 502 is illustrated in FIG. 5 , it is understood that graphics rendering may also be implemented in the head mounted display 504 according to some embodiments.
  • the initial latency based on obtaining the pose (and/or head position) at 512 and rendering the image at 514 is approximately 30 to 35 ms.
  • the additional interface latency associated with transmitting from transmitter 516 to receiver 528 may add another approximately 50 or 60 ms in some embodiments.
  • every reading from IMU 522 is time stamped so that the exact time of each sampling is known by one or more processor(s) of the host system 502 and/or by the processor 526 of the head mounted display 504 . In this manner, the exact times of receipt of the pose (and/or head position) information from the IMU 522 are known. This allows for prediction and time warp operations that are based on known sampling information from IMU 522 . This is helpful, for example, in cases where graphics pipe latency and/or interface latency is different at different times.
  • processor 526 takes various sampling information from IMU 522 , and is able to provide better predictive and/or time warp adjustments based on the received information and timestamp from the IMU (that is, pose and/or head position information initially received at get pose 512 and additional sampling directly from the IMU 522 ). Once the correct adjustments are made, a better predictive and/or time warp rendered image is able to be provided from processor 526 to the display 524 of the head mounted display 504 .
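  • A small sketch of how timestamped samples might be represented and compared (the field names and units are illustrative assumptions, not a format defined in this disclosure):

        from dataclasses import dataclass

        @dataclass
        class ImuSample:
            timestamp_s: float   # when the IMU was sampled
            yaw_deg: float
            pitch_deg: float

        def warp_delta(render_pose: ImuSample, latest_pose: ImuSample):
            """Rotation and elapsed time between the pose a frame was rendered with
            and the newest pose sampled just before display.

            Because both samples carry timestamps, the exact age of the render pose
            is known even when graphics or link latency varies from frame to frame,
            so any further extrapolation can be scaled to the true elapsed time.
            """
            elapsed_s = latest_pose.timestamp_s - render_pose.timestamp_s
            return (latest_pose.yaw_deg - render_pose.yaw_deg,
                    latest_pose.pitch_deg - render_pose.pitch_deg,
                    elapsed_s)

        render_pose = ImuSample(timestamp_s=0.000, yaw_deg=10.0, pitch_deg=0.0)
        latest_pose = ImuSample(timestamp_s=0.042, yaw_deg=16.5, pitch_deg=-1.0)
        print(warp_delta(render_pose, latest_pose))   # (6.5, -1.0, 0.042)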
  • image rendering is implemented based on a pose (and/or head position) of a user's head.
  • Pose (and/or head position) can be obtained by sensors such as one or more cameras and/or an IMU (for example, in some embodiments from sensors such as accelerometer(s) and/or gyroscope(s)), and the input data is used to render an image scene for display on the HMD. Rendering of such an image scene will take a certain amount of time, which in some embodiments is a known predetermined amount of time, and in some embodiments is a dynamic amount of time.
  • the rendered scene is displayed on the HMD screen (for example, display 524 ) which takes more time.
  • the HMD can sample the data again (for example, sample the IMU 522 using the processor 526 ) before displaying it on display 524 , and perform a two dimensional (2D) transform on the rendered data using processor 526 in order to account for the intermediate head motion. As discussed herein, this can be referred to as time warping.
  • FIG. 6 illustrates display rendering frames 600 , including display rendering frame n ( 602 ) and display rendering frame n+1 ( 604 ).
  • FIG. 6 illustrates a situation where the pose (and/or head position) 612 (in frame n 602 ) is received from the IMU, and the image scene is rendered with the pose (and/or head position) information at 614 .
  • a larger image scene is rendered at 614 similar to that described in reference to FIG. 2 and/or FIG. 3 above. It is then displayed at 616 .
  • the pose (and/or head position) 622 (in frame n+1 604 ) is received from the IMU, and the image scene is rendered with the pose (and/or head position) information at 624 .
  • a larger image scene is rendered at 624 similar to that described in reference to FIG. 2 and/or FIG. 3 above. It is then displayed at 626 .
  • in the example of FIG. 6 , the system does not take into account latency in the graphics pipeline or in the transmission from the host to the head mounted display. It also does not take into account the rate of motion, nor the sampling of the IMU itself. The example illustrated in FIG. 6 does not perform any adjustment for time warp or for prediction of the position or orientation of the user.
  • FIG. 7 illustrates display rendering frames 700 , including display rendering frame n ( 702 ) and display rendering frame n+1 ( 704 ).
  • FIG. 7 illustrates a situation where the pose (and/or head position) 712 (in frame n 702 ) is received from the IMU, and the image scene is rendered with the pose (and/or head position) information at 714 . After the rendered image is sent to the head mounted display, another IMU pose (and/or head position) is sampled at block 716 , and a two-dimensional (2D) transform on the rendered image scene is performed and the image is then displayed at block 718 .
  • the pose (and/or head position) 722 (in frame n+1 704 ) is received from the IMU, and the image scene is rendered with the pose (and/or head position) information at 724 .
  • the rendered image is sent to the head mounted display, another IMU pose (and/or head position) is sampled at block 726 , and a 2D transform on the rendered image scene is performed and the image is then displayed at block 728 .
  • the system performs an adjustment for time warp, but does not perform additional prediction of the position or orientation of the user.
  • FIG. 7 illustrates a simple implementation of time warp.
  • a bad experience may occur for the user when image data is not correctly displayed for the user to view.
  • Even systems that pre-render a video sequence in a sphere and show a subset of the image data have an inherent limitation that the data that was used to calculate the subset of view to render is usually stale by the time of the final display on the screen.
  • if the head of the user is mostly static, there is not a huge issue with the correct image data being rendered and displayed, but if the user's head is moving rapidly, stale rendered and displayed image data could cause simulator sickness. For example, this can occur in some situations when the user stops moving, the image scene displayed for the user takes some time to update, and the display feels to the user like it is still moving.
  • predictive rendering of a larger frame buffer is implemented, and the data based on the predictive rendering is used to render the difference in motion. If the scene is rendered only to the exact specs of the display, the two-dimensional (2D) transform loses information and will require blurring of the edge pixels of the rendered scene. This leads to loss of clarity along the edges of the image.
  • predictive image rendering is implemented to predict and accurately render image scenes based on head motion. In this manner, the 2D transform window can be moved to the region and all the pixels can be displayed with clarity.
  • FIG. 8 illustrates display rendering 800 according to some embodiments.
  • FIG. 8 illustrates a situation where the pose (and/or head position) is obtained at 802 and the image scene is rendered with the pose (and/or head position) information in the graphics render pipe at 804 .
  • the rendered image is sent to the head mounted display, another pose (and/or head position) is sampled at block 806 , and the rendering is adjusted at 808 (for example, using a two-dimensional transform on the rendered image scene).
  • the image is then posted to the display at block 810 .
  • block 812 shows a frame in which a large rendered target is shown in block 814 .
  • the rendered target 814 can be an image rendering for display on a head mounted display (HMD), for example.
  • block 816 shows a frame with an actual rendering 818 on the head mounted display.
  • actual rendered image 818 can be adjusted (for example, using a time warp implementation). The adjustment is made from the initial rendering 814 , which is also shown as initial rendering 820 in dotted lines in FIG. 8 for comparison purposes.
  • a problem can occur where a head motion or latencies in the image display path can result in a lack of information required to display the contents based on the new field of view (FOV). This is illustrated, for example, by the portion 822 of rendered image 818 that extends beyond the frame 816 .
  • Such motion or latency (for example, due to latencies in the graphics pipeline and/or other latencies) can result in the HMD not having all of the correct image data to display on the HMD.
  • these types of issues can be avoided.
  • Block 822 is a portion of the data 818 rendered on the head mounted display, and represents missing required data.
  • although the adjustment of the rendering at block 808 is based, for example, on time warp, and the rendered image has been adjusted for motion, some of the required data may still be missing, as represented by block 822 in FIG. 8 .
  • the graphics render pipeline 804 renders the image data based on a particular position at a particular time.
  • the rendering adjustment at block 808 based on the pose (and/or head position) sample obtained at block 806 is unable to show the entire portion, since the motion moved beyond what was rendered in block 804 based on the pose (and/or head position) sample obtained at block 802 . That is, adjustment is made for motion but may not be made for the entire latency, resulting in the missing required data illustrated by portion 822 of rendered HMD image 818 .
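  • Under the same pure-yaw simplification used earlier, a small sketch of how one might check whether the warped window still fits inside what was rendered (the condition illustrated by portion 822); all numbers below are illustrative assumptions:

        def missing_pixels(render_w, display_w, yaw_delta_deg, px_per_deg):
            """Pixel columns the warped display window needs beyond the rendered
            target for a yaw-only head motion; 0 means no data is missing."""
            margin = (render_w - display_w) // 2        # overscan on each side
            shift = abs(yaw_delta_deg) * px_per_deg     # how far the window moved
            return max(0, round(shift) - margin)

        # Overscanned target 1680 px wide, 1200 px displayed, 20 px per degree:
        print(missing_pixels(1680, 1200, yaw_delta_deg=5.0, px_per_deg=20.0))    # 0
        print(missing_pixels(1680, 1200, yaw_delta_deg=15.0, px_per_deg=20.0))   # 60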
  • FIG. 9 shows how the original render frame makes better use of the buffer within the render target to account for the expected field of view (FOV) when the content is displayed.
  • FIG. 9 illustrates image rendering 900 .
  • a pose (and/or head position) is sampled (for example at the host).
  • a pose (and/or head position) is sampled again at the host (for example, a period of time later such as 5 ms later).
  • the pose (and/or head position) is projected at block 912 .
  • Block 914 shows a graphics render pipe at which the image is rendered based on the projected pose (and/or head position) at block 912 .
  • the rendered image is then transmitted to the head mounted display and the head mounted display samples a pose (and/or head position) at 916 .
  • the rendering is adjusted (for example based on a time warp, and/or an adjustment for motion, and/or an adjustment based on a projected or predicted pose and/or on a projected or predicted head position) at block 918 .
  • the adjusted image rendered at 918 is then posted to the display at 920 .
  • a rendering 922 at the host is additionally illustrated, including a rendering 924 that is based on the projected or predicted pose (and/or head position) from block 912 .
  • power can be saved by not rendering all or some of the portion of rendering 922 .
  • block 926 illustrates the adjusted rendering 928 rendered by block 918 at the head mounted display. Dotted line 930 shows where the initial rendering would have been prior to adjustments.
  • the pose (and/or head position) sampled at 902 and the pose (and/or head position) sampled at 904 are used at block 912 to project (or predict) a pose (and/or head position) of the user (for example, a location and orientation of a user wearing a head mounted display).
  • An image is rendered based on the predicted pose (and/or head position) determined at block 912 .
  • pose (and/or head position) prediction 912 can be implemented in one or more of a variety of ways. For example, according to some embodiments, a weighted average of past pose (and/or head position) vectors is maintained, and the weighted average of past pose (and/or head position) vectors is used to predict the rate of change of pose (and/or head position) for the next time interval. The velocity and acceleration of the pose (and/or head position) can be obtained by a simple vector of the change in position. Pose (and/or head position) tracking can also rely on filtering methods (such as, for example, Kalman filtering) to predict pose (and/or head position) at the next timestep. Dead reckoning can also be used as a method to estimate the next pose (and/or head position) according to some embodiments.
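  • A minimal sketch of the weighted-average approach mentioned above, keeping an exponentially weighted average of the pose rate of change and extrapolating it ahead (the smoothing factor and the yaw/pitch representation are assumptions for illustration; a Kalman filter or dead reckoning could be substituted, as the text notes):

        class WeightedPosePredictor:
            """Exponentially weighted average of pose velocity, used to extrapolate."""

            def __init__(self, alpha=0.5):
                self.alpha = alpha     # weight given to the newest velocity estimate
                self.velocity = None   # averaged (yaw_rate, pitch_rate) in deg/s
                self.last = None       # (timestamp_s, yaw_deg, pitch_deg)

            def update(self, timestamp_s, yaw_deg, pitch_deg):
                if self.last is not None:
                    dt = timestamp_s - self.last[0]
                    v = ((yaw_deg - self.last[1]) / dt, (pitch_deg - self.last[2]) / dt)
                    if self.velocity is None:
                        self.velocity = v
                    else:
                        self.velocity = tuple(self.alpha * new + (1 - self.alpha) * old
                                              for new, old in zip(v, self.velocity))
                self.last = (timestamp_s, yaw_deg, pitch_deg)

            def predict(self, horizon_s):
                """Pose expected horizon_s seconds after the last update."""
                if self.velocity is None:
                    return self.last[1], self.last[2]
                return (self.last[1] + self.velocity[0] * horizon_s,
                        self.last[2] + self.velocity[1] * horizon_s)

        predictor = WeightedPosePredictor()
        for t, yaw, pitch in [(0.000, 0.0, 0.0), (0.005, 1.0, 0.1), (0.010, 2.2, 0.2)]:
            predictor.update(t, yaw, pitch)
        print(predictor.predict(horizon_s=0.030))   # pose expected about 30 ms later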
  • the projection (or prediction) of head pose (and/or head position) is based on a predicted latency determined by the poses (and/or head positions) obtained at 902 and 904 at the host, as well as the pose (and/or head position) sampled at 916 at the HMD.
  • the pose (and/or head position) is sampled at 902 and 904 (for example every 5 ms or so), and based on the sampling, a projected pose (and/or head position) 912 can be derived based on what will happen in the future at the time of display on the head mounted display (for example, in 30 ms).
  • the sampling (for example, every 5 ms) and image rendering may be implemented in the host system. In some embodiments, the sampling (for example, every 5 ms) and image rendering may be implemented in the head mounted display. As illustrated in FIG. 9 , the system predicts where the head will be at some future point in time based on motion data. In some embodiments, head motion may be predicted using various techniques. For example, some head movements may include normal acceleration followed by deceleration. Possible head motions may be predictable in various embodiments. For example, when the head is moving, the position and orientation of the head can be accurately predicted according to some embodiments.
  • since this information can be predicted at the host system, it is possible to render less information in the graphics render pipeline 914 , because the particular image that is rendered can be provided with a fair amount of certainty, and less additional information needs to be rendered and transmitted.
  • this information can also be confirmed at the head mounted display, and the predicted position and orientation (that is, the projected pose and/or projected head position 912 ) can be determined with much more certainty at the host.
  • the prediction according to some embodiments can be used to make educated guesses (for example, a user will not snap their head back within a 30 ms period after the head is traveling the other direction during the 5 ms period).
  • the two poses (and/or head positions) 902 and 904 can be used to predict the location and orientation at the later point in time.
  • the head mounted display can later adjust the rendered image at 918 based on the predicted pose (and/or head position) 912 and an additional sampling of the pose (and/or head position) at 916 at the head mounted display.
  • the information that needs to be rendered at 914 and sent from the host to the head mounted display is much more accurate and does not need to include as much additional information due to the prediction of the pose (and/or head position) based on the sampling occurring at the host system at 902 and 904 .
  • the system is projecting the head position and orientation, and rendering the image at the host system based on that projection (prediction) information.
  • the image rendering can be adjusted at the head mounted display at block 918 based on the sampled poses and/or head positions at 902 , 904 and 916 .
  • This adjustment can be made to adjust for motion and for the predicted pose (and/or head position) 912 based on the various sampled poses and/or sampled head positions.
  • This adjustment includes a time warp and/or an adjustment based on the projected pose (and/or head position).
  • This adjustment may be made based on a reduced rendering 924 that is sent to the head mounted display.
  • power can be saved since a lesser size rendering 924 can be rendered by the graphics render pipeline 914 .
  • the adjusted rendering 918 can then be used to produce the rendered image 928 (which has been adjusted for motion and latency, for example), which is posted to the display at block 920 .
  • a rendered image 928 is rendered based on head motion and velocity of motion of the user.
  • adjustment is made for motion and for the entire latency based on time warp motion adjustment as well as based on a projected pose (and/or head position) using predicted pose (and/or head position) information.
  • This creates a situation where a smaller amount of data may be rendered and transmitted to the head mounted display without any required data missing (for example, without missing required data such as data 822 illustrated in FIG. 8 ).
  • In FIG. 9 , for example, by predicting the position and orientation of the user of the head mounted display, it is possible to render and transmit less data and still ensure that required data is not missing and can be displayed to the user at the head mounted display at the appropriate time.
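  • An illustrative comparison (all numbers below are assumptions) of the overscan margin needed when the host renders around the last measured pose versus around a projected pose, where only the prediction error, rather than the full possible head motion, has to be covered:

        def margin_px(angle_to_cover_deg, px_per_deg):
            """Extra pixels needed on each side of the displayed image."""
            return round(angle_to_cover_deg * px_per_deg)

        px_per_deg = 20.0
        latency_s = 0.030
        head_speed_dps = 200.0        # assumed peak head angular velocity

        # Without projection: cover the full possible motion over the whole latency.
        without_projection = margin_px(head_speed_dps * latency_s, px_per_deg)

        # With projection: the render is centered on the predicted pose, so only the
        # prediction error (assumed bounded to about 1.5 degrees here) is covered.
        prediction_error_deg = 1.5
        with_projection = margin_px(prediction_error_deg, px_per_deg)

        print(without_projection, with_projection)   # 120 vs 30 pixels per side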
  • the projected (or predicted) pose (and/or head position) is determined at the host and transmitted to the HMD.
  • the image rendered on the head mounted display is adjusted based on the projected pose (and/or head position) as well as additional pose (and/or head position) information received at block 916 .
  • the projected and/or predicted pose (and/or head position) coordinates and timestamp that were used to render the frame (for example, sampled from a sensor such as an IMU and then calculated based on the sampling(s)) can be conveyed as metadata alongside the frame data (for example, in a sideband) or as part of the frame data (for example, in-band).
  • time warp, when employed, applies last-minute adjustments to the rendered frame to correct for any changes in the user's pose and/or head position (for example, HMD position and/or orientation). Explicitly knowing the coordinates and timestamp that were used (projected or measured) when rendering the frame can allow the time warp adjustment to more accurately correct for these changes.
  • time warp can be disabled when pose (and/or head position) projection is used. However, this could produce an inability to correct for incorrect projections caused by unexpected changes in head movement (incorrect vector), variable-latency transports (incorrect presentation time), etc.
  • relative sampling can be useful when both render and time warp adjustments occur in the same system.
  • Sampling such as IMU sampling can be performed in a manner that allows the coordinates and time delta (render vs. warp) to easily be calculated.
  • it can be difficult to support projected pose (and/or head position) and extend it to a system where the sink device performs offload (for example, virtual reality offload).
  • metadata information conveyed to the back end time warp can include a render position (for example, 3 dimensional x, y, and z coordinates and/or yaw, pitch and roll information in some embodiments, and/or in some embodiments a 3 dimensional coordinate position as well as vector coordinates conveying a viewing orientation of the user's head in addition to the coordinate position thereof).
  • an exact position is used to render the image frame.
  • a render timestamp with an exact time is used to render an image frame.
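  • A sketch of the kind of per-frame metadata described above, carried alongside the frame (for example, in a sideband) or embedded with it (in-band) so that the back-end time warp knows exactly which pose and time the frame was rendered for; the field names, units, and packing below are assumptions rather than a defined format:

        import struct
        from dataclasses import dataclass

        @dataclass
        class RenderPoseMetadata:
            x: float                  # render position, meters (assumed units)
            y: float
            z: float
            yaw: float                # render orientation, degrees
            pitch: float
            roll: float
            render_timestamp_us: int  # time the pose was used for rendering

            _FMT = "<6fQ"   # six 32-bit floats followed by a 64-bit timestamp

            def pack(self) -> bytes:
                """Serialize for transport as sideband metadata or an in-band header."""
                return struct.pack(self._FMT, self.x, self.y, self.z,
                                   self.yaw, self.pitch, self.roll,
                                   self.render_timestamp_us)

            @classmethod
            def unpack(cls, data: bytes) -> "RenderPoseMetadata":
                return cls(*struct.unpack(cls._FMT, data))

        meta = RenderPoseMetadata(0.0, 1.5, 0.0, 12.0, -3.0, 0.0, 123_456_789)
        assert RenderPoseMetadata.unpack(meta.pack()) == meta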
  • a host side includes a transmitter and an HMD side includes a receiver.
  • the transmitter on the host side and/or the receiver on the HMD side can be a transceiver, allowing communication in either direction.
  • transmission between a host system and a head mounted display can be wired or wireless.
  • the connection between host and head mounted display may be an HDMI wired connector, for example.
  • the host system is a computer, and in some embodiments the host system is implemented in a cloud infrastructure. In some embodiments, any of the operations/functionality/structure are performed at the HMD. In some embodiments, any of the operations/functionality/structure are performed at the host.
  • image rendering is implemented in a cloud infrastructure. In some embodiments, image rendering is implemented in a combination of a local computer and a cloud infrastructure. In some embodiments, image rendering is implemented in a head mounted display. In some embodiments, image rendering is implemented in a computer at a host side or a computer at an HMD side.
  • motion prediction, head position prediction, and/or pose prediction is implemented in one of many ways. For example, in some embodiments, it is implemented by maintaining a weighted average of past pose (and/or head position) vectors, and using the weighted average to predict the rate of change of pose (and/or head position) for the next time interval. In some embodiments, the velocity and acceleration of the pose (and/or head position) are obtained by a simple vector of the change in position. In some embodiments, pose (and/or head position) tracking relies on filtering methods (for example, such as Kalman filtering) to predict pose (and/or head position) at the next timestep.
  • dead reckoning can be used to estimate the next pose (and/or head position).
  • external sensors (for example, cameras such as depth cameras) may be used to obtain pose (and/or head position) information, either in addition to or instead of sampling pose (and/or head position) information from a sensor such as an IMU of the HMD, for example.
  • although sampling of pose (and/or head position) information at a certain frequency (for example, after 5 ms, etc.) is described herein, other frequencies may be used (for example, after 2 ms).
  • more samples may be taken according to some embodiments (for example, every 2 ms, every 5 ms, etc., or additional pose (and/or head position) samples such as three or more pose (and/or head position) samples obtained at the host rather than two samples, etc).
  • known data about the movement of a person's head may be used to predict user location and orientation. For example, maximum known speed of a human head, known directions and likely continued movements of human heads, etc. may be used.
  • the known information being presented on the HMD display may be used in order to predict user location and orientation.
  • perceptual computing may be implemented. If something is about to move fast in a virtually displayed environment, since people are very aware of fast motion, a user may be inclined to move their head toward that motion. Similarly, if a sound were to be provided, in some embodiments it can be predicted that the user is likely to turn their head toward that sound. Since the eyes are a good indicator of where the head might turn, sensors may also be used in some embodiments to track eye movement of the user to help predict which direction the user may turn their head.
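  • As a loose illustration of that idea (the blending weight and the use of a gaze direction as the cue are assumptions for illustration, not a method specified in this disclosure), a motion-based prediction could be nudged toward where the user is looking:

        def biased_yaw_prediction(current_yaw_deg, yaw_rate_dps, horizon_s,
                                  gaze_yaw_deg=None, gaze_weight=0.3):
            """Constant-velocity yaw prediction, optionally pulled toward eye gaze.

            gaze_yaw_deg: yaw of the user's gaze relative to the display, if an eye
            tracker is available; the eyes often lead a head turn.
            """
            predicted = current_yaw_deg + yaw_rate_dps * horizon_s
            if gaze_yaw_deg is None:
                return predicted
            # Blend the motion-based prediction with the gaze cue.
            return (1 - gaze_weight) * predicted + gaze_weight * gaze_yaw_deg

        print(biased_yaw_prediction(0.0, 100.0, 0.030))                      # about 3.0
        print(biased_yaw_prediction(0.0, 100.0, 0.030, gaze_yaw_deg=10.0))   # about 5.1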
  • a host system (for example, host system 402 of FIG. 4 or host system 502 of FIG. 5 ) could be a computer, a desktop computer, or even a cloud system, and is not limited to a particular type of host system. Additionally, many or all features, elements, functions, etc. described herein as included within or performed by a host system could be performed elsewhere (for example, within a head mounted display or within another device coupled to a head mounted display).
  • techniques used herein can be used in other non-HMD environments (for example, in any case where images are rendered for display, but the desired image to be displayed might change based on latencies in the system due to image rendering, some type of movement, transmission of data such as the rendered image, and/or other latencies).
  • predictive rendering may be used in a head mounted display system where the head mounted display communicates wirelessly (for example, where the HMD communicates wirelessly with a host system, the cloud, etc).
  • Predictive rendering for wireless HMDs can provide reduced power and/or increased efficiency.
  • FIG. 10 illustrates display rendering 1000 according to some embodiments.
  • display rendering 1000 can include one or more of display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, and/or predicted rendering, etc.
  • a position of a user's head can be detected (for example, a position of a head of user of a head mounted display system).
  • a pose and/or position of the user's head is obtained at 1002 using an IMU, an accelerometer, a gyroscope, and/or some other sensor, for example.
  • a latency in displaying rendered image data is determined.
  • This latency can be due to, for example, one or more of latency in sampling, latency in obtaining a head pose (and/or head position) of a user, latency due to motion, latency due to movement of a head of a user, latency due to movement of a head of a user of a head mounted display, latency due to a potential change of head movement, latency due to image rendering, latency due to image render processing, latency due to graphics rendering, latency due to transmission, latency due to transmission between a host system and a head mounted display, wireless transmission latency, non-static or changing latencies over time, latency due to predicted known possible head movements, and/or any other latencies, etc.
  • a head pose (and/or head position) of a user is estimated (and/or is predicted) based on, for example, a detected position of the user's head and any or all latencies.
  • an image is rendered based on the estimated (or predicted) head pose (and/or head position). A portion or all of the rendered image can then be displayed (for example, on a head mounted display).
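  • Pulling the steps of display rendering 1000 together, a high-level sketch of the flow (the function names, placeholder values, and the host/HMD split below are illustrative assumptions, not elements of FIG. 10 itself):

        import time

        def sample_head_pose():
            """Stand-in for an IMU read: returns (timestamp_s, yaw_deg, pitch_deg)."""
            return (time.monotonic(), 12.0, -1.0)    # placeholder values

        def estimate_latency_s():
            """Stand-in for the measured render + transmit + scan-out latency."""
            return 0.030

        def extrapolate_pose(pose, yaw_rate_dps, pitch_rate_dps, horizon_s):
            """Extrapolate the sampled pose to the expected display time."""
            t, yaw, pitch = pose
            return (t + horizon_s, yaw + yaw_rate_dps * horizon_s,
                    pitch + pitch_rate_dps * horizon_s)

        def render_frame(pose):
            """Stand-in for the graphics render pipeline: render around `pose`."""
            return {"pose": pose, "pixels": None}

        def display(frame):
            print("displaying frame rendered for pose", frame["pose"][1:])

        # Detect the head position, determine the latency, estimate the head pose at
        # display time, render for that estimated pose, then display some or all of it.
        pose_now = sample_head_pose()
        latency = estimate_latency_s()
        # The rates would normally come from recent samples, as in FIG. 9.
        pose_at_display = extrapolate_pose(pose_now, yaw_rate_dps=150.0,
                                           pitch_rate_dps=0.0, horizon_s=latency)
        frame = render_frame(pose_at_display)
        display(frame)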
  • display rendering 1000 can implement display rendering features as described and/or illustrated anywhere in this specification and drawings.
  • display rendering 1000 can use available and/or obtained head motion information to predict future head poses and/or head positions, and adjust the render target accordingly.
  • display rendering 1000 can make efficient use of a buffer (for example, an image buffer, a graphics buffer, a rendering buffer, and/or any other type of buffer).
  • display rendering 1000 can reduce the amount of data that is rendered but not displayed. This can result in better power efficiency.
  • display rendering 1000 can optimize a render target by estimating an expected field of view (FOV) at a time of display. This can be based on understanding of a latency to display the render target and/or estimating a head pose and/or head position (for example, based on sensor data such as an IMU, accelerometers, gyroscopes, camera sensors, etc. in order to detect head movement).
  • display rendering 1000 can predictively render a large frame buffer and use the data to render the difference in motion.
  • display rendering 1000 can predict and accurately render image scenes based on head motion.
  • display rendering 1000 can implement two dimensional (2D) transform, and can move a 2D transform window and display pixels with clarity.
  • display rendering 1000 can implement motion prediction in one of many ways. For example, in some embodiments, display rendering 1000 can implement motion prediction using a weighted average of past pose (and/or head position) vectors, and using the weighted average to predict a rate of change of pose (and/or head position) for a next time interval. Velocity and acceleration of the pose (and/or head position) can be obtained in some embodiments by a simple vector of the change in position. In some embodiments, display rendering 1000 can implement pose (and/or head position) tracking using filtering methods (for example, using filtering methods such as Kalman filtering) to predict a pose (and/or head position) at a next time step. In some embodiments, display rendering 1000 can use dead reckoning to estimate a next pose (and/or head position).
  • FIG. 11 is a block diagram of an example of a computing device 1100 .
  • computing device 1100 can include display and/or image features including one or more of display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, predicted rendering, etc. and/or any other features or techniques discussed herein according to some embodiments.
  • any of the features illustrated in and/or described in reference to any one or more of FIGS. 1-11 can be included within computing device 1100 .
  • all or part of computing device 1100 can be included as host system 402 of FIG. 4 or host system 502 of FIG. 5 .
  • all or part of computing device 1100 can be included as head mounted display 404 of FIG. 4 or head mounted display 504 of FIG. 5 .
  • the computing device 1100 may be, for example, a mobile device, phone, laptop computer, notebook, tablet, all in one, 2 in 1, and/or desktop computer, etc., among others.
  • the computing device 1100 may include a processor 1102 that is adapted to execute stored instructions, as well as a memory device 1104 (and/or storage device 1104 ) that stores instructions that are executable by the processor 1102 .
  • the processor 1102 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations.
  • processor 1102 can be an Intel® processor such as an Intel® Celeron, Pentium, Core, Core i3, Core i5, or Core i7 processor.
  • processor 1102 can be an Intel® x86 based processor.
  • processor 1102 can be an ARM based processor.
  • the memory device 1104 can be a memory device and/or a storage device, and can include volatile storage, non-volatile storage, random access memory, read only memory, flash memory, and/or any other suitable memory and/or storage systems.
  • the instructions that are executed by the processor 1102 may also be used to implement features described in this specification, including display coordinate configuration, for example.
  • the processor 1102 may also be linked through a system interconnect 1106 (e.g., PCI®, PCI-Express®, NuBus, etc.) to a display interface 1108 adapted to connect the computing device 1100 to a display device 1110 .
  • display device 1110 can include any display screen.
  • the display device 1110 may include a display screen that is a built-in component of the computing device 1100 .
  • the display device 1110 may also include a computer monitor, television, or projector, among others, that is externally connected to the computing device 1100 .
  • the display device 1110 can include liquid crystal display (LCD), for example.
  • display device 1110 can include a backlight including light sources such as light emitting diodes (LEDs), organic light emitting diodes (OLEDs), and/or micro-LEDs (µLEDs), among others.
  • the display interface 1108 can include any suitable graphics processing unit, transmitter, port, physical interconnect, and the like. In some examples, the display interface 1108 can implement any suitable protocol for transmitting data to the display device 1110 . For example, the display interface 1108 can transmit data using a high-definition multimedia interface (HDMI) protocol, a DisplayPort protocol, or some other protocol or communication link, and the like.
  • display device 1110 includes a display controller 1130 .
  • the display controller 1130 can provide control signals within and/or to the display device 1110 .
  • all or portions of the display controller 1130 can be included in the display interface 1108 (and/or instead of or in addition to being included in the display device 1110 ).
  • all or portions of the display controller 1130 can be coupled between the display interface 1108 and the display device 1110 .
  • all or portions of the display controller 1130 can be coupled between the display interface 1108 and the interconnect 1106 .
  • all or portions of the display controller 1130 can be included in the processor 1102 .
  • display controller 1130 can implement one or more of display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, predicted rendering, etc. and/or any other features or techniques discussed herein according to any of the examples illustrated in any of the drawings and/or as described anywhere herein.
  • any of the features illustrated in and/or described in reference to all or portions of any one or more of FIGS. 1-10 can be included within display controller 1130 .
  • any of the techniques described in this specification can be implemented entirely or partially within the display device 1110 . In some embodiments, any of the techniques described in this specification can be implemented entirely or partially within the display controller 1130 . In some embodiments, any of the techniques described in this specification can be implemented entirely or partially within the processor 1102 .
  • a network interface controller (also referred to herein as a NIC) 1112 may be adapted to connect the computing device 1100 through the system interconnect 1106 to a network (not depicted).
  • the network may be a wireless network, a wired network, a cellular network, a radio network, a wide area network (WAN), a local area network (LAN), a global positioning system (GPS) network, and/or the Internet, among others.
  • the processor 1102 may be connected through system interconnect 1106 to an input/output (I/O) device interface 1114 adapted to connect the computing device 1100 to one or more I/O devices 1116 .
  • the I/O devices 1116 may include, for example, a keyboard and/or a pointing device, where the pointing device may include a touchpad or a touchscreen, among others.
  • the I/O devices 1116 may be built-in components of the computing device 1100 , or may be devices that are externally connected to the computing device 1100 .
  • the processor 1102 may also be linked through the system interconnect 1106 to a storage device 1118 that can include a hard drive, a solid state drive (SSD), a magnetic drive, an optical drive, a portable drive, a flash drive, a Universal Serial Bus (USB) flash drive, an array of drives, and/or any other type of storage, including combinations thereof.
  • a storage device 1118 can include any suitable applications.
  • the storage device 1118 can include a basic input/output system (BIOS).
  • the storage device 1118 can include any device or software, instructions, etc. that can be used (for example, by a processor such as processor 1102 ) to implement any of the functionality described herein such as, for example, one or more of display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, predicted rendering, etc. and/or any other features or techniques discussed herein.
  • predictive display rendering 1120 is included in storage device 1118 .
  • predictive display rendering 1120 includes a portion or all of any one or more of the techniques described herein. For example, any of the features illustrated in and/or described in reference to any portions of one or more of FIGS. 1-10 can be included within predictive display rendering 1120 .
  • FIG. 11 is not intended to indicate that the computing device 1100 is to include all of the components shown in FIG. 11 . Rather, the computing device 1100 can include fewer components and/or additional components not illustrated in FIG. 11 (e.g., additional memory components, embedded controllers, additional modules, additional network interfaces, etc.). Furthermore, any of the functionalities of the BIOS or of the predictive display rendering 1120 that can be included in storage device 1118 may be partially, or entirely, implemented in hardware and/or in the processor 1102 . For example, the functionality may be implemented with an application specific integrated circuit, logic implemented in an embedded controller, or logic implemented in the processor 1102 , among others.
  • the functionalities of the BIOS and/or predictive display rendering 1120 can be implemented with logic, wherein the logic, as referred to herein, can include any suitable hardware (e.g., a processor, among others), software (e.g., an application, among others), firmware, or any suitable combination of hardware, software, and/or firmware.
  • FIG. 12 is a block diagram of an example of one or more processor and one or more tangible, non-transitory computer readable media.
  • the one or more tangible, non-transitory, computer-readable media 1200 may be accessed by a processor or processors 1202 over a computer interconnect 1204 .
  • the one or more tangible, non-transitory, computer-readable media 1200 may include code to direct the processor 1202 to perform operations as described herein.
  • computer-readable media 1200 may include code to direct the processor to perform predictive display rendering 1206 , which can include display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, predicted rendering, etc. and/or any other features or techniques discussed herein according to some embodiments.
  • predictive display rendering 1206 can be used to provide any of the features or techniques according to any of the examples illustrated in any of the drawings and/or as described anywhere herein. For example, any of the features illustrated in and/or described in reference to portions of any one or more of FIGS. 1-10 can be included within predictive display rendering 1206 .
  • processor 1202 is one or more processors. In some embodiments, processor 1202 can perform similarly to (and/or the same as) processor 1102 of FIG. 11 , and/or can perform some or all of the same functions as can be performed by processor 1102 .
  • Various components discussed in this specification may be implemented using software components. These software components may be stored on the one or more tangible, non-transitory, computer-readable media 1200 , as indicated in FIG. 12 .
  • software components including, for example, computer readable instructions implementing predictive display rendering 1206 may be included in one or more computer readable media 1200 according to some embodiments.
  • any suitable number of software components may be included within the one or more tangible, non-transitory computer-readable media 1200 .
  • any number of additional software components not shown in FIG. 12 may be included within the one or more tangible, non-transitory, computer-readable media 1200 , depending on the specific application.
  • Embodiments have been described herein relating to head mounted displays, head pose and/or head position detection/prediction, etc. However, it is noted that some embodiments relate to other image and/or display rendering than in head mounted displays. Some embodiments are not limited to head mounted displays or head pose and/or head position. For example, in some embodiments, a position of all or a portion of a body of a user can be used (for example, using a projected pose and/or position of a portion of a body of a user including the user's head or not including the user's head). Motion and/or predicted motion, latency, etc. of other body parts than a user's head can be used in some embodiments. In some embodiments, body parts may not be involved. For example, some embodiments can relate to movement of a display or other computing device, and prediction of motion and/or latency relating to those devices can be implemented according to some embodiments.
  • a head mounted display system including one or more processor.
  • the one or more processor is to detect a position of a head of a user of the head mounted display, predict a position of the head of the user of the head mounted display at a time after a time that the position of the head of the user was detected, and render image data based on the predicted head position.
  • the head mounted display system of Example 1 including a transmitter to transmit the rendered image data to the head mounted display.
  • the head mounted display system of Example 1 or Example 2 the one or more processor to create an image to be displayed on the head mounted display based on the predicted position and based on the rendered image data.
  • the head mounted display system of any of Examples 1-3 the one or more processor to display an image on the head mounted display based on the rendered image data.
  • the head mounted display system of any of Examples 1-4 the one or more processor to estimate an expected field of view of the user at a time of display, and to render the image data based on the predicted head position and based on the expected field of view.
  • the head mounted display system of any of Examples 1-5 the one or more processor to perform a two dimensional transform on the rendered image data.
  • the head mounted display system of any of Examples 1-6 the one or more processor to maintain a weighted average of past head position vectors, and to predict the position of the head based on the weighted average.
  • the head mounted display system of any of Examples 1-7 the one or more processor to predict the position of the head based on a filtering method.
  • the head mounted display system of any of Examples 1-8 the one or more processor to predict the position of the head based on dead reckoning.
  • the head mounted display system of any of Examples 1-9 the one or more processor to render the image data based on a predicted amount of motion and latency.
  • the head mounted display system of any of Examples 1-10 the one or more processor to determine a latency to display the rendered image data, and to predict the position of the head of the user based on the detected position and based on the determined latency.
  • a method including detecting a position of a head of a user of a head mounted display, predicting a position of the head of the user of the head mounted display at a time after a time that the position of the head of the user was detected, and rendering image data based on the predicted head position.
  • the method of Example 12 including transmitting the rendered image data to the head mounted display.
  • the method of any of Examples 12-13 including creating an image to be displayed on the head mounted display based on the predicted position and based on the rendered image data.
  • the method of any of Examples 12-14 including displaying an image on the head mounted display based on the rendered image data.
  • the method of any of Examples 12-15 including estimating an expected field of view of the user at a time of display, and rendering the image data based on the predicted head position and based on the expected field of view.
  • the method of any of Examples 12-16 including performing a two dimensional transform on the rendered image data.
  • the method of any of Examples 12-17 including maintaining a weighted average of past head position vectors, and predicting the position of the head based on the weighted average.
  • the method of any of Examples 12-18 including predicting the position of the head based on a filtering method.
  • the method of any of Examples 12-19 including predicting the position of the head based on dead reckoning.
  • the method of any of Examples 12-20 including rendering the image data based on a predicted amount of motion and latency.
  • the method of any of Examples 12-21 including determining a latency to display the rendered image data, and predicting the position of the head of the user based on the detected position and based on the determined latency.
  • one or more tangible, non-transitory machine readable media include a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to detect a position of a head of a user of a head mounted display, predict a position of the head of the user of the head mounted display at a time after a time that the position of the head of the user was detected, and render image data based on the predicted head position.
  • the one or more tangible, non-transitory machine readable media of Example 23 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to transmit the rendered image data to the head mounted display.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-24 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to create an image to be displayed on the head mounted display based on the predicted position and based on the rendered image data.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-25 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to display an image on the head mounted display based on the rendered image data.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-26 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to estimate an expected field of view of the user at a time of display, and to render the image data based on the predicted head position and based on the expected field of view.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-27 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to perform a two dimensional transform on the rendered image data.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-28 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to maintain a weighted average of past head position vectors, and to predict the position of the head based on the weighted average.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-29 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to predict the position of the head based on a filtering method.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-30 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to predict the position of the head based on dead reckoning.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-31 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to render the image data based on a predicted amount of motion and latency.
  • the one or more tangible, non-transitory machine readable media of any of Examples 23-32 including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to determine a latency to display the rendered image data, and to predict the position of the head of the user based on the detected position and based on the determined latency.
  • a display system includes means for detecting a position of a head of a user of the display at a first time, means for predicting a position of the head of the user of the display at a second time that is after the first time, and means for rendering image data based on the predicted head position.
  • the display system is a head mounted display system.
  • the display system of Example 34 including means for transmitting the rendered image data to the display.
  • the display system of any of Examples 34-35 including means for creating an image to be displayed on the display based on the predicted position and based on the rendered image data.
  • the display system of any of Examples 34-36 including means for displaying an image on the display based on the rendered image data.
  • the display system of any of Examples 34-37 including means for estimating an expected field of view of the user at a time of display, and means for rendering the image data based on the predicted head position and based on the expected field of view.
  • the display system of any of Examples 34-38 including means for performing a two dimensional transform on the rendered image data.
  • the display system of any of Examples 34-39 including means for maintaining a weighted average of past head position vectors, and means for predicting the position of the head based on the weighted average.
  • the display system of any of Examples 34-40 including means for predicting the position of the head based on a filtering method.
  • the display system of any of Examples 34-41 including means for predicting the position of the head based on dead reckoning.
  • the display system of any of Examples 34-42 including means for rendering the image data based on a predicted amount of motion and latency.
  • the display system of any of Examples 34-43 including means for determining a latency to display the rendered image data, and means for predicting the position of the head of the user based on the detected position and based on the determined latency.
  • an apparatus including means to perform a method as in any preceding Example.
  • machine-readable instructions, when executed, to implement a method, realize an apparatus, or realize a system as in any preceding Example.
  • a machine readable medium including code, when executed, to cause a machine to perform the method, realize an apparatus, or realize a system as in any one of the preceding Examples.
  • a head mounted display system includes a first processor to predict a pose (and/or head position) of a user of the head mounted display, a second processor to render an image based on the predicted pose (and/or head position), and a transmitter to transmit the rendered image to the head mounted display.
  • a head mounted display system includes a processor to receive a predicted pose (and/or head position) of a user of the head mounted display and to receive a rendered image that is based on the predicted pose (and/or head position).
  • the processor is to create an image to be displayed on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
  • a head mounted display system includes a first processor to predict a pose (and/or head position) of a user of the head mounted display, a second processor to render an image based on the predicted pose (and/or head position), and a third processor to create an image to be displayed on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
  • At least one computer-readable medium includes instructions to direct a processor to predict a pose (and/or head position) of a user of a head mounted display, render an image based on the predicted pose (and/or head position), and transmit the rendered image to the head mounted display.
  • At least one computer-readable medium includes instructions to direct a processor to predict a pose (and/or head position) of a user of a head mounted display, render an image based on the predicted pose (and/or head position), and display an image on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
  • At least one computer-readable medium includes instructions to direct a processor to receive a predicted pose (and/or head position) of a user of a head mounted display, receive a rendered image that is based on the predicted pose (and/or head position), and create an image to be displayed on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
  • a method includes predicting a pose (and/or head position) of a user of a head mounted display, rendering an image based on the predicted pose (and/or head position), and transmitting the rendered image to the head mounted display.
  • a method includes predicting a pose (and/or head position) of a user of a head mounted display, rendering an image based on the predicted pose (and/or head position), and displaying an image on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
  • a method includes receiving a predicted pose (and/or head position) of a user of a head mounted display, receiving a rendered image that is based on the predicted pose (and/or head position), and creating an image to be displayed on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
  • Various embodiments of the disclosed subject matter may be implemented in hardware, firmware, software, or combination thereof, and may be described by reference to or in conjunction with program code, such as instructions, functions, procedures, data structures, logic, application programs, design representations or formats for simulation, emulation, and fabrication of a design, which when accessed by a machine results in the machine performing tasks, defining abstract data types or low-level hardware contexts, or producing a result.
  • Program code may represent hardware using a hardware description language or another functional description language which essentially provides a model of how designed hardware is expected to perform.
  • Program code may be assembly or machine language or hardware-definition languages, or data that may be compiled and/or interpreted.
  • Program code may be stored in, for example, volatile and/or non-volatile memory, such as storage devices and/or an associated machine readable or machine accessible medium including solid-state memory, hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, digital versatile discs (DVDs), etc., as well as more exotic mediums such as machine-accessible biological state preserving storage.
  • a machine readable medium may include any tangible mechanism for storing, transmitting, or receiving information in a form readable by a machine, such as antennas, optical fibers, communication interfaces, etc.
  • Program code may be transmitted in the form of packets, serial data, parallel data, etc., and may be used in a compressed or encrypted format.
  • Program code may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, set top boxes, cellular telephones and pagers, and other electronic devices, each including a processor, volatile and/or non-volatile memory readable by the processor, at least one input device and/or one or more output devices.
  • Program code may be applied to the data entered using the input device to perform the described embodiments and to generate output information.
  • the output information may be applied to one or more output devices.
  • One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations.
  • each element may be implemented with logic, wherein the logic, as referred to herein, can include any suitable hardware (e.g., a processor, among others), software (e.g., an application, among others), firmware, or any suitable combination of hardware, software, and firmware, for example.

Abstract

In one example, a head mounted display system includes at least one memory; and at least one processor to execute instructions to: detect a first position and a first view direction of a head of a user based on sensor data generated by at least one of an accelerometer, at least one camera, or a gyroscope at a first point in time; determine a latency associated with a time to cause an image to be presented on the display; determine a predicted position and a predicted view direction of the head of the user at a second point in time based on the latency; render, prior to the second point in time, the image for presentation on the display based on the predicted position and the predicted view direction of the head of the user; and cause the display to present the rendered image.

Description

RELATED APPLICATIONS
This patent arises from a continuation of U.S. patent application Ser. No. 17/133,265, (now U.S. Pat. No. 11,210,933), filed on Dec. 23, 2020, which is a non-provisional application claiming priority to U.S. patent application Ser. No. 15/675,653, (now U.S. Pat. No. 11,017,712), filed on Aug. 11, 2017, which is a non-provisional application claiming priority to U.S. Provisional Patent Application No. 62/374,696, filed on Aug. 12, 2016. Priority is claimed to U.S. patent application Ser. No. 17/133,265, U.S. patent application Ser. No. 15/675,653, and U.S. Provisional Patent Application No. 62/374,696. U.S. patent application Ser. No. 17/133,265, U.S. patent application Ser. No. 15/675,653, and U.S. Provisional Patent Application No. 62/374,696 are incorporated herein by reference in their entireties.
TECHNICAL FIELD
This disclosure relates generally to rendering display images.
BACKGROUND
In head mounted display (HMD) systems such as Virtual Reality (VR) and/or Augmented Reality (AR) systems, there is typically latency between when an image is rendered for viewing and when the user views a rendered image displayed on the head mounted display. This latency can be due, for example, to movement of the user's head between the image rendering and the actual display on the HMD.
BRIEF DESCRIPTION OF THE DRAWINGS
The following detailed description may be better understood by referencing the accompanying drawings, which contain specific examples of numerous features of the disclosed subject matter.
FIG. 1 illustrates a user wearing a head mounted display;
FIG. 2 illustrates display rendering;
FIG. 3 illustrates display rendering;
FIG. 4 illustrates a head mounted display system;
FIG. 5 illustrates a head mounted display system;
FIG. 6 illustrates display rendering;
FIG. 7 illustrates display rendering;
FIG. 8 illustrates display rendering;
FIG. 9 illustrates display rendering;
FIG. 10 illustrates display rendering;
FIG. 11 illustrates a computing device;
FIG. 12 illustrates one or more processor and one or more tangible, non-transitory computer readable media.
In some cases, the same numbers are used throughout the disclosure and the figures to reference like components and features. In some cases, numbers in the 100 series refer to features originally found in FIG. 1; numbers in the 200 series refer to features originally found in FIG. 2; and so on.
DESCRIPTION OF THE EMBODIMENTS
Head mounted displays (HMDs) are becoming more affordable and available to users (for example, in mainstream personal computer form-factors). In some embodiments, an optimal and differentiated user experience for users wearing HMDs is made available. Some embodiments relate to optimization of display image rendering, predictive display rendering, and/or predictive image rendering, etc.
In some embodiments, head mounted display systems minimize latencies such as motion latencies. Some embodiments relate to optimizing time warping for Head-Mounted Displays (HMDs). Time warping is a method in which a large rendered image target is prepared and content to be displayed on the HMD is adjusted to account for the delta (or difference) in the field of view (FOV) due to head movement of a user of the HMD between the time that the target image is rendered and when it is actually displayed on the HMD. As a result, excess image data can be generated for the rendering, and then not actually rendered in the displayed image due to the head movement. However, the extra generated image data that is not actually rendered can represent wasted power and memory resources. Therefore, in some embodiments, a system can limit extra generated image data while still providing enough image data to make sure that rendered image data is available for display on the HMD.
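As an illustration of the time warping adjustment described above, the following sketch crops a display-sized window out of an oversized render target based on the change in head yaw between render time and display time. It is a minimal example in Python, assuming a purely horizontal yaw change and a linear degrees-to-pixels mapping; the function and parameter names are illustrative and are not taken from the patent.
```python
def yaw_delta_to_pixels(yaw_delta_deg: float, display_width_px: int,
                        horizontal_fov_deg: float) -> int:
    """Approximate horizontal pixel shift caused by a small yaw change."""
    return round(yaw_delta_deg * display_width_px / horizontal_fov_deg)


def warp_crop(render_target, target_width_px, target_height_px,
              display_width_px, display_height_px,
              yaw_at_render_deg, yaw_at_display_deg, horizontal_fov_deg):
    """Crop the display-sized window out of the oversized render target.

    render_target is assumed to be indexable as rows of pixels
    (for example, a list of rows).
    """
    margin_x = (target_width_px - display_width_px) // 2
    margin_y = (target_height_px - display_height_px) // 2
    shift = yaw_delta_to_pixels(yaw_at_display_deg - yaw_at_render_deg,
                                display_width_px, horizontal_fov_deg)
    # Clamp so the crop never leaves the rendered area (the failure case
    # illustrated by FIG. 3 and FIG. 8 in the text).
    x0 = max(0, min(margin_x + shift, target_width_px - display_width_px))
    y0 = margin_y
    return [row[x0:x0 + display_width_px]
            for row in render_target[y0:y0 + display_height_px]]


# Toy usage: an 8x4 oversized target, a 4x2 display, and a 5 degree yaw change.
target = [[col for col in range(8)] for _ in range(4)]
print(warp_crop(target, 8, 4, 4, 2,
                yaw_at_render_deg=0.0, yaw_at_display_deg=5.0,
                horizontal_fov_deg=20.0))
```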
FIG. 1 illustrates an example 100 of a user 102 wearing a head mounted display (HMD) 104. On the left side of FIG. 1, the user 102 is in a current position looking forward. On the right side of FIG. 1, the same user 102 is shown wearing the head mounted display 104 while looking to a display position to the right of the user 102 (which appears to the left when looking at user 102 in FIG. 1) a short time after that same user 102 was looking forward. A head mounted display system will have some latency for graphics rendering between the time the user 102 is looking forward (for example, at a time of image rendering) and the time that same user 102 is looking slightly to the right a short time later (for example, at a time of display of a rendered image on the head mounted display 104). FIG. 1 illustrates the concept of a shifted view rendered on the head mounted display 104, based on motion of the head of the user 102 in the short timeframe between when the image is initially rendered and when that image can be displayed on the head mounted display 104 for the user 102 to view. For example, if the user 102 quickly rotates their head, then depending on the latency of the graphics rendering pipe and on how far the user's head moves between when the image is rendered and when it is displayed, the needed adjustment could be very large. If the content being rendered is complex, additional large latency can be introduced into the graphics pipe. For media content, for example, the processor may take a longer time to decode a very high resolution video than in a situation where a simple video is rendered at low resolution. Any extra time taken to place the resulting frame in the render target adds to the overall latency. If additional latency is introduced, for example in an interface between a host system and an HMD, the overall latency is further increased. For example, if a host device and an HMD are wirelessly connected, and/or if the image render processing is implemented in the cloud, additional interface latencies can occur.
FIG. 2 illustrates a graphic block diagram 200 illustrating display rendering. As illustrated in FIG. 2, block 202 shows a frame in which a large rendered target is shown in block 204. The rendered target 204 can be an image rendering for display on a head mounted display (HMD), for example. At a time shortly after the initial rendering at 204, block 212 shows a frame with an actual rendering 214 on the head mounted display. According to some embodiments, actual rendered image 214 can be adjusted (for example, using a time warp implementation). The adjustment is made from the initial rendering 204, which is also shown as initial rendering 216 in dotted lines in FIG. 2 for comparison purposes.
As illustrated in FIG. 2, a problem can occur where a large head motion or long latencies in the image display path can result in a lack of information required to display the contents based on the new field of view (FOV). This is illustrated, for example, by the portion of rendered image 214 that extends beyond the frame 212. Such a large motion or long latency in the graphics pipeline can result in the HMD not having all of the correct image data to display on the HMD. However, according to some embodiments, these types of issues can be avoided.
In some embodiments, the types of problems described herein may be overcome by optimizing a rendered image target by estimating the expected FOV at the time of display on the HMD. This is based on understanding the latency associated with displaying the rendered target image and estimating a head pose (and/or a head position) based on sensor data relating to the HMD. In some embodiments, for example, head mounted display sensors and/or peripheral sensors may be used to detect head movement at one or more times near the time of rendering and/or the time of display.
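A hedged sketch of that estimate follows: two recent head-pose samples give an approximate yaw rate, the known (or measured) latency gives the prediction horizon, and the expected FOV at display time is centered on the extrapolated yaw. The constant-rate assumption and all names are illustrative rather than taken from the patent.
```python
def expected_fov_at_display(yaw_prev_deg: float, t_prev_s: float,
                            yaw_now_deg: float, t_now_s: float,
                            latency_s: float, horizontal_fov_deg: float):
    """Return (predicted_yaw, fov_left, fov_right) at the expected display time."""
    yaw_rate_deg_s = (yaw_now_deg - yaw_prev_deg) / (t_now_s - t_prev_s)
    predicted_yaw = yaw_now_deg + yaw_rate_deg_s * latency_s
    half_fov = horizontal_fov_deg / 2.0
    return predicted_yaw, predicted_yaw - half_fov, predicted_yaw + half_fov


# Example: head turning at roughly 100 deg/s with 30 ms of display latency.
print(expected_fov_at_display(0.0, 0.000, 0.5, 0.005, 0.030, 110.0))
```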
Time warping is a method by which a large rendered target is prepared and content is displayed on a head mounted display in a manner such that it is adjusted to account for the change in the field of view (FOV). This change in the FOV is due to head movement between the time that the rendered target was prepared and the time that it is actually displayed on the head mounted display, for example. In order to ensure that enough image rendering information is available for display, a larger image than necessary for the current image can be rendered at block 204 (for example, 1.4 times what is necessary in order to render the current image). This larger size rendering allows the correct image data to be available to be rendered on the head mounted display at a later time based on a potential change in the head motion of the user of the head mounted display (that is, in order to account for the latency). Therefore, in some embodiments, a larger image can be rendered at block 204. The larger rendered image 204 can be transmitted to the head mounted display, with some latency associated with the head movement and/or additional latency associated with transmission of the rendered image to the head mounted display. In the time that is taken to transmit the image to the head mounted display, the user may have moved their head. Therefore, the user is then looking at an image that is slightly skewed from what was initially rendered at 204. Since a bigger frame buffer can be rendered at 204 and transmitted to the head mounted display, additional image information can then be available at the time of display on the head mounted display. However, there can be data that is rendered at 204 that is not necessary for the user to view at the later time, since the image has changed slightly due to movement of the user's head. If the direction that the user's head is moving between the time of the rendering at 204 and the image at 214 can be predicted, a more accurate assessment can be made of the image needed to be rendered for the user to see the updated image information correctly.
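The margin itself can also be reasoned about numerically. The sketch below, under the assumption of a chosen worst-case yaw rate and a known latency window, derives how much wider than the display the render target should be; the fixed 1.4x figure mentioned above corresponds to one such choice, and the function name is illustrative.
```python
def overscan_factor(peak_yaw_rate_deg_s: float, latency_s: float,
                    horizontal_fov_deg: float) -> float:
    """Horizontal scale of the render target relative to the display size."""
    worst_case_yaw_deg = peak_yaw_rate_deg_s * latency_s
    # Extra field of view needed on each side, as a fraction of the display FOV.
    extra_fraction = 2.0 * worst_case_yaw_deg / horizontal_fov_deg
    return 1.0 + extra_fraction


# Example: 200 deg/s peak head motion, 50 ms total latency, and a 100 degree FOV
# give a factor of 1.2; longer latencies or faster motion push it higher.
print(overscan_factor(200.0, 0.050, 100.0))
```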
In some embodiments, an HMD can be used which includes integrated motion sensors to track the movement of the HMD device. For example, in some embodiments, an HMD can include inertia motion sensors. In some embodiments, external cameras facing toward the device may be used to track positional information (including position and orientation of the HMD, for example). According to some embodiments, data from these sensors can be used to determine the field of view (FOV) of the user and render the appropriate content on the display of the HMD. Data from these types of sensors can be used in some embodiments to predict and effectively make use of an image rendering buffer area to ensure that proper data to be displayed is available in the render target memory at the HMD. Some embodiments may result in a better user experience, optimized memory usage, and/or better power efficiency.
In some embodiments, available head motion information can be used to predict future head poses (and/or head positions) and to adjust the render target accordingly.
In some embodiments, time warping can be used to render more data into a render target buffer than what is actually necessary for display on the HMD. This can be implemented in a manner similar to digital stabilization used for cameras. In some embodiments, an image rendering target buffer is efficiently used to minimize the risk of not having available the proper content to display due to heavy motion and/or latencies in the rendering pipeline. According to some embodiments, prediction (or projection) of the image position and/or orientation at a point when an image will be displayed on the HMD allows a reduction in the necessary amount of data that is rendered but not displayed, which can allow better power efficiency.
FIG. 3 is a graphic block diagram 300 illustrating display rendering. As illustrated in FIG. 3, block 302 shows a frame in which a larger render target image 304 to be displayed on a head mounted display (HMD) is initially rendered. At a time shortly after the initial rendering, frame 312 shows an actual image rendering 314 on the head mounted display that is to be adjusted in a time warp implementation from the initial rendering 316 shown in dotted lines in FIG. 3. In some embodiments, FIG. 3 can illustrate a situation where the user's head is turned very quickly and/or there is a long graphics latency associated with the image rendering or transmission that is too large. In such a situation, the rendered image 314 may adjust too far, such that the correct data is not available for displaying the appropriate image at that time. That is, the desired image is not actually rendered, so there is tearing, visual artifacts, etc. In order to prevent this problem, it is desirable to take into account the direction of what the head mounted display user is looking at (for example, the direction of movement of the user's head), as well as the direction of motion, speed of motion, and/or the latency of motion. In this manner, according to some embodiments, the amount of memory may be optimized and some of the work done to render a larger frame buffer image may also be minimized. In some embodiments, a frame buffer image rendering may be done in an advantageous manner in order to account for latency in the pipeline, latency due to transmission of the rendered image to the head mounted display, latency in the sampling, and/or latency in actually rendering out the image to show the correct head mounted display image.
In some embodiments, head motion of a user is predicted (for example, based on sampling). For example, in a head mounted display that is running at 90 Hertz rather than 30 Hertz, the head mounted display can sample head motion of a user once every 10 milliseconds. This sampling can be used in order to more accurately predict where the head is going to be at a desired display time (for example, in 10 milliseconds). In some embodiments, time warp may be used in addition to prediction of where the head will be in the future (for example in 10 ms) in order to save power, save memory, and make sure that the entire image is available to be rendered properly at the right time. In some embodiments, the prediction may occur within the head mounted display. In some embodiments, the prediction may occur somewhere else (for example in a host system, a cloud, etc.) In some embodiments, the prediction may occur in a combination of the head mounted display and somewhere else (such as in a host system, a cloud, etc.)
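The sampling-based prediction described in the preceding paragraph can be sketched as follows, assuming a 90 Hz sample rate, a short window of recent yaw samples, and a simple mean-velocity extrapolation one frame period ahead. The class and window length are illustrative assumptions, not details from the patent.
```python
from collections import deque

SAMPLE_PERIOD_S = 1.0 / 90.0  # roughly one head-motion sample every 11 ms


class HeadYawPredictor:
    """Keep a short window of yaw samples and extrapolate one frame ahead."""

    def __init__(self, window: int = 4):
        self.samples = deque(maxlen=window)  # recent yaw samples, degrees

    def add_sample(self, yaw_deg: float) -> None:
        self.samples.append(yaw_deg)

    def predict_next_frame(self) -> float:
        if len(self.samples) < 2:
            return self.samples[-1] if self.samples else 0.0
        s = list(self.samples)
        deltas = [b - a for a, b in zip(s, s[1:])]
        mean_rate_deg_s = sum(deltas) / len(deltas) / SAMPLE_PERIOD_S
        return s[-1] + mean_rate_deg_s * SAMPLE_PERIOD_S


predictor = HeadYawPredictor()
for yaw in (0.0, 1.1, 2.3, 3.4):        # head turning steadily
    predictor.add_sample(yaw)
print(predictor.predict_next_frame())    # roughly 4.5 degrees
```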
It is noted that FIGS. 1, 2 and 3, for example, can illustrate head motion of a user in a particular direction such as a horizontal direction. However, in some embodiments, head motion can occur in a variety of directions, motions, etc. The drawings and description herein should not be limited to illustrate head motion changes in only a horizontal direction. For example, in some embodiments head motion can occur in a vertical direction or in a combination of horizontal and vertical directions, for example. According to some embodiments, head motion of a user is predicted in any direction and is not limited to prediction of head motion in a horizontal manner. That is, even though time warp can occur due to a motion of the user in a horizontal direction, it is noted that in some embodiments, time warp can occur when motion of a user occurs in a variety of directions.
FIG. 4 illustrates a system 400 including a host system 402 (for example, a Virtual Reality ready desktop system) and a head mounted display (HMD) 404. System 400 additionally can include a gaming add-on 406 (for example, an Intel® WiGig Gaming Add-on device), a wireless transceiver 408 that includes a wireless sink 410 (for example, an Intel® Wireless Gigabit Sink) and a battery 412, as well as headphones 416 that can be used by a user in conjunction with the head mounted display 404. Gaming Add-on 406 and wireless transceiver 408 allow the host system 402 and the head mounted display 404 to communicate with each other via a wireless connection 414. In some embodiments, audio and/or video as well as motion data may be transmitted via the gaming add-on 406 to and from the host 402. In some embodiments, video, power, and/or motion data may be transmitted to and from the head mounted display 404 via the wireless transceiver 408. The wireless transceiver 408 may also be used to transmit audio data to headphones 416.
FIG. 5 illustrates a system 500 including a host system 502 and a head mounted display (HMD) 504. Host system 502 includes a processor (for example, a CPU) that can implement a get pose (and/or get head position) operation 512, a graphics processor that can implement a graphics rendering pipeline 514, and a transmitter 516. It is noted that in some embodiments transmitter 516 is a transceiver, allowing transmit and receive operations to and from the host 502. HMD 504 includes an IMU (and/or Inertial Magnetic Unit and/or Inertial Measurement Unit) 522, a display 524, a processor 526, and a receiver 528. It is noted that in some embodiments receiver 528 is a transceiver, allowing transmit and receive operations to and from the HMD 504. IMU 522 can be a sensor, an Inertial Magnetic Unit, and/or an Inertial Measurement Unit used to obtain information about the head mounted display (for example, head position and/or head orientation). In some embodiments, IMU 522 is an inertia motion sensor. For example, in some embodiments, IMU 522 includes one or more accelerometers, one or more gyroscopes, etc.
FIG. 5 is used to illustrate how, in some embodiments, latencies in the graphics rendering pipeline 514 and/or latencies in the interface including transmitter 516 and receiver 528, for example, can affect the overall time from the initial estimated pose (and/or estimated head position) to when an adjustment is made by processor 526 (for example, using time warp technology and/or predictive pose and/or predictive head position) and displayed on the display 524.
When implementing graphics rendering, the system is trying to figure out where the user is located (and/or oriented) in order to render the correct image (for example, where the user is, and what the user is looking at, which is fundamentally the view direction). Therefore, get pose (and/or get head position) block 512 can typically be implemented in a processor such as a central processor or CPU, and can work to obtain where the user is in space, and what direction the user is looking. This can be passed along with all the 3-D geometry data to the graphics rendering pipeline 514. In some embodiments, the graphics rendering pipeline 514 takes all the graphics, the models, the texture, the lighting, etc. and generates a 3-D image scene. This 3-D scene is generated based on the particular head position and view direction obtained by the get pose (and/or get head position) block 512 via the IMU 522. There can be a graphics pipe latency associated with obtaining the pose (and/or head position) 512 and rendering the graphics pipeline 514. There can also be additional latency associated with transmitting the rendered image via transmitter 516 of the host system 502 and receiving it at receiver 528 of the head mounted display 504 (that is, the interface latency). Processor 526 can implement a time warp and/or prediction of head position and/or view information sampled from IMU 522.
In some embodiments, processor 526 is used to implement adjustment for time warp and/or predictive projected position of the rendered display from the graphics rendering pipeline 514 based on additional information from the IMU 522 based on predicting how the user has moved their head since the original pose (and/or head position) was taken by the host processor at 512.
In some embodiments, the processor 526 of the head mounted display 504 is used to provide prediction and/or time warp processing. In some embodiments, the processor 526 samples the IMU 522. In some embodiments, the host system 502 samples the IMU 522. In some embodiments, the prediction could occur in one or more processor in the host system (for example, in one or more processor that includes get pose (and/or head position) 512 and/or graphics rendering pipeline 514). In some embodiments, the sampled information from IMU 522 is used by a processor in the host system 502 to implement the image rendering. In some embodiments, the rendering may occur in the host system 502, and in some embodiments the rendering may occur in the head mounted display 504. In some embodiments, the rendering may occur across both the host system 502 and the head mounted display 504. In some embodiments, predictive tracking is implemented to save power and efficiency. In some embodiments, one or more processor in the host system 502 (for example, a graphics processor performing the graphics rendering 514) is preempted in order to provide the predictive tracking. While the graphics rendering pipeline 514 within a processor in the host system 502 is illustrated in FIG. 5, it is understood that graphics rendering may also be implemented in the head mounted display 504 according to some embodiments.
In some embodiments, the initial latency based on obtaining the pose (and/or head position) at 512 and rendering the image at 514 is approximately 30 to 35 ms. However, the additional interface latency associated with transmitting from transmitter 516 to receiver 528 may add another approximately 50 or 60 ms in some embodiments.
In some embodiments, every reading from IMU 522 is time stamped so that the exact time of each sampling is known by one or more processor(s) of the host system 502 and/or by the processor 526 of the head mounted display 504. In this manner, exact times of receipt of the pose (and/or head position) information from the IMU 522 are known. This allows for prediction and time warp operations that are based on known sampling information from IMU 522. This is helpful, for example, in cases where the graphics pipe latency and/or interface latency differs at different times. In some embodiments, processor 526 takes various sampling information from IMU 522, and is able to provide better predictive and/or time warp adjustments based on the received information and timestamp from the IMU (that is, pose and/or head position information initially received at get pose 512 and additional sampling directly from the IMU 522). Once the correct adjustments are made, a better predictive and/or time warp rendered image is able to be provided from processor 526 to the display 524 of the head mounted display 504.
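One way to make the timestamping idea concrete is sketched below: each IMU reading carries its sample time, so the prediction step can extrapolate to the actual expected display time even when the pipe and interface latencies vary from frame to frame. The tuple format and the constant-velocity extrapolation are illustrative assumptions.
```python
import time


def predict_yaw_at(samples, display_time_s: float) -> float:
    """samples: list of (timestamp_s, yaw_deg) readings, oldest first."""
    (t0, yaw0), (t1, yaw1) = samples[-2], samples[-1]
    yaw_rate_deg_s = (yaw1 - yaw0) / (t1 - t0)
    return yaw1 + yaw_rate_deg_s * (display_time_s - t1)


now = time.monotonic()
stamped = [(now - 0.010, 10.0), (now - 0.005, 10.6)]  # two time-stamped readings
display_time = now + 0.045                            # measured latency to scan-out
print(predict_yaw_at(stamped, display_time))          # about 16.6 degrees
```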
In some embodiments, image rendering is implemented based on a pose (and/or head position) of a user's head. Pose (and/or head position) can be obtained by sensors such as one or more cameras and/or an IMU (for example, in some embodiments from sensors such as accelerometer(s) and/or gyroscope(s)), and the input data is used to render an image scene for display on the HMD. Rendering of such an image scene will take a certain amount of time, which in some embodiments is a known predetermined amount of time, and in some embodiments is a dynamic amount of time. The rendered scene is displayed on the HMD screen (for example, display 524), which takes more time.
For example, if rendering an image scene and displaying it on an HMD takes 30 ms, within that 30 ms the head of the user could have moved quite a bit, such that the obtained data (for example, from the IMU) that was used to render the scene from a particular position and/or orientation has become stale. However, the HMD can sample the data again (for example, sample the IMU 522 using the processor 526) before displaying it on display 524, and perform a two dimensional (2D) transform on the rendered data using processor 526 in order to account for the intermediate head motion. As discussed herein, this can be referred to as time warping.
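A hedged sketch of that two dimensional transform is shown below: the already rendered frame is shifted horizontally by the pixel offset implied by the intermediate head motion. When the frame was rendered at exactly the display size, the vacated border has no data and is clamped to the edge here, which is the loss of edge clarity discussed later. numpy is used only for convenient array slicing; the shift direction convention is an assumption for illustration.
```python
import numpy as np


def shift_frame(frame: np.ndarray, shift_px: int) -> np.ndarray:
    """Shift an (H, W) or (H, W, C) frame horizontally by shift_px pixels."""
    out = np.empty_like(frame)
    if shift_px > 0:                         # view moved right: sample pixels further right
        out[:, :-shift_px] = frame[:, shift_px:]
        out[:, -shift_px:] = frame[:, -1:]   # clamp: no rendered data available here
    elif shift_px < 0:
        out[:, -shift_px:] = frame[:, :shift_px]
        out[:, :-shift_px] = frame[:, :1]    # clamp on the other edge
    else:
        out[:] = frame
    return out


frame = np.arange(12, dtype=np.uint8).reshape(3, 4)
print(shift_frame(frame, 1))
```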
FIG. 6 illustrates display rendering frames 600, including display rendering frame n (602) and display rendering frame n+1 (604). FIG. 6 illustrates a situation where the pose (and/or head position) 612 (in frame n 602) is received from the IMU, and the image scene is rendered with the pose (and/or head position) information at 614. For example, in some embodiments a larger image scene is rendered at 614 similar to that described in reference to FIG. 2 and/or FIG. 3 above. It is then displayed at 616. Similarly, the pose (and/or head position) 622 (in frame n+1 604) is received from the IMU, and the image scene is rendered with the pose (and/or head position) information at 624. For example, in some embodiments a larger image scene is rendered at 624 similar to that described in reference to FIG. 2 and/or FIG. 3 above. It is then displayed at 626. In the example illustrated in FIG. 6, the system does not take into account latency in the graphics pipeline or in the transmission from the host to the head mounted display. It also does not take into account the rate of motion. Additionally, the example of FIG. 6 does not take into account the sampling itself of the IMU. The example illustrated in FIG. 6 does not perform any adjustment for time warp or for prediction of the position or orientation of the user.
FIG. 7 illustrates display rendering frames 700, including display rendering frame n (702) and display rendering frame n+1 (704). FIG. 7 illustrates a situation where the pose (and/or head position) 712 (in frame n 702) is received from the IMU, and the image scene is rendered with the pose (and/or head position) information at 714. After the rendered image is sent to the head mounted display, another IMU pose (and/or head position) is sampled at block 716, and a two-dimensional (2D) transform on the rendered image scene is performed and the image is then displayed at block 718. Similarly, the pose (and/or head position) 722 (in frame n+1 704) is received from the IMU, and the image scene is rendered with the pose (and/or head position) information at 724. After the rendered image is sent to the head mounted display, another IMU pose (and/or head position) is sampled at block 726, and a 2D transform on the rendered image scene is performed and the image is then displayed at block 728. In the example illustrated in FIG. 7, the system performs an adjustment for time warp, but does not perform additional prediction of the position or orientation of the user.
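The per-frame flow just described for FIG. 7 can be written out as a small driver over injected callables, as in the sketch below: sample the pose, render, sample a fresher pose on the HMD side, apply the 2D transform, then display. The callables are placeholders standing in for the real sensor, renderer, and panel interfaces; none of them come from the patent.
```python
def run_frame(sample_pose, render, transform_2d, present):
    pose_at_render = sample_pose()        # pose/head position used for rendering
    frame = render(pose_at_render)        # graphics rendering pipeline
    pose_at_display = sample_pose()       # fresher pose sampled just before scan-out
    # Time-warp adjustment only; this flow performs no prediction.
    present(transform_2d(frame, pose_at_render, pose_at_display))


# Toy usage with stand-in callables.
poses = iter([0.0, 0.8])
run_frame(sample_pose=lambda: next(poses),
          render=lambda pose: {"rendered_for_yaw": pose},
          transform_2d=lambda frame, p0, p1: {**frame, "warp_deg": p1 - p0},
          present=print)
```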
FIG. 7 illustrates a simple implementation of time warp. When there is a large latency in sampling the IMU and displaying it on the HMD screen, a bad experience may occur for the user with image data that is not correctly being displayed for the user to view. Even systems that pre-render a video sequence in a sphere and show a subset of the image data have an inherent limitation that the data that was used to calculate the subset of view to render is usually stale by the time of the final display on the screen. If the head of the user is mostly static, there is not a huge issue with the correct image data being rendered and displayed, but if the user's head is moving rapidly, stale rendered and displayed image data could cause simulator sickness. For example, this can occur in some situations when the user stops moving and the image scene displayed for the user takes some time to update and the display feels to the user like it is still moving.
In some embodiments, predictive rendering of a larger frame buffer is implemented, and the data based on the predictive rendering is used to render the difference in motion. If the scene is rendered only to the exact specs of the display, the two-dimensional (2D) transform loses information and will require blurring of the edge pixels of the rendered scene. This leads to loss of clarity along the edges of the image. In some embodiments, predictive image rendering is implemented to predict and accurately render image scenes based on head motion. In this manner, the 2D transform window can be moved to the region and all the pixels can be displayed with clarity.
FIG. 8 illustrates display rendering 800 according to some embodiments. FIG. 8 illustrates a situation where the pose (and/or head position) is obtained at 802 and the image scene is rendered with the pose (and/or head position) information in the graphics render pipe at 804. After the rendered image is sent to the head mounted display, another pose (and/or head position) is sampled at block 806, and the rendering is adjusted at 808 (for example, using a two-dimensional transform on the rendered image scene). The image is then posted to the display at block 810.
As illustrated in FIG. 8, block 812 shows a frame in which a large rendered target is shown in block 814. The rendered target 814 can be an image rendering for display on a head mounted display (HMD), for example. At a time shortly after the initial rendering at 814, block 816 shows a frame with an actual rendering 818 on the head mounted display. According to some embodiments, actual rendered image 818 can be adjusted (for example, using a time warp implementation). The adjustment is made from the initial rendering 814, which is also shown as initial rendering 820 in dotted lines in FIG. 8 for comparison purposes.
As illustrated in FIG. 8, a problem can occur where a head motion or latencies in the image display path can result in a lack of information required to display the contents based on the new field of view (FOV). This is illustrated, for example, by the portion 822 of rendered image 818 that extends beyond the frame 816. Such motion or latency (for example, due to latencies in the graphics pipeline and/or other latencies) can result in the HMD not having all of the correct image data to display on the HMD. However, according to some embodiments, these types of issues can be avoided.
In FIG. 8, the block portion 822 is shown in cross section. Block 822 is a portion of the data 818 rendered on the head mounted display, and represents missing required data. Although an adjustment of the rendering at block 808 is based, for example, on time warp, and the rendered image has been adjusted for motion, some of the required data may still be missing as represented by block 822 in FIG. 8. This occurs in situations where the image frame is adjusted to compensate for motion, but is too far from the original position, for example. That is, there is a risk of not having an appropriate full frame to display on the head mounted display. The graphics render pipeline 804 renders the image data based on a particular position at a particular time. However, when there is more latency and the head of the user moves beyond what the user actually expects to see (for example rendering 818 including portion 822), the rendering adjustment at block 808 based on the pose (and/or head position) sample obtained at block 806 is unable to show the entire portion since the motion moved beyond what was rendered in block 804 based on the pose (and/or head position) sample obtained at block 802. That is, adjustment is made for motion but may not be made for the entire latency, resulting in missing the required data illustrated by cross section 822 of rendered HMD image 818.
As illustrated in FIG. 8, if the frame that is adjusted to compensate for motion is too far from the original position, there is a risk of not having enough image information to display the necessary full frame on the HMD. By preparing an image render target based on a predicted amount of motion and latency, the risk of lacking content display is minimized. This is illustrated, for example, in FIG. 9. The flow of FIG. 9 shows how the original render frame makes better use of the buffer within the render target to account for the expected field of view (FOV) when the content is displayed.
FIG. 9 illustrates image rendering 900. At block 902 a pose (and/or head position) is sampled (for example, at the host). At block 904, a pose (and/or head position) is sampled again at the host (for example, a period of time later such as 5 ms later). The pose (and/or head position) is projected at block 912. Block 914 shows a graphics render pipe at which the image is rendered based on the projected pose (and/or head position) at block 912. The rendered image is then transmitted to the head mounted display and the head mounted display samples a pose (and/or head position) at 916. The rendering is adjusted (for example based on a time warp, and/or an adjustment for motion, and/or an adjustment based on a projected or predicted pose and/or on a projected or predicted head position) at block 918. The adjusted image rendered at 918 is then posted to the display at 920. A rendering 922 at the host is additionally illustrated, including a rendering 924 that is based on the projected or predicted pose (and/or head position) from block 912. In some embodiments, power can be saved by not rendering all or some of the portion of rendering 922. For example, in some embodiments, power can be saved by not rendering all or some of the portion of rendering 922 that is to the right side of rendering 924 in FIG. 9. Once the rendering 924 is transmitted to the head mounted display, block 926 illustrates the adjusted rendering 928 rendered by block 918 at the head mounted display. Dotted line 930 shows where the initial rendering would have been prior to adjustments.
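For comparison with the FIG. 7 flow, the sketch below adds the host-side projection step of FIG. 9: two host pose samples produce a projected pose for the expected display time, the image is rendered against that projection, and the HMD applies only a small residual adjustment from its own fresh sample. The callables and the simple linear projection are illustrative assumptions rather than details from the patent.
```python
def project_pose(p0: float, t0: float, p1: float, t1: float,
                 t_display: float) -> float:
    """Linearly extrapolate a pose value to the expected display time."""
    return p1 + (p1 - p0) / (t1 - t0) * (t_display - t1)


def run_predictive_frame(sample_host, sample_hmd, render, adjust, present,
                         t0, t1, t_display):
    p0, p1 = sample_host(t0), sample_host(t1)             # e.g. about 5 ms apart
    projected = project_pose(p0, t0, p1, t1, t_display)   # pose at expected display time
    frame = render(projected)                             # a smaller margin is needed now
    residual = sample_hmd() - projected                   # small when the prediction is good
    present(adjust(frame, residual))


# Toy usage: head turning at 100 deg/s, display expected 30 ms after the second sample.
run_predictive_frame(
    sample_host=lambda t: 100.0 * t,                      # stand-in pose source (degrees)
    sample_hmd=lambda: 100.0 * 0.036,                     # fresh HMD sample near display time
    render=lambda pose: {"rendered_for_yaw": pose},
    adjust=lambda frame, residual: {**frame, "residual_deg": residual},
    present=print,
    t0=0.000, t1=0.005, t_display=0.035)
```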
The pose (and/or head position) sampled at 902 and the pose (and/or head position) sampled at 904 are used at block 912 to project (or predict) a pose (and/or head position) of the user (for example, a location and orientation of a user wearing a head mounted display). An image is rendered based on the predicted pose (and/or head position) determined at block 912.
In some embodiments, pose (and/or head position) prediction 912 can be implemented in one or more of a variety of ways. For example, according to some embodiments, a weighted average of past pose (and/or head position) vectors is maintained, and the weighted average of past pose (and/or head position) vectors is used to predict the rate of change of pose (and/or head position) for the next time interval. The velocity and acceleration of the pose (and/or head position) can be obtained by a simple vector of the change in position. Pose (and/or head position) tracking can also rely on filtering methods (such as, for example, Kalman filtering) to predict pose (and/or head position) at the next timestep. Dead reckoning can also be used as a method to estimate the next pose (and/or head position) according to some embodiments.
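As an illustration only, the weighted-average approach described above could be sketched as follows. This is a minimal sketch under assumed representations (pose vectors as flat numeric arrays, known sample timestamps, an exponential weighting factor), not the method as claimed:

```python
import numpy as np

def predict_pose_weighted_average(samples, times, predict_dt, decay=0.5):
    """Predict a future pose from past samples using an exponentially
    weighted average of the observed rates of change.

    samples    : list of pose vectors (e.g. [x, y, z, yaw, pitch, roll])
    times      : list of sample timestamps in seconds (same length)
    predict_dt : how far into the future to project, in seconds
    decay      : weighting factor; more recent intervals count more
    """
    samples = np.asarray(samples, dtype=float)
    times = np.asarray(times, dtype=float)

    # Per-interval rates of change (finite differences).
    rates = np.diff(samples, axis=0) / np.diff(times)[:, None]

    # Exponentially decaying weights, newest interval weighted highest.
    weights = decay ** np.arange(len(rates) - 1, -1, -1)
    avg_rate = np.average(rates, axis=0, weights=weights)

    # Project the newest sample forward by the prediction interval.
    return samples[-1] + avg_rate * predict_dt
```

For example, with host samples taken roughly every 5 ms and an expected display latency of about 30 ms, predict_dt would be set to approximately 0.030.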
In some embodiments of FIG. 9, the projection (or prediction) of head pose (and/or head position) is based on a predicted latency determined from the poses (and/or head positions) obtained at 902 and 904 at the host, as well as the pose (and/or head position) 916 sampled at the HMD. In some embodiments of FIG. 9, the pose (and/or head position) is sampled at 902 and 904 (for example, every 5 ms or so), and based on the sampling, a projected pose (and/or head position) 912 can be derived for the time of display on the head mounted display (for example, 30 ms in the future). In some embodiments, the sampling (for example, every 5 ms) and image rendering may be implemented in the host system. In some embodiments, the sampling (for example, every 5 ms) and image rendering may be implemented in the head mounted display. As illustrated in FIG. 9, the system predicts where the head will be at some future point in time based on motion data. In some embodiments, head motion may be predicted using various techniques. For example, some head movements include normal acceleration followed by deceleration, so possible head motions may be predictable in various embodiments. For example, while the head is moving, the position and orientation of the head can be accurately predicted according to some embodiments. In embodiments where this information can be predicted at the host system, it is possible to render less information in the graphics render pipeline 914, since the particular image that is rendered can be provided with a fair amount of certainty, and less additional information needs to be rendered and transmitted. In some embodiments, this information can also be confirmed in the head mounted display, and the predicted position and orientation (that is, the projected pose and/or projected head position 912) can be determined with much more certainty at the host. For example, the prediction according to some embodiments can be used to make educated guesses (for example, a user will not snap their head back within a 30 ms period after the head has been traveling in the other direction during the 5 ms sampling period). The two poses (and/or head positions) 902 and 904 can be used to predict the location and orientation at the later point in time. The head mounted display can later adjust the rendered image at 918 based on the predicted pose (and/or head position) 912 and an additional sampling of the pose (and/or head position) at 916 at the head mounted display. However, the information that is rendered at 914 and sent from the host to the head mounted display is much more accurate and does not need to include as much additional information, due to the prediction of the pose (and/or head position) based on the sampling at the host system at 902 and 904. In this manner, the system projects the head position and orientation, and renders the image at the host system based on that projection (prediction) information.
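For concreteness, a minimal sketch of the two-sample case discussed above is shown below, assuming the pose is represented as a simple list of coordinates and that a single end-to-end latency figure (for example, about 30 ms) is available; the function name and the representation are illustrative assumptions:

```python
def project_pose_two_samples(pose_t1, pose_t2, dt_samples, latency):
    """Linearly extrapolate a pose to the expected display time.

    pose_t1, pose_t2 : pose vectors sampled dt_samples seconds apart
                       (pose_t2 is the newer sample)
    dt_samples       : time between the two samples, e.g. 0.005 (5 ms)
    latency          : expected time from the newer sample until the
                       frame is actually displayed, e.g. 0.030 (30 ms)
    """
    velocity = [(b - a) / dt_samples for a, b in zip(pose_t1, pose_t2)]
    return [p + v * latency for p, v in zip(pose_t2, velocity)]

# Example: yaw-only motion, head turning at roughly 100 deg/s.
yaw_at_t1, yaw_at_t2 = [10.0], [10.5]          # degrees, 5 ms apart
print(project_pose_two_samples(yaw_at_t1, yaw_at_t2, 0.005, 0.030))
# -> approximately [13.5]: the render target is centered on the projected yaw.
```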
The image rendering can be adjusted at the head mounted display at block 918 based on the sampled poses and/or head positions at 902, 904 and 916. This adjustment can compensate for motion and for the predicted pose (and/or head position) 912 based on the various sampled poses and/or sampled head positions. This adjustment includes a time warp and/or an adjustment based on the projected pose (and/or head position). This adjustment may be made based on a reduced rendering 924 that is sent to the head mounted display. In some embodiments, power can be saved since a smaller rendering 924 can be rendered by the graphics render pipeline 914. The adjustment at 918 produces the rendered image 928 (which has been adjusted for motion and latency, for example), which is posted to the display at block 920. In some embodiments, the rendered image 928 is rendered based on head motion and the velocity of motion of the user.
In some embodiments (for example, as illustrated in and described in reference to FIG. 9), adjustment is made for motion and for the entire latency, based on time warp motion adjustment as well as on a projected pose (and/or head position) using predicted pose (and/or head position) information. As a result, a smaller amount of data may be rendered and transmitted to the head mounted display without any required data missing (for example, such as the missing required data 822 illustrated in FIG. 8). In some embodiments of FIG. 9, for example, by predicting the position and orientation of the user of the head mounted display, it is possible to render and transmit less data and still ensure that required data is not missing and can be displayed to the user at the head mounted display at the appropriate time.
In some embodiments, the projected (or predicted) pose (and/or head position) is determined at the host and transmitted to the HMD. The image rendered on the head mounted display is adjusted based on the projected pose (and/or head position) as well as additional pose (and/or head position) information received at block 916.
In some embodiments, the projected and/or predicted pose (and/or head position) coordinates and timestamp used to render the frame can be conveyed as metadata alongside the frame data (for example, in a sideband) or as part of the frame data (for example, in-band). For example, in some embodiments, the projected pose (and/or head position) coordinates and timestamp, sampled from a sensor such as an IMU and then calculated based on the sampling(s), can be conveyed as metadata alongside the frame image data (for example, in a sideband) or as part of it (for example, in-band).
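Purely as an illustrative sketch (the field names and layout below are assumptions, not a format defined in the specification), the render-pose metadata conveyed in-band or in a sideband might be organized along these lines:

```python
from dataclasses import dataclass

@dataclass
class RenderPoseMetadata:
    """Pose and timestamp used when the frame was rendered, carried
    alongside (sideband) or inside (in-band) the frame data so the
    display-side time warp knows the rendering reference."""
    x: float          # position coordinates
    y: float
    z: float
    yaw: float        # orientation, degrees
    pitch: float
    roll: float
    render_timestamp_us: int   # time (projected or measured) the pose refers to
    projected: bool            # True if the pose was predicted rather than sampled
```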
In some embodiments, time warp, when employed, applies last-minute adjustments to the rendered frame to correct for any changes in the user's pose and/or head position (for example, HMD position and/or orientation). Explicitly knowing the coordinates and timestamp that were used (projected or measured) when rendering the frame allows the time warp adjustment to correct for these changes more accurately.
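A deliberately simplified, yaw-only sketch of this kind of last-minute correction is shown below; real time warp implementations typically reproject in three dimensions, so the pixel-shift approximation and the parameter values here are assumptions for illustration only:

```python
def timewarp_offset_px(render_yaw_deg, latest_yaw_deg,
                       display_width_px, horizontal_fov_deg):
    """Approximate horizontal pixel shift needed to correct the rendered
    frame for the yaw change between render time and display time."""
    yaw_error = latest_yaw_deg - render_yaw_deg
    pixels_per_degree = display_width_px / horizontal_fov_deg
    return yaw_error * pixels_per_degree

# Example: frame rendered for yaw 13.5 deg, head actually at 14.0 deg at
# display time, on a 2160-pixel-wide panel with a 110-degree horizontal FOV.
print(timewarp_offset_px(13.5, 14.0, 2160, 110.0))  # ~9.8 px shift
```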
In some embodiments, time warp can be disabled when pose (and/or head position) projection is used. However, this can make it impossible to correct for incorrect projections caused by unexpected changes in head movement (incorrect vector), variable-latency transports (incorrect presentation time), etc.
In some embodiments, relative sampling can be useful when both render and time warp adjustments occur in the same system. Sampling such as IMU sampling can be performed in a manner that allows the coordinates and time delta (render vs. warp) to easily be calculated. However, it can be difficult to support projected pose (and/or head position) and extend it to a system where the sink device performs offload (for example, virtual reality offload).
In some embodiments, metadata information conveyed to the back-end time warp (in addition to the rendered frame) can include a render position (for example, three-dimensional x, y, and z coordinates and/or yaw, pitch, and roll information in some embodiments, and/or, in some embodiments, a three-dimensional coordinate position as well as vector coordinates conveying a viewing orientation of the user's head in addition to that coordinate position). In some embodiments, an exact position (sampled or projected) is used to render the image frame. In some embodiments, a render timestamp with an exact time (rendered or projected) is used to render the image frame.
In some embodiments, a host side includes a transmitter and an HMD side includes a receiver. However, in some embodiments, it is noted that the transmitter on the host side and/or the receiver on the HMD side can be a transceiver, allowing communication in either direction.
In some embodiments, transmission between a host system and a head mounted display can be wired or wireless. For example, in an embodiment with wired transmission, the connection between the host and the head mounted display may be an HDMI wired connector.
In some embodiments, the host system is a computer, and in some embodiments the host system is implemented in a cloud infrastructure. In some embodiments, any of the operations/functionality/structure are performed at the HMD. In some embodiments, any of the operations/functionality/structure are performed at the host. In some embodiments, image rendering is implemented in a cloud infrastructure. In some embodiments, image rendering is implemented in a combination of a local computer and a cloud infrastructure. In some embodiments, image rendering is implemented in a head mounted display. In some embodiments, image rendering is implemented in a computer at a host side or a computer at an HMD side.
In some embodiments, motion prediction, head position prediction, and/or pose prediction (for example, motion projection and/or pose projection) is implemented in one of many ways. For example, in some embodiments, it is implemented by maintaining a weighted average of past pose (and/or head position) vectors, and using the weighted average to predict the rate of change of pose (and/or head position) for the next time interval. In some embodiments, the velocity and acceleration of the pose (and/or head position) are obtained by a simple vector of the change in position. In some embodiments, pose (and/or head position) tracking relies on filtering methods (for example, such as Kalman filtering) to predict pose (and/or head position) at the next timestep. In some embodiments, dead reckoning can be used to estimate the next pose (and/or head position). In some embodiments, external sensors (for example, cameras such as depth cameras) may be used to obtain pose (and/or head position) information, either in addition to or instead of sampling pose (and/or head position) information from a sensor such as an IMU of the HMD, for example.
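As one concrete illustration of the filtering approach mentioned above, a constant-velocity Kalman filter for a single pose axis might be sketched as follows; the noise values are placeholders and the single-axis state is an assumed simplification, not a tuned implementation from the specification:

```python
import numpy as np

class ConstantVelocityKalman1D:
    """Tracks one pose axis (e.g. yaw) with state [angle, angular_velocity]."""

    def __init__(self, process_noise=1e-3, measurement_noise=1e-2):
        self.x = np.zeros(2)            # state estimate
        self.P = np.eye(2)              # state covariance
        self.Q = process_noise * np.eye(2)
        self.R = np.array([[measurement_noise]])
        self.H = np.array([[1.0, 0.0]]) # only the angle is measured

    def predict(self, dt):
        """Advance the state estimate by dt seconds (constant-velocity model)."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x[0]

    def update(self, measured_angle):
        """Incorporate a new angle measurement (e.g. from an IMU sample)."""
        y = measured_angle - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

    def project(self, dt):
        """Look ahead dt seconds (e.g. the remaining display latency)
        without modifying the filter state."""
        return self.x[0] + self.x[1] * dt
```

In such a sketch, each new sensor sample would call predict() with the sampling interval and then update(); project() can then be used to look ahead by the remaining display latency to obtain the pose estimate used for rendering.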
Some embodiments have been described as sampling pose (and/or head position) information at a certain interval (for example, every 5 ms). It is noted that other intervals may be used (for example, every 2 ms). It is also noted that more samples may be taken according to some embodiments (for example, every 2 ms, every 5 ms, etc., or additional pose (and/or head position) samples, such as three or more pose (and/or head position) samples obtained at the host rather than two samples, etc.).
In some embodiments, known data about the movement of a person's head may be used to predict user location and orientation. For example, the maximum known speed of a human head, known directions and likely continued movements of human heads, etc. may be used. In some embodiments, knowledge of the information being presented on the HMD display may be used in order to predict user location and orientation. For example, perceptual computing may be implemented. If something is about to move fast in a virtually displayed environment, since people are very aware of fast motion, a user may be inclined to move their head toward that motion. Similarly, if a sound were to be provided, in some embodiments it can be predicted that the user is likely to turn their head toward that sound. In some embodiments, since the eyes are a good indicator of where the head might turn, sensors may be used to track eye movement of the user to help predict which direction the user may turn their head.
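Purely as an illustrative sketch of how eye-tracking data might bias a head-direction prediction (the heuristic, the gain, and the threshold below are assumptions and are not taken from the specification):

```python
def bias_head_prediction(predicted_head_yaw, gaze_yaw_offset,
                         gain=0.3, threshold_deg=10.0):
    """Nudge the predicted head yaw toward a large gaze offset, on the
    premise that the eyes tend to lead the head when the user turns."""
    if abs(gaze_yaw_offset) > threshold_deg:
        return predicted_head_yaw + gain * gaze_yaw_offset
    return predicted_head_yaw
```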
Some embodiments have been described herein as including a host system (for example, host system 402 of FIG. 4 or host system 502 of FIG. 5). It is noted that in some embodiments, the host system could be a computer, a desktop computer, or even a cloud system, and is not limited to a particular type of host system. Additionally, many or all features, elements, functions, etc. described herein as included within or performed by a host system could be performed elsewhere (for example, within a head mounted display or within another device coupled to a head mounted display).
Some embodiments have been described herein as being related to display of rendered data in a head mounted display (HMD) environment. However, according to some embodiments techniques used herein can be used in other non-HMD environments (for example, in any case where images are rendered for display, but the desired image to be displayed might change based on latencies in the system due to image rendering, some type of movement, transmission of data such as the rendered image, and/or other latencies).
In some embodiments, predictive rendering may be used in a head mounted display system where the head mounted display communicates wirelessly (for example, where the HMD communicates wirelessly with a host system, the cloud, etc). Predictive rendering for wireless HMDs according to some embodiments can provide reduced power and/or increased efficiency.
FIG. 10 illustrates display rendering 1000 according to some embodiments. In some embodiments, display rendering 1000 can include one or more of display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, and/or predicted rendering, etc. At 1002, a position of a user's head can be detected (for example, a position of a head of a user of a head mounted display system). In some embodiments, a pose and/or position of the user's head is obtained at 1002 using an IMU, an accelerometer, a gyroscope, and/or some other sensor, for example. At 1004, a latency in displaying rendered image data is determined. This latency can be due to, for example, one or more of latency in sampling, latency in obtaining a head pose (and/or head position) of a user, latency due to motion, latency due to movement of a head of a user, latency due to movement of a head of a user of a head mounted display, latency due to a potential change of head movement, latency due to image rendering, latency due to image render processing, latency due to graphics rendering, latency due to transmission, latency due to transmission between a host system and a head mounted display, wireless transmission latency, non-static or changing latencies over time, latency due to predicted known possible head movements, and/or any other latencies, etc. At 1006, a head pose (and/or head position) of a user is estimated (and/or is predicted) based on, for example, the detected position of the user's head and any or all latencies. At 1008, an image is rendered based on the estimated (or predicted) head pose (and/or head position). A portion or all of the rendered image can then be displayed (for example, on a head mounted display).
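A compact sketch of one iteration of this flow is shown below; the five callables are assumed interfaces standing in for the sensor sampling, latency estimation, pose prediction, rendering, and display steps of blocks 1002-1008, not APIs defined in the specification:

```python
def display_rendering_1000(sample_head_pose, estimate_latency,
                           predict_pose, render_image, post_to_display):
    """One iteration of the predictive rendering loop of FIG. 10."""
    detected_pose = sample_head_pose()                       # block 1002
    latency = estimate_latency()                             # block 1004
    predicted_pose = predict_pose(detected_pose, latency)    # block 1006
    frame = render_image(predicted_pose)                     # block 1008
    post_to_display(frame)       # display a portion or all of the frame
    return frame
```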
In some embodiments, display rendering 1000 can implement display rendering features as described and/or illustrated anywhere in this specification and drawings. In some embodiments, display rendering 1000 can use available and/or obtained head motion information to predict future head poses and/or head positions, and adjust the render target accordingly. In some embodiments, a buffer (for example, an image buffer, a graphics buffer, a rendering buffer, and/or any other type of buffer) can efficiently minimize a risk of not having proper content to display due to motion (for example, motion of a user's head) and/or due to latencies (for example, transmission latencies, rendering latencies, etc). In some embodiments, rendering 1000 can reduce an amount of data that needs to be rendered but not displayed. This can result in better power efficiency.
In some embodiments, display rendering 1000 can optimize a render target by estimating an expected field of view (FOV) at the time of display. This can be based on an understanding of the latency to display the render target and/or on estimating a head pose and/or head position (for example, based on sensor data such as an IMU, accelerometers, gyroscopes, camera sensors, etc. in order to detect head movement). In some embodiments, display rendering 1000 can predictively render a large frame buffer and use the data to render the difference in motion. In some embodiments, display rendering 1000 can predict and accurately render image scenes based on head motion. In some embodiments, display rendering 1000 can implement a two dimensional (2D) transform, and can move a 2D transform window and display pixels with clarity.
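The following sketch shows one possible way (an assumption, not the specified algorithm) to size the extra render-target margin from the expected angular motion over the display latency, so that the buffer still covers the expected FOV at display time:

```python
def render_target_margin_px(angular_velocity_dps, latency_s,
                            display_width_px, horizontal_fov_deg,
                            safety_factor=1.2):
    """Extra horizontal pixels to render beyond the display width so that
    the expected head motion during latency_s stays inside the buffer."""
    expected_motion_deg = abs(angular_velocity_dps) * latency_s
    pixels_per_degree = display_width_px / horizontal_fov_deg
    return int(expected_motion_deg * pixels_per_degree * safety_factor)

# Example: 100 deg/s head turn, 30 ms latency, 2160 px across 110 degrees.
print(render_target_margin_px(100.0, 0.030, 2160, 110.0))  # ~70 px of margin
```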
In some embodiments, display rendering 1000 can implement motion prediction in one of many ways. For example, in some embodiments, display rendering 1000 can implement motion prediction using a weighted average of past pose (and/or head position) vectors, and using the weighted average to predict a rate of change of pose (and/or head position) for a next time interval. Velocity and acceleration of the pose (and/or head position) can be obtained in some embodiments by a simple vector of the change in position. In some embodiments, display rendering 1000 can implement pose (and/or head position) tracking using filtering methods (for example, using filtering methods such as Kalman filtering) to predict a pose (and/or head position) at a next time step. In some embodiments, display rendering 1000 can use dead reckoning to estimate a next pose (and/or head position).
FIG. 11 is a block diagram of an example of a computing device 1100. In some embodiments, computing device 1100 can include display and/or image features including one or more of display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, predicted rendering, etc. and/or any other features or techniques discussed herein according to some embodiments. For example, any of the features illustrated in and/or described in reference to any one or more of FIGS. 1-11 can be included within computing device 1100. For example, in some embodiments, all or part of computing device 1100 can be included as host system 402 of FIG. 4 or host system 502 of FIG. 5. As another example, in some embodiments, all or part of computing device 1100 can be included as head mounted display 404 of FIG. 4 or head mounted display 504 of FIG. 5.
The computing device 1100 may be, for example, a mobile device, phone, laptop computer, notebook, tablet, all in one, 2 in 1, and/or desktop computer, etc., among others. The computing device 1100 may include a processor 1102 that is adapted to execute stored instructions, as well as a memory device 1104 (and/or storage device 1104) that stores instructions that are executable by the processor 1102. The processor 1102 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. For example, processor 1102 can be an Intel® processor such as an Intel® Celeron, Pentium, Core, Core i3, Core i5, or Core i7 processor. In some embodiments, processor 1102 can be an Intel® x86 based processor. In some embodiments, processor 1102 can be an ARM based processor. The memory device 1104 can be a memory device and/or a storage device, and can include volatile storage, non-volatile storage, random access memory, read only memory, flash memory, and/or any other suitable memory and/or storage systems. The instructions that are executed by the processor 1102 may also be used to implement features described in this specification, including display coordinate configuration, for example.
The processor 1102 may also be linked through a system interconnect 1106 (e.g., PCI®, PCI-Express®, NuBus, etc.) to a display interface 1108 adapted to connect the computing device 1100 to a display device 1110. In some embodiments, display device 1110 can include any display screen. The display device 1110 may include a display screen that is a built-in component of the computing device 1100. The display device 1110 may also include a computer monitor, television, or projector, among others, that is externally connected to the computing device 1100. The display device 1110 can include liquid crystal display (LCD), for example. In addition, display device 1110 can include a backlight including light sources such as light emitting diodes (LEDs), organic light emitting diodes (OLEDs), and/or micro-LEDs (μLEDs), among others.
In some embodiments, the display interface 1108 can include any suitable graphics processing unit, transmitter, port, physical interconnect, and the like. In some examples, the display interface 1108 can implement any suitable protocol for transmitting data to the display device 1110. For example, the display interface 1108 can transmit data using a high-definition multimedia interface (HDMI) protocol, a DisplayPort protocol, or some other protocol or communication link, and the like.
In some embodiments, display device 1110 includes a display controller 1130. In some embodiments, the display controller 1130 can provide control signals within and/or to the display device 1110. In some embodiments, all or portions of the display controller 1130 can be included in the display interface 1108 (and/or instead of or in addition to being included in the display device 1110). In some embodiments, all or portions of the display controller 1130 can be coupled between the display interface 1108 and the display device 1110. In some embodiments, all or portions of the display controller 1130 can be coupled between the display interface 1108 and the interconnect 1106. In some embodiments, all or portions of the display controller 1130 can be included in the processor 1102. In some embodiments, display controller 1130 can implement one or more of display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, predicted rendering, etc. and/or any other features or techniques discussed herein according to any of the examples illustrated in any of the drawings and/or as described anywhere herein. For example, any of the features illustrated in and/or described in reference to all or portions of any one or more of FIGS. 1-10 can be included within display controller 1130.
In some embodiments, any of the techniques described in this specification can be implemented entirely or partially within the display device 1110. In some embodiments, any of the techniques described in this specification can be implemented entirely or partially within the display controller 1130. In some embodiments, any of the techniques described in this specification can be implemented entirely or partially within the processor 1102.
In addition, a network interface controller (also referred to herein as a NIC) 1112 may be adapted to connect the computing device 1100 through the system interconnect 1106 to a network (not depicted). The network (not depicted) may be a wireless network, a wired network, a cellular network, a radio network, a wide area network (WAN), a local area network (LAN), a global positioning satellite (GPS) network, and/or the Internet, among others.
The processor 1102 may be connected through system interconnect 1106 to an input/output (I/O) device interface 1114 adapted to connect the computing device 1100 to one or more I/O devices 1116. The I/O devices 1116 may include, for example, a keyboard and/or a pointing device, where the pointing device may include a touchpad or a touchscreen, among others. The I/O devices 1116 may be built-in components of the computing device 1100, or may be devices that are externally connected to the computing device 1100.
In some embodiments, the processor 1102 may also be linked through the system interconnect 1106 to a storage device 1118 that can include a hard drive, a solid state drive (SSD), a magnetic drive, an optical drive, a portable drive, a flash drive, a Universal Serial Bus (USB) flash drive, an array of drives, and/or any other type of storage, including combinations thereof. In some embodiments, the storage device 1118 can include any suitable applications. In some embodiments, the storage device 1118 can include a basic input/output system (BIOS).
In some embodiments, the storage device 1118 can include any device or software, instructions, etc. that can be used (for example, by a processor such as processor 1102) to implement any of the functionality described herein such as, for example, one or more of display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, predicted rendering, etc. and/or any other features or techniques discussed herein. In some embodiments, for example, predictive display rendering 1120 is included in storage device 1118. In some embodiments, predictive display rendering 1120 includes a portion or all of any one or more of the techniques described herein. For example, any of the features illustrated in and/or described in reference to any portions of one or more of FIGS. 1-10 can be included within predictive display rendering 1120.
It is to be understood that the block diagram of FIG. 11 is not intended to indicate that the computing device 1100 is to include all of the components shown in FIG. 11. Rather, the computing device 1100 can include fewer and/or additional components not illustrated in FIG. 11 (e.g., additional memory components, embedded controllers, additional modules, additional network interfaces, etc.). Furthermore, any of the functionalities of the BIOS or of the predictive display rendering 1120 that can be included in storage device 1118 may be partially, or entirely, implemented in hardware and/or in the processor 1102. For example, the functionality may be implemented with an application specific integrated circuit, logic implemented in an embedded controller, or logic implemented in the processor 1102, among others. In some embodiments, the functionalities of the BIOS and/or predictive display rendering 1120 can be implemented with logic, wherein the logic, as referred to herein, can include any suitable hardware (e.g., a processor, among others), software (e.g., an application, among others), firmware, or any suitable combination of hardware, software, and/or firmware.
FIG. 12 is a block diagram of an example of one or more processors and one or more tangible, non-transitory computer-readable media. The one or more tangible, non-transitory, computer-readable media 1200 may be accessed by a processor or processors 1202 over a computer interconnect 1204. Furthermore, the one or more tangible, non-transitory, computer-readable media 1200 may include code to direct the processor 1202 to perform operations as described herein. For example, in some embodiments, computer-readable media 1200 may include code to direct the processor to perform predictive display rendering 1206, which can include display rendering, image rendering, predictive rendering, projected pose, projected head position, time warping optimization, predicted rendering, etc. and/or any other features or techniques discussed herein according to some embodiments. In some embodiments, predictive display rendering 1206 can be used to provide any of the features or techniques according to any of the examples illustrated in any of the drawings and/or as described anywhere herein. For example, any of the features illustrated in and/or described in reference to portions of any one or more of FIGS. 1-10 can be included within predictive display rendering 1206.
In some embodiments, processor 1202 is one or more processors. In some embodiments, processor 1202 can perform similarly to (and/or the same as) processor 1102 of FIG. 11, and/or can perform some or all of the same functions as can be performed by processor 1102.
Various components discussed in this specification may be implemented using software components. These software components may be stored on the one or more tangible, non-transitory, computer-readable media 1200, as indicated in FIG. 12. For example, software components including, for example, computer readable instructions implementing predictive display rendering 1206 may be included in one or more computer readable media 1200 according to some embodiments.
It is to be understood that any suitable number of software components may be included within the one or more tangible, non-transitory computer-readable media 1200. Furthermore, any number of additional software components not shown in FIG. 12 may be included within the one or more tangible, non-transitory, computer-readable media 1200, depending on the specific application.
Embodiments have been described herein relating to head mounted displays, head pose and/or head position detection/prediction, etc. However, it is noted that some embodiments relate to other image and/or display rendering than in head mounted displays. Some embodiments are not limited to head mounted displays or head pose and/or head position. For example, in some embodiments, a position of all or a portion of a body of a user can be used (for example, using a projected pose and/or position of a portion of a body of a user including the user's head or not including the user's head). Motion and/or predicted motion, latency, etc. of other body parts than a user's head can be used in some embodiments. In some embodiments, body parts may not be involved. For example, some embodiments can relate to movement of a display or other computing device, and prediction of motion and/or latency relating to those devices can be implemented according to some embodiments.
Reference in the specification to “one embodiment” or “an embodiment” or “some embodiments” of the disclosed subject matter means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosed subject matter. Thus, the phrase “in one embodiment” or “in some embodiments” may appear in various places throughout the specification, but the phrase may not necessarily refer to the same embodiment or embodiments.
EXAMPLE 1
In some examples, a head mounted display system including one or more processor. The one or more processor is to detect a position of a head of a user of the head mounted display, predict a position of the head of the user of the head mounted display at a time after a time that the position of the head of the user was detected, and render image data based on the predicted head position.
EXAMPLE 2
In some examples, the head mounted display system of Example 1, including a transmitter to transmit the rendered image data to the head mounted display.
EXAMPLE 3
In some examples, the head mounted display system of Example 1 or Example 2, the one or more processor to create an image to be displayed on the head mounted display based on the predicted position and based on the rendered image data.
EXAMPLE 4
In some examples, the head mounted display system of any of Examples 1-3, the one or more processor to display an image on the head mounted display based on the rendered image data.
EXAMPLE 5
In some examples, the head mounted display system of any of Examples 1-4, the one or more processor to estimate an expected field of view of the user at a time of display, and to render the image data based on the predicted head position and based on the expected field of view.
EXAMPLE 6
In some examples, the head mounted display system of any of Examples 1-5, the one or more processor to perform a two dimensional transform on the rendered image data.
EXAMPLE 7
In some examples, the head mounted display system of any of Examples 1-6, the one or more processor to maintain a weighted average of past head position vectors, and to predict the position of the head based on the weighted average.
EXAMPLE 8
In some examples, the head mounted display system of any of Examples 1-7, the one or more processor to predict the position of the head based on a filtering method.
EXAMPLE 9
In some examples, the head mounted display system of any of Examples 1-8, the one or more processor to predict the position of the head based on dead reckoning.
EXAMPLE 10
In some examples, the head mounted display system of any of Examples 1-9, the one or more processor to render the image data based on a predicted amount of motion and latency.
EXAMPLE 11
In some examples, the head mounted display system of any of Examples 1-10, the one or more processor to determine a latency to display the rendered image data, and to predict the position of the head of the user based on the detected position and based on the determined latency.
EXAMPLE 12
In some examples, a method including detecting a position of a head of a user of a head mounted display, predicting a position of the head of the user of the head mounted display at a time after a time that the position of the head of the user was detected, and rendering image data based on the predicted head position.
EXAMPLE 13
In some examples, the method of Example 12, including transmitting the rendered image data to the head mounted display.
EXAMPLE 14
In some examples, the method of any of Examples 12-13, including creating an image to be displayed on the head mounted display based on the predicted position and based on the rendered image data.
EXAMPLE 15
In some examples, the method of any of Examples 12-14, including displaying an image on the head mounted display based on the rendered image data.
EXAMPLE 16
In some examples, the method of any of Examples 12-15, including estimating an expected field of view of the user at a time of display, and rendering the image data based on the predicted head position and based on the expected field of view.
EXAMPLE 17
In some examples, the method of any of Examples 12-16, including performing a two dimensional transform on the rendered image data.
EXAMPLE 18
In some examples, the method of any of Examples 12-17, including maintaining a weighted average of past head position vectors, and predicting the position of the head based on the weighted average.
EXAMPLE 19
In some examples, the method of any of Examples 12-18, including predicting the position of the head based on a filtering method.
EXAMPLE 20
In some examples, the method of any of Examples 12-19, including predicting the position of the head based on dead reckoning.
EXAMPLE 21
In some examples, the method of any of Examples 12-20, including rendering the image data based on a predicted amount of motion and latency.
EXAMPLE 22
In some examples, the method of any of Examples 12-21, including determining a latency to display the rendered image data, and predicting the position of the head of the user based on the detected position and based on the determined latency.
EXAMPLE 23
In some examples, one or more tangible, non-transitory machine readable media include a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to detect a position of a head of a user of a head mounted display, predict a position of the head of the user of the head mounted display at a time after a time that the position of the head of the user was detected, and render image data based on the predicted head position.
EXAMPLE 24
In some examples, the one or more tangible, non-transitory machine readable media of Example 23, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to transmit the rendered image data to the head mounted display.
EXAMPLE 25
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-24, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to create an image to be displayed on the head mounted display based on the predicted position and based on the rendered image data.
EXAMPLE 26
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-25, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to display an image on the head mounted display based on the rendered image data.
EXAMPLE 27
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-26, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to estimate an expected field of view of the user at a time of display, and to render the image data based on the predicted head position and based on the expected field of view.
EXAMPLE 28
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-27, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to perform a two dimensional transform on the rendered image data.
EXAMPLE 29
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-28, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to maintain a weighted average of past head position vectors, and to predict the position of the head based on the weighted average.
EXAMPLE 30
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-29, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to predict the position of the head based on a filtering method.
EXAMPLE 31
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-30, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to predict the position of the head based on dead reckoning.
EXAMPLE 32
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-31, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to render the image data based on a predicted amount of motion and latency.
EXAMPLE 33
In some examples, the one or more tangible, non-transitory machine readable media of any of Examples 23-24, including a plurality of instructions that, in response to being executed on at least one processor, cause the at least one processor to determine a latency to display the rendered image data, and to predict the position of the head of the user based on the detected position and based on the determined latency.
EXAMPLE 34
In some examples, a display system includes means for detecting a position of a head of a user of the display at a first time, means for predicting a position of the head of the user of the display at a second time that is after the first time, and means for rendering image data based on the predicted head position. In some examples, the display system is a head mounted display system.
EXAMPLE 35
In some examples, the display system of Example 34, including means for transmitting the rendered image data to the display.
EXAMPLE 36
In some examples, the display system of any of Examples 34-35, including means for creating an image to be displayed on the display based on the predicted position and based on the rendered image data.
EXAMPLE 37
In some examples, the display system of any of Examples 34-36, including means for displaying an image on the display based on the rendered image data.
EXAMPLE 38
In some examples, the display system of any of Examples 34-37, including means for estimating an expected field of view of the user at a time of display, and means for rendering the image data based on the predicted head position and based on the expected field of view.
EXAMPLE 39
In some examples, the display system of any of Examples 34-38, including means for performing a two dimensional transform on the rendered image data.
EXAMPLE 40
In some examples, the display system of any of Examples 34-39, including means for maintaining a weighted average of past head position vectors, and means for predicting the position of the head based on the weighted average.
EXAMPLE 41
In some examples, the display system of any of Examples 34-40, including means for predicting the position of the head based on a filtering method.
EXAMPLE 42
In some examples, the display system of any of Examples 34-41, including means for predicting the position of the head based on dead reckoning.
EXAMPLE 43
In some examples, the display system of any of Examples 34-42, including means for rendering the image data based on a predicted amount of motion and latency.
EXAMPLE 44
In some examples, the display system of any of Examples 34-43, including means for determining a latency to display the rendered image data, and means for predicting the position of the head of the user based on the detected position and based on the determined latency.
EXAMPLE 45
In some examples, an apparatus including means to perform a method as in any preceding Example.
EXAMPLE 46
In some examples, machine-readable instructions, when executed, to implement a method, realize an apparatus, or realize a system as in any preceding Example.
EXAMPLE 47
In some examples, a machine readable medium including code, when executed, to cause a machine to perform the method, realize an apparatus, or realize a system as in any one of the preceding Examples.
EXAMPLE 48
In some examples, a head mounted display system includes a first processor to predict a pose (and/or head position) of a user of the head mounted display, a second processor to render an image based on the predicted pose (and/or head position), and a transmitter to transmit the rendered image to the head mounted display.
EXAMPLE 49
In some examples, a head mounted display system includes a processor to receive a predicted pose (and/or head position) of a user of the head mounted display and to receive a rendered image that is based on the predicted pose (and/or head position). The processor is to create an image to be displayed on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
EXAMPLE 50
In some examples, a head mounted display system includes a first processor to predict a pose (and/or head position) of a user of the head mounted display, a second processor to render an image based on the predicted pose (and/or head position), and a third processor to create an image to be displayed on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
EXAMPLE 51
In some examples, at least one computer-readable medium includes instructions to direct a processor to predict a pose (and/or head position) of a user of a head mounted display, render an image based on the predicted pose (and/or head position), and transmit the rendered image to the head mounted display.
EXAMPLE 52
In some examples, at least one computer-readable medium includes instructions to direct a processor to predict a pose (and/or head position) of a user of a head mounted display, render an image based on the predicted pose (and/or head position), and display an image on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
EXAMPLE 53
In some examples, at least one computer-readable medium includes instructions to direct a processor to receive a predicted pose (and/or head position) of a user of a head mounted display, receive a rendered image that is based on the predicted pose (and/or head position), and create an image to be displayed on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
EXAMPLE 54
In some examples, a method includes predicting a pose (and/or head position) of a user of a head mounted display, rendering an image based on the predicted pose (and/or head position), and transmitting the rendered image to the head mounted display.
EXAMPLE 55
In some examples, a method includes predicting a pose (and/or head position) of a user of a head mounted display, rendering an image based on the predicted pose (and/or head position), and displaying an image on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
EXAMPLE 56
In some examples, a method includes receiving a predicted pose (and/or head position) of a user of a head mounted display, receiving a rendered image that is based on the predicted pose (and/or head position), and creating an image to be displayed on the head mounted display based on the predicted pose (and/or head position) and based on the rendered image.
Although example embodiments of the disclosed subject matter are described with reference to FIGS. 1-12, persons of ordinary skill in the art will readily appreciate that many other ways of implementing the disclosed subject matter may alternatively be used. For example, the order of execution of the blocks in flow diagrams may be changed, and/or some of the blocks in block/flow diagrams described may be changed, eliminated, or combined. Additionally, some of the circuit and/or block elements may be changed, eliminated, or combined.
In the preceding description, various aspects of the disclosed subject matter have been described. For purposes of explanation, specific numbers, systems and configurations were set forth in order to provide a thorough understanding of the subject matter. However, it is apparent to one skilled in the art having the benefit of this disclosure that the subject matter may be practiced without the specific details. In other instances, well-known features, components, or modules were omitted, simplified, combined, or split in order not to obscure the disclosed subject matter.
Various embodiments of the disclosed subject matter may be implemented in hardware, firmware, software, or combination thereof, and may be described by reference to or in conjunction with program code, such as instructions, functions, procedures, data structures, logic, application programs, design representations or formats for simulation, emulation, and fabrication of a design, which when accessed by a machine results in the machine performing tasks, defining abstract data types or low-level hardware contexts, or producing a result.
Program code may represent hardware using a hardware description language or another functional description language which essentially provides a model of how designed hardware is expected to perform. Program code may be assembly or machine language or hardware-definition languages, or data that may be compiled and/or interpreted. Furthermore, it is common in the art to speak of software, in one form or another as taking an action or causing a result. Such expressions are merely a shorthand way of stating execution of program code by a processing system which causes a processor to perform an action or produce a result.
Program code may be stored in, for example, volatile and/or non-volatile memory, such as storage devices and/or an associated machine readable or machine accessible medium including solid-state memory, hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, digital versatile discs (DVDs), etc., as well as more exotic mediums such as machine-accessible biological state preserving storage. A machine readable medium may include any tangible mechanism for storing, transmitting, or receiving information in a form readable by a machine, such as antennas, optical fibers, communication interfaces, etc. Program code may be transmitted in the form of packets, serial data, parallel data, etc., and may be used in a compressed or encrypted format.
Program code may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, set top boxes, cellular telephones and pagers, and other electronic devices, each including a processor, volatile and/or non-volatile memory readable by the processor, at least one input device and/or one or more output devices. Program code may be applied to the data entered using the input device to perform the described embodiments and to generate output information. The output information may be applied to one or more output devices. One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multiprocessor or multiple-core processor systems, minicomputers, mainframe computers, as well as pervasive or miniature computers or processors that may be embedded into virtually any device. Embodiments of the disclosed subject matter can also be practiced in distributed computing environments where tasks may be performed by remote processing devices that are linked through a communications network.
Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally and/or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter. Program code may be used by or in conjunction with embedded controllers.
While the disclosed subject matter has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments of the subject matter, which are apparent to persons skilled in the art to which the disclosed subject matter pertains are deemed to lie within the scope of the disclosed subject matter. For example, in each illustrated embodiment and each described embodiment, it is to be understood that the diagrams of the figures and the description herein is not intended to indicate that the illustrated or described devices include all of the components shown in a particular figure or described in reference to a particular figure. In addition, each element may be implemented with logic, wherein the logic, as referred to herein, can include any suitable hardware (e.g., a processor, among others), software (e.g., an application, among others), firmware, or any suitable combination of hardware, software, and firmware, for example.

Claims (21)

What is claimed is:
1. A head mounted display system, comprising:
a display;
at least one of an eye tracking sensor, an accelerometer, or a gyroscope;
at least one memory;
instructions; and
processor circuitry to execute the instructions to:
determine a first view direction of a user based on first signals output, at a first point in time, by the at least one of the eye tracking sensor, the accelerometer, or the gyroscope;
determine a time delay between (1) initiation of rendering an image for presentation on the display and (2) actual presentation of the image on the display;
predict a second view direction of the user at a second point in time after the first point in time, the second view direction predicted based on the time delay and second signals output, prior to the second point in time, by the at least one of the eye tracking sensor, the accelerometer, or the gyroscope; and
cause, prior to the second point in time, rendering of the image for presentation on the display based on the second view direction of the user.
2. The head mounted display system of claim 1, wherein the processor circuitry is to execute the instructions to determine a head pose of the user at the first point in time, and to predict the second view direction of the user based on the head pose.
3. The head mounted display system of claim 1, wherein the processor circuitry is to execute the instructions to predict the second view direction of the user based on (1) the first view direction of the user and (2) previous view directions of the user, the previous view directions of the user based on third signals output by the at least one of the eye tracking sensor, the accelerometer, or the gyroscope at different points in time before the first point in time, the third signals including the second signals.
4. The head mounted display system of claim 3, wherein the processor circuitry is to execute the instructions to:
determine weighted average values for the previous view directions of the user; and
predict the second view direction of the user based on the weighted average values.
5. The head mounted display system of claim 1, wherein the processor circuitry is to execute the instructions to cause the presentation of the rendered image on the display at the second point in time.
6. The head mounted display system of claim 1, wherein the processor circuitry is to execute the instructions to cause rendering of the image by causing rendering of a frame buffer that includes the image, the frame buffer larger than the image.
7. The head mounted display system of claim 6, wherein the processor circuitry is to execute the instructions to decrease a size of the frame buffer to reduce a memory load.
8. At least one machine readable storage device comprising instructions that, when executed, cause processor circuitry to at least:
identify a first view direction of a user based on first signals output, at a first point in time, by at least one of an eye tracking sensor, an accelerometer, or a gyroscope;
identify a time delay between (1) initiation of rendering an image for presentation on a display and (2) actual presentation of the image on the display;
determine a second view direction of the user at a second point in time after the first point in time, the second view direction determined based on the time delay and second signals output, prior to the second point in time, by the at least one of the eye tracking sensor, the accelerometer, or the gyroscope; and
cause, prior to the second point in time, rendering of the image for presentation on the display based on the second view direction of the user.
9. The at least one machine readable storage device of claim 8, wherein the instructions cause the processor circuitry to determine a head pose of the user at the first point in time, and to determine the second view direction based on the head pose.
10. The at least one machine readable storage device of claim 8, wherein the instructions cause the processor circuitry to determine the second view direction of the user based on the first view direction of the user and based on previous view directions of the user, the previous view directions of the user based on third signals output by the at least one of the eye tracking sensor, the accelerometer, or the gyroscope at different points in time before the first point in time, the third signals including the second signals.
11. The at least one machine readable storage device of claim 10, wherein the instructions cause the processor circuitry to:
determine weighted average values for the previous view directions of the user; and
determine the second view direction of the user based on the weighted average values.
12. The at least one machine readable storage device of claim 8, wherein the instructions cause the processor circuitry to cause the presentation of the rendered image on the display at the second point in time.
13. The at least one machine readable storage device of claim 8, wherein the image corresponds to target content within a frame buffer, the target content being smaller than the frame buffer.
14. The at least one machine readable storage device of claim 13, wherein the instructions cause the processor circuitry to reduce a size of the frame buffer to reduce a memory demand.
15. A method comprising:
determining a first view direction of a user based on first signals output, at a first point in time, by at least one of an eye tracking sensor, an accelerometer, or a gyroscope;
identifying a time delay between (1) initiation of rendering an image for presentation on a display and (2) actual presentation of the image on the display;
predicting, by executing an instruction with processor circuitry, a future view direction of the user at a second point in time after the first point in time, the future view direction predicted based on the time delay and second signals output, prior to the second point in time, by the at least one of the eye tracking sensor, the accelerometer, or the gyroscope; and
causing, prior to the second point in time, rendering of the image for presentation on the display based on the future view direction of the user.
16. The method of claim 15, further including determining a head pose of the user at the first point in time, and predicting, by executing an instruction with processor circuitry, the future view direction based on the head pose.
17. The method of claim 15, wherein the predicting of the future view direction of the user is based on (1) the first view direction of the user and (2) previous view directions of the user, the previous view directions of the user based on third signals output by the at least one of the eye tracking sensor, the accelerometer, or the gyroscope at different points in time before the first point in time, the third signals including the second signals.
18. The method of claim 17, further including:
determining weighted average values for the previous view directions of the user; and
predicting the future view direction of the user based on the weighted average values.
19. The method of claim 15, further including causing the presentation of the rendered image on the display at the second point in time.
20. The method of claim 15, wherein the causing of the rendering of the image includes causing rendering of a frame buffer that includes the image, the frame buffer larger than the image.
21. The method of claim 20, further including decreasing a size of the frame buffer to reduce a memory load.
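To make the role of the time delay in claims 8 and 15 concrete with purely illustrative numbers (not values from the patent): if the head is turning at 120 degrees per second and the render-to-presentation delay is 25 ms, rendering for the view direction sampled at render start would leave the image roughly 120 deg/s × 0.025 s = 3 degrees behind the user's actual view direction at presentation time; predicting the view direction 25 ms ahead, as the claims describe, removes most of that error.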
US17/561,661 2016-08-12 2021-12-23 Optimized display image rendering Active US11514839B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/561,661 US11514839B2 (en) 2016-08-12 2021-12-23 Optimized display image rendering
US17/993,614 US11721275B2 (en) 2016-08-12 2022-11-23 Optimized display image rendering
US18/334,197 US20230410720A1 (en) 2016-08-12 2023-06-13 Optimized Display Image Rendering

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662374696P 2016-08-12 2016-08-12
US15/675,653 US11017712B2 (en) 2016-08-12 2017-08-11 Optimized display image rendering
US17/133,265 US11210993B2 (en) 2016-08-12 2020-12-23 Optimized display image rendering
US17/561,661 US11514839B2 (en) 2016-08-12 2021-12-23 Optimized display image rendering

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/133,265 Continuation US11210993B2 (en) 2016-08-12 2020-12-23 Optimized display image rendering

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/993,614 Continuation US11721275B2 (en) 2016-08-12 2022-11-23 Optimized display image rendering

Publications (2)

Publication Number Publication Date
US20220122516A1 US20220122516A1 (en) 2022-04-21
US11514839B2 true US11514839B2 (en) 2022-11-29

Family

ID=61159285

Family Applications (5)

Application Number Title Priority Date Filing Date
US15/675,653 Active 2037-12-13 US11017712B2 (en) 2016-08-12 2017-08-11 Optimized display image rendering
US17/133,265 Active US11210993B2 (en) 2016-08-12 2020-12-23 Optimized display image rendering
US17/561,661 Active US11514839B2 (en) 2016-08-12 2021-12-23 Optimized display image rendering
US17/993,614 Active US11721275B2 (en) 2016-08-12 2022-11-23 Optimized display image rendering
US18/334,197 Pending US20230410720A1 (en) 2016-08-12 2023-06-13 Optimized Display Image Rendering

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US15/675,653 Active 2037-12-13 US11017712B2 (en) 2016-08-12 2017-08-11 Optimized display image rendering
US17/133,265 Active US11210993B2 (en) 2016-08-12 2020-12-23 Optimized display image rendering

Family Applications After (2)

Application Number Title Priority Date Filing Date
US17/993,614 Active US11721275B2 (en) 2016-08-12 2022-11-23 Optimized display image rendering
US18/334,197 Pending US20230410720A1 (en) 2016-08-12 2023-06-13 Optimized Display Image Rendering

Country Status (1)

Country Link
US (5) US11017712B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11721275B2 (en) 2016-08-12 2023-08-08 Intel Corporation Optimized display image rendering

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10043319B2 (en) 2014-11-16 2018-08-07 Eonite Perception Inc. Optimizing head mounted displays for augmented reality
US10180734B2 (en) 2015-03-05 2019-01-15 Magic Leap, Inc. Systems and methods for augmented reality
US10838207B2 (en) 2015-03-05 2020-11-17 Magic Leap, Inc. Systems and methods for augmented reality
US20160259404A1 (en) 2015-03-05 2016-09-08 Magic Leap, Inc. Systems and methods for augmented reality
CN108604383A (en) 2015-12-04 2018-09-28 奇跃公司 Reposition system and method
EP3494549A4 (en) 2016-08-02 2019-08-14 Magic Leap, Inc. Fixed-distance virtual and augmented reality systems and methods
WO2018044544A1 (en) * 2016-09-01 2018-03-08 Apple Inc. Electronic devices with displays
US10169919B2 (en) * 2016-09-09 2019-01-01 Oath Inc. Headset visual displacement for motion correction
US9928660B1 (en) 2016-09-12 2018-03-27 Intel Corporation Hybrid rendering for a wearable display attached to a tethered computer
US10708569B2 (en) * 2016-09-29 2020-07-07 Eric Wilson Turbine-Powered Pool Scrubber
US10812936B2 (en) 2017-01-23 2020-10-20 Magic Leap, Inc. Localization determination for mixed reality systems
US10560680B2 (en) * 2017-01-28 2020-02-11 Microsoft Technology Licensing, Llc Virtual reality with interactive streaming video and likelihood-based foveation
US10687050B2 (en) * 2017-03-10 2020-06-16 Qualcomm Incorporated Methods and systems of reducing latency in communication of image data between devices
EP3596703A4 (en) 2017-03-17 2020-01-22 Magic Leap, Inc. Mixed reality system with virtual content warping and method of generating virtual content using same
KR102366781B1 (en) * 2017-03-17 2022-02-22 매직 립, 인코포레이티드 Mixed reality system with color virtual content warping and method for creating virtual content using same
AU2018233733B2 (en) 2017-03-17 2021-11-11 Magic Leap, Inc. Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
US10939038B2 (en) * 2017-04-24 2021-03-02 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
WO2018200993A1 (en) * 2017-04-28 2018-11-01 Zermatt Technologies Llc Video pipeline
US10979685B1 (en) 2017-04-28 2021-04-13 Apple Inc. Focusing for virtual and augmented reality systems
EP3619568A4 (en) * 2017-05-01 2021-01-27 Infinity Augmented Reality Israel Ltd. Optical engine time warp for augmented or mixed reality environment
US11158101B2 (en) * 2017-06-07 2021-10-26 Sony Interactive Entertainment Inc. Information processing system, information processing device, server device, image providing method and image generation method
US10762691B2 (en) 2017-09-08 2020-09-01 Microsoft Technology Licensing, Llc Techniques for compensating variable display device latency in image display
US10521881B1 (en) * 2017-09-28 2019-12-31 Apple Inc. Error concealment for a head-mountable device
RU2750505C1 (en) * 2017-10-12 2021-06-29 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Sound delivery optimisation for virtual reality applications
US11166080B2 (en) * 2017-12-21 2021-11-02 Facebook, Inc. Systems and methods for presenting content
US10390063B2 (en) 2017-12-22 2019-08-20 Comcast Cable Communications, Llc Predictive content delivery for video streaming services
US10798455B2 (en) * 2017-12-22 2020-10-06 Comcast Cable Communications, Llc Video delivery
US10841533B2 (en) * 2018-03-23 2020-11-17 Raja Singh Tuli Telepresence system with virtual reality
EP3782125A1 (en) 2018-04-19 2021-02-24 PCMS Holdings, Inc. Systems and methods employing predictive overfilling for virtual reality
WO2019229906A1 (en) * 2018-05-30 2019-12-05 株式会社ソニー・インタラクティブエンタテインメント Image generation device, image display system, image generation method, and computer program
DE102018209377A1 (en) * 2018-06-12 2019-12-12 Volkswagen Aktiengesellschaft A method of presenting AR / VR content on a mobile terminal and mobile terminal presenting AR / VR content
CN117711284A (en) 2018-07-23 2024-03-15 奇跃公司 In-field subcode timing in a field sequential display
EP3827299A4 (en) 2018-07-23 2021-10-27 Magic Leap, Inc. Mixed reality system with virtual content warping and method of generating virtual content using same
CN110868581A (en) * 2018-08-28 2020-03-06 华为技术有限公司 Image display method, device and system
US10810747B2 (en) * 2018-09-06 2020-10-20 Disney Enterprises, Inc. Dead reckoning positional prediction for augmented reality and virtual reality applications
WO2020078354A1 (en) * 2018-10-16 2020-04-23 北京凌宇智控科技有限公司 Video streaming system, video streaming method and apparatus
CN109743626B (en) * 2019-01-02 2022-08-12 京东方科技集团股份有限公司 Image display method, image processing method and related equipment
US10802287B2 (en) * 2019-01-14 2020-10-13 Valve Corporation Dynamic render time targeting based on eye tracking
US11138804B2 (en) * 2019-08-02 2021-10-05 Fmr Llc Intelligent smoothing of 3D alternative reality applications for secondary 2D viewing
US11948242B2 (en) 2019-08-02 2024-04-02 Fmr Llc Intelligent smoothing of 3D alternative reality applications for secondary 2D viewing
US11417065B2 (en) 2019-10-29 2022-08-16 Magic Leap, Inc. Methods and systems for reprojection in augmented-reality displays
US20210192681A1 (en) * 2019-12-18 2021-06-24 Ati Technologies Ulc Frame reprojection for virtual reality and augmented reality
US11195498B2 (en) * 2020-01-15 2021-12-07 Charter Communications Operating, Llc Compensating for latency in a streaming virtual reality environment
WO2021206251A1 (en) * 2020-04-07 2021-10-14 Samsung Electronics Co., Ltd. System and method for reduced communication load through lossless data reduction
US20210311307A1 (en) * 2020-04-07 2021-10-07 Samsung Electronics Co., Ltd. System and method for reduced communication load through lossless data reduction
CN111556314A (en) * 2020-05-18 2020-08-18 郑州工商学院 Computer image processing method
US11543665B2 (en) * 2020-12-01 2023-01-03 Microsoft Technology Licensing, Llc Low motion to photon latency rapid target acquisition
CN112822480B (en) * 2020-12-31 2022-05-17 青岛小鸟看看科技有限公司 VR system and positioning tracking method thereof
KR102448833B1 (en) * 2021-10-13 2022-09-29 서울과학기술대학교 산학협력단 Method for rendering for virtual reality
WO2023084284A1 (en) * 2021-11-11 2023-05-19 Telefonaktiebolaget Lm Ericsson (Publ) Predictive extended reality system
US11694409B1 (en) 2021-12-08 2023-07-04 Google Llc Augmented reality using a split architecture
US11681358B1 (en) * 2021-12-10 2023-06-20 Google Llc Eye image stabilized augmented reality displays
US11675430B1 (en) * 2021-12-17 2023-06-13 Varjo Technologies Oy Display apparatus and method incorporating adaptive gaze locking
GB2614326A (en) * 2021-12-31 2023-07-05 Sony Interactive Entertainment Europe Ltd Apparatus and method for virtual reality

Citations (127)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5537128A (en) 1993-08-04 1996-07-16 Cirrus Logic, Inc. Shared memory for split-panel LCD display systems
US5832212A (en) 1996-04-19 1998-11-03 International Business Machines Corporation Censoring browser method and apparatus for internet viewing
US20020013675A1 (en) 1998-11-12 2002-01-31 Alois Knoll Method and device for the improvement of the pose accuracy of effectors on mechanisms and for the measurement of objects in a workspace
US20040107356A1 (en) 1999-03-16 2004-06-03 Intertrust Technologies Corp. Methods and apparatus for persistent control and protection of content
US20050132070A1 (en) 2000-11-13 2005-06-16 Redlich Ron M. Data security system and method with editor
US6922701B1 (en) 2000-08-03 2005-07-26 John A. Ananian Generating cad independent interactive physical description remodeling, building construction plan database profile
US7002551B2 (en) 2002-09-25 2006-02-21 Hrl Laboratories, Llc Optical see-through augmented reality modified-scale display
US20060238380A1 (en) 2005-04-21 2006-10-26 Microsoft Corporation Maintaining user privacy in a virtual earth environment
US20060256110A1 (en) 2005-05-11 2006-11-16 Yasuhiro Okuno Virtual reality presentation apparatus, virtual reality presentation method, program, image processing method, image processing apparatus, information processing method, and information processing apparatus
US20070035562A1 (en) 2002-09-25 2007-02-15 Azuma Ronald T Method and apparatus for image enhancement
US7313825B2 (en) 2000-11-13 2007-12-25 Digital Doors, Inc. Data security system and method for portable device
US7382244B1 (en) 2007-10-04 2008-06-03 Kd Secure Video surveillance, storage, and alerting system having network management, hierarchical data storage, video tip processing, and vehicle plate analysis
US20080172201A1 (en) 2007-01-17 2008-07-17 Canon Kabushiki Kaisha Information processing apparatus and method
US20080195956A1 (en) 2007-01-25 2008-08-14 Samuel Pierce Baron Virtual social interactions
US20090047972A1 (en) 2007-08-14 2009-02-19 Chawla Neeraj Location based presence and privacy management
US20090104686A1 (en) 2007-10-21 2009-04-23 King Car Food Industrial Co., Ltd. Apparatus for Thin-Layer Cell Smear Preparation and In-situ Hybridization
US20090104585A1 (en) 2007-10-19 2009-04-23 Denis John Diangelo Dental framework
US7546334B2 (en) 2000-11-13 2009-06-09 Digital Doors, Inc. Data security system and method with adaptive filter
US7583275B2 (en) 2002-10-15 2009-09-01 University Of Southern California Modeling and video projection for augmented virtual environments
US20090278917A1 (en) 2008-01-18 2009-11-12 Lockheed Martin Corporation Providing A Collaborative Immersive Environment Using A Spherical Camera and Motion Capture
US20100060632A1 (en) 2007-01-05 2010-03-11 Total Immersion Method and devices for the real time embeding of virtual objects in an image stream using data from a real scene represented by said images
US20100103196A1 (en) 2008-10-27 2010-04-29 Rakesh Kumar System and method for generating a mixed reality environment
US20100164990A1 (en) 2005-08-15 2010-07-01 Koninklijke Philips Electronics, N.V. System, apparatus, and method for augmented reality glasses for end-user programming
US20100166294A1 (en) 2008-12-29 2010-07-01 Cognex Corporation System and method for three-dimensional alignment of objects using machine vision
US20100182340A1 (en) 2009-01-19 2010-07-22 Bachelder Edward N Systems and methods for combining virtual and real-time physical environments
US20100306825A1 (en) 2009-05-27 2010-12-02 Lucid Ventures, Inc. System and method for facilitating user interaction with a simulated object associated with a physical location
US20110046925A1 (en) 2007-06-15 2011-02-24 Commissariat A L'energie Atomique Process for Calibrating the Position of a Multiply Articulated System Such as a Robot
US20110102460A1 (en) 2009-11-04 2011-05-05 Parker Jordan Platform for widespread augmented reality and 3d mapping
US20110199479A1 (en) 2010-02-12 2011-08-18 Apple Inc. Augmented reality maps
US20110221771A1 (en) 2010-03-12 2011-09-15 Cramer Donald M Merging of Grouped Markers in An Augmented Reality-Enabled Distribution Network
US20110286631A1 (en) 2010-05-21 2011-11-24 Qualcomm Incorporated Real time tracking/detection of multiple targets
US20110313779A1 (en) 2010-06-17 2011-12-22 Microsoft Corporation Augmentation and correction of location based data through user feedback
US20120105475A1 (en) 2010-11-02 2012-05-03 Google Inc. Range of Focus in an Augmented Reality Application
US20120197439A1 (en) 2011-01-28 2012-08-02 Intouch Health Interfacing with a mobile telepresence robot
US8275635B2 (en) 2007-02-16 2012-09-25 Bodymedia, Inc. Integration of lifeotypes with devices and systems
US20120249741A1 (en) 2011-03-29 2012-10-04 Giuliano Maciocci Anchoring virtual images to real world surfaces in augmented reality systems
US20120315884A1 (en) 2011-06-08 2012-12-13 Qualcomm Incorporated Mobile device access of location specific images from a remote database
US20120329486A1 (en) 2011-06-21 2012-12-27 Cisco Technology, Inc. Delivering Wireless Information Associating to a Facility
US20130026224A1 (en) 2011-07-26 2013-01-31 ByteLight, Inc. Method and system for determining the position of a device in a light based positioning system using locally stored maps
US20130042296A1 (en) 2011-08-09 2013-02-14 Ryan L. Hastings Physical interaction with virtual objects for drm
US20130044130A1 (en) 2011-08-17 2013-02-21 Kevin A. Geisner Providing contextual personal information by a mixed reality device
US20130101163A1 (en) 2011-09-30 2013-04-25 Rajarshi Gupta Method and/or apparatus for location context identifier disambiguation
US20130116968A1 (en) 2010-05-19 2013-05-09 Nokia Corporation Extended fingerprint generation
US20130117377A1 (en) 2011-10-28 2013-05-09 Samuel A. Miller System and Method for Augmented and Virtual Reality
US20130132488A1 (en) 2011-11-21 2013-05-23 Andrew Garrod Bosworth Location Aware Sticky Notes
US20130132477A1 (en) 2011-11-21 2013-05-23 Andrew Garrod Bosworth Location Aware Shared Spaces
US20130129230A1 (en) 2011-11-18 2013-05-23 Microsoft Corporation Computing Pose and/or Shape of Modifiable Entities
US8452080B2 (en) 2007-05-22 2013-05-28 Metaio Gmbh Camera pose estimation apparatus and method for augmented reality imaging
US20130174213A1 (en) 2011-08-23 2013-07-04 James Liu Implicit sharing and privacy control through physical behaviors using sensor-rich devices
US20130176447A1 (en) 2012-01-11 2013-07-11 Panasonic Corporation Image processing apparatus, image capturing apparatus, and program
US20130182891A1 (en) 2012-01-17 2013-07-18 Curtis Ling Method and system for map generation for location and navigation with user sharing/social networking
US8521128B1 (en) 2011-12-09 2013-08-27 Google Inc. Method, system, and computer program product for obtaining crowd-sourced location information
US20130222369A1 (en) 2012-02-23 2013-08-29 Charles D. Huston System and Method for Creating an Environment and for Sharing a Location Based Experience in an Environment
US20130242106A1 (en) 2012-03-16 2013-09-19 Nokia Corporation Multicamera for crowdsourced video services with augmented reality guiding system
US20130286004A1 (en) 2012-04-27 2013-10-31 Daniel J. McCulloch Displaying a collision between real and virtual objects
US8620532B2 (en) 2009-03-25 2013-12-31 Waldeck Technology, Llc Passive crowd-sourced map updates and alternate route recommendations
US20140002444A1 (en) 2012-06-29 2014-01-02 Darren Bennett Configuring an interaction zone within an augmented reality environment
US20140125699A1 (en) 2012-11-06 2014-05-08 Ripple Inc Rendering a digital element
US20140189515A1 (en) 2012-07-12 2014-07-03 Spritz Technology Llc Methods and systems for displaying text using rsvp
US20140204077A1 (en) 2013-01-22 2014-07-24 Nicholas Kamuda Mixed reality experience sharing
US20140210710A1 (en) 2013-01-28 2014-07-31 Samsung Electronics Co., Ltd. Method for generating an augmented reality content and terminal using the same
US20140241614A1 (en) 2013-02-28 2014-08-28 Motorola Mobility Llc System for 2D/3D Spatial Feature Processing
US20140254934A1 (en) 2013-03-06 2014-09-11 Streamoid Technologies Private Limited Method and system for mobile visual search using metadata and segmentation
US8839121B2 (en) 2009-05-06 2014-09-16 Joseph Bertolami Systems and methods for unifying coordinate systems in augmented reality applications
US20140267234A1 (en) 2013-03-15 2014-09-18 Anselm Hook Generation and Sharing Coordinate System Between Users on Mobile
US20140276242A1 (en) 2013-03-14 2014-09-18 Healthward International, LLC Wearable body 3d sensor network system and method
US20140292645A1 (en) 2013-03-28 2014-10-02 Sony Corporation Display control device, display control method, and recording medium
US20140307798A1 (en) 2011-09-09 2014-10-16 Newsouth Innovations Pty Limited Method and apparatus for communicating and recovering motion information
US20140307793A1 (en) 2006-09-06 2014-10-16 Alexander MacInnis Systems and Methods for Faster Throughput for Compressed Video Data Decoding
US20140324517A1 (en) 2013-04-30 2014-10-30 Jpmorgan Chase Bank, N.A. Communication Data Analysis and Processing System and Method
US20140323148A1 (en) 2013-04-30 2014-10-30 Qualcomm Incorporated Wide area localization from slam maps
US20140357290A1 (en) 2013-05-31 2014-12-04 Michael Grabner Device localization using camera and wireless signal
US20140368532A1 (en) 2013-06-18 2014-12-18 Brian E. Keane Virtual object orientation and visualization
US8933931B2 (en) 2011-06-02 2015-01-13 Microsoft Corporation Distributed asynchronous localization and mapping for augmented reality
US20150046284A1 (en) 2013-08-12 2015-02-12 Airvirtise Method of Using an Augmented Reality Device
US20150123993A1 (en) 2011-12-09 2015-05-07 Sony Computer Entertainment Inc Image processing device and image processing method
US20150143459A1 (en) 2013-11-15 2015-05-21 Microsoft Corporation Protecting privacy in web-based immersive augmented reality
US20150183465A1 (en) 2013-12-27 2015-07-02 Hon Hai Precision Industry Co., Ltd. Vehicle assistance device and method
US20150204676A1 (en) 2012-08-15 2015-07-23 Google Inc. Crowd-sourcing indoor locations
US20150208072A1 (en) * 2014-01-22 2015-07-23 Nvidia Corporation Adaptive video compression based on motion
US20150206350A1 (en) 2012-09-04 2015-07-23 Laurent Gardes Augmented reality for video system
US20150228114A1 (en) 2014-02-13 2015-08-13 Microsoft Corporation Contour completion for augmenting surface reconstructions
US20150234462A1 (en) 2013-03-11 2015-08-20 Magic Leap, Inc. Interacting with a network to transmit virtual image data in augmented or virtual reality systems
US9124635B2 (en) 2012-11-30 2015-09-01 Intel Corporation Verified sensor data processing
US20150296170A1 (en) 2014-04-11 2015-10-15 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
US20150309263A2 (en) 2012-06-11 2015-10-29 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US20150317518A1 (en) 2014-05-01 2015-11-05 Seiko Epson Corporation Head-mount type display device, control system, method of controlling head-mount type display device, and computer program
US20150332439A1 (en) 2014-05-13 2015-11-19 Xiaomi Inc. Methods and devices for hiding privacy information
US20150348511A1 (en) 2014-05-30 2015-12-03 Apple Inc. Dynamic Display Refresh Rate Based On Device Motion
US20160026253A1 (en) 2014-03-11 2016-01-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9256987B2 (en) 2013-06-24 2016-02-09 Microsoft Technology Licensing, Llc Tracking head movement when wearing mobile device
US20160049008A1 (en) 2014-08-12 2016-02-18 Osterhout Group, Inc. Content presentation in head worn computing
US20160080642A1 (en) 2014-09-12 2016-03-17 Microsoft Technology Licensing, Llc Video capture with privacy safeguard
US20160098862A1 (en) 2014-10-07 2016-04-07 Microsoft Technology Licensing, Llc Driving a projector to generate a shared spatial augmented reality experience
US20160110560A1 (en) 2012-12-07 2016-04-21 At&T Intellectual Property I, L.P. Augmented reality based privacy and decryption
US20160119536A1 (en) 2014-10-28 2016-04-28 Google Inc. Systems and methods for autonomously generating photo summaries
US20160147064A1 (en) 2014-11-26 2016-05-26 Osterhout Group, Inc. See-through computer display systems
US20160180590A1 (en) 2014-12-23 2016-06-23 Intel Corporation Systems and methods for contextually augmented video creation and sharing
US20160189419A1 (en) 2013-08-09 2016-06-30 Sweep3D Corporation Systems and methods for generating data indicative of a three-dimensional representation of a scene
US20160217623A1 (en) 2013-09-30 2016-07-28 Pcms Holdings, Inc. Methods, apparatus, systems, devices, and computer program products for providing an augmented reality display and/or user interface
US20160260260A1 (en) 2014-10-24 2016-09-08 Usens, Inc. System and method for immersive and interactive multimedia generation
US20160282619A1 (en) * 2013-11-11 2016-09-29 Sony Interactive Entertainment Inc. Image generation apparatus and image generation method
US20160335275A1 (en) 2015-05-11 2016-11-17 Google Inc. Privacy-sensitive query for localization area description file
US20160335802A1 (en) 2015-05-14 2016-11-17 Magic Leap, Inc. Privacy-sensitive consumer cameras coupled to augmented reality systems
US20160337599A1 (en) 2015-05-11 2016-11-17 Google Inc. Privacy filtering of area description file prior to upload
US20160335497A1 (en) 2015-05-11 2016-11-17 Google Inc. Crowd-sourced creation and updating of area description file for mobile device localization
US20160358485A1 (en) 2015-06-07 2016-12-08 Apple Inc. Collision Avoidance Of Arbitrary Polygonal Obstacles
US20160360970A1 (en) 2015-06-14 2016-12-15 Facense Ltd. Wearable device for taking thermal and visual measurements from fixed relative positions
US20170021273A1 (en) 2015-07-23 2017-01-26 At&T Intellectual Property I, L.P. Coordinating multiple virtual environments
US9595083B1 (en) 2013-04-16 2017-03-14 Lockheed Martin Corporation Method and apparatus for image producing with predictions of future positions
US20170123750A1 (en) 2015-10-28 2017-05-04 Paypal, Inc. Private virtual object handling
US20170201740A1 (en) 2016-01-11 2017-07-13 Microsoft Technology Licensing, Llc Distributing video among multiple display zones
US9754419B2 (en) 2014-11-16 2017-09-05 Eonite Perception Inc. Systems and methods for augmented reality preparation, processing, and application
US20170276780A1 (en) 2016-03-22 2017-09-28 Mitsubishi Electric Corporation Moving body recognition system
US20170294044A1 (en) 2016-04-06 2017-10-12 Tmrwland Hongkong Limited Shared experience of virtual environments
US20170293146A1 (en) 2016-04-07 2017-10-12 Oculus Vr, Llc Accommodation based optical correction
US20170374343A1 (en) 2016-06-22 2017-12-28 Microsoft Technology Licensing, Llc Velocity and depth aware reprojection
US20180004285A1 (en) 2016-06-30 2018-01-04 Sony Interactive Entertainment Inc. Apparatus and method for gaze tracking
US20180008141A1 (en) * 2014-07-08 2018-01-11 Krueger Wesley W O Systems and methods for using virtual reality, augmented reality, and/or a synthetic 3-dimensional information for the measurement of human ocular performance
US20180047332A1 (en) 2016-08-12 2018-02-15 Intel Corporation Optimized Display Image Rendering
US9916002B2 (en) 2014-11-16 2018-03-13 Eonite Perception Inc. Social applications for augmented reality technologies
US20180075654A1 (en) 2016-09-12 2018-03-15 Intel Corporation Hybrid rendering for a wearable display attached to a tethered computer
US20180165888A1 (en) 2016-12-13 2018-06-14 Alibaba Group Holding Limited Allocating virtual objects based on augmented reality
US20190139311A1 (en) 2014-11-16 2019-05-09 Intel Corporation Optimizing head mounted displays for augmented reality
US20190172410A1 (en) 2016-04-21 2019-06-06 Sony Interactive Entertainment Inc. Image processing device and image processing method
US20190179423A1 (en) 2016-06-16 2019-06-13 Sensomotoric Instruments Gesellschaft Fur Innovative Sensorik MBH Minimized Bandwidth Requirements for Transmitting Mobile HMD Gaze Data
US20190355050A1 (en) 2018-05-18 2019-11-21 Gift Card Impressions, LLC Augmented reality gifting on a mobile device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9063330B2 (en) 2013-05-30 2015-06-23 Oculus Vr, Llc Perception based predictive tracking for head mounted displays
GB2527503A (en) 2014-06-17 2015-12-30 Next Logic Pty Ltd Generating a sequence of stereoscopic images for a head-mounted display
US10089063B2 (en) * 2016-08-10 2018-10-02 Qualcomm Incorporated Multimedia device for processing spatialized audio based on movement
US11542870B1 (en) 2021-11-24 2023-01-03 General Electric Company Gas supply system

Patent Citations (140)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5537128A (en) 1993-08-04 1996-07-16 Cirrus Logic, Inc. Shared memory for split-panel LCD display systems
US5832212A (en) 1996-04-19 1998-11-03 International Business Machines Corporation Censoring browser method and apparatus for internet viewing
US20020013675A1 (en) 1998-11-12 2002-01-31 Alois Knoll Method and device for the improvement of the pose accuracy of effectors on mechanisms and for the measurement of objects in a workspace
US20040107356A1 (en) 1999-03-16 2004-06-03 Intertrust Technologies Corp. Methods and apparatus for persistent control and protection of content
US6922701B1 (en) 2000-08-03 2005-07-26 John A. Ananian Generating cad independent interactive physical description remodeling, building construction plan database profile
US7313825B2 (en) 2000-11-13 2007-12-25 Digital Doors, Inc. Data security system and method for portable device
US20050132070A1 (en) 2000-11-13 2005-06-16 Redlich Ron M. Data security system and method with editor
US7546334B2 (en) 2000-11-13 2009-06-09 Digital Doors, Inc. Data security system and method with adaptive filter
US20070035562A1 (en) 2002-09-25 2007-02-15 Azuma Ronald T Method and apparatus for image enhancement
US7002551B2 (en) 2002-09-25 2006-02-21 Hrl Laboratories, Llc Optical see-through augmented reality modified-scale display
US7583275B2 (en) 2002-10-15 2009-09-01 University Of Southern California Modeling and video projection for augmented virtual environments
US20060238380A1 (en) 2005-04-21 2006-10-26 Microsoft Corporation Maintaining user privacy in a virtual earth environment
US20060256110A1 (en) 2005-05-11 2006-11-16 Yasuhiro Okuno Virtual reality presentation apparatus, virtual reality presentation method, program, image processing method, image processing apparatus, information processing method, and information processing apparatus
US20100164990A1 (en) 2005-08-15 2010-07-01 Koninklijke Philips Electronics, N.V. System, apparatus, and method for augmented reality glasses for end-user programming
US20140307793A1 (en) 2006-09-06 2014-10-16 Alexander MacInnis Systems and Methods for Faster Throughput for Compressed Video Data Decoding
US20100060632A1 (en) 2007-01-05 2010-03-11 Total Immersion Method and devices for the real time embeding of virtual objects in an image stream using data from a real scene represented by said images
US20080172201A1 (en) 2007-01-17 2008-07-17 Canon Kabushiki Kaisha Information processing apparatus and method
US20080195956A1 (en) 2007-01-25 2008-08-14 Samuel Pierce Baron Virtual social interactions
US8275635B2 (en) 2007-02-16 2012-09-25 Bodymedia, Inc. Integration of lifeotypes with devices and systems
US8452080B2 (en) 2007-05-22 2013-05-28 Metaio Gmbh Camera pose estimation apparatus and method for augmented reality imaging
US20110046925A1 (en) 2007-06-15 2011-02-24 Commissariat A L'energie Atomique Process for Calibrating the Position of a Multiply Articulated System Such as a Robot
US20090047972A1 (en) 2007-08-14 2009-02-19 Chawla Neeraj Location based presence and privacy management
US7382244B1 (en) 2007-10-04 2008-06-03 Kd Secure Video surveillance, storage, and alerting system having network management, hierarchical data storage, video tip processing, and vehicle plate analysis
US20090104585A1 (en) 2007-10-19 2009-04-23 Denis John Diangelo Dental framework
US20090104686A1 (en) 2007-10-21 2009-04-23 King Car Food Industrial Co., Ltd. Apparatus for Thin-Layer Cell Smear Preparation and In-situ Hybridization
US20090278917A1 (en) 2008-01-18 2009-11-12 Lockheed Martin Corporation Providing A Collaborative Immersive Environment Using A Spherical Camera and Motion Capture
US20100103196A1 (en) 2008-10-27 2010-04-29 Rakesh Kumar System and method for generating a mixed reality environment
US20100166294A1 (en) 2008-12-29 2010-07-01 Cognex Corporation System and method for three-dimensional alignment of objects using machine vision
US20100182340A1 (en) 2009-01-19 2010-07-22 Bachelder Edward N Systems and methods for combining virtual and real-time physical environments
US8620532B2 (en) 2009-03-25 2013-12-31 Waldeck Technology, Llc Passive crowd-sourced map updates and alternate route recommendations
US8839121B2 (en) 2009-05-06 2014-09-16 Joseph Bertolami Systems and methods for unifying coordinate systems in augmented reality applications
US20100306825A1 (en) 2009-05-27 2010-12-02 Lucid Ventures, Inc. System and method for facilitating user interaction with a simulated object associated with a physical location
US20110102460A1 (en) 2009-11-04 2011-05-05 Parker Jordan Platform for widespread augmented reality and 3d mapping
US20110199479A1 (en) 2010-02-12 2011-08-18 Apple Inc. Augmented reality maps
US20110221771A1 (en) 2010-03-12 2011-09-15 Cramer Donald M Merging of Grouped Markers in An Augmented Reality-Enabled Distribution Network
US9304970B2 (en) 2010-05-19 2016-04-05 Nokia Technologies Oy Extended fingerprint generation
US20130116968A1 (en) 2010-05-19 2013-05-09 Nokia Corporation Extended fingerprint generation
US20110286631A1 (en) 2010-05-21 2011-11-24 Qualcomm Incorporated Real time tracking/detection of multiple targets
US20110313779A1 (en) 2010-06-17 2011-12-22 Microsoft Corporation Augmentation and correction of location based data through user feedback
US20120105475A1 (en) 2010-11-02 2012-05-03 Google Inc. Range of Focus in an Augmented Reality Application
US20120197439A1 (en) 2011-01-28 2012-08-02 Intouch Health Interfacing with a mobile telepresence robot
US20120249741A1 (en) 2011-03-29 2012-10-04 Giuliano Maciocci Anchoring virtual images to real world surfaces in augmented reality systems
US8933931B2 (en) 2011-06-02 2015-01-13 Microsoft Corporation Distributed asynchronous localization and mapping for augmented reality
US20120315884A1 (en) 2011-06-08 2012-12-13 Qualcomm Incorporated Mobile device access of location specific images from a remote database
US20120329486A1 (en) 2011-06-21 2012-12-27 Cisco Technology, Inc. Delivering Wireless Information Associating to a Facility
US20130026224A1 (en) 2011-07-26 2013-01-31 ByteLight, Inc. Method and system for determining the position of a device in a light based positioning system using locally stored maps
US20130042296A1 (en) 2011-08-09 2013-02-14 Ryan L. Hastings Physical interaction with virtual objects for drm
US20130044130A1 (en) 2011-08-17 2013-02-21 Kevin A. Geisner Providing contextual personal information by a mixed reality device
US20130174213A1 (en) 2011-08-23 2013-07-04 James Liu Implicit sharing and privacy control through physical behaviors using sensor-rich devices
US20140307798A1 (en) 2011-09-09 2014-10-16 Newsouth Innovations Pty Limited Method and apparatus for communicating and recovering motion information
US20130101163A1 (en) 2011-09-30 2013-04-25 Rajarshi Gupta Method and/or apparatus for location context identifier disambiguation
US20130117377A1 (en) 2011-10-28 2013-05-09 Samuel A. Miller System and Method for Augmented and Virtual Reality
US20130129230A1 (en) 2011-11-18 2013-05-23 Microsoft Corporation Computing Pose and/or Shape of Modifiable Entities
US20130132477A1 (en) 2011-11-21 2013-05-23 Andrew Garrod Bosworth Location Aware Shared Spaces
US20130132488A1 (en) 2011-11-21 2013-05-23 Andrew Garrod Bosworth Location Aware Sticky Notes
US8521128B1 (en) 2011-12-09 2013-08-27 Google Inc. Method, system, and computer program product for obtaining crowd-sourced location information
US20150123993A1 (en) 2011-12-09 2015-05-07 Sony Computer Entertainment Inc Image processing device and image processing method
US20130176447A1 (en) 2012-01-11 2013-07-11 Panasonic Corporation Image processing apparatus, image capturing apparatus, and program
US20130182891A1 (en) 2012-01-17 2013-07-18 Curtis Ling Method and system for map generation for location and navigation with user sharing/social networking
US20150287246A1 (en) 2012-02-23 2015-10-08 Charles D. Huston System, Method, and Device Including a Depth Camera for Creating a Location Based Experience
US20130222369A1 (en) 2012-02-23 2013-08-29 Charles D. Huston System and Method for Creating an Environment and for Sharing a Location Based Experience in an Environment
US20130242106A1 (en) 2012-03-16 2013-09-19 Nokia Corporation Multicamera for crowdsourced video services with augmented reality guiding system
US20130286004A1 (en) 2012-04-27 2013-10-31 Daniel J. McCulloch Displaying a collision between real and virtual objects
US20150309263A2 (en) 2012-06-11 2015-10-29 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US20140002444A1 (en) 2012-06-29 2014-01-02 Darren Bennett Configuring an interaction zone within an augmented reality environment
US20140189515A1 (en) 2012-07-12 2014-07-03 Spritz Technology Llc Methods and systems for displaying text using rsvp
US20150204676A1 (en) 2012-08-15 2015-07-23 Google Inc. Crowd-sourcing indoor locations
US20150206350A1 (en) 2012-09-04 2015-07-23 Laurent Gardes Augmented reality for video system
US20140125699A1 (en) 2012-11-06 2014-05-08 Ripple Inc Rendering a digital element
US9124635B2 (en) 2012-11-30 2015-09-01 Intel Corporation Verified sensor data processing
US20160110560A1 (en) 2012-12-07 2016-04-21 At&T Intellectual Property I, L.P. Augmented reality based privacy and decryption
US20140204077A1 (en) 2013-01-22 2014-07-24 Nicholas Kamuda Mixed reality experience sharing
US20140210710A1 (en) 2013-01-28 2014-07-31 Samsung Electronics Co., Ltd. Method for generating an augmented reality content and terminal using the same
US20140241614A1 (en) 2013-02-28 2014-08-28 Motorola Mobility Llc System for 2D/3D Spatial Feature Processing
US20140254934A1 (en) 2013-03-06 2014-09-11 Streamoid Technologies Private Limited Method and system for mobile visual search using metadata and segmentation
US20150234462A1 (en) 2013-03-11 2015-08-20 Magic Leap, Inc. Interacting with a network to transmit virtual image data in augmented or virtual reality systems
US20140276242A1 (en) 2013-03-14 2014-09-18 Healthward International, LLC Wearable body 3d sensor network system and method
US20140267234A1 (en) 2013-03-15 2014-09-18 Anselm Hook Generation and Sharing Coordinate System Between Users on Mobile
US20140292645A1 (en) 2013-03-28 2014-10-02 Sony Corporation Display control device, display control method, and recording medium
US9595083B1 (en) 2013-04-16 2017-03-14 Lockheed Martin Corporation Method and apparatus for image producing with predictions of future positions
US20140323148A1 (en) 2013-04-30 2014-10-30 Qualcomm Incorporated Wide area localization from slam maps
US20140324517A1 (en) 2013-04-30 2014-10-30 Jpmorgan Chase Bank, N.A. Communication Data Analysis and Processing System and Method
US20140357290A1 (en) 2013-05-31 2014-12-04 Michael Grabner Device localization using camera and wireless signal
US20140368532A1 (en) 2013-06-18 2014-12-18 Brian E. Keane Virtual object orientation and visualization
US9256987B2 (en) 2013-06-24 2016-02-09 Microsoft Technology Licensing, Llc Tracking head movement when wearing mobile device
US20160189419A1 (en) 2013-08-09 2016-06-30 Sweep3D Corporation Systems and methods for generating data indicative of a three-dimensional representation of a scene
US20150046284A1 (en) 2013-08-12 2015-02-12 Airvirtise Method of Using an Augmented Reality Device
US20160217623A1 (en) 2013-09-30 2016-07-28 Pcms Holdings, Inc. Methods, apparatus, systems, devices, and computer program products for providing an augmented reality display and/or user interface
US20160282619A1 (en) * 2013-11-11 2016-09-29 Sony Interactive Entertainment Inc. Image generation apparatus and image generation method
US20150143459A1 (en) 2013-11-15 2015-05-21 Microsoft Corporation Protecting privacy in web-based immersive augmented reality
US20150183465A1 (en) 2013-12-27 2015-07-02 Hon Hai Precision Industry Co., Ltd. Vehicle assistance device and method
US20150208072A1 (en) * 2014-01-22 2015-07-23 Nvidia Corporation Adaptive video compression based on motion
US20150228114A1 (en) 2014-02-13 2015-08-13 Microsoft Corporation Contour completion for augmenting surface reconstructions
US20160026253A1 (en) 2014-03-11 2016-01-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US20150296170A1 (en) 2014-04-11 2015-10-15 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
US20150317518A1 (en) 2014-05-01 2015-11-05 Seiko Epson Corporation Head-mount type display device, control system, method of controlling head-mount type display device, and computer program
US20150332439A1 (en) 2014-05-13 2015-11-19 Xiaomi Inc. Methods and devices for hiding privacy information
US20150348511A1 (en) 2014-05-30 2015-12-03 Apple Inc. Dynamic Display Refresh Rate Based On Device Motion
US20180008141A1 (en) * 2014-07-08 2018-01-11 Krueger Wesley W O Systems and methods for using virtual reality, augmented reality, and/or a synthetic 3-dimensional information for the measurement of human ocular performance
US20160049008A1 (en) 2014-08-12 2016-02-18 Osterhout Group, Inc. Content presentation in head worn computing
US20160080642A1 (en) 2014-09-12 2016-03-17 Microsoft Technology Licensing, Llc Video capture with privacy safeguard
US20160098862A1 (en) 2014-10-07 2016-04-07 Microsoft Technology Licensing, Llc Driving a projector to generate a shared spatial augmented reality experience
US20160260260A1 (en) 2014-10-24 2016-09-08 Usens, Inc. System and method for immersive and interactive multimedia generation
US20160119536A1 (en) 2014-10-28 2016-04-28 Google Inc. Systems and methods for autonomously generating photo summaries
US9916002B2 (en) 2014-11-16 2018-03-13 Eonite Perception Inc. Social applications for augmented reality technologies
US20180373320A1 (en) 2014-11-16 2018-12-27 Eonite Perception Inc. Social applications for augmented reality technologies
US9754419B2 (en) 2014-11-16 2017-09-05 Eonite Perception Inc. Systems and methods for augmented reality preparation, processing, and application
US20190139311A1 (en) 2014-11-16 2019-05-09 Intel Corporation Optimizing head mounted displays for augmented reality
US20210125415A1 (en) 2014-11-16 2021-04-29 Intel Corporation Optimizing head mounted displays for augmented reality
US20210350630A1 (en) 2014-11-16 2021-11-11 Intel Corporation Optimizing head mounted displays for augmented reality
US20160147064A1 (en) 2014-11-26 2016-05-26 Osterhout Group, Inc. See-through computer display systems
US20160180590A1 (en) 2014-12-23 2016-06-23 Intel Corporation Systems and methods for contextually augmented video creation and sharing
US20160335497A1 (en) 2015-05-11 2016-11-17 Google Inc. Crowd-sourced creation and updating of area description file for mobile device localization
US20160337599A1 (en) 2015-05-11 2016-11-17 Google Inc. Privacy filtering of area description file prior to upload
US20160335275A1 (en) 2015-05-11 2016-11-17 Google Inc. Privacy-sensitive query for localization area description file
US20160335802A1 (en) 2015-05-14 2016-11-17 Magic Leap, Inc. Privacy-sensitive consumer cameras coupled to augmented reality systems
US20160358485A1 (en) 2015-06-07 2016-12-08 Apple Inc. Collision Avoidance Of Arbitrary Polygonal Obstacles
US20160360970A1 (en) 2015-06-14 2016-12-15 Facense Ltd. Wearable device for taking thermal and visual measurements from fixed relative positions
US20170021273A1 (en) 2015-07-23 2017-01-26 At&T Intellectual Property I, L.P. Coordinating multiple virtual environments
US20170123750A1 (en) 2015-10-28 2017-05-04 Paypal, Inc. Private virtual object handling
US20170201740A1 (en) 2016-01-11 2017-07-13 Microsoft Technology Licensing, Llc Distributing video among multiple display zones
US20170276780A1 (en) 2016-03-22 2017-09-28 Mitsubishi Electric Corporation Moving body recognition system
US20170294044A1 (en) 2016-04-06 2017-10-12 Tmrwland Hongkong Limited Shared experience of virtual environments
US20170293146A1 (en) 2016-04-07 2017-10-12 Oculus Vr, Llc Accommodation based optical correction
US20190172410A1 (en) 2016-04-21 2019-06-06 Sony Interactive Entertainment Inc. Image processing device and image processing method
US20190179423A1 (en) 2016-06-16 2019-06-13 Sensomotoric Instruments Gesellschaft Fur Innovative Sensorik MBH Minimized Bandwidth Requirements for Transmitting Mobile HMD Gaze Data
US20170374343A1 (en) 2016-06-22 2017-12-28 Microsoft Technology Licensing, Llc Velocity and depth aware reprojection
US20180004285A1 (en) 2016-06-30 2018-01-04 Sony Interactive Entertainment Inc. Apparatus and method for gaze tracking
US20210118357A1 (en) 2016-08-12 2021-04-22 Intel Corporation Optimized Display Image Rendering
US11017712B2 (en) 2016-08-12 2021-05-25 Intel Corporation Optimized display image rendering
US20180047332A1 (en) 2016-08-12 2018-02-15 Intel Corporation Optimized Display Image Rendering
US11210993B2 (en) 2016-08-12 2021-12-28 Intel Corporation Optimized display image rendering
US20220122516A1 (en) 2016-08-12 2022-04-21 Intel Corporation Optimized Display Image Rendering
US20180218543A1 (en) 2016-09-12 2018-08-02 Intel Corporation Hybrid rendering for a wearable display attached to a tethered computer
US20200160609A1 (en) 2016-09-12 2020-05-21 Intel Corporation Hybrid rendering for a wearable display attached to a tethered computer
US9928660B1 (en) 2016-09-12 2018-03-27 Intel Corporation Hybrid rendering for a wearable display attached to a tethered computer
US20180075654A1 (en) 2016-09-12 2018-03-15 Intel Corporation Hybrid rendering for a wearable display attached to a tethered computer
US11244512B2 (en) 2016-09-12 2022-02-08 Intel Corporation Hybrid rendering for a wearable display attached to a tethered computer
US20180165888A1 (en) 2016-12-13 2018-06-14 Alibaba Group Holding Limited Allocating virtual objects based on augmented reality
US20190355050A1 (en) 2018-05-18 2019-11-21 Gift Card Impressions, LLC Augmented reality gifting on a mobile device

Non-Patent Citations (46)

* Cited by examiner, † Cited by third party
Title
Curless et al., "A Volumetric Method for Building Complex Models From Range Images," in proceedings of the 23rd annual conference on Computer Graphics and Interactive Techniques, pp. 303-312, ACM, 1996, 10 pages.
International Searching Authority, "International Search Report & Written Opinion," mailed in connection with International Patent Application No. PCT/US2015/60744, dated Feb. 2, 2016, 8 pages.
Lorensen et al., "Marching Cubes: A High Resolution 3D Surface Construction Algorithm," ACM Siggraph Computer Graphics, vol. 21, pp. 163-169, 1987, 7 pages.
Petrovskaya, "Towards Dependable Robotic Perception," PhD Thesis, Jun. 2011, retrieved from (http://cs.stanford.edu/people/petrovsk/dn/publications/anya-thesis.pdf) 226 pages.
U.S. Appl. No. 10/043,319, filed Aug. 7, 2018, Petrovskaya et al.
U.S. Appl. No. 10/504,291, filed Dec. 10, 2019, Petrovskaya et al.
U.S. Appl. No. 10/573,079, filed Feb. 25, 2020, Vembar et al.
U.S. Appl. No. 10/832,488, filed Nov. 10, 2020, Petrovskaya et al.
United States Patent and Trademark Office, "Advisory Action," mailed in connection with U.S. Appl. No. 16/749,501, dated Aug. 26, 2021, 2 pages.
United States Patent and Trademark Office, "Corrected Notice of Allowability," issued in connection with U.S. Appl. No. 17/133,265, dated Oct. 18, 2021, 2 pages.
United States Patent and Trademark Office, "Final Office Action," issued in connection with U.S. Appl. No. 15/054,082, dated May 4, 2017, 12 pages.
United States Patent and Trademark Office, "Final Office Action," mailed in connection U.S. Appl. No. 16/749,501, dated May 11, 2021, 23 pages.
United States Patent and Trademark Office, "Final Office Action," mailed in connection with U.S. Appl. No. 15/675,653, dated Jan. 10, 2019, 12 pages.
United States Patent and Trademark Office, "Final Office Action," mailed in connection with U.S. Appl. No. 15/879,717, dated Mar. 15, 2022, 19 pages.
United States Patent and Trademark Office, "Final Office Action," mailed in connection with U.S. Appl. No. 15/879,717, dated Oct. 19, 2020, 13 pages.
United States Patent and Trademark Office, "Final Office Action," mailed in connection with U.S. Appl. No. 15/879,717, dated Sep. 16, 2019, 12 pages.
United States Patent and Trademark Office, "Final Office Action," mailed in connection with U.S. Appl. No. 15/937,649, dated Aug. 16, 2019, 11 pages.
United States Patent and Trademark Office, "Final Office Action," mailed in connection with U.S. Appl. No. 16/034,275, dated Jun. 11, 2019, 19 pages.
United States Patent and Trademark Office, "Final Office Action," mailed in connection with U.S. Appl. No. 17/094,138, dated Jan. 31, 2022, 6 pages.
United States Patent and Trademark Office, "Non Final Office Action," issued in connection with U.S. Appl. No. 17/094,138, dated Aug. 11, 2021, 9 pages.
United States Patent and Trademark Office, "Non-final Office Action," issued in connection with U.S. Appl. No. 15/054,082, dated Dec. 9, 2016, 9 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 15/406,652, dated Oct. 6, 2017, 22 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 15/675,653, dated Aug. 10, 2020, 14 pages.
United States Patent and Trademark Office, "Non-Final Office Action," mailed in connection with U.S. Appl. No. 15/675,653, dated Sep. 17, 2018, 10 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 15/879,717, dated Apr. 24, 2020, 12 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 15/879,717, dated Aug. 3, 2021, 12 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 15/879,717, dated Feb. 21, 2019, 11 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 15/937,649, dated Apr. 4, 2019, 14 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 16/034,275, dated Mar. 8, 2019, 4 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 16/051,099, dated Jan. 25, 2019, 4 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 16/706,108, dated Mar. 10, 2020, 8 pages.
United States Patent and Trademark Office, "Non-Final Office Action," mailed in connection with U.S. Appl. No. 16/749,501, dated Dec. 10, 2020, 23 pages.
United States Patent and Trademark Office, "Non-final Office Action," mailed in connection with U.S. Appl. No. 16/749,501, dated Dec. 10, 2020, 24 pages.
United States Patent and Trademark Office, "Notice of Allowability," issued in connection with U.S. Appl. No. 15/054,082, dated Dec. 29, 2017, 5 pages.
United States Patent and Trademark Office, "Notice of Allowability," issued in connection with U.S. Appl. No. 17/133,265, dated Sep. 9, 2021, 2 pages.
United States Patent and Trademark Office, "Notice of Allowability," mailed in connection with U.S. Appl. No. 16/051,099, dated Sep. 30, 2019, 8 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 15/262,369, dated Nov. 13, 2017, 7 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 15/406,652, dated Apr. 27, 2018, 10 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 15/675,653, dated Feb. 18, 2021, 7 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 15/937,649, dated Oct. 17, 2019, 7 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 16/051,099, dated Aug. 7, 2019, 11 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 16/706,108, dated Jul. 8, 2020, 9 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 16/749,501, dated Oct. 1, 2021, 12 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 17/094,138, dated May 9, 2022, 9 pages.
United States Patent and Trademark Office, "Notice of Allowance and Fee(s) Due," mailed in connection with U.S. Appl. No. 17/133,265, dated Aug. 30, 2021, 11 pages.
United States Patent and Trademark Office, "Notice of Allowance," issued in connection with U.S. Appl. No. 15/054,082, dated Jan. 19, 2018, 8 pages.

Also Published As

Publication number Publication date
US20230410720A1 (en) 2023-12-21
US20180047332A1 (en) 2018-02-15
US11721275B2 (en) 2023-08-08
US11017712B2 (en) 2021-05-25
US20220122516A1 (en) 2022-04-21
US11210993B2 (en) 2021-12-28
US20210118357A1 (en) 2021-04-22
US20230110339A1 (en) 2023-04-13

Similar Documents

Publication Publication Date Title
US11514839B2 (en) Optimized display image rendering
US10732707B2 (en) Perception based predictive tracking for head mounted displays
US10311833B1 (en) Head-mounted display device and method of operating a display apparatus tracking an object
EP3368965B1 (en) Remote rendering for virtual images
EP3491489B1 (en) Systems and methods for reducing motion-to-photon latency and memory bandwidth in a virtual reality system
CN113811920A (en) Distributed pose estimation
CN112020858A (en) Asynchronous temporal and spatial warping with determination of regions of interest
CN109743626B (en) Image display method, image processing method and related equipment
WO2017169081A1 (en) Information processing device, information processing method, and program
US11533468B2 (en) System and method for generating a mixed reality experience
US10395418B2 (en) Techniques for predictive prioritization of image portions in processing graphics
US11375244B2 (en) Dynamic video encoding and view adaptation in wireless computing environments
WO2020003860A1 (en) Information processing device, information processing method, and program
US11924391B2 (en) Immersive video streaming using view-adaptive prefetching and buffer control
CN111066081B (en) Techniques for compensating for variable display device latency in virtual reality image display
US11681358B1 (en) Eye image stabilized augmented reality displays
KR20180061956A (en) Method and apparatus for estimating eye location
US20240029363A1 (en) Late stage occlusion based rendering for extended reality (xr)

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE