WO2022226224A1 - Immersive viewing experience - Google Patents

Immersive viewing experience

Info

Publication number
WO2022226224A1
WO2022226224A1 (PCT/US2022/025818)
Authority
WO
WIPO (PCT)
Prior art keywords
user
imagery
specific display
illustrates
image
Prior art date
Application number
PCT/US2022/025818
Other languages
French (fr)
Inventor
Robert Edwin DOUGLAS
David Byron DOUGLAS
Kathleen Mary DOUGLAS
Original Assignee
Douglas Robert Edwin
Douglas David Byron
Douglas Kathleen Mary
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/237,152 (US11589033B1)
Application filed by Douglas Robert Edwin, Douglas David Byron, Douglas Kathleen Mary
Priority to JP2023558524A (JP2024518243A)
Priority to EP22792523.7A (EP4327552A1)
Priority to CN202280030471.XA (CN117321987A)
Publication of WO2022226224A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Definitions

  • Movies are a form of entertainment.
  • This patent discloses a system, a method, an apparatus and software to achieve an improved immersive viewing experience.
  • First, upload a user's viewing parameter to a cloud, wherein said cloud stores imagery (which in the preferred embodiments comprises extremely large datasets).
  • Viewing parameters can include any action, gesture, body position, eye look angle, eye convergence/vergence or input (e.g., via a graphical user interface).
  • Thus, in near real time, the user's viewing parameters are characterized (e.g., by a variety of devices, such as eye-facing cameras or cameras to record gestures) and sent to the cloud.
  • Second, a set of user-specific imagery is optimized from said imagery, wherein said user-specific imagery is based on at least said viewing parameter.
  • In the preferred embodiment, the field of view of the user-specific imagery is smaller than that of the imagery.
  • In the preferred embodiment, the location where a user is looking would have high resolution and the location where the user is not looking would have low resolution. For example, if a user is looking at an object on the left, then the user-specific imagery would be high resolution on the left side. In some embodiments, the user-specific imagery would be streamed in near-real time.
  • the user-specific imagery comprises a first portion with a first spatial resolution and a second portion with a second spatial resolution, and wherein said first spatial resolution is higher than said second spatial resolution.
  • said viewing parameter comprises a viewing location and wherein said viewing location corresponds to said first portion.
  • user-specific imagery comprises a first portion with a first zoom setting and a second portion with a second zoom setting, and wherein said first zoom setting is higher than said second zoom setting.
  • a first portion is determined by said viewing parameter wherein said viewing parameter comprises at least one of the group consisting of: a position of said user's body; an orientation of said user's body; a gesture of said user's hand; a facial expression of said user; a position of said user's head; and an orientation of said user's head.
  • a first portion is determined by a graphical user interface, such as a mouse or controller.
  • Some embodiments comprise wherein the imagery comprises a first field of view (FOV) and wherein said user-specific imagery comprises a second field of view, and wherein said first FOV is larger than said second FOV.
  • imagery comprises stereoscopic imagery and wherein said stereoscopic imagery is obtained via stereoscopic cameras or stereoscopic camera clusters.
  • imagery comprises stitched imagery wherein said stitched imagery is generated by at least two cameras.
  • said imagery comprises composite imagery, wherein said composite imagery is generated by: taking a first image of a scene with a first set of camera settings, wherein said first set of camera settings causes a first object to be in focus and a second object to be out of focus; and taking a second image of the scene with a second set of camera settings, wherein said second set of camera settings causes said second object to be in focus and said first object to be out of focus.
  • Some embodiments comprise wherein when the user looks at said first object, said first image would be presented to said user, and when the user looks at said second object, said second image would be presented to said user.
  • Some embodiments comprise combining at least said first object from said first image and said second object from said second image into said composite image.
  • Some embodiments comprise wherein image stabilization is performed. Some embodiments comprise wherein said viewing parameter comprises convergence. Some embodiments comprise wherein user-specific imagery is 3D imagery wherein said 3D imagery is presented on a HDU, a set of anaglyph glasses or a set of polarized glasses.
  • Some embodiments comprise wherein said user-specific imagery is presented to said user on a display wherein said user has at least a 0.5π steradian field of view.
  • Some embodiments comprise wherein user-specific imagery is presented on a display.
  • the display is a screen (e.g., TV, reflective screen coupled with a projector system, an extended reality head display unit including an augmented reality display, a virtual reality display or a mixed reality display).
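  • The following is a minimal illustrative sketch (not part of the original disclosure) of the server-side step described above: a stored wide-FOV frame is cropped around an uploaded gaze location and kept at full resolution only near that location, with reduced resolution elsewhere. Function names such as build_user_specific_frame are hypothetical.

```python
# Illustrative sketch only: server-side selection of user-specific imagery from a
# large stored frame, driven by an uploaded viewing parameter (gaze location).
import numpy as np

def downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Crude downsample followed by nearest-neighbour upsample back to size."""
    small = img[::factor, ::factor]
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)[
        : img.shape[0], : img.shape[1]
    ]

def build_user_specific_frame(full_frame: np.ndarray,
                              gaze_rc: tuple[int, int],
                              out_shape: tuple[int, int],
                              lowres_factor: int = 4) -> np.ndarray:
    """Crop the stored wide-FOV frame around the gaze point and keep full
    resolution only near the gaze; elsewhere use a reduced resolution."""
    h, w = out_shape
    r0 = np.clip(gaze_rc[0] - h // 2, 0, full_frame.shape[0] - h)
    c0 = np.clip(gaze_rc[1] - w // 2, 0, full_frame.shape[1] - w)
    crop = full_frame[r0:r0 + h, c0:c0 + w].copy()

    degraded = downsample(crop, lowres_factor)           # low resolution everywhere
    fr, fc = h // 4, w // 4                               # central high-resolution window
    degraded[h//2-fr:h//2+fr, w//2-fc:w//2+fc] = crop[h//2-fr:h//2+fr, w//2-fc:w//2+fc]
    return degraded

# Example: a 4000x8000 stored panorama, user gazing left of centre.
panorama = np.random.randint(0, 255, (4000, 8000), dtype=np.uint8)
frame = build_user_specific_frame(panorama, gaze_rc=(2000, 1500), out_shape=(1080, 1920))
print(frame.shape)  # (1080, 1920): a smaller FOV than the stored imagery
```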
  • Figure 1 illustrates retrospective display of stereoscopic images.
  • Figure 2 illustrates methods to determine which stereo pair to display to a user for a given time point.
  • Figure 3 illustrates displaying a video recording on a HDU.
  • Figure 4 illustrates a pre-recorded stereo viewing performed by user 1.
  • Figure 5 illustrates performing long range stereoscopic imaging of a distant object using stereoscopic camera clusters.
  • Figure 6 illustrates a capability of post-acquisition adjustment of the images to bring them into the best possible picture based on user eye tracking by the generation of a stereoscopic composite image.
  • Figure 7A illustrates an image with motion and the application of image stabilization processing.
  • Figure 7B illustrates an image with motion displayed in a HDU.
  • Figure 7C illustrates an image stabilization applied to the image using stereoscopic imagery.
  • Figure 8A illustrates a left image and a right image with a first camera setting.
  • Figure 8B illustrates a left image and a right image with a second camera setting.
  • Figure 9A illustrates a top down view of all data gathered of a scene at a time point.
  • Figure 9B illustrates a displayed wide angle 2D image frame of the video recording.
  • Figure 9C illustrates a top down view of User A’s viewing angle of -70° and 55° FOV.
  • Figure 9D illustrates what User A would see given User A's viewing angle of -70° and 55° FOV.
  • Figure 9E illustrates a top down view of User B’s viewing angle of +50° and 85° FOV.
  • Figure 9F illustrates what User B would see given User B’s viewing angle of +50° and 85° FOV.
  • Figure 10A illustrates the field of view captured at a first time point by the left camera.
  • Figure 10B illustrates the field of view captured at a first time point by the right camera.
  • Figure 10C illustrates a first user's personalized field of view (FOV) at a given time point.
  • Figure 10D illustrates a second user’s personalized field of view (FOV) at a given time point.
  • Figure 10E illustrates a third user’s personalized field of view (FOV) at a given time point.
  • Figure 10F illustrates a fourth user’s personalized field of view (FOV) at a given time point.
  • Figure 11A illustrates a top down view of the first user's left eye view.
  • Figure 11B illustrates a top down view of the first user's left eye view wherein a convergence point is in close proximity to the left eye and right eye.
  • Figure 11C illustrates a left eye view at time point 1 without convergence.
  • Figure 11D illustrates a left eye view at time point 2 with convergence.
  • Figure 12 illustrates the reconstruction of various stereoscopic images from previously acquired wide angle stereo images.
  • Figure 13A illustrates a top down view of a home theater.
  • Figure 13B illustrates a side view of the home theater as shown in Figure 13A.
  • Figure 14A illustrates a top down view of a home theater.
  • Figure 14B illustrates a side view of the home theater as shown in Figure 14A.
  • Figure 15A illustrates a near-spherical TV with a user looking straight ahead at time point #1.
  • Figure 15B shows the portion of the TV and the field of view being observed by the user at time point #1.
  • Figure 15C illustrates a near-spherical TV with a user looking straight ahead at time point #2.
  • Figure 15D shows the portion of the TV and the field of view being observed by the user at time point #2.
  • Figure 15E illustrates a near-spherical TV with a user looking straight ahead at time point #3.
  • Figure 15F shows the portion of the TV and the field of view being observed by the user at time point #3.
  • Figure 16A illustrates an un-zoomed image.
  • Figure 16B illustrates a digital-type zooming in on a portion of an image.
  • Figure 17A illustrates an un-zoomed image.
  • Figure 17B illustrates the optical-type zooming in on a portion of an image.
  • Figure 18A illustrates a single resolution image.
  • Figure 18B illustrates a multi-resolution image.
  • Figure 19A illustrates a large field of view wherein a first user is looking at a first portion of the image and a second user is looking at a second portion of the image.
  • Figure 19B illustrates that only the first portion and the second portion of the image in Figure 19A are high resolution and the remainder of the image is lower resolution.
  • Figure 20A illustrates a low resolution image.
  • Figure 20B illustrates a high resolution image.
  • Figure 20C illustrates a composite image.
  • Figure 21 illustrates a method and a process for performing near-real-time streaming of customized images.
  • Figure 22A illustrates using resection in conjunction with stereoscopic cameras wherein a first camera location is unknown.
  • Figure 22B illustrates using resection in conjunction with stereoscopic cameras wherein an object location is unknown.
  • Figure 23A illustrates a top down view of a person looking forward to the center of the screen of the home theater.
  • Figure 23B illustrates a top down view of a person looking forward to the right side of the screen of the home theater.
  • Figure 24 illustrates a method, system and apparatus for optimizing stereoscopic camera settings during image acquisition during movement.
  • step A is to determine a location (e.g., an (αn, βn, rn) coordinate) where a viewer is looking at time point n.
  • This location could be a near, medium or far convergence point.
  • Note #2: A collection of stereoscopic imagery has been collected and recorded. Step A follows the collection process and takes place at some subsequent time period during viewing by a user.
  • 101 illustrates step B, which is to determine a FOVn corresponding to the location (e.g., the (αn, βn, rn) coordinate) for time point n. (Note: the user has the option to select the FOV.)
  • step C, which is to select camera(s) that correspond to the FOV for the left eye, with the option to perform additional image processing (e.g., use a composite image, use a vergence zone) to generate the personalized left eye image at time point n (PLEIn).
  • step D, which is to select camera(s) that correspond to the FOV for the right eye, with the option to perform additional image processing (e.g., use a composite image, use a vergence zone) to generate the personalized right eye image at time point n (PREIn).
  • step E, which is to display PLEIn on a left eye display of a HDU.
  • step F, which is to display PREIn on a right eye display of a HDU.
  • step G, which is to increment the time step to n+1 and go to step A, above (see the illustrative sketch following this list).
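  • A self-contained sketch of the per-time-point loop of steps A through G, with the eye tracker, camera archive and HDU mocked by simple stand-in functions; all names here are illustrative assumptions, not from the patent.

```python
# Hedged, self-contained sketch of the steps A-G loop with mocked hardware.
import random

def gaze_location(n):                      # Step A stand-in (eye tracker output)
    return (random.uniform(-90, 90), random.uniform(-40, 40), random.choice([1.0, 10.0, 1e6]))

def fov_for(gaze, width_deg=55):           # Step B: FOV_n centred on the gaze direction
    alpha, beta, _ = gaze
    return (alpha - width_deg / 2, alpha + width_deg / 2)

def personalized_image(eye, fov, gaze, n): # Steps C/D: build PLEI_n / PREI_n
    return f"{eye} image, t={n}, fov={fov[0]:.0f}..{fov[1]:.0f} deg, range={gaze[2]}"

for n in range(3):                         # Steps E-G: display, then increment n
    gaze = gaze_location(n)
    fov = fov_for(gaze)
    plei = personalized_image("left", fov, gaze, n)
    prei = personalized_image("right", fov, gaze, n)
    print("HDU left :", plei)
    print("HDU right:", prei)
```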
  • Figure 2 illustrates methods to determine which stereo pair to display to a user for a given time point.
  • 200 illustrates a text box of analyzing the user’s parameters to determine which stereoscopic image to display to the user.
  • First, use the viewing direction of a user's head. For example, if the user's head is facing in a forward direction, a first stereo pair could be used, and if the user's head is facing toward the left, a second stereo pair could be used.
  • Other options include using a viewing direction of a near object (e.g., a leaf on a tree) or a viewing direction to a distant object (e.g., a mountain in the distance).
  • There is also an option to use a combination of convergence and viewing angle, e.g., for a viewing direction of a near object (e.g., a leaf on a tree) or a viewing direction of a distant object.
  • Another option is to use the accommodation of the user's eyes. For example, monitor a user's pupil size and use the change in size to indicate where (near/far) the user is looking.
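  • A hedged sketch of how the above cues (head direction, convergence, pupil size) might be combined to pick a stored stereo pair; the thresholds are arbitrary assumptions for illustration only.

```python
# Illustrative stereo-pair selection from head direction plus near/far cues.
def choose_stereo_pair(head_yaw_deg, convergence_m, pupil_mm):
    direction = "left" if head_yaw_deg < -15 else "right" if head_yaw_deg > 15 else "forward"
    # Close convergence (or pupil constriction during near viewing) hints near vs far.
    near = convergence_m < 3.0 or pupil_mm < 3.0
    return f"{direction}-{'near' if near else 'far'} pair"

print(choose_stereo_pair(head_yaw_deg=-30, convergence_m=0.8, pupil_mm=2.5))  # left-near pair
print(choose_stereo_pair(head_yaw_deg=0, convergence_m=100.0, pupil_mm=5.0))  # forward-far pair
```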
  • Figure 3 illustrates displaying a video recording on a HDU.
  • 300 illustrates establishing a coordinate system. For example, use camera coordinate as the origin and use pointing direction of camera as an axis. This is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety.
  • 301 illustrates performing wide angle recording of a scene (for example, record data with a FOV larger than the FOV shown to a user).
  • 302 illustrates performing an analysis of a user, as discussed in Figure 2, to determine where in the scene the user is looking.
  • 303 illustrates optimizing the display based on the analysis in 302.
  • a feature (e.g., position, size, shape, orientation, color, brightness, texture, classification by AI algorithm) of a physical object determines a feature (e.g., position, size, shape, orientation, color, brightness, texture) of a virtual object.
  • a user is using a mixed reality display in a room in a house wherein some of the areas in the room (e.g., a window during the daytime) are bright and some of the areas in the room are dark (e.g., a dark blue wall).
  • the position of placement of virtual objects is based on the location of objects within the room. For example, a virtual object could be colored white if the background is a dark blue wall, so that it stands out.
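  • A minimal sketch of the placement/coloring rule above, assuming the mixed reality renderer can report the mean brightness of the real background behind a candidate anchor point (0 = dark, 255 = bright); the threshold is an assumption.

```python
# Choose a virtual object color that contrasts with the real background.
def pick_virtual_object_color(background_brightness: float) -> str:
    # White objects over dark areas (e.g., a dark blue wall) and dark objects over
    # bright areas (e.g., a daytime window) so the virtual object stands out.
    return "white" if background_brightness < 100 else "black"

print(pick_virtual_object_color(30))    # white (dark wall behind the anchor point)
print(pick_virtual_object_color(220))   # black (bright window behind the anchor point)
```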
  • FIG. 4 illustrates a pre-recorded stereo viewing performed by user 1.
  • 400 illustrates user 1 performing a stereo recording using a stereo camera system (e.g., smart phone, etc.). This is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety.
  • 401 illustrates the stereo recording being stored on a memory device.
  • 402 illustrates a user (e.g., User 1 or other user(s)) retrieving the stored stereo recording.
  • the stereo recording may be transmitted to the other user(s) and the other user(s) would receive the stored stereo recording.
  • 403 illustrates a user (e.g., User 1 or other user(s)) viewing the stored stereo recording on a stereo display unit (e.g., augmented reality, mixed reality, virtual reality display).
  • Figure 5 illustrates performing long range stereoscopic imaging of a distant object using stereoscopic camera clusters.
  • 500 illustrates positioning two camera clusters at least 50 feet apart.
  • 501 illustrates selecting a target at least 1 mile away.
  • 502 illustrates precisely aiming each camera cluster such that the centerline of focus intersects at the target.
  • 503 illustrates acquiring stereoscopic imagery of the target.
  • 504 illustrates viewing and/or analyzing the acquired stereoscopic imagery.
  • Some embodiments use cameras with telephoto lenses rather than camera clusters. Also, some embodiments have a stereo separation of less than or equal to 50 feet for optimized viewing at distances of less than 1 mile.
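  • A worked geometry sketch (not from the patent text) of the aiming step 502: the toe-in angle each camera cluster needs so that the centerlines of focus intersect at the target, for a 50 foot baseline and a 1 mile target range measured from the baseline midpoint.

```python
import math

def toe_in_angle_deg(baseline_ft: float, range_ft: float) -> float:
    # Each camera sits baseline/2 off the midpoint; aim its centerline at the target.
    return math.degrees(math.atan((baseline_ft / 2) / range_ft))

print(round(toe_in_angle_deg(50, 5280), 3), "degrees per camera")   # ~0.271 degrees
```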
  • Figure 6 illustrates a capability of post-acquisition adjustment of the images to bring them into the best possible picture based on user eye tracking by the generation of a stereoscopic composite image.
  • the stereoscopic images displayed at this time point have several objects that might be of interest to a person observing the scene.
  • a stereoscopic composite image will be generated to match at least one user’s input. For example, if a user is viewing (eye tracking determines viewing location) the mountains 600 or cloud 601 at a first time point, then the stereoscopic composite image pair delivered to a HDU would be generated such that the distant objects of the mountains 600 or cloud 601 were in focus and the nearby objects including the deer 603 and the flower 602 were out of focus.
  • the stereoscopic composite images presented at this frame would be optimized for medium range.
  • the stereoscopic composite images would be optimized for closer range (e.g., implement convergence, and blur out distant items, such as the deer 603, the mountains 600 and the cloud 601).
  • a variety of user inputs could be used to indicate to a software suite how to optimize the stereoscopic composite images. A gesture such as a squint could be used to optimize the stereoscopic composite image for more distant objects. A gesture such as leaning forward could be used to zoom in on a distant object.
  • a GUI could also be used to improve the immersive viewing experience.
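  • An illustrative sketch of the gaze-driven selection described for Figure 6, assuming two pre-recorded frames of the same time point (one focused for distant objects, one for nearby objects) and an eye tracker that reports which object is being viewed; the object depths and cutoff are assumptions.

```python
# Present the pre-focused frame that matches the depth of the gazed-at object.
OBJECT_DEPTH_M = {"mountains": 5000.0, "cloud": 8000.0, "flower": 1.0, "deer": 40.0}

def frame_for_gaze(gazed_object: str, near_focus_frame, far_focus_frame, cutoff_m=100.0):
    depth = OBJECT_DEPTH_M.get(gazed_object, cutoff_m)
    return far_focus_frame if depth >= cutoff_m else near_focus_frame

print(frame_for_gaze("cloud", "NEAR-FOCUS frame", "FAR-FOCUS frame"))   # FAR-FOCUS frame
print(frame_for_gaze("flower", "NEAR-FOCUS frame", "FAR-FOCUS frame"))  # NEAR-FOCUS frame
```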
  • Figure 7A illustrates an image with motion and the application of image stabilization processing.
  • 700A illustrates a left eye image of an object wherein there is motion blurring the edges of the object.
  • 701A illustrates a left eye image of an object wherein image stabilization processing has been applied.
  • Figure 7B illustrates an image with motion displayed in a HDU.
  • 702 illustrates the HDU.
  • 700A illustrates a left eye image of an object wherein there is motion blurring the edges of the object.
  • 700B illustrates a right eye image of an object wherein there is motion blurring the edges of the object.
  • 701A illustrates a left eye display, which is aligned with a left eye of a user.
  • 701B illustrates a right eye display, which is aligned with a right eye of a user.
  • Figure 7C illustrates image stabilization applied to the image using stereoscopic imagery. A key task of image processing is image stabilization using stereoscopic imagery.
  • 700A illustrates a left eye image of an object wherein image stabilization processing has been applied.
  • 700B illustrates a right eye image of an object wherein image stabilization processing has been applied.
  • 701A illustrates a left eye display, which is aligned with a left eye of a user.
  • 701B illustrates a right eye display, which is aligned with a right eye of a user.
  • 702 illustrates the HDU.
  • Figure 8A illustrates a left image and a right image with a first camera setting. Note that the text on the monitor is in focus and the distant object of the knob on the cabinet is out of focus.
  • Figure 8B illustrates a left image and a right image with a second camera setting. Note that the text on the monitor is out of focus and the distant object of the knob on the cabinet is in focus.
  • a point of novelty is using at least two cameras. A first image from a first camera is obtained. A second image from a second camera is obtained. The first camera and the second camera have the same viewing perspective. Also, they image the same scene (e.g., a still scene, or the same time point of a scene with movement/changes).
  • a composite image is generated wherein a first portion of the composite image is obtained from the first image and a second portion of the composite image is obtained from the second image.
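  • A minimal compositing sketch, assuming the two differently focused images are already registered so a region can simply be copied from one into the other; the region coordinates are made up for illustration.

```python
# Merge two pre-registered, differently focused images with a simple region copy.
import numpy as np

def composite(first_image: np.ndarray, second_image: np.ndarray,
              second_region: tuple[slice, slice]) -> np.ndarray:
    out = first_image.copy()                          # first portion from the first image
    out[second_region] = second_image[second_region]  # second portion from the second image
    return out

img_focus_text = np.full((480, 640), 100, dtype=np.uint8)   # e.g., monitor text in focus
img_focus_knob = np.full((480, 640), 200, dtype=np.uint8)   # e.g., cabinet knob in focus
knob_region = (slice(50, 150), slice(500, 600))
merged = composite(img_focus_text, img_focus_knob, knob_region)
print(merged[100, 550], merged[300, 100])   # 200 inside the knob region, 100 elsewhere
```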
  • Figure 9A illustrates a top down view of all data gathered of a scene at a time point.
  • Figure 9B illustrates a displayed wide angle 2D image frame of the video recording. Note that displaying this whole field of view to a user would be distorted given the mismatch between the user’s intrinsic FOV (human eye FOV) and the camera system FOV.
  • Figure 9C illustrates a top down view of User A’s viewing angle of -70° and 55° FOV.
  • a key point of novelty is the user's ability to select the portion of the stereoscopic imagery with the viewing angle. Note that the selected portion could realistically be up to ~180°, but not more.
  • Figure 9D illustrates what User A would see given User A's viewing angle of -70° and 55° FOV. This improves over the prior art because it allows different viewers to see different portions of the field of view. While a human has a horizontal field of view of slightly more than 180 degrees, a human can only read text over approximately 10 degrees of the field of view, can only assess shape over approximately 30 degrees of the field of view and can only assess colors over approximately 60 degrees of the field of view. In some embodiments, filtering (subtracting) is performed. A human has a vertical field of view of approximately 120 degrees, with an upward (above the horizontal) field of view of 50 degrees and a downward (below the horizontal) field of view of approximately 70 degrees. Maximum eye rotation, however, is limited to approximately 25 degrees above the horizontal and approximately 30 degrees below the horizontal. Typically, the normal line of sight from the seated position is approximately 15 degrees below the horizontal.
  • Figure 9E illustrates a top down view of User B’s viewing angle of +50° and 85° FOV.
  • a key point of novelty is the user’s ability to select the portion of the stereoscopic imagery with the viewing angle.
  • the FOV of User B is larger than the FOV of User A. Note that the selected portion could realistically be up to ~180°, but not more, because of the limitations of the human eye.
  • Figure 9F illustrates what User B would see given User B’s viewing angle of +50° and 85° FOV. This improves over the prior art because it allows different viewers to see different portions of the field of view.
  • multiple cameras are recording for a 240° film.
  • e.g., 4 cameras, each with a 60° sector, for simultaneous recording.
  • the sectors are filmed sequentially - one at a time.
  • Some scenes of a film could be filmed sequentially and other scenes could be filmed simultaneously.
  • a camera set up could be used with overlap for image stitching.
  • Some embodiments comprise using a camera ball system described in US Patent Application 17/225,610, which is incorporated by reference in its entirety. After the imagery is recorded, the imagery from the cameras is edited to sync the scenes and stitch them together.
  • LIDAR devices can be integrated into the camera systems for precise camera direction pointing.
  • Figure 10A illustrates the field of view captured at a first time point by the left camera.
  • the left camera 1000 and right camera 1001 are shown.
  • the left FOV 1002 is shown by the white region and is approximately 215° and would have an α ranging from +90° to -135° (sweeping from +90° to -135° in a counterclockwise direction).
  • the area not imaged within the left FOV 1003 would be approximately 135° and would have an α ranging from +90° to -135° (sweeping from +90° to -135° in a clockwise direction).
  • Figure 10B illustrates the field of view captured at a first time point by the right camera.
  • the left camera 1000 and right camera 1001 are shown.
  • the right FOV 1004 is shown by the white region and is approximately 215° and would have an α ranging from +135° to -90° (sweeping from +135° to -90° in a counterclockwise direction).
  • the area not imaged within the right FOV 1005 would be approximately 135° and would have an α ranging from +135° to -90° (sweeping from +135° to -90° in a counterclockwise direction).
  • Figure 10C illustrates a first user's personalized field of view (FOV) at a given time point.
  • 1000 illustrates the left camera.
  • 1001 illustrates the right camera.
  • 1006a illustrates the left boundary of the left eye FOV for the first user, which is shown in light gray.
  • 1007a illustrates the right side boundary of the left eye FOV for the first user, which is shown in light gray.
  • 1008a illustrates the left boundary of the right eye FOV for the first user, which is shown in light gray.
  • 1009a illustrates the right side boundary of the right eye FOV for the first user, which is shown in light gray.
  • 1010a illustrates the center line of the left eye FOV for the first user.
  • 1011a illustrates the center line of the right eye FOV for the first user.
  • center line of the left eye FOV 1010a for the first user and the center line of the right eye FOV 1011a for the first user are parallel, which is equivalent to a convergence point at infinity.
  • the first user is looking in the forward direction. It is suggested that during the filming of a movie most of the action in the scene occur in this forward-looking direction.
  • Figure 10D illustrates a second user’s personalized field of view (FOV) at a given time point.
  • 1000 illustrates the left camera.
  • 1001 illustrates the right camera.
  • 1006b illustrates the left boundary of the left eye FOV for the second user, which is shown in light gray.
  • 1007b illustrates the right side boundary of the left eye FOV for the second user, which is shown in light gray.
  • 1008b illustrates the left boundary of the right eye FOV for the second user, which is shown in light gray.
  • 1009b illustrates the right side boundary of the right eye FOV for the second user, which is shown in light gray.
  • 1010b illustrates the center line of the left eye FOV for the second user.
  • 1011b illustrates the center line of the right eye FOV for the second user.
  • The center line of the left eye FOV 1010b for the second user and the center line of the right eye FOV 1011b for the second user meet at a convergence point 1012. This allows the second user to view a small object with greater detail. Note that the second user is looking in the forward direction. It is suggested that during the filming of a movie most of the action in the scene occur in this forward-looking direction.
  • Figure 10E illustrates a third user’s personalized field of view (FOV) at a given time point.
  • 1000 illustrates the left camera.
  • 1001 illustrates the right camera.
  • 1006c illustrates the left boundary of the left eye FOV for the third user, which is shown in light gray.
  • 1007c illustrates the right side boundary of the left eye FOV for the third user, which is shown in light gray.
  • 1008c illustrates the left boundary of the right eye FOV for the third user, which is shown in light gray.
  • 1009c illustrates the right side boundary of the right eye FOV for the third user, which is shown in light gray.
  • 1010c illustrates the center line of the left eye FOV for the third user.
  • 1011c illustrates the center line of the right eye FOV for the third user.
  • center line of the left eye FOV 1010c for the third user and the center line of the right eye FOV 1011c for the third user are approximately parallel, which is equivalent to looking at a very far distance.
  • the third user is looking in a moderately leftward direction.
  • the overlap of the left eye FOV and right eye FOV provide stereoscopic viewing to the third viewer.
  • Figure 10F illustrates a fourth user’s personalized field of view (FOV) at a given time point.
  • 1000 illustrates the left camera.
  • 1001 illustrates the right camera.
  • 1006d illustrates the left boundary of the left eye FOV for the fourth user, which is shown in light gray.
  • 1007d illustrates the right side boundary of the left eye FOV for the fourth user, which is shown in light gray.
  • 1008d illustrates the left boundary of the right eye FOV for the fourth user, which is shown in light gray.
  • 1009d illustrates the right side boundary of the right eye FOV for the fourth user, which is shown in light gray.
  • 1010d illustrates the center line of the left eye FOV for the fourth user.
  • 1011d illustrates the center line of the right eye FOV for the fourth user.
  • The center line of the left eye FOV 1010d for the fourth user and the center line of the right eye FOV 1011d for the fourth user are approximately parallel, which is equivalent to looking at a very far distance. Note that the fourth user is looking in a far leftward direction. Note that the first user, second user, third user and fourth user are all seeing different views of the movie at the same time point. It should be noted that some of the designs, such as the camera cluster or ball system, are described in US Patent Application 17/225,610, which is incorporated by reference in its entirety.
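  • A small geometry sketch (with assumed coordinates) of how a convergence point such as 1012 can be computed as the intersection of the two eye/camera centerlines in the top-down plane, with parallel centerlines corresponding to convergence at infinity.

```python
# Intersect the two centerline rays in 2D; parallel rays return None (infinity).
import numpy as np

def convergence_point(p_left, d_left, p_right, d_right):
    """Intersect rays p_left + t*d_left and p_right + s*d_right (top-down plane)."""
    A = np.array([[d_left[0], -d_right[0]], [d_left[1], -d_right[1]]], dtype=float)
    b = np.array([p_right[0] - p_left[0], p_right[1] - p_left[1]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None                       # parallel centerlines: convergence at infinity
    t, _ = np.linalg.solve(A, b)
    return (p_left[0] + t * d_left[0], p_left[1] + t * d_left[1])

left_cam, right_cam = (-0.03, 0.0), (0.03, 0.0)       # assumed 6 cm camera separation
print(convergence_point(left_cam, (0.03, 1.0), right_cam, (-0.03, 1.0)))  # toed-in: (0.0, 1.0)
print(convergence_point(left_cam, (0.0, 1.0), right_cam, (0.0, 1.0)))     # parallel: None
```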
  • Figure 11A illustrates a top down view of the first user’s left eye view at time point 1.
  • 1100 illustrates the left eye view point.
  • 1101 illustrates the right eye viewpoint.
  • 1102 illustrates the portion of the field of view (FOV) not covered by either camera.
  • 1103 illustrates the portion of the FOV that is covered by at least one camera.
  • Figure 11B illustrates a top down view of the first user's left eye view wherein a convergence point is in close proximity to the left eye and right eye. 1100 illustrates the left eye view point.
  • 1101 illustrates the right eye viewpoint.
  • 1102 illustrates the portion of the field of view (FOV) not covered by either camera.
  • 1103 illustrates the portion of the FOV that is covered by at least one camera.
  • Figure 12 illustrates the reconstruction of various stereoscopic images from previously acquired wide angle stereo images.
  • 1200 illustrates acquiring imagery from a stereoscopic camera system. This camera system is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety.
  • 1201 illustrates wherein a first camera for a left eye viewing perspective and a second camera for a right eye viewing perspective are utilized.
  • 1202 illustrates selecting the field of view of the first camera based on the left eye look angle and the field of view for the second camera based on the right eye look angle. In the preferred embodiment, the selection would be performed by a computer (e.g., integrated into a head display unit) based on an eye tracking system tracking eye movements of a user.
  • A left eye image is generated from at least two lenses.
  • A right eye image is generated from at least two lenses.
  • When the user is looking at a nearby object, present a stereoscopic image pair with the nearby object in focus and distant objects out of focus. When the user is looking at a distant object, present a stereoscopic image pair with the nearby object out of focus and the distant object in focus.
  • Second, use a variety of display devices (e.g., Augmented Reality, Virtual Reality, Mixed Reality displays).
  • FIG. 13A illustrates a top down view of a home theater.
  • 1300 illustrates the user.
  • 1301 illustrates the projector.
  • 1302 illustrates the screen.
  • this immersive home theater displays a field of view larger than a user's 1300 field of view. For example, if a user 1300 was looking straight forward, the home theater would display a horizontal FOV of greater than 180 degrees. Thus, the home theater's FOV would completely cover the user's horizontal FOV. Similarly, if the user was looking straight forward, the home theater would display a vertical FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the user's vertical FOV.
  • An AR / VR / MR headset could be used in conjunction with this system, but would not be required.
  • a conventional IMAX polarized projector could be utilized with IMAX-type polarized disposable glasses.
  • the size of the home theater could vary.
  • the home theater walls could be built with white, reflective panels and framing.
  • the projector would have multiple heads to cover the larger field of view.
  • Figure 13B illustrates a side view of the home theater as shown in Figure 13A.
  • 1300 illustrates the user.
  • 1301 illustrates the projector.
  • 1302 illustrates the screen.
  • this immersive home theater displays a field of view larger than a user's 1300 field of view. For example, if a user 1300 was looking forward while on a recliner, the home theater would display a vertical FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the user's FOV. Similarly, if the user was looking straight forward, the home theater would display a horizontal FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the user's FOV.
  • Figure 14A illustrates a top down view of a home theater.
  • 1400 A illustrates a first user.
  • 1400B illustrates a second user.
  • 1401 illustrates the projector.
  • 1402 illustrates the screen.
  • this immersive home theater displays a field of view larger than the FOV of the first user 1400A or the second user 1400B.
  • If the first user 1400A was looking straight forward, the first user 1400A would see a horizontal FOV of greater than 180 degrees.
  • the home theater’s FOV would completely cover the user’s horizontal FOV.
  • the home theater would display a vertical FOV of greater than 120 degrees, as shown in Figure 14B.
  • the home theater’s FOV would completely cover the user’s vertical FOV.
  • An AR / VR / MR headset could be used in conjunction with this system, but would not be required.
  • Cheap anaglyph or polarized glasses could also be used.
  • a conventional IMAX polarized projector could be utilized with IMAX-type polarized disposable glasses.
  • the size of the home theater could vary.
  • the home theater walls could be built with white, reflective panels and framing. The projector would have multiple heads to cover the larger field of view.
  • Figure 14B illustrates a side view of the home theater as shown in Figure 14A.
  • 1400A illustrates the first user.
  • 1401 illustrates the projector.
  • 1402 illustrates the screen.
  • this immersive home theater displays a field of view larger than the first user's 1400A field of view. For example, if the first user 1400A was looking forward while on a recliner, the user would see a vertical FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the FOV of the first user 1400A. Similarly, if the first user 1400A was looking straight forward, the home theater would display a horizontal FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the FOV of the first user 1400A.
  • a typical high resolution display has 4000 pixels over a 1.37 m distance. This would be equivalent to 10 × 10⁶ pixels per 1.87 m².
  • the surface area of a hemisphere is 2 × π × r², which is equal to (4)(3.14)(2²) or 50.24 m².
  • if a spatial resolution were desired to be equal to that of a typical high resolution display, this would equal (50.24 m²)(10 × 10⁶ pixels per 1.87 m²) or 429 million pixels.
  • the frame rate is 60 frames per second. This is 26 times the amount of data as compared to a standard 4K monitor.
  • a field of view comprises spherical coverage with 4π steradians. This can be accomplished via a HDU.
  • a field of view comprises sub-spherical coverage with at least 3π steradians.
  • a field of view comprises sub-spherical coverage with at least 2π steradians.
  • a field of view comprises sub-spherical coverage with at least 1π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.5π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.25π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.05π steradians. In some embodiments, a sub-spherical IMAX system is created for an improved movie theater experience with many viewers. The chairs would be positioned in a similar position as in standard movie theaters, but the screen would be sub-spherical. In some embodiments, non-spherical shapes could also be used.
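  • A back-of-envelope sketch of the data-rate implication of the figures above; the 429 million pixel count and 60 frames per second are quoted from the text, while the 3 bytes per pixel is an assumption for illustration only.

```python
pixels_per_frame = 429e6        # pixel count quoted above for the sub-spherical screen
frame_rate_hz = 60              # frames per second, as stated above
bytes_per_pixel = 3             # assumed: uncompressed 8-bit RGB

raw_rate_bytes_s = pixels_per_frame * bytes_per_pixel * frame_rate_hz
print(f"~{raw_rate_bytes_s / 1e9:.0f} GB/s uncompressed")  # ~77 GB/s, which motivates
# streaming only the user-specific (gaze-dependent) portion at full quality.
```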
  • Figure 15A illustrates time point #1, wherein a user is looking straight ahead and sees a field of view of approximately 60 degrees horizontal and 40 degrees vertical with a reasonably precise field of view (e.g., the user can see shapes and colors in the peripheral FOV).
  • Figure 15B shows the center portion of the TV and the field of view being observed by the user at time point #1.
  • data would be streamed (e.g., via the internet).
  • a novel feature of this patent is called “viewing-parameter directed streaming”.
  • a viewing parameter is used to direct the data streamed. For example, if the user 1500 were looking straight forward, then a first set of data would be streamed to correspond with the straight forward viewing angle of the user 1500. If, however, the user were looking at the side of the screen, a second set of data would be streamed to correspond with the looking-to-the-side viewing angle of the user 1500.
  • viewing parameters that could control viewing angles include, but are not limited to, the following: user’s vergence; user’s head position; user’s head orientation.
  • any feature (age, gender, preference) or action of a user (viewing angle, positions, etc.) could be used to direct streaming.
  • another novel feature is the streaming of at least two image qualities. For example, a first image quality (e.g., high quality) would be streamed in accordance with a first parameter (e.g., within the user's 30° horizontal FOV and 30° vertical FOV). A second image quality (e.g., lower quality) would also be streamed for imagery that did not meet this criterion (e.g., not within the user's 30° horizontal FOV and 30° vertical FOV). Surround sound would be implemented in this system.
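  • A hedged sketch of viewing-parameter directed streaming with two image qualities: tiles whose centers fall inside the user's reported 30° × 30° gaze window are marked for high-quality streaming and all other tiles for low quality; the tile size and angular spans are assumptions.

```python
# Build a per-tile quality map from the user's reported gaze angles.
def quality_map(gaze_h_deg, gaze_v_deg, tile_deg=10, h_span=(-90, 90), v_span=(-60, 60)):
    tiles = {}
    for h in range(h_span[0], h_span[1], tile_deg):
        for v in range(v_span[0], v_span[1], tile_deg):
            inside = (abs(h + tile_deg / 2 - gaze_h_deg) <= 15 and
                      abs(v + tile_deg / 2 - gaze_v_deg) <= 15)   # 30 x 30 degree window
            tiles[(h, v)] = "high" if inside else "low"
    return tiles

tiles = quality_map(gaze_h_deg=-40, gaze_v_deg=0)
print(sum(q == "high" for q in tiles.values()), "high-quality tiles of", len(tiles))
```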
  • Figure 15C illustrates time point #2, wherein a user is looking to the user's left side of the screen and sees a field of view of approximately 60 degrees horizontal and 40 degrees vertical with a reasonably precise field of view (e.g., the user can see shapes and colors in the peripheral FOV).
  • Figure 15D illustrates the field of view being observed by the user at time point #2, which is different as compared to Figure 15B.
  • the area of interest is half that of time point #1.
  • greater detail and higher resolution of objects within a small FOV within the scene is provided to the user. Outside of this high resolution field of view zone, a lower resolution image quality could be presented on the screen.
  • Figure 15E illustrates time point #3, wherein a user is looking to the user's right side of the screen.
  • Figure 15F illustrates time point #3, wherein the user sees a circularly shaped high-resolution FOV.
  • Figure 16A illustrates an un-zoomed image.
  • 1600 illustrates the image.
  • 1601A illustrates a box denoting the area within image 1600 that is set to be zoomed in on.
  • Figure 16B illustrates a digital-type zooming in on a portion of an image. This can be accomplished via methods described in US Patent 8,384,771 (e.g., 1 pixel turns into 4), which is incorporated by reference in its entirety.
  • Selection of the area to be zoomed in on can be accomplished through a variety of user inputs, including gesture tracking systems, eye tracking systems, and graphical user interfaces (GUIs).
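  • A minimal digital-zoom sketch in the "1 pixel turns into 4" style referenced above, using simple pixel replication on the selected box.

```python
# Enlarge a selected box by replicating each source pixel into a 2x2 block.
import numpy as np

def digital_zoom_2x(image: np.ndarray, box: tuple[slice, slice]) -> np.ndarray:
    region = image[box]
    return np.repeat(np.repeat(region, 2, axis=0), 2, axis=1)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
zoomed = digital_zoom_2x(img, (slice(1, 3), slice(1, 3)))
print(zoomed.shape)   # (4, 4): a 2x2 crop enlarged so that 1 pixel becomes 4
```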
  • Figure 17A illustrates an un-zoomed image.
  • 1700 illustrates the image.
  • 1701A illustrates a box denoting the area within image 1700 that is set to be zoomed in on.
  • Figure 17B illustrates the optical-type zooming in on a portion of an image.
  • Selection of the area to be zoomed in on can be accomplished through a variety of user inputs, including gesture tracking systems, eye tracking systems, and graphical user interfaces (GUIs).
  • Note that the area within the image 1701A that was denoted in Figure 17A is now zoomed in on, as shown in 1701B; also note that the image inside of 1701B appears to be of higher image quality. This can be done by selectively displaying the maximum quality imagery in region 1701B and enlarging region 1701B. Not only is the cloud bigger, the resolution of the cloud is also better.
  • Figure 18A illustrates a single resolution image.
  • 1800 A illustrates the image.
  • 1801A illustrates a box denoting the area within image 1800A that is set to have its resolution improved.
  • Figure 18B illustrates a multi-resolution image. Note that the area where resolution is improved can be selected through a variety of user inputs including gesture tracking systems; eye tracking systems; and graphical user interfaces (GUIs), to include a joystick or controller. Note that the area within the image 1801A that was denoted in Figure 18A is now displayed with higher resolution, as shown in 1801B. In some embodiments, the image inside of 1801B can be changed in other ways as well (e.g., different color scheme, different brightness settings, etc.). This can be done by selectively displaying a higher (e.g., maximum) quality imagery in region 1801B without enlarging region 1801B.
  • Figure 19A illustrates a large field of view wherein a first user is looking at a first portion of the image and a second user is looking at a second portion of the image.
  • 1900A is the large field of view, which is of a first resolution.
  • 1900B is the location where a first user is looking which is set to become high resolution, as shown in Figure 19B.
  • 1900C is the location where a second user is looking which is set to become high resolution, as shown in Figure 19B.
  • Figure 19B illustrates that only the first portion and the second portion of the image in Figure 19A are high resolution, and the remainder of the image is low resolution.
  • 1900A is the large field of view, which is of a first resolution (low resolution).
  • 1900B is the location of the high resolution zone of a first user, which is of a second resolution (high resolution in this example).
  • 1900C is the location of the high resolution zone of a second user, which is of a second resolution (high resolution in this example).
  • a first high resolution zone can be used for a first user.
  • a second high resolution zone can be used for a second user. This system could be useful for the home theater display as shown in Figures 14A and 14B.
  • Figure 20A illustrates a low resolution image.
  • Figure 20B illustrates a high resolution image.
  • Figure 20C illustrates a composite image. Note that this composite image has a first portion 2000 that is of low resolution and a second portion 2001 that is of high resolution. This was described in US Patent Application 16/893,291, which is incorporated by reference in its entirety. The first portion is determined by the user's viewing parameter (e.g., viewing angle). A point of novelty is near-real-time streaming of the first portion 2000 with the first image quality and the second portion with the second image quality. Note that the first portion could be displayed differently from the second portion. For example, the first portion and second portion could differ in visual presentation parameters including brightness, color scheme, or other parameters. Thus, in some embodiments, a first portion of the image can be compressed and a second portion of the image is not compressed.
  • a composite image is generated with the arranging of some high resolution images and some low resolution images stitched together for display to a user.
  • some portions of a large (e.g., 429 million pixel) image are high resolution and some portions of the large image are low resolution.
  • the portions of the large image that are high resolution will be streamed in accordance with the user’s viewing parameters (e.g., convergence point, viewing angle, head angle, etc.).
  • Figure 21 illustrates a method and a process for performing near-real-time streaming of customized images.
  • the displays include, but are not limited to the following: a large TV; an extended reality (e.g., Augmented Reality, Virtual Reality, or Mixed Reality display); a projector system on a screen; a computer monitor, or the like.
  • a key component of the display is the ability to track where in the image a user is looking and what the viewing parameters are.
  • the viewing parameters include, but are not limited to the following: viewing angle; vergence/convergence; user preferences (e.g., objects of particular interest, filtering - some objects rated “R” can be filtered for a particular user, etc.).
  • each frame in the movie or video would comprise extremely large data (especially if the home theater shown in Figures 14A and 14B is used in combination with the camera cluster described in US Patent Application 17/225,610, which is incorporated by reference in its entirety).
  • the cloud refers to storage, databases, etc.
  • the cloud is capable of cloud computing.
  • a point of novelty in this patent is the sending of the viewing parameters of user(s) to the cloud, processing of the viewing parameters in the cloud (e.g., selecting a field of view or composite stereoscopic image pair as discussed in Figure 12) and determining which portions of extremely large data to stream to optimize the individual user’s experience. For example, multiple users could have their movie synchronized.
  • a user named Kathy could be looking at the chandelier and Kathy’s images would be optimized (e.g., images with maximum resolution and optimized color of the chandelier are streamed to Kathy’s mobile device and displayed on Kathy’s HDU).
  • a user named Bob could be looking at the old man and Bob’s images would be optimized (e.g., images with maximum resolution and optimized color of the old man are streamed to Bob’s mobile device and displayed on Bob’s HDU).
  • the cloud would store a tremendous dataset at each time point, but only portions of it would be streamed, and those portions are determined by the user's viewing parameters and/or preferences. So, the bookcase, long table, carpet and wall art may all be within the field of view for Dave, Kathy and Bob, but these objects would not be optimized for display (e.g., the highest possible resolution of these images stored in the cloud would not be streamed).
  • the concept of pre-emptive streaming is also introduced. If it is predicted that an upcoming scene may cause a specific user viewing parameter to change (e.g., a user head turn), then pre-emptive streaming of those additional image frames can be performed. For example, suppose the current time of a movie is 1:43:05 and a dinosaur is going to make a noise and pop out from the left side of the screen at 1:43:30. The whole scene could be downloaded in a low resolution format, and additional sets of data for selective portions of the FOV could be downloaded as needed (e.g., based on the user's viewing parameter, or based on the upcoming dinosaur scene where the user is predicted to look). Thus, the dinosaur popping out will always be shown at its maximum resolution. This technique creates a more immersive and improved viewing experience.
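  • An illustrative prefetch sketch of the pre-emptive streaming idea: the upcoming scene is queued at low resolution while regions tied to scheduled events (e.g., the dinosaur entering from the left at 1:43:30) are queued at maximum resolution ahead of time; the event table and lead time are assumptions.

```python
# Queue low-resolution data for the whole scene plus max-resolution data for
# regions tied to scheduled events that fall within the prefetch lead time.
SCHEDULED_EVENTS = [{"time_s": 1*3600 + 43*60 + 30, "region": "left", "lead_s": 25}]

def prefetch_plan(current_time_s, upcoming_scene_id):
    plan = [(upcoming_scene_id, "all", "low")]
    for ev in SCHEDULED_EVENTS:
        if 0 <= ev["time_s"] - current_time_s <= ev["lead_s"]:
            plan.append((upcoming_scene_id, ev["region"], "max"))
    return plan

now = 1*3600 + 43*60 + 5                          # movie time 1:43:05
print(prefetch_plan(now, upcoming_scene_id=42))   # low-res scene + max-res left region
```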
  • Figure 22A illustrates using resection in conjunction with stereoscopic cameras.
  • Camera #1 has a known location (e.g., latitude and longitude from a GPS). From Camera #1, a range (2 miles) and direction (330°, North Northwest) to an object 2200 are known. The location of the object 2200 can be computed.
  • Camera #2 has an unknown location, but the range (1 mile) and direction (30° North Northeast) to the object 2200 is known. Since the object 2200’s location can be computed, the geometry can be solved and the location of camera #2 determined.
  • Figure 22B illustrates using resection in conjunction with stereoscopic cameras.
  • Camera #1 has a known location (e.g., latitude and longitude from a GPS).
  • Camera #1 and Camera #2 have known locations. From Camera #1, a direction (330° North Northwest) to an object 2200B is known. From Camera #2, a direction (30° North Northeast) to an object 2200B is known. The location of the object 2200B can be computed.
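  • A geometry sketch of the first resection example above, using a flat local x/y frame (x = east, y = north) and bearings measured clockwise from north; the ranges and bearings are the ones from the text, while the coordinate convention is an assumption. The second case (two known cameras, unknown object) follows the same projection with two bearings.

```python
# Locate the object from camera #1's range/bearing, then locate camera #2 by
# backing off its own range along the reciprocal of its bearing to the object.
import math

def project(origin, bearing_deg, range_units):
    b = math.radians(bearing_deg)
    return (origin[0] + range_units * math.sin(b), origin[1] + range_units * math.cos(b))

cam1 = (0.0, 0.0)                             # known camera #1 position
obj = project(cam1, 330, 2.0)                 # 2 miles at 330 deg from camera #1
back_bearing = (30 + 180) % 360               # reciprocal of camera #2's 30 deg bearing
cam2 = project(obj, back_bearing, 1.0)        # 1 mile back along that line
print("object:", obj, "camera #2:", cam2)
```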
  • Figure 23 A illustrates a top down view of a person looking forward to the center of the screen of the home theater.
  • the person 2300 is looking forward toward the center section 2302B of the screen 2301 of the home theater.
  • the streaming is customized to have the center section 2302B optimized (e.g., highest possible resolution), the left section 2302A non-optimized (e.g., low resolution or black), and the right section 2302C non-optimized (e.g., low resolution or black).
  • A monitoring system to detect the user's viewing direction and other viewing parameters (such as gestures or facial expressions) must be in place.
  • A controller to receive commands from the user must also be in place so that the user's inputs can drive the appropriate streaming.
  • FIG. 23B illustrates a top down view of a person looking toward the right side of the screen of the home theater.
  • the person 2300 is looking toward the right section 2302C of the screen 2301 of the home theater.
  • the streaming is customized to have the right section 2302C optimized (e.g., highest possible resolution), the left section 2302A non-optimized (e.g., low resolution or black), and the center section 2302B non-optimized (e.g., low resolution or black).
  • A monitoring system to detect the user's viewing direction and other viewing parameters (such as gestures or facial expressions) must be in place.
  • A controller to receive commands from the user must also be in place so that the user's inputs can drive the appropriate streaming.
  • Figure 24 illustrates a method, system and apparatus for optimizing stereoscopic camera settings during image acquisition during movement.
  • 2400 illustrates determining a distance of an object (e.g., use laser range finder) at a time point.
  • An object tracking / target tracking system can be implemented.
  • 2401 illustrates adjusting a zoom setting of a stereoscopic camera system to be optimized for said distance as determined in step 2400. In the preferred embodiment, this would be performed when using a zoom lens, as opposed to performing digital zooming.
  • 2402 illustrates adjusting the distance of separation (stereo distance) between the stereoscopic cameras to be optimized for said distance as determined in step 2400. Note that there is also an option to adjust the orientation of the cameras to be optimized for said distance as determined in step 2400.
  • 2403 illustrates acquiring stereoscopic imagery of the target at time point in step 2400.
  • 2404 illustrates recording, viewing and/or analyzing the acquired stereoscopic imagery.
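  • A hedged sketch of steps 2400-2402: choosing a zoom (focal length) and stereo camera separation from a measured target distance. The 1/30 separation-to-distance ratio and the focal-length schedule are common stereography rules of thumb assumed here, not values from the patent.

```python
# Map a measured target distance to a stereo separation and a zoom setting.
def stereo_settings(distance_m: float) -> dict:
    separation_m = distance_m / 30.0                           # wider baseline for farther targets
    focal_length_mm = min(400.0, 50.0 + 0.35 * distance_m)     # crude zoom schedule
    return {"separation_m": round(separation_m, 2),
            "focal_length_mm": round(focal_length_mm, 1)}

for d in (10, 100, 1000):
    print(d, "m ->", stereo_settings(d))
```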

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

This patent discloses a method to record imagery in a way that is larger than a user could visualize. The user is then allowed to view it naturally, via head tracking and eye tracking, so as to see and inspect a scene as if one were naturally there viewing it in real time. A smart system of analyzing the viewing parameters of a user and streaming a customized image to be displayed is also taught herein.

Description

IMMERSIVE VIEWING EXPERIENCE
TECHNICAL FIELD
[001] Aspects of this disclosure are generally related to immersive viewing of imagery.
CROSS REFERENCE TO RELATED APPLICATIONS
[002] This application is a PCT of US Patent Application 17/237,152 filed on 4/22/2021, which is a continuation-in-part of US Patent Application 17/225,610 filed on 7 April 2021, which is a continuation-in-part of US Patent Application 17/187,828 filed on 2/28/2021.
INTRODUCTION
[003] Movies are a form of entertainment.
SUMMARY
[004] All examples, aspects and features mentioned in this document can be combined in any technically conceivable way. This patent teaches a method, software and apparatus for an immersive viewing experience.
[005] In general, this patent improves on techniques taught in US Patent Application 17/225,610 filed on 7 April 2021, which is incorporated by reference in its entirety. Some of the apparatuses described in US Patent Application 17/225,610 have capabilities to generate extremely large datasets. This patent improves the display of such extremely large datasets.
[006] This patent discloses a system, a method, an apparatus and software to achieve an improved immersive viewing experience. First, upload a user's viewing parameter to a cloud, wherein said cloud stores imagery (which in the preferred embodiments comprises extremely large datasets). Viewing parameters can include any action, gesture, body position, eye look angle, eye convergence/vergence or input (e.g., via a graphical user interface). Thus, in near real time, the user's viewing parameters are characterized (e.g., by a variety of devices, such as eye-facing cameras or cameras to record gestures) and sent to the cloud. Second, a set of user-specific imagery is optimized from said imagery wherein said user-specific imagery is based on at least said viewing parameter. In the preferred embodiment, the field of view of the user-specific imagery is smaller than that of the imagery. In the preferred embodiment, the location where a user is looking would have high resolution and the location where the user is not looking would have low resolution. For example, if a user is looking at an object on the left, then the user-specific imagery would be high resolution on the left side. In some embodiments, the user-specific imagery would be streamed in near-real time.
[007] In some embodiments, the user-specific imagery comprises a first portion with a first spatial resolution and a second portion with a second spatial resolution, and wherein said first spatial resolution is higher than said second spatial resolution. Some embodiments comprise wherein said viewing parameter comprises a viewing location and wherein said viewing location corresponds to said first portion.
[008] Some embodiments comprise wherein user-specific imagery comprises a first portion with a first zoom setting and a second portion with a second zoom setting, and wherein said first zoom setting is higher than said second zoom setting. Some embodiments comprise wherein a first portion is determined by said viewing parameter wherein said viewing parameter comprises at least one of the group consisting of: a position of said user's body; an orientation of said user's body; a gesture of said user's hand; a facial expression of said user; a position of said user's head; and an orientation of said user's head. Some embodiments comprise wherein a first portion is determined by a graphical user interface, such as a mouse or controller.
[009] Some embodiments comprise wherein the imagery comprises a first field of view (FOV) and wherein said user-specific imagery comprises a second field of view, and wherein said first FOV is larger than said second FOV.
[0010] Some embodiments comprise wherein imagery comprises stereoscopic imagery and wherein said stereoscopic imagery is obtained via stereoscopic cameras or stereoscopic camera clusters.
[0011] Some embodiments comprise wherein said imagery comprises stitched imagery wherein said stitched imagery is generated by at least two cameras.
[0012] Some embodiments comprise wherein said imagery comprises composite imagery, wherein said composite imagery is generated by: taking a first image of a scene with a first set of camera settings wherein said first set of camera settings causes a first object to be in focus and a second object to be out of focus; and taking a second image of a scene with a second set of camera settings wherein said second set of camera settings causes said second object to be in focus and said first object to be out of focus. Some embodiments comprise wherein when the user looks at said first object, said first image would be presented to said user and when the user looks at said second object, said second image would be presented to said user. Some embodiments comprise combining at least said first object from said first image and said second object from said second image into said composite image.
[0013] Some embodiments comprise wherein image stabilization is performed. Some embodiments comprise wherein said viewing parameter comprises convergence. Some embodiments comprise wherein user-specific imagery is 3D imagery wherein said 3D imagery is presented on a HDU, a set of anaglyph glasses or a set of polarized glasses.
[0014] Some embodiments comprise wherein said user-specific imagery is presented to said user on a display wherein said user has at least a 0.5π steradian field of view.
[0015] Some embodiments comprise wherein user-specific imagery is presented on a display. In some embodiments, the display is a screen (e.g., TV, reflective screen coupled with a projector system, an extended reality head display unit including an augmented reality display, a virtual reality display or a mixed reality display).
BRIEF DESCRIPTION OF THE FIGURES
[0017] Figure 1 illustrates retrospective display of stereoscopic images.
[0018] Figure 2 illustrates methods to determine which stereo pair to display to a user for a given time point.
[0019] Figure 3 illustrates displaying a video recording on a HDU.
[0020] Figure 4 illustrates a pre-recorded stereo viewing performed by user 1.
[0021] Figure 5 illustrates performing long range stereoscopic imaging of a distant object using stereoscopic camera clusters.
[0022] Figure 6 illustrates a post-acquisition capability to adjust the images, based on user eye tracking, into the best possible picture by generating a stereoscopic composite image.
[0023] Figure 7A illustrates an image with motion and the application of image stabilization processing.
[0024] Figure 7B illustrates an image with motion displayed in a HDU.
[0025] Figure 7C illustrates an image stabilization applied to the image using stereoscopic imagery.
[0026] Figure 8A illustrates a left image and a right image with a first camera setting.
[0027] Figure 8B illustrates a left image and a right image with a second camera setting.
[0028] Figure 9A illustrates a top down view of all data gathered of a scene at a time point.
[0029] Figure 9B illustrates a displayed wide angle 2D image frame of the video recording.
[0030] Figure 9C illustrates a top down view of User A's viewing angle of -70° and 55° FOV.
[0031] Figure 9D illustrates what User A would see given User A's viewing angle of -70° and 55° FOV.
[0032] Figure 9E illustrates a top down view of User B’s viewing angle of +50° and 85° FOV.
[0033] Figure 9F illustrates what User B would see given User B’s viewing angle of +50° and 85° FOV.
[0034] Figure 10A illustrates the field of view captured at a first time point by the left camera.
[0035] Figure 10B illustrates the field of view captured at a first time point by the right camera.
[0036] Figure 10C illustrates a first user's personalized field of view (FOV) at a given time point.
[0037] Figure 10D illustrates a second user’s personalized field of view (FOV) at a given time point.
[0038] Figure 10E illustrates a third user’s personalized field of view (FOV) at a given time point.
[0039] Figure 10F illustrates a fourth user’s personalized field of view (FOV) at a given time point.
[0040] Figure 11 A illustrates a top down view of the first user’s left eye view.
[0041] Figure 11B illustrates a top down view of the first user’s left eye view wherein a convergence point in close proximity to the left eye and right eye.
[0042] Figure 11C illustrates a left eye view at time point 1 without convergence.
[0043] Figure 11D illustrates a left eye view at time point 2 with convergence.
[0044] Figure 12 illustrates the reconstruction of various stereoscopic images from previously acquired wide angle stereo images.
[0045] Figure 13A illustrates a top down view of a home theater.
[0046] Figure 13B illustrates a side view of the home theater as shown in Figure 13A.
[0047] Figure 14A illustrates a top down view of a home theater.
[0048] Figure 14B illustrates a side view of the home theater as shown in Figure 14A.
[0049] Figure 15A illustrates a near-spherical TV with a user looking straight ahead at time point #1.
[0050] Figure 15B shows the portion of the TV and the field of view being observed by the user at time point #1.
[0051] Figure 15C illustrates a near-spherical TV with a user looking toward the left side of the screen at time point #2.
[0052] Figure 15D shows the portion of the TV and the field of view being observed by the user at time point #2.
[0053] Figure 15E illustrates a near-spherical TV with a user looking toward the right side of the screen at time point #3.
[0054] Figure 15F shows the portion of the TV and the field of view being observed by the user at time point #3.
[0055] Figure 16A illustrates an un-zoomed image.
[0056] Figure 16B illustrates a digital-type zooming in on a portion of an image.
[0057] Figure 17A illustrates an un-zoomed image.
[0058] Figure 17B illustrates the optical-type zooming in on a portion of an image.
[0059] Figure 18A illustrates a single resolution image.
[0060] Figure 18B illustrates a multi-resolution image.
[0061] Figure 19A illustrates a large field of view wherein a first user is looking at a first portion of the image and a second user is looking at a second portion of the image.
[0062] Figure 19B illustrates that only the first portion and the second portion of the image in Figure 19A are high resolution and that the remainder of the image is lower resolution.
[0063] Figure 20A illustrates a low resolution image.
[0064] Figure 20B illustrates a high resolution image.
[0065] Figure 20C illustrates a composite image.
[0066] Figure 21 illustrates a method and a process for performing near-real-time streaming of customized images.
[0067] Figure 22A illustrates using resection in conjunction with stereoscopic cameras wherein a first camera location is unknown.
[0068] Figure 22B illustrates using resection in conjunction with stereoscopic cameras wherein an object location is unknown.
[0069] Figure 23A illustrates a top down view of a person looking forward to the center of the screen of the home theater.
[0070] Figure 23B illustrates a top down view of a person looking toward the right side of the screen of the home theater.
[0071] Figure 24 illustrates a method, system and apparatus for optimizing stereoscopic camera settings during image acquisition involving movement.
DETAILED DESCRIPTIONS
[0073] The flow diagrams do not depict the syntax of any particular programming language. Rather, the flow diagrams illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required in accordance with the present invention. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables, are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention. Thus, unless otherwise stated, the steps described below are unordered, meaning that, when possible, the steps can be performed in any convenient or desirable order.
[0074] Figure 1 illustrates retrospective display of stereoscopic images. 100 illustrates step A, which is to determine a location (e.g., an (αn, βn, rn) coordinate) where a viewer is looking at time point n. Note #1: This location could be a near, medium or far convergence point. Note #2: A collection of stereoscopic imagery has been collected and recorded. Step A follows the collection process and takes place at some subsequent time period during viewing by a user. 101 illustrates step B, which is to determine a FOVn corresponding to the location (e.g., the (αn, βn, rn) coordinate) for time point n. Note: the user has the option to select the FOV. 102 illustrates step C, which is to select camera(s) that correspond to the FOV for the left eye, with the option to perform additional image processing (e.g., use a composite image, use a vergence zone) to generate a personalized left eye image at time point n (PLEIn). 103 illustrates step D, which is to select camera(s) that correspond to the FOV for the right eye, with the option to perform additional image processing (e.g., use a composite image, use a vergence zone) to generate a personalized right eye image at time point n (PREIn). 104 illustrates step E, which is to display PLEIn on a left eye display of a HDU. 105 illustrates step F, which is to display PREIn on a right eye display of a HDU. 106 illustrates step G, which is to increment the time step to n+1 and go to Step A, above.
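The per-time-point loop of Figure 1 (Steps A through G) can be summarized in illustrative Python form. The sketch below is not part of the disclosed implementation; the helper callables (gaze lookup, FOV selection, camera selection, rendering and display) are hypothetical placeholders standing in for whatever eye tracker, camera archive and HDU a real system would use.

def playback_loop(num_time_points, get_gaze, get_fov, pick_cameras, render, show):
    """Retrospective stereoscopic playback, following Steps A-G of Figure 1.

    Every argument other than num_time_points is a caller-supplied callable;
    none of these interfaces are specified by this disclosure.
    """
    for n in range(num_time_points):
        gaze = get_gaze(n)                        # Step A: (alpha_n, beta_n, r_n) convergence point
        fov = get_fov(gaze)                       # Step B: FOV_n for that gaze (user-selectable size)
        left_cams = pick_cameras(fov, "left")     # Step C: camera(s) covering the left eye FOV
        right_cams = pick_cameras(fov, "right")   # Step D: camera(s) covering the right eye FOV
        plei_n = render(left_cams, gaze, fov, n)  # personalized left eye image, PLEI_n
        prei_n = render(right_cams, gaze, fov, n) # personalized right eye image, PREI_n
        show(plei_n, prei_n)                      # Steps E-F: display on the HDU
        # Step G: the loop then increments to time point n+1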
[0075] Figure 2 illustrates methods to determine which stereo pair to display to a user for a given time point. 200 illustrates a text box of analyzing the user's parameters to determine which stereoscopic image to display to the user. First, use the viewing direction of a user's head. For example, if the user's head is in a forward direction, a first stereo pair could be used, and if the user's head is in a direction toward the left, a second stereo pair could be used. Second, use the viewing angle of the user's gaze. For example, if the user is looking in a direction towards a distant object (e.g., a mountain in the distance), then the distant (e.g., zone 3) stereo image pair would be selected for that time point. Third, use the user's convergence. For example, if the viewing direction of a near object (e.g., a leaf on a tree) is extremely similar to the viewing direction of a distant object (e.g., a mountain in the distance), there is an option to use a combination of convergence and viewing angle. Fourth, use the accommodation of the user's eyes. For example, monitor a user's pupil size and use the change in size to indicate where (near / far) the user is looking.
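As one hedged illustration of the selection logic above, the sketch below maps head direction, gaze depth and pupil size to a stored stereo pair. The threshold values, the sign convention for head yaw, and the (direction, zone) keying of the pair archive are assumptions chosen for the example, not values taken from this disclosure.

def select_stereo_pair(head_yaw_deg, gaze_depth_m, pupil_mm, pairs):
    """Pick which recorded stereo pair to display for the current time point.

    pairs is assumed to be a dict keyed by (direction, zone), e.g.
    {("forward", "near"): ..., ("left", "far"): ...}.
    """
    # First: viewing direction of the user's head (positive yaw assumed to mean turned left)
    if head_yaw_deg > 20:
        direction = "left"
    elif head_yaw_deg < -20:
        direction = "right"
    else:
        direction = "forward"
    # Second and third: gaze angle plus convergence give a depth zone
    if gaze_depth_m < 2:
        zone = "near"
    elif gaze_depth_m < 20:
        zone = "medium"
    else:
        zone = "far"          # e.g., zone 3, mountain in the distance
    # Fourth: accommodation cue - a change in pupil size can break ties
    if zone == "medium" and pupil_mm > 5.0:
        zone = "far"
    return pairs[(direction, zone)]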
[0076] Figure 3 illustrates displaying a video recording on a HDU. 300 illustrates establishing a coordinate system. For example, use the camera coordinate as the origin and use the pointing direction of the camera as an axis. This is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety. 301 illustrates performing wide angle recording of a scene (for example, record data with a FOV larger than the FOV shown to a user). 302 illustrates performing an analysis of a user, as discussed in Figure 2, to determine where the user is looking in the scene. 303 illustrates optimizing the display based on the analysis in 302. In some embodiments, a feature (e.g., position, size, shape, orientation, color, brightness, texture, classification by AI algorithm) of a physical object determines a feature (e.g., position, size, shape, orientation, color, brightness, texture) of a virtual object. For example, a user is using a mixed reality display in a room in a house wherein some of the areas in the room (e.g., a window during the daytime) are bright and some of the areas in the room are dark (e.g., a dark blue wall). In some embodiments, the position of placement of virtual objects is based on the location of objects within the room. For example, a virtual object could be colored white if the background is a dark blue wall, so that it stands out. For example, a virtual object could be colored blue if the background is a white wall, so that it stands out. For example, a virtual object could be positioned (or re-positioned) such that its background allows the virtual object to be displayed in a fashion that optimizes the viewing experience for a user.
[0077] Figure 4 illustrates a pre-recorded stereo viewing performed by user 1. 400 illustrates user 1 performing a stereo recording using a stereo camera system (e.g., smart phone, etc.). This is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety. 401 illustrates the stereo recording being stored on a memory device. 402 illustrates a user (e.g., User 1 or other user(s)) retrieving the stored stereo recording. Note that the stereo recording may be transmitted to the other user(s) and the other user(s) would receive the stored stereo recording. 403 illustrates a user (e.g., User 1 or other user(s)) viewing the stored stereo recording on a stereo display unit (e.g., augmented reality, mixed reality, virtual reality display).
[0078] Figure 5 illustrates performing long range stereoscopic imaging of a distant object using stereoscopic camera clusters. 500 illustrates positioning two camera clusters at least 50 feet apart. 501 illustrates selecting a target at least 1 mile away. 502 illustrates precisely aiming each camera cluster such that the centerline of focus intersects at the target. 503 illustrates acquiring stereoscopic imagery of the target. 504 illustrates viewing and/or analyzing the acquired stereoscopic imagery. Some embodiments use cameras with telephoto lenses rather than camera clusters. Also, some embodiments have a stereo separation of less than or equal to 50 feet, which is optimized for viewing targets less than 1 mile away.
[0079] Figure 6 illustrates a post-acquisition capability to adjust the images, based on user eye tracking, into the best possible picture by generating a stereoscopic composite image. The stereoscopic images displayed at this time point have several objects that might be of interest to a person observing the scene. Thus, at each time point, a stereoscopic composite image will be generated to match at least one user's input. For example, if a user is viewing (eye tracking determines viewing location) the mountains 600 or cloud 601 at a first time point, then the stereoscopic composite image pair delivered to a HDU would be generated such that the distant objects of the mountains 600 or cloud 601 were in focus and the nearby objects including the deer 603 and the flower 602 were out of focus. If the user was viewing (eye tracking determines viewing location) the deer 603, then the stereoscopic composite images presented at this frame would be optimized for medium range. Finally, if a user is viewing (eye tracking determines viewing location) the nearby flower 602, then the stereoscopic composite images would be optimized for closer range (e.g., implement convergence, and blur out more distant items, such as the deer 603, the mountains 600 and the cloud 601). A variety of user inputs could be used to indicate to a software suite how to optimize the stereoscopic composite images. Gestures such as a squint could be used to optimize the stereoscopic composite image for more distant objects. Gestures such as leaning forward could be used to zoom in on a distant object. A GUI could also be used to improve the immersive viewing experience.
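A minimal sketch of the gaze-driven selection just described follows. It assumes the three focus versions of the frame pair have already been generated, and the depth thresholds are illustrative rather than values from this disclosure.

def pick_focus_version(gaze_depth_m, versions):
    """Choose which pre-generated focus version of the current stereoscopic
    frame pair to present, based on where the viewer's eyes are converged
    (flower = near, deer = medium, mountains/cloud = far)."""
    if gaze_depth_m < 3.0:
        return versions["near"]      # nearby flower in focus, distant items blurred
    if gaze_depth_m < 50.0:
        return versions["medium"]    # deer in focus
    return versions["far"]           # mountains and cloud in focus

# Example usage with placeholder frame identifiers
frames = {"near": "frame_near", "medium": "frame_mid", "far": "frame_far"}
print(pick_focus_version(gaze_depth_m=120.0, versions=frames))  # -> frame_far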
[0080] Figure 7A illustrates an image with motion and the application of image stabilization processing. 700A illustrates a left eye image of an object wherein there is motion blurring the edges of the object. 701A illustrates a left eye image of an object wherein image stabilization processing has been applied.
[0081] Figure 7B illustrates an image with motion displayed in a HDU. 702 illustrates the HDU. 700A illustrates a left eye image of an object wherein there is motion blurring the edges of the object. 700B illustrates a right eye image of an object wherein there is motion blurring the edges of the object. 701A illustrates a left eye display, which is aligned with a left eye of a user. 701B illustrates a right eye display, which is aligned with a right eye of a user.
[0082] Figure 7C illustrates image stabilization applied to the image using stereoscopic imagery. A key task of image processing is image stabilization using stereoscopic imagery. 700A illustrates a left eye image of an object wherein image stabilization processing has been applied. 700B illustrates a right eye image of an object wherein image stabilization processing has been applied. 701A illustrates a left eye display, which is aligned with a left eye of a user. 701B illustrates a right eye display, which is aligned with a right eye of a user. 702 illustrates the HDU.
[0083] Figure 8A illustrates a left image and a right image with a first camera setting. Note that the text on the monitor is in focus and the distant object of the knob on the cabinet is out of focus.
[0084] Figure 8B illustrates a left image and a right image with a second camera setting. Note that the text on the monitor is out of focus and the distant object of the knob on the cabinet is in focus. A point of novelty is using at least two cameras. A first image from a first camera is obtained. A second image from a second camera is obtained. The first camera and the second camera have the same viewing perspective. Also, they are of the same scene (e.g., a still scene or the same time point of a scene with movement/changes). A composite image is generated wherein a first portion of the composite image is obtained from the first image and a second portion of the composite image is obtained from the second image. Note that in some embodiments, an object within the first image can be segmented and the same object within the second image can also be segmented. The first image of the object and the second image of the object can be compared to see which one has better quality. The image with better image quality can be added to the composite image. In some embodiments, however, deliberately selecting some portions to be not clear can be performed. A minimal sketch of such a composite appears after the description of Figure 9A below.
[0085] Figure 9A illustrates a top down view of all data gathered of a scene at a time point.
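Referring back to the composite-image generation described for Figures 8A and 8B, the following is a minimal tile-based sketch that fuses two same-perspective, same-moment exposures by keeping whichever exposure is locally sharper. It is a generic focus-stacking illustration under those assumptions, not the specific implementation of this disclosure.

import numpy as np

def composite_by_focus(img_a, img_b, tile=16):
    """Fuse two grayscale exposures of the same scene taken from the same
    perspective with different focus settings. For each tile, keep the image
    whose tile has the higher gradient energy (i.e., is sharper there)."""
    assert img_a.shape == img_b.shape
    out = np.empty_like(img_a)
    h, w = img_a.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            a = img_a[y:y + tile, x:x + tile].astype(float)
            b = img_b[y:y + tile, x:x + tile].astype(float)
            # gradient energy as a simple sharpness score
            score_a = np.abs(np.diff(a, axis=0)).sum() + np.abs(np.diff(a, axis=1)).sum()
            score_b = np.abs(np.diff(b, axis=0)).sum() + np.abs(np.diff(b, axis=1)).sum()
            src = img_a if score_a >= score_b else img_b
            out[y:y + tile, x:x + tile] = src[y:y + tile, x:x + tile]
    return out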
[0086] Figure 9B illustrates a displayed wide angle 2D image frame of the video recording. Note that displaying this whole field of view to a user would result in distortion, given the mismatch between the user's intrinsic FOV (human eye FOV) and the camera system FOV.
[0087] Figure 9C illustrates a top down view of User A's viewing angle of -70° and 55° FOV. A key point of novelty is the user's ability to select the portion of the stereoscopic imagery with the viewing angle. Note that the selected portion could realistically be up to approximately 180°, but not more.
[0088] Figure 9D illustrates what User A would see given User A's viewing angle of -70° and 55° FOV. This improves over the prior art because it allows different viewers to see different portions of the field of view. While a human has a horizontal field of view of slightly more than 180 degrees, a human can only read text over approximately 10 degrees of the field of view, can only assess shape over approximately 30 degrees of the field of view and can only assess colors over approximately 60 degrees of the field of view. In some embodiments, filtering (subtracting) is performed. A human has a vertical field of view of approximately 120 degrees, with an upward (above the horizontal) field of view of 50 degrees and a downward (below the horizontal) field of view of approximately 70 degrees. Maximum eye rotation, however, is limited to approximately 25 degrees above the horizontal and approximately 30 degrees below the horizontal. Typically, the normal line of sight from the seated position is approximately 15 degrees below the horizontal.
[0089] Figure 9E illustrates a top down view of User B's viewing angle of +50° and 85° FOV. A key point of novelty is the user's ability to select the portion of the stereoscopic imagery with the viewing angle. Also, note that the FOV of User B is larger than the FOV of User A. Note that the selected portion could realistically be up to approximately 180°, but not more, because of the limitations of the human eye.
[0090] Figure 9F illustrates what User B would see given User B's viewing angle of +50° and 85° FOV. This improves over the prior art because it allows different viewers to see different portions of the field of view. In some embodiments, multiple cameras are recording for a 240° film. In one embodiment, 4 cameras (each with a 60° sector) are used for simultaneous recording. In another embodiment, the sectors are filmed sequentially - one at a time. Some scenes of a film could be filmed sequentially and other scenes could be filmed simultaneously. In some embodiments, a camera set up could be used with overlap for image stitching. Some embodiments comprise using a camera ball system described in US Patent Application 17/225,610, which is incorporated by reference in its entirety. After the imagery is recorded, imagery from the cameras is edited to sync the scenes and stitch them together. LIDAR devices can be integrated into the camera systems for precise camera direction pointing.
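The following sketch shows one way to cut a user-specific window out of a stitched wide-angle frame, matching the User A / User B examples of Figures 9C through 9F. It assumes a simple equirectangular-style frame whose columns span the recorded horizontal FOV centered on 0°; the sign convention and array layout are assumptions for the example only.

import numpy as np

def crop_fov(panorama, view_angle_deg, fov_deg, pano_fov_deg=240.0):
    """Extract a horizontal window from a wide-angle frame.

    panorama: 2-D (or 3-D) array whose columns span pano_fov_deg centered on 0 degrees.
    view_angle_deg: the user's viewing angle (e.g., -70 for User A).
    fov_deg: the width of the user's FOV (e.g., 55 for User A, 85 for User B).
    """
    width_px = panorama.shape[1]
    deg_per_px = pano_fov_deg / width_px
    center_px = int(round((view_angle_deg + pano_fov_deg / 2.0) / deg_per_px))
    half_px = int(round((fov_deg / 2.0) / deg_per_px))
    left = max(0, center_px - half_px)
    right = min(width_px, center_px + half_px)
    return panorama[:, left:right]

# Example: User A (-70 deg, 55 deg FOV) and User B (+50 deg, 85 deg FOV) on one 240 deg frame
frame = np.zeros((1080, 3840))
view_a = crop_fov(frame, -70.0, 55.0)
view_b = crop_fov(frame, +50.0, 85.0)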
[0091] Figure 10A illustrates the field of view captured at a first time point by the left camera. The left camera 1000 and right camera 1001 are shown. The left FOV 1002 is shown by the white region and is approximately 215° and would have an α ranging from +90° to -135° (sweeping from +90° to -135° in a counterclockwise direction). The area not imaged within the left FOV 1003 would be approximately 135° and would have an α ranging from +90° to -135° (sweeping from +90° to -135° in a clockwise direction).
[0092] Figure 10B illustrates the field of view captured at a first time point by the right camera. The left camera 1000 and right camera 1001 are shown. The right FOV 1004 is shown by the white region and is approximately 215° and would have an α ranging from +135° to -90° (sweeping from +135° to -90° in a counterclockwise direction). The area not imaged within the right FOV 1005 would be approximately 135° and would have an α ranging from +135° to -90° (sweeping from +135° to -90° in a clockwise direction).
[0093] Figure 10C illustrates a first user's personalized field of view (FOV) at a given time point. 1000 illustrates the left camera. 1001 illustrates the right camera. 1006a illustrates the left boundary of the left eye FOV for the first user, which is shown in light gray. 1007a illustrates the right side boundary of the left eye FOV for the first user, which is shown in light gray. 1008a illustrates the left boundary of the right eye FOV for the first user, which is shown in light gray. 1009a illustrates the right side boundary of the right eye FOV for the first user, which is shown in light gray. 1010a illustrates the center line of the left eye FOV for the first user. 1011a illustrates the center line of the right eye FOV for the first user. Note that the center line of the left eye FOV 1010a for the first user and the center line of the right eye FOV 1011a for the first user are parallel, which is equivalent to a convergence point at infinity. Note that the first user is looking in the forward direction. It is suggested that during filming of a movie, most of the action in the scene occur in this forward looking direction.
[0094] Figure 10D illustrates a second user's personalized field of view (FOV) at a given time point. 1000 illustrates the left camera. 1001 illustrates the right camera. 1006b illustrates the left boundary of the left eye FOV for the second user, which is shown in light gray. 1007b illustrates the right side boundary of the left eye FOV for the second user, which is shown in light gray. 1008b illustrates the left boundary of the right eye FOV for the second user, which is shown in light gray. 1009b illustrates the right side boundary of the right eye FOV for the second user, which is shown in light gray. 1010b illustrates the center line of the left eye FOV for the second user. 1011b illustrates the center line of the right eye FOV for the second user. Note that the center line of the left eye FOV 1010b for the second user and the center line of the right eye FOV 1011b for the second user meet at a convergence point 1012. This allows the second user to view a small object with greater detail. Note that the second user is looking in the forward direction. It is suggested that during filming of a movie, most of the action in the scene occur in this forward looking direction.
[0095] Figure 10E illustrates a third user's personalized field of view (FOV) at a given time point. 1000 illustrates the left camera. 1001 illustrates the right camera. 1006c illustrates the left boundary of the left eye FOV for the third user, which is shown in light gray. 1007c illustrates the right side boundary of the left eye FOV for the third user, which is shown in light gray. 1008c illustrates the left boundary of the right eye FOV for the third user, which is shown in light gray. 1009c illustrates the right side boundary of the right eye FOV for the third user, which is shown in light gray. 1010c illustrates the center line of the left eye FOV for the third user. 1011c illustrates the center line of the right eye FOV for the third user. Note that the center line of the left eye FOV 1010c for the third user and the center line of the right eye FOV 1011c for the third user are approximately parallel, which is equivalent to looking at a very far distance. Note that the third user is looking in a moderately leftward direction. Note that the overlap of the left eye FOV and right eye FOV provides stereoscopic viewing to the third viewer.
[0096] Figure 10F illustrates a fourth user's personalized field of view (FOV) at a given time point. 1000 illustrates the left camera. 1001 illustrates the right camera. 1006d illustrates the left boundary of the left eye FOV for the fourth user, which is shown in light gray. 1007d illustrates the right side boundary of the left eye FOV for the fourth user, which is shown in light gray. 1008d illustrates the left boundary of the right eye FOV for the fourth user, which is shown in light gray. 1009d illustrates the right side boundary of the right eye FOV for the fourth user, which is shown in light gray. 1010d illustrates the center line of the left eye FOV for the fourth user. 1011d illustrates the center line of the right eye FOV for the fourth user. Note that the center line of the left eye FOV 1010d for the fourth user and the center line of the right eye FOV 1011d for the fourth user are approximately parallel, which is equivalent to looking at a very far distance. Note that the fourth user is looking in a far leftward direction. Note that the first user, second user, third user and fourth user are all seeing different views of the movie at the same time point. It should be noted that some of the designs, such as the camera cluster or ball system described in US Patent Application 17/225,610, could be used for this purpose.
[0097] Figure 11A illustrates a top down view of the first user's left eye view at time point 1. 1100 illustrates the left eye view point. 1101 illustrates the right eye viewpoint. 1102 illustrates the portion of the field of view (FOV) not covered by either camera. 1103 illustrates the portion of the FOV that is covered by at least one camera. 1104A illustrates a medial portion of a high resolution FOV used by a user, which corresponds to α = +25°. This is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety.
[0098] 1105A illustrates a lateral portion of a high resolution FOV used by a user, which corresponds to α = -25°.
[0099] Figure 11B illustrates a top down view of the first user's left eye view wherein a convergence point is in close proximity to the left eye and right eye. 1100 illustrates the left eye view point.
[00100] 1101 illustrates the right eye viewpoint. 1102 illustrates the portion of the field of view (FOV) not covered by either camera. 1103 illustrates the portion of the FOV that is covered by at least one camera. 1104B illustrates a medial portion of a high resolution FOV used by a user, which corresponds to α = -5°. 1105B illustrates a lateral portion of a high resolution FOV used by a user, which corresponds to α = +45°.
[00101] Figure 11C illustrates a left eye view at time point 1 without convergence. Note that a flower 1106 is shown in the image, which is located along the viewing angle α = 0°.
[00102] Figure 11D illustrates a left eye view at time point 2 with convergence. Note that a flower 1106 is shown in the image, which is still located along the viewing angle α = 0°. However, the user has converged during this time point. This act of convergence causes the left eye field of view to be altered from a horizontal field of view with α ranging between -25° and +25° (as shown in Figures 11A and 11C) to α ranging between -5° and +45° (as shown in Figures 11B and 11D). This system improves upon the prior art because it provides stereoscopic convergence on stereoscopic cameras by shifting the images according to the left (and right) fields of view. In some embodiments, a portion of the display is non-optimized, which is described in US Patent 10,712,837, which is incorporated by reference in its entirety.
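A minimal numeric sketch of the convergence-driven FOV shift described above follows; the 20° shift simply reproduces the -25°/+25° to -5°/+45° example of Figures 11A through 11D and is not a general value, and the right-eye symmetry is an assumption.

def left_eye_fov_window(converged, half_fov_deg=25.0, shift_deg=20.0):
    """Horizontal window (alpha_min, alpha_max) of the left eye FOV.

    Without convergence the window is symmetric (about -25 to +25 degrees, as
    in Figures 11A/11C); with convergence it shifts (about -5 to +45 degrees,
    as in Figures 11B/11D)."""
    if converged:
        return (-half_fov_deg + shift_deg, half_fov_deg + shift_deg)
    return (-half_fov_deg, half_fov_deg)

def right_eye_fov_window(converged, half_fov_deg=25.0, shift_deg=20.0):
    """Mirror image of the left eye window (an assumption of symmetry)."""
    lo, hi = left_eye_fov_window(converged, half_fov_deg, shift_deg)
    return (-hi, -lo)

print(left_eye_fov_window(False))  # (-25.0, 25.0)
print(left_eye_fov_window(True))   # (-5.0, 45.0)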
[00103] Figure 12 illustrates the reconstruction of various stereoscopic images from previously acquired wide angle stereo images. 1200 illustrates acquiring imagery from a stereoscopic camera system. This camera system is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety. 1201 illustrates wherein a first camera for a left eye viewing perspective and a second camera for a right eye viewing perspective are utilized. 1202 illustrates selecting the field of view of the first camera based on the left eye look angle and the field of view for the second camera based on the right eye look angle. In the preferred embodiment, the selection would be performed by a computer (e.g., integrated into a head display unit) based on an eye tracking system tracking eye movements of a user. It should also be noted that in the preferred embodiment, there would also be an image shift inward on the display closer to the nose during convergence, which is taught in US Patent 10,712,837, especially Figures 15A, 15B, 16A, and 16B, which is incorporated by reference in its entirety. 1203 illustrates presenting the left eye field of view to a left eye of a user and the right eye field of view to a right eye of a user. There are a variety of options at this juncture. First, use a composite stereoscopic image pair wherein the left eye image is generated from at least two lenses (e.g., a first optimized for close up imaging and a second optimized for far away imaging) and wherein the right eye image is generated from at least two lenses (e.g., a first optimized for close up imaging and a second optimized for far away imaging). When the user is looking at a nearby object, present a stereoscopic image pair with the nearby object in focus and distant objects out of focus. When the user is looking at a distant object, present a stereoscopic image pair with the nearby object out of focus and the distant object in focus. Second, use a variety of display devices (e.g., Augmented Reality, Virtual Reality, Mixed Reality displays).
[00104] Figure 13A illustrates a top down view of a home theater. 1300 illustrates the user. 1301 illustrates the projector. 1302 illustrates the screen. Note that this immersive home theater displays a field of view larger than the user's 1300 field of view. For example, if a user 1300 was looking straight forward, the home theater would display a horizontal FOV of greater than 180 degrees. Thus, the home theater's FOV would completely cover the user's horizontal FOV. Similarly, if the user was looking straight forward, the home theater would display a vertical FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the user's vertical FOV. An AR / VR / MR headset could be used in conjunction with this system, but would not be required. Cheap anaglyph or disposable color glasses could also be used. A conventional IMAX polarized projector could be utilized with IMAX-type polarized disposable glasses. The size of the home theater could vary. The home theater walls could be built with white, reflective panels and framing. The projector would have multiple heads to cover the larger field of view.
[00105] Figure 13B illustrates a side view of the home theater as shown in Figure 13A. 1300 illustrates the user. 1301 illustrates the projector. 1302 illustrates the screen. Note that this immersive home theater displays a field of view larger than the user's 1300 field of view. For example, if the user 1300 was looking forward while on a recliner, the home theater would display a vertical FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the user's FOV. Similarly, if the user was looking straight forward, the home theater would display a horizontal FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the user's FOV.
[00106] Figure 14A illustrates a top down view of a home theater. 1400A illustrates a first user. 1400B illustrates a second user. 1401 illustrates the projector. 1402 illustrates the screen. Note that this immersive home theater displays a field of view larger than the FOV of the first user 1400A or the second user 1400B. For example, if the first user 1400A was looking straight forward, the first user 1400A would see a horizontal FOV of greater than 180 degrees. Thus, the home theater's FOV would completely cover the user's horizontal FOV. Similarly, if the first user 1400A was looking straight forward, the home theater would display a vertical FOV of greater than 120 degrees, as shown in Figure 14B. Thus, the home theater's FOV would completely cover the user's vertical FOV. An AR / VR / MR headset could be used in conjunction with this system, but would not be required. Cheap anaglyph or polarized glasses could also be used. A conventional IMAX polarized projector could be utilized with IMAX-type polarized disposable glasses. The size of the home theater could vary. The home theater walls could be built with white, reflective panels and framing. The projector would have multiple heads to cover the larger field of view.
[00107] Figure 14B illustrates a side view of the home theater as shown in Figure 14A. 1400A illustrates the first user. 1401 illustrates the projector. 1402 illustrates the screen. Note that this immersive home theater displays a field of view larger than the first user's 1400A field of view. For example, if the first user 1400A was looking forward while on a recliner, the user would see a vertical FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the FOV of the first user 1400A. Similarly, if the first user 1400A was looking straight forward, the home theater would display a horizontal FOV of greater than 120 degrees. Thus, the home theater's FOV would completely cover the FOV of the first user 1400A.
[00108] A typical high resolution display has 4000 pixels across a 1.37 m width, which is equivalent to roughly 16 x 10⁶ pixels per 1.87 m² of screen area. Consider the data for a sub-spherical theater and, as an upper bound, assume a full sphere with a radius of 2 meters. The surface area of a sphere is 4 x π x r², which is equal to (4)(3.14)(2²) or 50.24 m². Assuming a spatial resolution equal to that of a typical high resolution display, this would equal (50.24 m²)(16 x 10⁶ pixels per 1.87 m²), or approximately 429 million pixels per frame, at a frame rate of, for example, 60 frames per second. This is roughly 26 times the pixel count of the high resolution display described above.
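The pixel-count estimate above can be reproduced with a few lines of arithmetic; the sketch below uses only the figures given in the text (4000 pixels across 1.37 m, a 2 m radius, and 60 frames per second) and a full-sphere surface area as the upper bound.

import math

px_across = 4000
panel_width_m = 1.37
px_per_m2 = (px_across / panel_width_m) ** 2      # ~8.5e6 pixels per m^2, i.e., ~16e6 per 1.87 m^2

radius_m = 2.0
screen_area_m2 = 4 * math.pi * radius_m ** 2      # full-sphere upper bound, ~50.3 m^2

pixels_per_frame = screen_area_m2 * px_per_m2     # ~4.3e8 pixels, on the order of the 429 million quoted above
frames_per_second = 60
pixels_per_second = pixels_per_frame * frames_per_second

print(f"{pixels_per_frame / 1e6:.0f} million pixels per frame")
print(f"{pixels_per_second / 1e9:.1f} billion pixels per second at {frames_per_second} fps")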
[00109] Some embodiments comprise constructing a home theater to match the geometry of the projector. The preferred embodiment is sub-spherical (e.g., hemispherical). A low cost construction would be the use of reflective surfaces stitched together with a multi-head projector. In some embodiments, a field of view comprises spherical coverage with 4π steradians. This can be accomplished via a HDU. In some embodiments, a field of view comprises sub-spherical coverage with at least 3π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 2π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 1π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.5π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.25π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.05π steradians. In some embodiments, a sub-spherical IMAX system is created for an improved movie theater experience with many viewers. The chairs would be positioned in a similar position as in standard movie theaters, but the screen would be sub-spherical. In some embodiments, non-spherical shapes could also be used.
[00110] Figure 15A illustrates time point #1, wherein a user looking straight ahead sees a field of view of approximately 60 degrees horizontal and 40 degrees vertical with reasonable precision (e.g., the user can see shapes and colors in the peripheral FOV).
[00111] Figure 15B shows the center portion of the TV and the field of view being observed by the user at time point #1. Note that in some embodiments, data would be streamed (e.g., via the internet). Note that a novel feature of this patent is called “viewing-parameter directed streaming”. In this embodiment, a viewing parameter is used to direct the data streamed. For example, if the user 1500 were looking straight forward, then a first set of data would be streamed to correspond with the straight forward viewing angle of the user 1500. If, however, the user were looking to the side of the screen, a second set of data would be streamed to correspond with the looking-to-the-side viewing angle of the user 1500. Other viewing parameters that could control viewing angles include, but are not limited to, the following: the user's vergence; the user's head position; the user's head orientation. In a broad sense, any feature (age, gender, preference) or action of a user (viewing angle, position, etc.) could be used to direct streaming. Note that another novel feature is the streaming of at least two image qualities. For example, a first image quality (e.g., high quality) would be streamed in accordance with a first parameter (e.g., within the user's 30° horizontal FOV and 30° vertical FOV). And, a second image quality (e.g., lower quality) would also be streamed for data that did not meet this criterion (e.g., not within the user's 30° horizontal FOV and 30° vertical FOV). Surround sound would be implemented in this system.
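The two-quality, viewing-parameter directed streaming described above can be sketched as a tile-selection step. The tile abstraction, the grid and the 30°-window size below are illustrative assumptions, not part of this disclosure.

def tiles_to_stream(gaze_yaw_deg, gaze_pitch_deg, tile_centers, hi_half_fov_deg=15.0):
    """Decide which tiles of the wide frame to request at which quality.

    tile_centers is a list of (yaw_deg, pitch_deg) tile centers covering the
    screen. Tiles whose center falls inside a 30 x 30 degree window around the
    gaze direction are requested at high quality; the rest at low quality."""
    requests = []
    for (yaw, pitch) in tile_centers:
        inside = (abs(yaw - gaze_yaw_deg) <= hi_half_fov_deg and
                  abs(pitch - gaze_pitch_deg) <= hi_half_fov_deg)
        requests.append(((yaw, pitch), "high" if inside else "low"))
    return requests

# Example: a coarse grid of tile centers on a 210 x 90 degree screen
grid = [(yaw, pitch) for yaw in range(-90, 91, 30) for pitch in range(-30, 31, 30)]
print(tiles_to_stream(0.0, 0.0, grid)[:3])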
[00112] Figure 15C illustrates time point #2, wherein a user looking to the user's left side of the screen sees a field of view of approximately 60 degrees horizontal and 40 degrees vertical with reasonable precision (e.g., the user can see shapes and colors in the peripheral FOV).
[00113] Figure 15D illustrates the field of view being observed by the user at time point #2, which is different as compared to Figure 15B. The area of interest is half that of time point #1. In some embodiments, greater detail and higher resolution of objects within a small FOV within the scene is provided to the user. Outside of this high resolution field of view zone, a lower resolution image quality could be presented on the screen.
[00114] Figure 15E illustrates time point #3, wherein a user is looking to the user's right side of the screen.
[00115] Figure 15F illustrates time point #3, wherein the user sees a circularly shaped high-resolution FOV.
[00116] Figure 16A illustrates an un-zoomed image. 1600 illustrates the image. 1601A illustrates a box denoting the area within image 1600 that is set to be zoomed in on.
[00117] Figure 16B illustrates a digital-type zooming in on a portion of an image. This can be accomplished via methods described in US Patent 8,384,771 (e.g., 1 pixel turns into 4), which is incorporated by reference in its entirety. Note that selection of the area to be zoomed in on can be accomplished through a variety of user inputs including: gesture tracking systems; eye tracking systems; and graphical user interfaces (GUIs). Note that the area within the image 1601A that was denoted in Figure 16A is now zoomed in on, as shown in 1601B. Note that the resolution of region 1601B is equal to that of image 1600, but just larger. Note that 1600B illustrates portions of 1600A, which are not enlarged. Note that 1601A is now enlarged and note that portions of 1600A are not visualized.
[00118] Figure 17A illustrates an un-zoomed image. 1700 illustrates the image. 1701A illustrates a box denoting the area within image 1700 that is set to be zoomed in on.
[00119] Figure 17B illustrates the optical-type zooming in on a portion of an image. Note that selection of the area to be zoomed in on can be accomplished through a variety of user inputs including: gesture tracking systems; eye tracking systems; and graphical user interfaces (GUIs). Note that the area within the image 1701A that was denoted in Figure 17A is now zoomed in on, as shown in 1701B, and also note that the image inside of 1701B appears of higher image quality. This can be done by selectively displaying the maximum quality imagery in region 1701B and enlarging region 1701B. Not only is the cloud bigger, the resolution of the cloud is also better. Note that 1700B illustrates portions of 1700A, which are not enlarged. (Note that some of the portions of 1700A, which are not enlarged, are now covered by the zoomed in region.)
[00120] Figure 18A illustrates a single resolution image. 1800A illustrates the image. 1801A illustrates a box denoting the area within image 1800A that is set to have its resolution improved.
[00121] Figure 18B illustrates a multi-resolution image. Note that the area where resolution is improved can be selected through a variety of user inputs including: gesture tracking systems; eye tracking systems; and graphical user interfaces (GUIs), to include a joystick or controller. Note that the area within the image 1801A that was denoted in Figure 18A is now displayed with higher resolution, as shown in 1801B. In some embodiments, the image inside of 1801B can be changed in other ways as well (e.g., different color scheme, different brightness settings, etc.). This can be done by selectively displaying a higher (e.g., maximum) quality imagery in region 1801B without enlarging region 1801B.
[00122] Figure 19A illustrates a large field of view wherein a first user is looking at a first portion of the image and a second user is looking at a second portion of the image. 1900A is the large field of view, which is of a first resolution. 1900B is the location where a first user is looking, which is set to become high resolution, as shown in Figure 19B. 1900C is the location where a second user is looking, which is set to become high resolution, as shown in Figure 19B.
[00123] Figure 19B illustrates that only the first portion of the image in Figure 19A and the second portion of the image in Figure 19A are high resolution and the remainder of the image is low resolution. 1900A is the large field of view, which is of a first resolution (low resolution). 1900B is the location of the high resolution zone of a first user, which is of a second resolution (high resolution in this example). 1900C is the location of the high resolution zone of a second user, which is of a second resolution (high resolution in this example). Thus, a first high resolution zone can be used for a first user. And, a second high resolution zone can be used for a second user. This system could be useful for the home theater display as shown in Figures 14A and 14B.
[00124] Figure 20A illustrates a low resolution image.
[00125] Figure 20B illustrates a high resolution image.
[00126] Figure 20C illustrates a composite image. Note that this composite image has a first portion 2000 that is of low resolution and a second portion 2001 that is of high resolution. This was described in US Patent Application 16/893,291, which is incorporated by reference in its entirety. The first portion is determined by the user's viewing parameter (e.g., viewing angle). A point of novelty is near-real time streaming of the first portion 2000 with the first image quality and the second portion with the second image quality. Note that the first portion could be displayed differently from the second portion. For example, the first portion and second portion could differ in visual presentation parameters including: brightness; color scheme; or others. Thus, in some embodiments, a first portion of the image can be compressed and a second portion of the image is not compressed. In other embodiments, a composite image is generated by arranging some high resolution images and some low resolution images stitched together for display to a user. In some embodiments, some portions of a large (e.g., 429 million pixel) image are high resolution and some portions of the large image are low resolution. The portions of the large image that are high resolution will be streamed in accordance with the user's viewing parameters (e.g., convergence point, viewing angle, head angle, etc.).
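A minimal sketch of assembling such a composite on the display side follows: a high-detail patch, positioned by the viewing parameter, is pasted into a lower-quality base frame. The array shapes and the assumption that both layers are already at display scale are illustrative only; codec-level details are omitted.

import numpy as np

def foveated_composite(base_frame, high_detail_patch, top_left):
    """Paste a high-detail patch into a lower-quality base frame at
    top_left = (row, col), producing a Figure 20C style composite."""
    frame = base_frame.copy()
    r, c = top_left
    h, w = high_detail_patch.shape[:2]
    frame[r:r + h, c:c + w] = high_detail_patch
    return frame

# Example: a 270 x 480 base frame with a 128 x 128 high-detail patch where the user looks
base = np.zeros((270, 480), dtype=np.uint8)
patch = np.full((128, 128), 255, dtype=np.uint8)
composite = foveated_composite(base, patch, top_left=(60, 200))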
[00127] Figure 21 illustrates a method and a process for performing near-real-time streaming of customized images.
[00128] With respect to the display 2100, the displays include, but are not limited to, the following: a large TV; an extended reality display (e.g., an Augmented Reality, Virtual Reality, or Mixed Reality display); a projector system on a screen; a computer monitor; or the like. A key component of the display is the ability to track where in the image a user is looking and what the viewing parameters are.
[00129] With respect to the viewing parameters 2101, the viewing parameters include, but are not limited to the following: viewing angle; vergence/convergence; user preferences (e.g., objects of particular interest, filtering - some objects rated “R” can be filtered for a particular user, etc.).
[00130] With respect to the cloud 2102, each frame in the movie or video would be of extremely large data (especially if the home theater shown in Figures 14A and 14B is used in combination with the camera cluster as described in US Patent Application 17/225,610, which is incorporated by reference in its entirety). Note that the cloud refers to storage, databases, etc. Note that the cloud is capable of cloud computing. A point of novelty in this patent is the sending of the viewing parameters of user(s) to the cloud, processing of the viewing parameters in the cloud (e.g., selecting a field of view or composite stereoscopic image pair as discussed in Figure 12) and determining which portions of the extremely large data to stream to optimize the individual user's experience. For example, multiple users could have their movie synchronized. Each would stream 2103 from the cloud their individually optimized data for that particular time point onto their mobile device. And, each would then view their individually optimized data on their device. This would result in an improved immersive viewing experience. For example, suppose at a single time point, there was a dinner scene with a chandelier, a dog, an old man, a book case, a long table, a carpet and wall art. A user named Dave could be looking at the dog and Dave's images would be optimized (e.g., images with maximum resolution and optimized color of the dog are streamed to Dave's mobile device and displayed on Dave's HDU). A user named Kathy could be looking at the chandelier and Kathy's images would be optimized (e.g., images with maximum resolution and optimized color of the chandelier are streamed to Kathy's mobile device and displayed on Kathy's HDU). Finally, a user named Bob could be looking at the old man and Bob's images would be optimized (e.g., images with maximum resolution and optimized color of the old man are streamed to Bob's mobile device and displayed on Bob's HDU). It should be noted that the cloud would store a tremendous dataset at each time point, but only portions of it would be streamed, and those portions are determined by the user's viewing parameters and/or preferences. So, the book case, long table, carpet and wall art may all be within the field of view for Dave, Kathy and Bob, but these objects would not be optimized for display (e.g., the highest possible resolution of these images stored in the cloud would not be streamed).
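A cloud-side sketch of this per-user selection is shown below. The archive and stream interfaces, the parameter names, and the region keys are hypothetical placeholders; this disclosure does not specify these interfaces.

def serve_frame(time_point, users, archive, stream):
    """For one synchronized time point, send each user only the portion of the
    stored frame selected by that user's viewing parameters (e.g., the dog for
    Dave, the chandelier for Kathy, the old man for Bob)."""
    for user in users:
        params = user["viewing_parameters"]        # e.g., viewing angle, convergence, preferences
        # Region of interest at maximum stored quality ...
        roi_hi = archive.fetch(time_point, region=params["viewing_angle"], quality="max")
        # ... and the remainder of the user's FOV at reduced quality.
        rest_lo = archive.fetch(time_point, region="remaining_fov", quality="low")
        stream(user["device"], time_point, roi_hi, rest_lo)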
[00131] Finally, the concept of pre-emptive streaming is introduced. If it is predicted that an upcoming scene may cause a specific user viewing parameter to change (e.g., a user head turn), then pre-emptive streaming of those additional image frames can be performed. For example, suppose the time of a movie is at 1:43:05 and a dinosaur is going to make a noise and pop out from the left side of the screen at 1:43:30. The whole scene could be downloaded in a low resolution format and additional sets of data of selective portions of the FOV could be downloaded as needed (e.g., based on the user's viewing parameter, or based on the upcoming dinosaur scene where the user is predicted to look). Thus, the dinosaur popping out will always be in its maximum resolution. Such a technique creates a more immersive and improved viewing experience.
[00132] Figure 22A illustrates using resection in conjunction with stereoscopic cameras. Camera #1 has a known location (e.g., latitude and longitude from a GPS). From Camera #1, a range (2 miles) and direction (330°, north-northwest) to an object 2200 are known. The location of the object 2200 can be computed. Camera #2 has an unknown location, but the range (1 mile) and direction (30°, north-northeast) to the object 2200 are known. Since the object 2200's location can be computed, the geometry can be solved and the location of camera #2 determined.
[00133] Figure 22B illustrates using resection in conjunction with stereoscopic cameras wherein an object location is unknown. Camera #1 and Camera #2 have known locations (e.g., latitude and longitude from a GPS). From Camera #1, a direction (330°, north-northwest) to an object 2200B is known. From Camera #2, a direction (30°, north-northeast) to the object 2200B is known. The location of the object 2200B can be computed.
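The geometry of Figure 22A reduces to simple vector arithmetic on a flat-earth approximation, as sketched below using the ranges and bearings given in the text; an east/north coordinate frame in miles and placing camera #1 at the origin are assumptions for the example.

import math

def bearing_to_unit(bearing_deg):
    """Compass bearing (degrees clockwise from north) to an (east, north) unit vector."""
    rad = math.radians(bearing_deg)
    return (math.sin(rad), math.cos(rad))

def locate_object(camera_en, bearing_deg, range_miles):
    """First step of Figure 22A: known camera position plus range and bearing gives the object position."""
    de, dn = bearing_to_unit(bearing_deg)
    e, n = camera_en
    return (e + range_miles * de, n + range_miles * dn)

def locate_camera_from_object(object_en, bearing_deg, range_miles):
    """Second step of Figure 22A: camera #2's position is the object position minus
    camera #2's own range/bearing displacement to the object."""
    de, dn = bearing_to_unit(bearing_deg)
    oe, on = object_en
    return (oe - range_miles * de, on - range_miles * dn)

# Numbers from the text: camera #1 at the origin, 330 deg / 2 miles; camera #2 sees 30 deg / 1 mile
camera_1 = (0.0, 0.0)
obj = locate_object(camera_1, 330.0, 2.0)
camera_2 = locate_camera_from_object(obj, 30.0, 1.0)
print(obj, camera_2)

For the Figure 22B case, where both camera positions are known and only bearings are measured, the object position would instead be found by intersecting the two bearing rays from the known camera positions.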
[00134] Figure 23A illustrates a top down view of a person looking forward to the center of the screen of the home theater. The person 2300 is looking forward toward the center section 2302B of the screen 2301 of the home theater. During this time point, the streaming is customized to have the center section 2302B optimized (e.g., highest possible resolution), the left section 2302A non-optimized (e.g., low resolution or black), and the right section 2302C non-optimized (e.g., low resolution or black). Note that a monitoring system (to detect the user's viewing direction and other viewing parameters, such as gesture or facial expression) or a controller (to receive commands from the user) must also be in place to provide input for the appropriate streaming.
[00135] Figure 23B illustrates a top down view of a person looking toward the right side of the screen of the home theater. The person 2300 is looking toward the right section 2302C of the screen 2301 of the home theater. During this time point, the streaming is customized to have the right section 2302C optimized (e.g., highest possible resolution), the left section 2302A non-optimized (e.g., low resolution or black), and the center section 2302B non-optimized (e.g., low resolution or black). Note that a monitoring system (to detect the user's viewing direction and other viewing parameters, such as gesture or facial expression) or a controller (to receive commands from the user) must also be in place to provide input for the appropriate streaming.
[00136] Figure 24 illustrates a method, system and apparatus for optimizing stereoscopic camera settings during image acquisition involving movement. 2400 illustrates determining a distance to an object (e.g., using a laser range finder) at a time point. An object tracking / target tracking system can be implemented. 2401 illustrates adjusting a zoom setting of a stereoscopic camera system to be optimized for said distance as determined in step 2400. In the preferred embodiment, this would be performed when using a zoom lens, as opposed to performing digital zooming. 2402 illustrates adjusting the distance of separation (stereo distance) between stereoscopic cameras to be optimized for said distance as determined in step 2400. Note that there is also an option to adjust the orientation of the cameras to be optimized for said distance as determined in step 2400. 2403 illustrates acquiring stereoscopic imagery of the target at the time point in step 2400. 2404 illustrates recording, viewing and/or analyzing the acquired stereoscopic imagery.
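As a hedged illustration of steps 2400 through 2402, the sketch below maps a measured range to a zoom (focal length) and a stereo separation. The 1/30 baseline-to-distance rule of thumb, the focal-length scaling and the clamp limits are illustrative defaults, not values taken from this disclosure.

def stereo_settings_for_range(range_m, focal_min_mm=24.0, focal_max_mm=600.0,
                              base_ratio=1.0 / 30.0, base_min_m=0.06, base_max_m=15.0):
    """Map a target range (e.g., from a laser range finder) to camera settings."""
    # Stereo separation: roughly 1/30 of the subject distance, clamped to what
    # the rig can physically achieve (step 2402).
    separation_m = min(max(range_m * base_ratio, base_min_m), base_max_m)
    # Optical zoom: scale focal length with distance so the target keeps a
    # similar size in frame, clamped to the lens limits (step 2401).
    focal_mm = min(max(range_m * 2.0, focal_min_mm), focal_max_mm)
    return {"separation_m": separation_m, "focal_length_mm": focal_mm}

print(stereo_settings_for_range(50.0))     # mid-range target
print(stereo_settings_for_range(1600.0))   # target roughly 1 mile away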

Claims

WHAT IS CLAIMED IS:
1. A method comprising: uploading via an internet a user's viewing parameter to a cloud wherein said cloud stores imagery, wherein said cloud is capable of cloud computing, and wherein said user’s viewing parameter comprises a viewing angle; in said cloud, optimizing user-specific display imagery from said imagery wherein said user-specific display imagery is based on at least said viewing parameter, wherein said user-specific display imagery comprises a first portion and a second portion, wherein said first portion of said user-specific display imagery is different from said second portion of said user-specific display imagery, wherein said first portion of said user-specific display imagery comprises a first image quality, wherein said first portion of said user-specific display imagery corresponds to said viewing angle, wherein said second portion of said user-specific display imagery comprises a second image quality, and wherein said second image quality is lower than said first image quality; downloading via the internet said user-specific display imagery; and displaying said user-specific display imagery to said user.
2. The method of claim 1 further comprises: wherein said user-specific display imagery comprises a first portion with a first spatial resolution and a second portion with a second spatial resolution, and wherein said first spatial resolution is higher than said second spatial resolution.
3. The method of claim 1 further comprises wherein said imagery comprises video imagery.
4. The method of claim 1 further comprises: wherein said user-specific display imagery comprises wherein said first portion comprises a first zoom setting and wherein said second portion comprises a second zoom setting, and wherein said first zoom setting is higher than said second zoom setting.
5. The method of claim 4 further comprises wherein said first portion is determined by at least one of the group consisting of: a position of said user's body; an orientation of said user's body; a gesture of said user's hand; a facial expression of said user; a position of said user's head; and an orientation of said user's head.
6. The method of claim 4 further comprises wherein said first portion is determined by a graphical user interface.
7. The method of claim 1 further comprising: wherein said imagery comprises a first field of view (FOV), wherein said user-specific display imagery comprises a second FOV, and wherein said first FOV is larger than said second FOV.
8. The method of claim 1 further comprising: wherein said imagery comprises stereoscopic imagery; and wherein said stereoscopic imagery is obtained via stereoscopic cameras or stereoscopic camera clusters.
9. The method of claim 1 further comprising wherein said imagery comprises stitched imagery wherein said stitched imagery is generated by at least two cameras.
10. The method of claim 1 further comprising: wherein said imagery comprises composite imagery, wherein said composite imagery is generated by: taking a first image of a scene with a first set of camera settings wherein said first set of camera settings causes a first object to be in focus and a second object to be out of focus; and taking a second image of a scene with a second set of camera settings wherein said second set of camera settings causes said second object to be in focus and said first object to be out of focus.
11. The method of claim 10 further comprises wherein: when said user looks at said first object, said first image would be presented to said user; and when said user looks at said second object, said second image would be presented to said user.
12. The method of claim 10 further comprising combining at least said first object from said first image and said second object from said second image into said composite imagery.
13. The method of claim 1 further comprises wherein said viewing angle is movable by said user.
14. The method of claim 1 further comprises wherein said viewing parameter comprises convergence.
15. The method of claim 1 further comprising wherein said user-specific display imagery is 3D imagery wherein said 3D imagery is presented on a head display unit (HDU).
16. The method of claim 15 further comprising wherein said viewing angle is determined by an orientation of said HDU.
17. A method comprising: determining a user’s viewing parameter wherein said user’s viewing parameter comprises a viewing angle; sending via an internet said user's viewing parameter to a cloud wherein said cloud is capable of cloud computing, wherein said cloud computing generates user-specific display imagery from imagery stored on said cloud, wherein said user-specific display imagery is based on at least said user’s viewing parameter, wherein said user-specific display imagery comprises a first portion and a second portion, wherein said first portion of said user-specific display imagery is different from said second portion of said user-specific display imagery, wherein said first portion of said user-specific display imagery comprises a first image quality, wherein said first portion of said user-specific display imagery corresponds to said viewing angle, wherein said second portion of said user-specific display imagery comprises a second image quality, and wherein said second image quality is lower than said first image quality; receiving via said internet said user-specific display imagery; and displaying said user-specific display imagery on a head display unit (HDU) wherein said HDU comprises a left eye display and a right eye display.
18. The method of claim 1 further comprising wherein said user-specific display imagery is presented to said user on a display wherein said user has at least a 0.5π steradian field of view.
19. The method of claim 18 further comprising wherein said display comprises at least one of the group consisting of: a screen and projector; a TV; and a monitor.
20. A method comprising: receiving via an internet a user's viewing parameter at a cloud wherein said user’s viewing parameter comprises a viewing angle and wherein said cloud is capable of cloud computing; using cloud computing to generate user-specific display imagery from imagery stored on said cloud, wherein said user-specific display imagery is based on at least said user’s viewing parameter, wherein said user-specific display imagery comprises a first portion and a second portion, wherein said first portion of said user-specific display imagery is different from said second portion of said user-specific display imagery, wherein said first portion of said user-specific display imagery comprises a first image quality, wherein said first portion of said user-specific display imagery corresponds to said viewing angle, wherein said second portion of said user-specific display imagery comprises a second image quality, and wherein said second image quality is lower than said first image quality; and sending via said internet said user-specific display imagery to a head display unit (HDU) wherein said HDU comprises a left eye display and a right eye display, wherein said HDU displays said user-specific display imagery.
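For the user-specific display imagery recited in claims 1, 17 and 20 above (a first, higher-quality portion corresponding to the viewing angle and a lower-quality second portion), the sketch below is one possible server-side approach. It assumes each frame is available as a NumPy array and that the viewing angle has already been mapped to a pixel region of interest; the function and parameter names are illustrative, not part of the claims.

```python
import numpy as np


def make_user_specific_frame(frame: np.ndarray,
                             roi: tuple[int, int, int, int],
                             downscale: int = 4) -> np.ndarray:
    """Keep full quality inside roi (the portion corresponding to the user's
    viewing angle); reduce image quality everywhere else before transmission."""
    y0, y1, x0, x1 = roi
    # Second portion: subsample then upsample back, discarding fine detail.
    low = frame[::downscale, ::downscale]
    degraded = np.repeat(np.repeat(low, downscale, axis=0), downscale, axis=1)
    degraded = degraded[:frame.shape[0], :frame.shape[1]]
    # First portion: restore full-resolution pixels where the user is looking.
    degraded[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return degraded


# Example: a 1080p RGB frame with the high-quality window placed around the gaze point.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
user_specific = make_user_specific_frame(frame, roi=(390, 690, 760, 1160))
```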
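Claims 10-12 above describe composite imagery built from two exposures taken with different focus settings. A minimal sketch, assuming the two images are NumPy arrays of equal shape and that a boolean mask segmenting the first object is already available; all names are illustrative.

```python
import numpy as np


def composite_from_focus_pair(image_first_in_focus: np.ndarray,
                              image_second_in_focus: np.ndarray,
                              first_object_mask: np.ndarray) -> np.ndarray:
    """Combine the in-focus first object from the first image with the rest of
    the scene (including the in-focus second object) from the second image."""
    composite = image_second_in_focus.copy()
    composite[first_object_mask] = image_first_in_focus[first_object_mask]
    return composite


def frame_for_gaze(image_first_in_focus: np.ndarray,
                   image_second_in_focus: np.ndarray,
                   user_is_looking_at_first_object: bool) -> np.ndarray:
    """Present whichever exposure matches the object the user is currently looking at."""
    return image_first_in_focus if user_is_looking_at_first_object else image_second_in_focus
```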
PCT/US2022/025818 2021-04-22 2022-04-21 Immersive viewing experience WO2022226224A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023558524A JP2024518243A (en) 2021-04-22 2022-04-21 Immersive Viewing Experience
EP22792523.7A EP4327552A1 (en) 2021-04-22 2022-04-21 Immersive viewing experience
CN202280030471.XA CN117321987A (en) 2021-04-22 2022-04-21 Immersive viewing experience

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/237,152 2021-04-22
US17/237,152 US11589033B1 (en) 2021-02-28 2021-04-22 Immersive viewing experience

Publications (1)

Publication Number Publication Date
WO2022226224A1 (en) 2022-10-27

Family

ID=83723167

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/025818 WO2022226224A1 (en) 2021-04-22 2022-04-21 Immersive viewing experience

Country Status (4)

Country Link
EP (1) EP4327552A1 (en)
JP (1) JP2024518243A (en)
CN (1) CN117321987A (en)
WO (1) WO2022226224A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090219283A1 (en) * 2008-02-29 2009-09-03 Disney Enterprises, Inc. Non-linear depth rendering of stereoscopic animated images
US20180165830A1 (en) * 2016-12-14 2018-06-14 Thomson Licensing Method and device for determining points of interest in an immersive content
US10551993B1 (en) * 2016-05-15 2020-02-04 Google Llc Virtual reality content development environment
US20200371673A1 (en) * 2019-05-22 2020-11-26 Microsoft Technology Licensing, Llc Adaptive interaction models based on eye gaze gestures
US11206364B1 (en) * 2020-12-08 2021-12-21 Microsoft Technology Licensing, Llc System configuration for peripheral vision with reduced size, weight, and cost

Also Published As

Publication number Publication date
EP4327552A1 (en) 2024-02-28
CN117321987A (en) 2023-12-29
JP2024518243A (en) 2024-05-01

Similar Documents

Publication Publication Date Title
US9842433B2 (en) Method, apparatus, and smart wearable device for fusing augmented reality and virtual reality
US11257233B2 (en) Volumetric depth video recording and playback
US20200288113A1 (en) System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
US9137524B2 (en) System and method for generating 3-D plenoptic video images
CN105103034B (en) Display
US20080246759A1 (en) Automatic Scene Modeling for the 3D Camera and 3D Video
RU2765424C2 (en) Equipment and method for imaging
US20060114251A1 (en) Methods for simulating movement of a computer user through a remote environment
CN113891060B (en) Free viewpoint video reconstruction method, play processing method, device and storage medium
WO2012166593A2 (en) System and method for creating a navigable, panoramic three-dimensional virtual reality environment having ultra-wide field of view
US11218681B2 (en) Apparatus and method for generating an image
CN110291577A (en) The part of pairing for the experience of improved augmented reality and global user interface
JP2019512177A (en) Device and related method
CN111602391B (en) Method and apparatus for customizing a synthetic reality experience from a physical environment
EP4327552A1 (en) Immersive viewing experience
US11589033B1 (en) Immersive viewing experience
US11366319B1 (en) Immersive viewing experience
DeHart Directing audience attention: cinematic composition in 360 natural history films
Zhang et al. Walk Through a Virtual Museum with Binocular Stereo Effect and Spherical Panorama Views Based on Image Rendering Carried by Tracked Robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22792523; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2023558524; Country of ref document: JP)
WWE Wipo information: entry into national phase (Ref document number: 202280030471.X; Country of ref document: CN)
WWE Wipo information: entry into national phase (Ref document number: 2022792523; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2022792523; Country of ref document: EP; Effective date: 20231122)