WO2017109567A1 - A method and apparatus for facilitating video rendering in a device - Google Patents
- Publication number
- WO2017109567A1 (PCT/IB2016/001826)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion
- user
- video
- screen
- video frame
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1626—Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1694—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/38—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/55—Motion estimation with spatial constraints, e.g. at image or region borders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/214—Specialised server platform, e.g. server located in an airplane, hotel, hospital
- H04N21/2146—Specialised server platform, e.g. server located in an airplane, hotel, hospital located in mass transportation means, e.g. aircraft, train or bus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
- H04N21/41407—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2200/00—Indexing scheme relating to G06F1/04 - G06F1/32
- G06F2200/16—Indexing scheme relating to G06F1/16 - G06F1/18
- G06F2200/163—Indexing scheme relating to constructional details of the computer
- G06F2200/1637—Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/10—Special adaptations of display systems for operation with variable images
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0464—Positioning
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/02—Networking aspects
- G09G2370/022—Centralised management of display operation, e.g. in a server instead of locally
Definitions
- the basic idea of present invention is to use sensors in phone or tablet to reduce the influence of vibration and optimize video transmission resource.
- the solution includes three parts, reducing the influence of the device vibration, reducing the influence of the human body vibration and transmission resource optimization.
- the transmission resources can be reduced while the user is watching real-time high definition (HD) video: because of the vibration, the user does not need very high resolution video (for example, the user cannot find any difference from standard definition video). So the environment information can be sent to the video server to facilitate the video encoding and save transmission resources.
- the present invention provides a method for facilitating video rendering in a device, as shown in Fig. 1. The method may comprise: at step S101, obtaining the eye gaze position of the user in the screen of the device; at step S102, determining whether the eye gaze is deep inside the screen or at the edge of the video frame based on the eye gaze position; at step S103, obtaining the motion of the device on condition of determining that the eye gaze is deep inside the screen; and at step S104, compensating the motion of the device by video frame shifting in the screen of the device based on the motion of the device, to make the video frame virtually retain the same or nearly the same position relative to the user as before the motion.
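The S101-S104 loop above can be sketched as follows. This is a minimal illustration only, not the patented implementation: the edge margin, the coordinate conventions, and all function names are assumptions.

```python
# Sketch of the S101-S104 loop. Thresholds and interfaces are illustrative
# assumptions, not part of the original disclosure.

def is_deep_inside(gaze_xy, frame_rect, margin=0.1):
    """Return True if the gaze point is well inside the video frame.

    frame_rect = (x, y, width, height); margin is the fraction of the
    frame size treated as the 'edge' band (assumed value).
    """
    x, y, w, h = frame_rect
    gx, gy = gaze_xy
    return (x + margin * w <= gx <= x + (1 - margin) * w and
            y + margin * h <= gy <= y + (1 - margin) * h)

def compensate(gaze_xy, frame_rect, device_motion_xy):
    """One iteration of S101-S104: return the frame shift (dx, dy).

    The frame is shifted opposite to the device motion only when the
    gaze is deep inside the screen; near the edge, no shift is applied.
    """
    if not is_deep_inside(gaze_xy, frame_rect):   # S102: gaze at the edge
        return (0.0, 0.0)
    mx, my = device_motion_xy                     # S103: device motion
    return (-mx, -my)                             # S104: reverse shift
```

Calling `compensate` once per sensor sample yields the per-frame shift; near the frame edge the shift is suppressed, matching the gaze gating described above.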
- the eye gaze position information can be used to facilitate the positioning.
- the eye gaze position is obtained by an eye tracker, which is a type of sensor. If the eye gaze is deep inside the screen, a small part of the video frame going out of the screen is not a problem, because the eye will not notice it. If the eye is gazing just at the edge of the video frame, it is better not to shift the picture. So with the eye gaze information, there is no need to shrink the video frame to leave a protection border.
- the motion of the device can be detected by one or more sensors, such as an accelerometer and a gyroscope.
- the gyroscope ADXRS453 can accurately track vibration at a frequency of 50 Hz and still performs very well at 100 Hz (see "Analyzing Frequency Response of Inertial MEMS in Stabilization Systems", http://www.analog.com/library/an...). 100 Hz corresponds to a 6000 rpm engine rotation speed, so it can track the engine vibration accurately.
- Most of the advanced mobile phones now are equipped with such sensors.
- on iPhone, there is an official way to obtain the motion information through UIKit, UIDevice, or even the low level Core Motion API. The proposed idea is to compensate the device vibration by video frame shifting on the screen, to let the image virtually retain the same position.
- Fig. 2 shows an example of controlling the picture re-positioning to relieve the effect of device vibration by using motion sensors. The motion sensors output the instantaneous motion of the tablet; with this information, the video frame is shifted in the reverse direction, so the position of the video frame relative to the user is not changed.
- the example ADXRS453 gyroscope is able to track 50 Hz vibration, and the screen refresh frequency is usually more than 50 Hz, so the video frame can be displayed in its new position at 50 Hz in this example. It is noted that a higher frequency motion sensor can be used to track finer motion.
- step S104 may comprise: shifting the video frame in the screen of the device, in the direction reverse to the obtained motion, by a distance equal to the motion of the device.
- the video frame is shifted by a distance smaller than or equal to a predetermined distance, i.e., a configured maximum shifting value. When the device and the viewer experience heavy rocking, the video frame shifting either uses the maximum shifting value or is not performed at all.
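The clamping described above can be sketched as follows, assuming (as one of the two options the text allows) that the shift saturates at the configured maximum rather than being skipped; the function name and pixel units are illustrative.

```python
def clamped_shift(device_motion, max_shift):
    """Shift opposite to the device motion, limited to max_shift pixels.

    max_shift is the configured maximum shifting value from the text;
    under heavy rocking the shift saturates at that limit (assumption:
    saturating rather than skipping the shift entirely).
    """
    return tuple(max(-max_shift, min(max_shift, -m)) for m in device_motion)
```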
- the video frame is shown in a size smaller than normal (conventional).
- the video frame may move out of the tablet screen. This can be solved by a small protective black border around the video frame, so the video frame is smaller compared to normal video rendering. This is not an issue for a large screen device, because the screen is large enough.
- the method for facilitating video rendering in a device may further comprise: obtaining the motion of the user's head; and calculating the relative movement between the motion of the user's head and the motion of the device; and step S104 may comprise: compensating the motion of the device by video frame shifting on the screen of the device based on the relative movement between the motion of the user's head and the motion of the device.
- the method may further comprise: determining whether the change of the eye gaze position is due to the motion of the user's head or to saccadic eye movement; and the step of obtaining the motion of the user's head is performed on condition of determining that the change of the eye gaze position is due to the motion of the user's head.
- Fig. 3 shows an example of controlling the picture re-positioning to relieve the effect of device and body vibration by using motion sensors.
- the user's body may also experience vibration, which may increase the relative movement between the eye and the device.
- the camera of the tablet can be used.
- the eye performs saccadic movements and fixations. In a saccadic movement the eyeball turns, which does not require movement of the head; but head vibration will also cause the eye gaze position to shake. So, by detecting the head movement with the tablet camera, in combination with the eye tracker, it is easy to know whether an eye gaze position change is due to the head movement or to saccadic eye movement.
- the relative movement of the head with respect to the tablet can be calculated, and this information is used to shift the video frame on the screen in the reverse direction.
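The head-relative compensation above can be sketched as follows; representing both motions as (dx, dy) pixel displacements over the same interval is an assumption for illustration.

```python
def relative_shift(head_motion, device_motion):
    """Shift the frame opposite to the device motion *relative to* the
    user's head, as described above.

    Both arguments are (dx, dy) displacements over the same interval
    (hypothetical units). If the head and the device move together,
    no shift is needed.
    """
    rel_dx = device_motion[0] - head_motion[0]
    rel_dy = device_motion[1] - head_motion[1]
    return (-rel_dx, -rel_dy)   # reverse direction on the screen
```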
- the method for facilitating video rendering in a device may further comprise: calculating the visual acuity of the user based on the eye gaze position; and transmitting the visual acuity to the video server which provides the video, so that the server can encode the video based on the visual acuity to reduce the quality of the video.
- the influence of vibration can be reduced effectively, but a certain level of vibration still cannot be eliminated, because of vibration tracking delay and error.
- the influence of the vibration can be evaluated from the frequent gaze shaking within a small area, which differs from a saccadic movement, which is ballistic and moves directly from one point to a new position.
- the eye gaze position can be filtered to get the high frequency movement, and this reflects the vibration.
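One simple way to realize the filtering mentioned above is to subtract a moving average from the gaze trace, leaving only the high-frequency shake; this particular filter and its window length are assumptions, not the disclosed method.

```python
def high_pass(samples, window=5):
    """Remove the slow (drift / smooth-pursuit) component of a 1-D gaze
    trace by subtracting a causal moving average, leaving the
    high-frequency shake that reflects vibration.

    The window length is an assumed parameter.
    """
    out = []
    for i, s in enumerate(samples):
        lo = max(0, i - window + 1)
        avg = sum(samples[lo:i + 1]) / (i + 1 - lo)  # moving average
        out.append(s - avg)                          # residual = shake
    return out
```

A steady gaze yields an all-zero residual, while an oscillating trace leaves a nonzero high-frequency component whose amplitude can serve as a vibration measure.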
- the eye acuity under such a condition can be obtained. The visual acuity with a steady gaze is considered to be normalized as 1.
- based on the time a point stays within a preset foveal region (for example, the 2-degree fovea, as shown in Fig. 4), the visual acuity under vibration can be obtained, which is smaller than 1 if normalized.
- This information is passed to the video server, and the server can encode the video based on this visual acuity information, to reduce the quality without any visible impact to the viewer.
- the method for facilitating video rendering in a device according to the present invention can save network resource for video transmission.
- the dynamic range of the visual acuity can be obtained, denoted by [a, b], where a is the lowest acuity, and b is the highest acuity.
- the video to be transmitted is encoded by a given max bit rate m and an average bit rate n.
- b maps to the maximum bit rate m
- (a+b)/2 maps to the average bit rate n.
- when the max bit rate m and the average bit rate n refer to a nominal acuity, e.g. obtained by averaging the measurements of many people, the mapping can be done as follows.
- the nominal acuity range is denoted by [a0, b0].
- the video bit rate of a specific user with acuity x can be calculated by the following steps:
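The concrete steps are not reproduced in this extract. Under the two mapping points stated above (acuity b maps to the max bit rate m, acuity (a+b)/2 maps to the average bit rate n), a linear interpolation is one plausible reading; the sketch below is that reading, not the disclosed calculation.

```python
def bit_rate(x, a, b, m, n):
    """Linear acuity-to-bit-rate mapping through the two points stated
    in the text: acuity b -> max bit rate m, acuity (a+b)/2 -> average
    bit rate n. The linear interpolation itself is an assumption; the
    original calculation steps are not reproduced in this extract.

    [a, b] is the user's dynamic acuity range and x the measured acuity.
    """
    mid = (a + b) / 2.0
    return n + (x - mid) * (m - n) / (b - mid)
```

For example, with a = 0.4, b = 1.0, m = 8 Mbps and n = 5 Mbps, a user at the lowest acuity a would receive 2n - m = 2 Mbps under this interpolation.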
- the present invention further provides an apparatus for facilitating video rendering in a device. As shown in Fig. 5, the apparatus may comprise: a first obtaining means 510, configured to obtain the eye gaze position of the user in the screen of the device; a processing means 520, configured to determine whether the eye gaze is deep inside the screen or at the edge of the video frame based on the eye gaze position; a second obtaining means 530, configured to obtain the motion of the device on condition of the processing means 520 determining that the eye gaze is deep inside the screen; and a compensating means 540, configured to compensate the motion of the device by video frame shifting in the screen of the device based on the motion of the device, to make the video frame virtually retain the same or nearly the same position relative to the user as before the motion.
- a first obtaining means 510 configured to obtain the eye gaze position of the user in the screen of the device
- a processing means 520 configured to determine whether the eye gaze is deep inside the screen or at the edge of the video frame based on the eye gaze position
- the compensating means 540 is further configured to: shift the video frame in the screen of the device, in the direction reverse to the obtained motion, by a distance equal to the motion of the device.
- the video frame is shifted by a distance smaller than or equal to a predetermined distance.
- the video frame is shown in a size smaller than normal.
- the first obtaining means 510 is further configured to obtain the motion of the user's head; the compensating means 540 is further configured to calculate the relative movement between the motion of the user's head and the motion of the device; and the compensating means 540 is further configured to compensate the motion of the device by video frame shifting on the screen of the device based on the relative movement between the motion of the user's head and the motion of the device.
- the compensating means 540 is further configured to determine whether the change of the eye gaze position is due to the motion of the user's head or to saccadic eye movement; and
- the first obtaining means 510 is configured to obtain the motion of the user's head on condition of the compensating means 540 determining that the change of the eye gaze position is due to the motion of the user's head.
- the apparatus may further comprise:
- a calculating means 550 configured to calculate the visual acuity of the user based on the eye gaze position
- a transmitting means 560 configured to transmit the visual acuity to the video server which provides the video, so that the server can encode the video based on the visual acuity to reduce the quality of the video.
- the first obtaining means 510 comprises an eye tracker
- the second obtaining means 530 comprises an accelerometer and gyroscope sensor.
- At least one of the first obtaining means 510, the processing means 520, the second obtaining means 530, the compensating means 540, the calculating means 550, and the transmitting means 560 is assumed to comprise program instructions that, when executed, enable the apparatus to operate in accordance with the exemplary embodiments, as discussed above.
- any of the first obtaining means 510, the processing means 520, the second obtaining means 530, the compensating means 540, the calculating means 550, and the transmitting means 560 as discussed above may be integrated together or implemented by separated components, and may be of any type suitable to the local technical environment, and may comprise one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSP) and processors based on multi-core processor architectures, as non-limiting examples.
- the ROM mentioned above may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
- the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
- some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
- firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
- While various aspects of the exemplary embodiments of this invention may be illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
- program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device.
- the computer executable instructions may be stored on a computer readable medium such as a hard disk, optical disk, removable storage media, solid state memory, random access memory (RAM), etc.
Abstract
The invention provides a method and apparatus for facilitating video rendering in a device, the method comprising: obtaining the eye gaze position of the user in the screen of the device; determining whether the eye gaze is deep inside the screen or at the edge of the video frame based on the eye gaze position; obtaining the motion of the device on condition of determining that the eye gaze is deep inside the screen; and compensating the motion of the device by video frame shifting in the screen of the device based on the motion of the device, to make the video frame retain the same or nearly the same position relative to the user as before the motion. In this way, the impact caused by vibration is relieved and the user experience in a vibration environment is improved.
Description
A METHOD AND APPARATUS FOR FACILITATING VIDEO RENDERING IN
A DEVICE
FIELD OF THE INVENTION
The present invention generally relates to video rendering, more specifically, relates to a method and apparatus for facilitating video rendering in a device.
BACKGROUND
A very common video watching scenario is users watching on their mobile phones or tablets on the metro or bus; some people even watch videos only on these public transportation vehicles. This is usual in large cities like Shanghai, where commuting to and from work often takes more than one hour. However, many people complain that it is inconvenient to watch video on the metro or bus because there is continuous vibration, and doctors say that watching video on a bus for a long time is harmful to the eyes.
In such an environment, not only does the phone or tablet vibrate, but the body vibrates too. Under vibration, the eyes do not have enough time to gaze at a specific spot, and because of the persistence of vision, the video frame appears blurred.
SUMMARY
The problem to be solved by the present invention is to improve the user experience in vibration environment.
The present description involves a method and apparatus for facilitating video rendering in a device.
According to a first aspect of the present invention, there is provided a method for facilitating video rendering in a device, the method comprising:
obtaining the eye gaze position of the user in the screen of the device;
determining whether the eye gaze is deep inside the screen or at the edge of the video frame based on the eye gaze position;
obtaining the motion of the device on condition of determining that the eye gaze is deep inside the screen; and
compensating the motion of the device by video frame shifting in the screen of the device based on the motion of the device, to make the video frame retain the same or nearly the same position relative to the user as before the motion.
According to a second aspect of the present invention, there is provided an apparatus for facilitating video rendering in a device, the apparatus comprising:
a first obtaining means, configured to obtain the eye gaze position of the user in the screen of the device;
a processing means, configured to determine whether the eye gaze is deep inside the screen or at the edge of the video frame based on the eye gaze position;
a second obtaining means, configured to obtain the motion of the device on condition of the processing means determining that the eye gaze is deep inside the screen; and
a compensating means, configured to compensate the motion of the device by video frame shifting in the screen of the device based on the motion of the device, to make the video frame retain the same or nearly the same position relative to the user as before the motion.
In exemplary embodiments of the present invention, the provided method and apparatus may compensate the motion of the device by video frame shifting in the screen of the device, based on the motion of the device and using the eye gaze position of the user. In this way, the video frame will virtually retain the same position relative to the user as before the motion; the impact caused by vibration is thus relieved and the user experience in a vibration environment is improved.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention itself, the preferable mode of use and further objectives are best understood by reference to the following detailed description of the embodiments when read in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart illustrating a method for facilitating video rendering in a device in accordance with embodiments of the present invention;
Fig.2 shows an example of controlling the picture re-positioning to relieve the effect of device vibration by using motion sensors;
Fig. 3 shows an example of controlling the picture re-positioning to relieve the effect of both device and body vibration by using motion sensors;
Fig. 4 shows the time relationship of the human visual acuity of a point in retina; and
Fig.5 illustrates a block diagram of an apparatus for facilitating video rendering in a device in accordance with embodiments of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The embodiments of the present invention are described in detail with reference to the accompanying drawings. Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the invention may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be
recognized in certain embodiments that may not be present in all embodiments of the invention.
The basic idea of the present invention is to use sensors in a phone or tablet to reduce the influence of vibration and to optimize video transmission resources. The solution includes three parts: reducing the influence of device vibration, reducing the influence of human body vibration, and transmission resource optimization.
First, the impact caused by vibration is relieved. After this, some vibration effect may remain that cannot be eliminated, and the visual acuity degrades. Bearing this in mind, transmission resources can be reduced while the user is watching real-time high definition (HD) video: because of the vibration, the user does not need very high resolution video (for example, the user cannot perceive any difference from standard definition video). The environment information can therefore be sent to the video server to facilitate the video encoding and save transmission resources.
Reducing the influence of device vibration
The present invention provides a method for facilitating video rendering in a device. As shown in Fig. 1, the method may comprise: at step S101, obtaining the eye gaze position of the user in the screen of the device; at step S102, determining whether the eye gaze is deep inside the screen or at the edge of the video frame, based on the eye gaze position; at step S103, obtaining the motion of the device on condition of determining that the eye gaze is deep inside the screen; and at step S104, compensating the motion of the device by video frame shifting in the screen of the device based on the motion of the device, so that the video frame virtually retains the same or nearly the same position relative to the user as before the motion.
The eye gaze position information can be used to facilitate the positioning. The eye gaze position is obtained by an eye tracker, which is a type of sensor. If the eye gaze is deep inside the screen, a small part of the video frame going out of the screen is not a problem, because the eye will not notice it. If the eye is gazing just at the edge of the video frame, it is better not to shift the picture. So with the eye gaze information, there is no need to shrink the video frame to leave a protection border.
Then the motion of the device can be detected by one or more sensors, such as an accelerometer and a gyroscope. For example, the ADXRS453 gyroscope can accurately track vibration at a frequency of 50 Hz and still performs very well at 100 Hz (see "Analyzing Frequency Response of Inertial MEMS in Stabilization Systems", http://www.analog.com/library/an…). 100 Hz corresponds to a 6000 rpm engine rotation speed, so it can track the engine accurately. Most advanced mobile phones are now equipped with such sensors. On the iPhone, there is an official way to get the motion information through UIKit (UIDevice), or even the low-level Core Motion API. The proposed idea is to compensate the device vibration by video frame shifting on the screen, to let the image virtually retain the same position.
Fig. 2 shows an example of controlling the picture re-positioning to relieve the effect of device vibration by using motion sensors. The motion sensors output the instantaneous motion of the tablet; with this information, the video frame is shifted in the reverse direction, so the position of the video frame relative to the user is not changed. The example ADXRS453 gyroscope is able to track 50 Hz vibration, and the screen refresh frequency is usually more than 50 Hz, so the video frame can be displayed in its new position at 50 Hz in this example. It is noted that a higher-frequency motion sensor can be used to track finer motion.
In an exemplary embodiment, step S104 may comprise: shifting the video frame in the screen of the device in the direction reverse to the obtained motion, by a distance equal to that of the motion of the device.
In an exemplary embodiment, the video frame is shifted by a distance smaller than or equal to a predetermined distance, i.e., a configured maximum shifting value. When the motion of the device exceeds the maximum shifting value, the video frame shifting can use the maximum shifting value, or shifting can be skipped, as the device and the viewer may be experiencing heavy rocking.
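The reverse-direction shift with a configured maximum shifting value can be sketched as follows. This is an illustrative sketch only, not part of the patent: the function name, the pixel units, and the `shift_on_overflow` switch are assumptions.

```python
def compensate_shift(device_motion_px, max_shift_px, shift_on_overflow=True):
    """Return the (dx, dy) frame shift, reverse to the device motion.

    device_motion_px: instantaneous device motion projected onto the
    screen, in pixels (assumed units for illustration).
    """
    # Shift in the direction reverse to the obtained motion.
    dx, dy = -device_motion_px[0], -device_motion_px[1]
    magnitude = (dx * dx + dy * dy) ** 0.5
    if magnitude <= max_shift_px:
        return (dx, dy)
    if shift_on_overflow:
        # Clamp the shift to the configured maximum shifting value.
        scale = max_shift_px / magnitude
        return (dx * scale, dy * scale)
    # Heavy rocking: skip shifting entirely.
    return (0.0, 0.0)
```

With a small motion the shift mirrors it exactly; with a motion larger than the maximum shifting value, the shift is either clamped or skipped, matching the two options described above.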
In an exemplary embodiment, the video frame is shown in a size smaller than normal (conventional), because the shifted video frame may otherwise move out of the tablet screen. This can be solved by a small protective black border around the video frame, with the video frame smaller than in normal video rendering. This is not an issue for a large-screen device, because the screen is large enough.
Reducing the influence of body vibration
In an exemplary embodiment, the method for facilitating video rendering in a device may further comprise: obtaining the motion of the user's head; and calculating the relative movement between the motion of the user's head and the motion of the device. Step S104 may then comprise: compensating the motion of the device by video shifting on the screen of the device based on the relative movement between the motion of the user's head and the motion of the device.
In an exemplary embodiment, the method may further comprise: determining whether the change of the eye gaze position is due to the motion of the user's head or to eye saccadic movement, wherein the step of obtaining the motion of the user's head is performed on condition of determining that the change of the eye gaze position is due to the motion of the user's head.
Fig. 3 shows an example of controlling the picture re-positioning to relieve the effect of both device and body vibration by using motion sensors. The user's body may also experience vibration, which may increase the relative movement between the eye and the device. To relieve this, the camera of the tablet can be used. According to the study of the inventor of the present invention, the eye performs saccadic movement and fixation. In a saccadic movement, the eyeball turns; this does not need any movement of the head. But head vibration will also cause the eye gaze position to shake. So by detecting the head movement with the tablet camera, in combination with the eye tracker, it is easy to know whether an eye gaze position change is due to head movement or to eye saccadic movement. When head vibration is detected, the relative movement with respect to the tablet can be calculated, and this information is used to shift the video frame on the screen in the reverse direction.
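A minimal sketch of this head/saccade attribution and the relative-movement compensation follows. It is illustrative only: the function names, the threshold value, and the motion units are assumptions, and real head motion would come from camera-based head tracking as described above.

```python
def classify_gaze_change(head_motion, head_threshold=0.5):
    """Attribute a gaze-position change to head motion or to an eye saccade.

    If camera-based head tracking reports motion above the threshold at the
    same instant, the gaze change is treated as head-induced; otherwise it
    is treated as a saccade (a ballistic eye movement needing no head motion).
    """
    head_speed = (head_motion[0] ** 2 + head_motion[1] ** 2) ** 0.5
    return "head" if head_speed > head_threshold else "saccade"


def relative_compensation(head_motion, device_motion):
    """Shift the frame opposite to the device motion relative to the head."""
    rel_x = device_motion[0] - head_motion[0]
    rel_y = device_motion[1] - head_motion[1]
    return (-rel_x, -rel_y)
```

When head and device move together, the relative movement, and hence the compensating shift, is zero, which matches the goal of keeping the frame fixed relative to the viewer.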
Transmission resource optimization
In an exemplary embodiment, the method for facilitating video rendering in a device may further comprise: calculating the visual acuity of the user based on the eye gaze position; and transmitting the visual acuity to the video server which provides the video, so that the server can encode the video based on the visual acuity to reduce the quality of the video.
When the user is watching real-time video from the network on a metro or bus, the influence of vibration can be reduced effectively with the above two techniques. But a certain level of vibration still cannot be eliminated, because of the vibration tracking delay and error. The influence of the vibration can be evaluated by the frequent gaze shaking in a small area, which is different from saccadic movement, which is ballistic and moves from one point directly to the new position. The eye gaze position can be filtered to get the high-frequency movement, which reflects the vibration.
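One way to extract the high-frequency gaze component is a first-order high-pass filter over the gaze trace. This is an illustrative sketch; the filter order, the `alpha` value, and the RMS summary are assumptions, not taken from the patent.

```python
def highpass(samples, alpha=0.8):
    """First-order high-pass filter over a 1-D gaze-position trace.

    Removes slow fixation drift and keeps the fast shaking component
    that reflects residual vibration.
    """
    if not samples:
        return []
    out = [0.0]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out


def vibration_level(samples, alpha=0.8):
    """Root-mean-square of the high-frequency gaze component."""
    hp = highpass(samples, alpha)
    return (sum(v * v for v in hp) / len(hp)) ** 0.5 if hp else 0.0
```

A perfectly steady gaze yields a zero vibration level, while a rapidly oscillating gaze yields a positive one, so this quantity can serve as the vibration indicator sent toward the server.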
According to the inventor's study, the visual acuity in such an environment decreases, obeying an equation of the following form:

θ(x, t) = α1 · (1 − e^(−α2 · t)) / (1 + α3 · x)    (1)

where θ is the visual acuity of the human eye, x is the eccentricity from the fovea area in degrees (which represents a distance), t is time, e is the base of the natural logarithm, and α1, α2 and α3 may be preset, and may be derived by experiments and/or experience.
This equation (1) reveals the fact that the eye needs some time to reach its best acuity, which is determined by the numerator. Fig. 4 shows the time relationship of the human visual acuity of a point in the retina.
With the above equation (1), the eye acuity in such a condition can be obtained. At the center of the retina, and after a sufficiently long time, the visual acuity is considered to be normalized to 1. Based on the vibration frequency obtained above, the time a point stays within the preset foveal region (for example, the 2-degree fovea) can be obtained. Inputting this time into equation (1), the visual acuity can be obtained, which is smaller than 1 when normalized.
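As the rendered formula is not reproduced in this text, the following sketch assumes a saturating-exponential form consistent with the surrounding description: a time-dependent numerator, and acuity approaching 1 at the fovea center after a long time. The constants `a2` and `a3` are illustrative placeholders, not values from the patent.

```python
import math


def visual_acuity(x_deg, t, a2=5.0, a3=0.3):
    """Assumed normalized form of equation (1).

    Acuity rises with fixation time t (the eye needs time to reach best
    acuity) and falls with eccentricity x_deg (degrees from the fovea
    center). At x_deg = 0 and large t the value approaches 1.
    """
    return (1.0 - math.exp(-a2 * t)) / (1.0 + a3 * x_deg)
```

Feeding in the dwell time of a point inside the 2-degree fovea, as described above, yields an acuity below 1, which is the quantity reported to the server.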
This information is passed to the video server, and the server can encode the video based on this visual acuity information, to reduce the quality without visible influence on the viewer. In this way, the method for facilitating video rendering in a device according to the present invention can save network resources for video transmission.
An example for encoding the video based on the visual acuity information is given as follows.
Based on the visual acuity information, the dynamic range of the visual acuity can be obtained, denoted by [a, b], where a is the lowest acuity and b is the highest acuity. The video to be transmitted is encoded with a given maximum bit rate m and an average bit rate n. Then b maps to the maximum bit rate m, and (a+b)/2 maps to the average bit rate n; there is a linear mapping between these parameters. So by controlling the bit rate, the video can be encoded. If the maximum bit rate m and the average bit rate n refer to a nominal acuity range, e.g. obtained by averaging the measurements of many people, the mapping can be done as follows. The nominal acuity range is denoted by [a0, b0]. Then the video bit rate of a specific user with acuity x can be calculated by the following steps:
Firstly, mapping x to the nominal acuity z by z = a0 + (x − a) · (b0 − a0) / (b − a); and next, mapping z to the bit rate by rate = n + (m − n) · (z − t) / (b0 − t), where t = (b0 + a0) / 2.
Then the server can encode the video by using the result so as to save network resource for video transmission.
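The two mapping steps can be sketched as follows, implementing the linear mapping described above (the highest nominal acuity maps to the maximum bit rate, the nominal midpoint to the average bit rate). The function name and the example values in the note below are assumptions for illustration.

```python
def user_bitrate(x, a, b, a0, b0, m, n):
    """Map a user's measured acuity x (in [a, b]) to a video bit rate.

    b0 maps to the maximum bit rate m, and the nominal midpoint
    t = (a0 + b0) / 2 maps to the average bit rate n, with linear
    interpolation between these anchor points.
    """
    z = a0 + (x - a) * (b0 - a0) / (b - a)   # step 1: map to nominal range
    t = (b0 + a0) / 2.0
    return n + (m - n) * (z - t) / (b0 - t)  # step 2: linear bit-rate map
```

For example, with [a, b] = [0.3, 0.9], [a0, b0] = [0.2, 1.0], m = 8 (maximum) and n = 5 (average), the highest measured acuity 0.9 maps to bit rate 8, and the mid-range acuity 0.6 maps to bit rate 5, as the anchor conditions require.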
The present invention further provides an apparatus for facilitating video rendering in a device. As shown in Fig. 5, the apparatus may comprise: a first obtaining means 510, configured to obtain the eye gaze position of the user in the screen of the device; a processing means 520, configured to determine whether the eye gaze is deep inside the screen or at the edge of the video frame based on the eye gaze position; a second obtaining means 530, configured to obtain the motion of the device on condition of the processing means 520 determining that the eye gaze is deep inside the screen; and a compensating means 540, configured to compensate the motion of the device by video frame shifting in the screen of the device based on the motion of the device, so that the video frame virtually retains the same or nearly the same position relative to the user as before the motion.
In an exemplary embodiment, the compensating means 540 is further configured to: shift the video frame in the screen of the device in the direction reverse to the obtained motion, by a distance equal to that of the motion of the device.
In an exemplary embodiment, the video frame is shifted by a distance smaller than or equal to a predetermined distance.
In an exemplary embodiment, the video frame is shown in a size smaller than normal.
In an exemplary embodiment, the first obtaining means 510 is further configured to obtain the motion of the user's head; the compensating means 540 is further configured to calculate the relative movement between the motion of the user's head and the motion of the device; and the compensating means 540 is further configured to compensate the motion of the device by video frame shifting on the screen of the device based on the relative movement between the motion of the user's head and the motion of the device.
In an exemplary embodiment, the compensating means 540 is further configured to determine whether the change of the eye gaze position is due to the motion of the user's head or to eye saccadic movement; and the first obtaining means 510 is configured to obtain the motion of the user's head on condition of the compensating means 540 determining that the change of the eye gaze position is due to the motion of the user's head.
In an exemplary embodiment, the apparatus may further comprise:
a calculating means 550, configured to calculate the visual acuity of the user based on the eye gaze position; and
a transmitting means 560, configured to transmit the visual acuity to the video server which provides the video, so that the server can encode the video based on the visual acuity to reduce the quality of the video.
In an exemplary embodiment, the first obtaining means 510 comprises an eye tracker, and the second obtaining means 530 comprises an accelerometer and a gyroscope sensor.
At least one of the first obtaining means 510, the processing means 520, the second obtaining means 530, the compensating means 540, the calculating means 550, and the transmitting means 560 is assumed to comprise program instructions that, when executed, enable the apparatus to operate in accordance with the exemplary embodiments, as discussed above. Any of the first obtaining means 510, the processing means 520, the second obtaining means 530, the compensating means 540, the calculating means 550, and the transmitting means 560 as discussed above may be integrated together or implemented by separate components, may be of any type suitable to the local technical environment, and may comprise one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multi-core processor architectures, as non-limiting examples. Any memory mentioned above may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
In general, the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the exemplary embodiments of this invention may be illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
It will be appreciated that at least some aspects of the exemplary embodiments of the invention may be embodied in computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer-executable instructions may be stored on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid state memory, random access memory (RAM), etc. As will be realized by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGAs), and the like.
Although specific embodiments of the invention have been disclosed, those having ordinary skill in the art will understand that changes can be made to the specific embodiments without departing from the spirit and scope of the invention. The scope of the invention is not to be restricted therefore to the specific embodiments, and it is intended that the appended claims cover any and all such applications, modifications, and embodiments within the scope of the present invention.
Claims
1. A method for facilitating video rendering in a device, the method comprising:
obtaining the eye gaze position of the user in the screen of the device;
determining whether the eye gaze of the user is deep inside the screen or at the edge of the video frame based on the eye gaze position;
obtaining the motion of the device on condition of determining that the eye gaze is deep inside the screen; and
compensating the motion of the device by video frame shifting in the screen based on the motion of the device, so that the video frame retains the same or nearly the same position relative to the user as before the motion.
2. The method according to claim 1, wherein the step of compensating the motion of the device comprises:
shifting the video frame in the screen of the device in the direction reverse to the obtained motion, by a distance equal to that of the motion of the device.
3. The method according to claim 1 or 2, wherein the video frame is shifted by a distance smaller than or equal to a predetermined distance.
4. The method according to claim 1 or 2, wherein the video frame is shown in a size smaller than normal.
5. The method according to claim 1 or 2, the method further comprising:
obtaining the motion of the user's head;
calculating the relative movement between the motion of the user's head and the motion of the device; and
the step of compensating the motion of the device comprises: compensating the motion of the device by video frame shifting on the screen of the device based on the relative movement between the motion of the user's head and the motion of the device.
6. The method according to claim 5, the method further comprising:
determining whether the change of the eye gaze position is due to the motion of the user's head or to eye saccadic movement; and
the step of obtaining the motion of the user's head is performed on condition of determining that the change of the eye gaze position is due to the motion of the user's head.
7. The method according to claim 6, the method further comprising:
calculating the visual acuity of the user based on the eye gaze position; and transmitting the visual acuity to the video server which provides the video, so that the server can encode the video based on the visual acuity to reduce the quality of the video.
8. An apparatus for facilitating video rendering in a device, the apparatus comprising:
a first obtaining means, configured to obtain the eye gaze position of the user in the screen of the device;
a processing means, configured to determine whether the eye gaze is deep inside the screen or at the edge of the video frame based on the eye gaze position;
a second obtaining means, configured to obtain the motion of the device on condition of the processing means determining that the eye gaze is deep inside the screen; and
a compensating means, configured to compensate the motion of the device by video frame shifting in the screen of the device based on the motion of the device, so that the video frame retains the same or nearly the same position relative to the user as before the motion.
9. The apparatus according to claim 8, wherein the compensating means is further configured to:
shift the video frame in the screen of the device in the direction reverse to the obtained motion, by a distance equal to that of the motion of the device.
10. The apparatus according to claim 8 or 9, wherein the video frame is shifted by a distance smaller than or equal to a predetermined distance.
11. The apparatus according to claim 8 or 9, wherein the video frame is shown in a size smaller than normal.
12. The apparatus according to claim 8 or 9, wherein:
the first obtaining means is further configured to obtain the motion of the user's head;
the compensating means is further configured to calculate the relative movement between the motion of the user's head and the motion of the device; and
the compensating means is further configured to compensate the motion of the device by video frame shifting on the screen of the device based on the relative movement between the motion of the user's head and the motion of the device.
13. The apparatus according to claim 12, wherein:
the compensating means is further configured to determine whether the change of the eye gaze position is due to the motion of the user's head or to eye saccadic movement; and
the first obtaining means is configured to obtain the motion of the user's head on condition of the compensating means determining that the change of the eye gaze position is due to the motion of the user's head.
14. The apparatus according to claim 13, the apparatus further comprising:
a calculating means, configured to calculate the visual acuity of the user based on the eye gaze position; and
a transmitting means, configured to transmit the visual acuity to the video server which provides the video, so that the server can encode the video based on the visual acuity to reduce the quality of the video.
15. The apparatus according to claim 8 or 9, wherein the first obtaining means comprises an eye tracker, and the second obtaining means comprises an accelerometer and gyroscope sensor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP16826156.8A EP3395074A1 (en) | 2015-12-24 | 2016-11-24 | A method and apparatus for facilitating video rendering in a device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510990448.7A CN106921890A (en) | 2015-12-24 | 2015-12-24 | A kind of method and apparatus of the Video Rendering in the equipment for promotion |
CN201510990448.7 | 2015-12-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017109567A1 true WO2017109567A1 (en) | 2017-06-29 |
Family
ID=57796755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2016/001826 WO2017109567A1 (en) | 2015-12-24 | 2016-11-24 | A method and apparatus for facilitating video rendering in a device |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3395074A1 (en) |
CN (1) | CN106921890A (en) |
WO (1) | WO2017109567A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4336342A3 (en) * | 2022-09-12 | 2024-05-08 | FUJIFILM Corporation | Processor, image processing device, glasses-type information display device, image processing method, and image processing program |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109917923B (en) * | 2019-03-22 | 2022-04-12 | 北京七鑫易维信息技术有限公司 | Method for adjusting gazing area based on free motion and terminal equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080199049A1 (en) * | 2007-02-21 | 2008-08-21 | Daly Scott J | Methods and Systems for Display Viewer Motion Compensation Based on User Image Data |
EP2505223A1 (en) * | 2011-03-31 | 2012-10-03 | Alcatel Lucent | Method and device for displaying images |
US20120249600A1 (en) * | 2011-03-31 | 2012-10-04 | Kabushiki Kaisha Toshiba | Information processing apparatus and method |
US20120320500A1 (en) * | 2011-06-16 | 2012-12-20 | Hon Hai Precision Industry Co., Ltd. | Portable electronic device and method for using the same |
US20130234929A1 (en) * | 2012-03-07 | 2013-09-12 | Evernote Corporation | Adapting mobile user interface to unfavorable usage conditions |
US20140111550A1 (en) * | 2012-10-19 | 2014-04-24 | Microsoft Corporation | User and device movement based display compensation |
US20140247277A1 (en) * | 2013-03-01 | 2014-09-04 | Microsoft Corporation | Foveated image rendering |
US20150235084A1 (en) * | 2014-02-20 | 2015-08-20 | Samsung Electronics Co., Ltd. | Detecting user viewing difficulty from facial parameters |
DE102014103621A1 (en) * | 2014-03-17 | 2015-09-17 | Christian Nasca | Image stabilization process |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012114544A (en) * | 2010-11-22 | 2012-06-14 | Jvc Kenwood Corp | Video encoder |
- 2015-12-24 CN CN201510990448.7A patent/CN106921890A/en active Pending
- 2016-11-24 EP EP16826156.8A patent/EP3395074A1/en not_active Withdrawn
- 2016-11-24 WO PCT/IB2016/001826 patent/WO2017109567A1/en unknown
Non-Patent Citations (1)
Title |
---|
AHMAD RAHMATI ET AL: "NoShake: Content stabilization for shaking screens of mobile devices", PERVASIVE COMPUTING AND COMMUNICATIONS, 2009. PERCOM 2009. IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 9 March 2009 (2009-03-09), pages 1 - 6, XP031453095, ISBN: 978-1-4244-3304-9 * |
Also Published As
Publication number | Publication date |
---|---|
EP3395074A1 (en) | 2018-10-31 |
CN106921890A (en) | 2017-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102088472B1 (en) | Adjustment of video rendering speed of virtual reality content and processing of stereoscopic images | |
US11789686B2 (en) | Facilitation of concurrent consumption of media content by multiple users using superimposed animation | |
US9626741B2 (en) | Systems and methods for configuring the display magnification of an electronic device based on distance and user presbyopia | |
US11816820B2 (en) | Gaze direction-based adaptive pre-filtering of video data | |
US10643307B2 (en) | Super-resolution based foveated rendering | |
US10200725B2 (en) | Adaptive data streaming based on virtual screen size | |
US20140118240A1 (en) | Systems and Methods for Configuring the Display Resolution of an Electronic Device Based on Distance | |
JP2014526157A (en) | Classification of the total field of view of the head mounted display | |
JP2017502338A (en) | Dynamic resolution control of GPU and video using retinal perception model | |
US10764499B2 (en) | Motion blur detection | |
JP6750697B2 (en) | Information processing apparatus, information processing method, and program | |
US20170186237A1 (en) | Information processing apparatus, information processing method, and storage medium | |
WO2017109567A1 (en) | A method and apparatus for facilitating video rendering in a device | |
US20230410699A1 (en) | Structured display shutdown for video pass-through electronic devices | |
JP2014174312A (en) | Data regeneration device, integrated circuit, mobile apparatus, and data regeneration method | |
KR20180101950A (en) | Method and apparatus for processing an image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16826156 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |