GB2568112A - Method and system for processing display data - Google Patents

Publication number
GB2568112A
GB2568112A (application GB1718421.9A)
Authority
GB
United Kingdom
Prior art keywords
frame
colour
signature
display data
frames
Prior art date
Legal status
Granted
Application number
GB1718421.9A
Other versions
GB201718421D0 (en)
GB2568112B (en)
Inventor
Joveluro Prince
David Cooper Patrick
Current Assignee
DisplayLink UK Ltd
Original Assignee
DisplayLink UK Ltd
Priority date
Application filed by DisplayLink UK Ltd
Priority to GB1718421.9A
Publication of GB201718421D0
Priority to PCT/GB2018/052966
Publication of GB2568112A
Application granted
Publication of GB2568112B
Legal status: Active

Classifications

    • H04N9/64 Circuits for processing colour signals
    • G06T1/60 Memory management (general purpose image data processing)
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/137 Adaptive coding controlled by motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/14 Adaptive coding controlled by coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/172 Adaptive coding in which the coding unit is an image region being a picture, frame or field
    • H04N19/18 Adaptive coding in which the coding unit is a set of transform coefficients
    • H04N19/186 Adaptive coding in which the coding unit is a colour or a chrominance component
    • H04N19/507 Predictive coding involving temporal prediction using conditional replenishment
    • H04N19/587 Predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H04N19/625 Transform coding using the discrete cosine transform [DCT]
    • H04N19/174 Adaptive coding in which the coding unit is an image region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176 Adaptive coding in which the coding unit is an image region being a block, e.g. a macroblock


Abstract

A method of processing display data for a system, comprises: determining a first colour signature for a first frame, or a portion of a first frame, of generated display data; storing the first colour (e.g. RGB) signature; determining a second colour signature for a second frame, or frame portion, of generated display data, for consecutively displaying to the user following the first frame or portion, the portion of the second frame corresponding to the portion of the first frame; comparing the second colour signature to the first colour signature to determine a difference in the colour signatures; and comparing the difference in the colour signatures to a first threshold, and if the difference in the colour signatures is below the threshold, identifying the second frame or portion as a candidate for dropping (discarding). Colour signature determination may comprise compression, determining average frame colour and/or applying a transform e.g. Discrete Cosine Transform (DCT), generating a DC frame value. Colour signatures may comprise red, green, blue, luma and/or chroma data. The method is applicable to virtual reality (VR) systems, e.g. via a head mounted display (HMD).

Description

The following terms are registered trade marks and should be read as such wherever they occur in this document:
Wi-Fi (Page 4)
HDMI (Page 9)
Method and system for processing display data
The present invention relates to a method and system for processing display data, and in particular to a method and system for processing display data for use in virtual reality systems.
Virtual reality systems typically comprise a host computer for generating display data, and a display device, such as a virtual reality headset, for displaying the display data to a user. The host computer may compress and encode the display data for sending to the display device. For such systems to effectively simulate a real-life situation, the resolution of the display data must be high, and the refresh and update rates must be fast. For this reason, virtual reality is currently one of the most bandwidth-intensive technologies, typically requiring around 90 frames per second to be sent between the host computer and display device. Attempts have been made to reduce the bandwidth requirements, but typically only by compressing and/or encoding the display data at the host computer before the display data is sent to the display device. Any way in which the bandwidth requirements could be reduced would clearly be beneficial in allowing virtual reality technology to develop further.
The applicant has identified a method for rapidly processing display data, and so a method suitable for such virtual reality systems, that identifies when frames are similar enough to be dropped, thereby reducing bandwidth requirements without compromising the quality of the user experience.
According to an aspect of the present invention, there is provided a method of processing display data for a system, the system comprising a host device for generating display data, and a display device for displaying the generated display data to a user, wherein the generated display data comprises frames of display data for displaying consecutively to a user, the method comprising:
determining a first colour signature for a first frame, or a portion of a first frame, of generated display data;
storing the first colour signature in a memory;
determining a second colour signature for a second frame, or a portion of a second frame, of generated display data, wherein the second frame or portion of the second frame is for consecutively displaying to the user following the first frame, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame,
comparing the second colour signature to the first colour signature to determine a difference in the colour signatures; and comparing the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the method further comprises identifying the second frame, or portion of the second frame, as a candidate for dropping.
The applicant has found that when a colour signature for the second frame (or portion) is determined to be the same as, or within a predetermined range of, the colour signature for the first frame (or portion), it can be assumed with reasonable probability that the second frame (or portion) is unchanged from the first frame (or portion), at least for the purposes of perception by the human eye, i.e. a user would see no change between the first frame and second frame if they were displayed consecutively to the user. This may allow for the second frame (or portion) to be dropped, and so not sent to the display device for displaying to a user. Processing data rapidly and dropping frames, or portions of frames, in this way can greatly reduce the bandwidth requirements of the system, which is clearly beneficial. This method may be particularly efficient since the frames typically need to be compressed or processed before being sent to the display device in any case, and this method can take advantage of values that would be generated during compression/processing. This method therefore requires very little additional processing, but has the potential to vastly reduce the bandwidth requirements of the system.
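By way of illustration only (this sketch does not form part of the claimed subject-matter), the comparison step may be expressed as follows. The sum-of-absolute-differences metric and the function name are assumptions for illustration, since the text leaves the exact difference measure open:

```python
def is_drop_candidate(sig_prev, sig_curr, threshold):
    """Return True if the current frame's colour signature is close
    enough to the previous one for the frame (or frame portion) to be
    identified as a candidate for dropping.

    Signatures are per-channel values, e.g. (R, G, B) averages. The
    difference metric used here (sum of absolute per-channel
    differences) is an illustrative assumption.
    """
    diff = sum(abs(a - b) for a, b in zip(sig_prev, sig_curr))
    return diff < threshold
```

For example, two frames whose average colours differ by a single grey level in one channel would typically fall below the threshold and be flagged as drop candidates.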
Each frame or portion of a frame will typically have thousands or millions of pixels and will always have at least 1000 pixels. Corresponding portions of frames may be located at the same position, or location, within a frame.
Determining a colour signature for a frame, or portion of a frame, may comprise compressing the frame, or portion of the frame, respectively. Determining a colour signature for a frame, or portion of a frame, may comprise determining an average colour value for the frame, or portion of the frame, respectively. Determining a colour signature for a frame, or portion of a frame, may comprise applying a transform to the frame, or portion of the frame, respectively, wherein the colour signature corresponds to a generated DC value for the frame, or portion of the frame, respectively, following the transform. The transform may comprise Haar encoding. The transform may comprise a discrete cosine transform (DCT).
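As an illustrative sketch (not part of the claimed subject-matter), an average-colour signature may be computed as below. The per-channel mean is used here; an unnormalised transform such as the DCT produces a DC coefficient proportional to the same sum of pixel values, which is why a codec that already applies such a transform obtains the signature at essentially no extra cost:

```python
def average_colour_signature(pixels):
    """Compute a colour signature as the per-channel average of a frame
    (or frame portion), given pixels as (R, G, B) tuples.

    Note: for an unnormalised transform such as the DCT, the DC
    coefficient is proportional to the sum accumulated here, so the
    signature can be read off values generated during compression.
    """
    n = len(pixels)
    sums = [0, 0, 0]
    for p in pixels:
        for c in range(3):
            sums[c] += p[c]
    return tuple(s / n for s in sums)
```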
The colour signature for a frame, or portion of a frame, may correspond to any one or more of a red, green, blue, luma and chroma signature. Use of certain colour signatures may be more suitable for certain applications.
On identifying the second frame, or portion of the second frame, as a candidate for dropping, the method may further comprise dropping the second frame, or portion of the second frame. Given that it has been identified that the frames are unchanged for the purpose of perception by the human eye, it may be more efficient to just drop the frame or portion at this stage. This may be beneficial where it would be particularly helpful to reduce the bandwidth.
On identifying the second frame, or portion of the second frame, as a candidate for dropping, the method may further comprise determining whether to drop the second frame, or portion of the second frame, or whether to instead send the second frame, or portion of the second frame, to the display device. This second check may be performed to prevent longer term divergence, and error propagation in the system.
The memory may be configured to store a predetermined number of colour signatures, and wherein the method may further comprise tagging the colour signatures stored in the memory to indicate whether the corresponding frames, or portions of frames, were dropped.
The memory may be configured to store just the previous colour signature.
A counter may be increased when a frame, or portion of a frame, is dropped, and on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method may further comprise comparing the counter value to a second threshold value to determine whether to drop the second frame, or portion of the second frame, wherein if the counter value exceeds the second threshold value, the method further comprises dropping the second frame, or portion of the second frame.
On identifying the second frame, or portion of the second frame, as a candidate for dropping, the method may further comprise determining a number of previous frames, or portions of frames, immediately preceding the second frame, or portion of the second frame, that were dropped, and comparing this number to a third threshold value, wherein if this number does not exceed this threshold value, the method comprises dropping the second frame, or portion of the second frame, and wherein if this number of previous frames, or portions of frames, does exceed this third threshold value, the method comprises sending the second frame, or portion of the second frame, to the display
device for displaying to the user. The method thereby reduces the chance of any longer-term divergence, and error propagation in the system. When it is determined that a certain number of consecutive frames have been dropped by the system, and so have not been sent to the display device for displaying to a user, the method ensures that the current frame is then sent to the display device for display to the user.
The third threshold value may be any one of 5, 6, 7, 8, 9 or 10. The third threshold may also be below 5 or above 10. It may be possible to alter the third threshold depending on the application. Particular thresholds may be more suited to particular applications.
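The consecutive-drop limit described above can be sketched as follows (illustrative only; the function shape and default threshold are assumptions, though the default of 8 lies within the 5 to 10 range suggested in the text):

```python
def decide_drop(is_candidate, consecutive_drops, third_threshold=8):
    """Decide whether to drop a candidate frame, forcing a send once
    too many consecutive frames have already been dropped, to limit
    longer-term divergence and error propagation.

    Returns an ('action', updated_run_length) pair, where action is
    'drop' or 'send'. A frame is dropped only while the run of
    immediately preceding dropped frames does not exceed the third
    threshold.
    """
    if is_candidate and consecutive_drops <= third_threshold:
        return "drop", consecutive_drops + 1
    return "send", 0
```

Sending a frame resets the run length, so at most `third_threshold + 1` consecutive frames can ever be withheld from the display device under this sketch.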
The method may further comprise detecting movement of the display device, and if movement of the display device is detected, the first threshold may be reduced to provide for a higher similarity requirement. This may help reduce the chance of error propagation in the system. The first threshold may be reduced further where detected movement is faster.
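A movement-adaptive threshold may be sketched as below. The linear scaling with speed is an illustrative assumption; the text states only that the threshold is reduced on movement, and reduced further for faster movement:

```python
def adjust_threshold(base_threshold, movement_speed, scale=0.5):
    """Reduce the first (similarity) threshold when movement of the
    display device is detected, so a stricter match is required before
    a frame may be dropped. Faster movement gives a lower threshold.

    The 1 / (1 + scale * speed) form is a hypothetical choice made for
    illustration only.
    """
    if movement_speed <= 0:
        return base_threshold
    return base_threshold / (1.0 + scale * movement_speed)
```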
The generated display data may be video data and a frame of the video data may correspond to image data.
The frames of display data may be generated, processed and sent to the display device at a rate of approximately at least 50 frames per second, or at least 60 frames per second, or at least 90 frames per second, or at least 120 frames per second for displaying to a user.
The system may be a virtual reality system. This method may be particularly advantageous in such systems.
The host device and the display device may be wirelessly connected. Having a method that allows for reduced bandwidth requirements may provide for greater ease in use of wireless display devices, and provide for a greater ability in further developing such systems. The wireless connection may use any one or more of Wi-Fi, radio, the Internet or any other suitable technology.
The display device may be a head mounted display, and/or may comprise augmented reality glasses.
The host device and the display device may be contained within a housing. The housing may be the casing for any one of a mobile phone, a PDA, a tablet or any other handheld portable device.
The memory may be a buffer. This may be particularly suitable. Reduced storage
requirements may free up the processing capacity of the system for other uses. Any other suitable memory or memory unit could be used.
According to an aspect of the present invention, there is provided a system for processing display data, the system comprising:
a host device for generating and processing display data, wherein the generated display data comprises frames of display data for displaying consecutively to a user, the host device comprising a processor and a memory; and a display device, connected to the host device, configured to receive generated display data from the host device and to display the generated display data to a user;
wherein the processor is configured to:
determine a first colour signature for a first frame, or a portion of a first frame, of generated display data and to store the first colour signature in the memory;
determine a second colour signature for a second frame, or a portion of a second frame, of generated display data, wherein the second frame, or portion of the second frame, is for consecutively displaying to the user following the first frame of generated display data, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame;
compare the second colour signature to the first colour signature to determine a difference in the colour signatures; and compare the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the processor is further configured to identify the second frame, or portion of the second frame, as a candidate for dropping.
The applicant has found that when a colour signature for the second frame (or portion) is determined to be the same as, or within a predetermined range of, the colour signature for the first frame (or portion), it can be assumed with reasonable probability that the second frame (or portion) is unchanged from the first frame (or portion), at least for the purposes of perception by the human eye, i.e. a user would see no change between the first frame and second frame if they were displayed consecutively to the user. This may allow for the second frame (or portion) to be dropped, and so not sent to the display device for displaying to a user. Processing data rapidly and dropping frames, or portions of frames, in this way can greatly reduce the bandwidth requirements of the system, which is clearly beneficial. This system may be particularly efficient since the frames typically need to be compressed or processed before being sent to the display device in any case, and this system can take advantage of values that would be generated during compression/processing. This system therefore requires very little additional processing, but has the potential to vastly reduce the bandwidth requirements of the system.
To determine a colour signature for a frame or portion of a frame, the processor may be configured to compress the frame or portion of the frame, respectively.
To determine a colour signature for a frame or portion of a frame, the processor may be configured to determine an average colour value for the frame, or portion of the frame, respectively.
To determine a colour signature for a frame or portion of a frame, the processor may be configured to apply a transform to the frame, or portion of the frame, respectively, and the colour signature may correspond to a generated DC value for the frame, or portion of the frame, respectively, following the transform. The transform may comprise Haar encoding. The transform may comprise a discrete cosine transform (DCT).
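The Haar option can be illustrated with the low-pass half of a Haar decomposition, which repeatedly averages adjacent values; the single value that remains is the DC term. This is an illustrative sketch only, and omits the detail (high-pass) coefficients since only the DC value is needed for the signature:

```python
def haar_dc(values):
    """Repeatedly average adjacent pairs (the low-pass branch of a Haar
    decomposition) until one value, the DC term, remains.

    For a power-of-two length input this equals the overall average of
    the input, which is exactly the kind of per-frame value that can be
    reused as a colour signature.
    """
    while len(values) > 1:
        values = [(a + b) / 2 for a, b in zip(values[::2], values[1::2])]
    return values[0]
```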
The colour signature for a frame, or portion of a frame, may correspond to any one or more of a red, green, blue, luma and chroma signature.
On identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor may be further configured to drop the second frame, or portion of the second frame.
On identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor may be further configured to determine whether to drop the second frame, or portion of the second frame, or whether to instead send the second frame, or portion of the second frame, to the display device.
The memory may be configured to store a predetermined number of colour signatures, and the processor may be configured to tag the colour signatures stored in the memory to indicate whether the corresponding frames, or portions of frames, were dropped.
The memory may be configured to store just the previous colour signature.
The system may include a counter, and the processor may be configured to increase the counter when a frame, or portion of a frame, is dropped, and on identifying the second frame, or portion of the second frame, as a candidate for dropping, the
processor may be further configured to compare the counter value to a second threshold value to determine whether to drop the second frame, or portion of the second frame, wherein if the counter value exceeds the second threshold value, the processor is further configured to drop the second frame, or portion of the second frame.
On identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor may be configured to determine the number of previous frames, or portions of frames, immediately preceding the second frame, or portion of the second frame, that were dropped, and to compare this number to a third threshold value, wherein if this number does not exceed this threshold value, the processor may be configured to drop the second frame, or portion of the second frame, and wherein if this number does exceed this third threshold value, the processor may be configured to send the second frame, or portion of the second frame, to the display device for displaying to the user. The third threshold value may be any one of 5, 6, 7, 8, 9 or 10. The third threshold may also be below 5 or above 10. It may be possible to alter the third threshold depending on the application.
The system may comprise means for detecting movement of the display device, wherein the means are in communication with the processor, and wherein when movement of the display device is detected, the processor is configured to reduce the first threshold to provide for a higher similarity requirement. The means may comprise an accelerometer or any other suitable device. The first threshold may be reduced further where detected movement is faster.
The generated display data may be video data and a frame of the video data may correspond to image data.
The frames of display data may be generated, processed and sent to the display device at a rate of approximately at least 50 frames per second, or at least 60 frames per second, or at least 90 frames per second, or at least 120 frames per second for displaying to a user.
The system may be a virtual reality system. The host device and the display device may be wirelessly connected. The display device may be a head mounted display, and/or may comprise augmented reality glasses.
The host device and the display device may be contained within a housing. The housing may be the casing for any one of a mobile phone, a PDA, a tablet or any other handheld portable device.
The memory may be a buffer. This may be particularly suitable. Reduced storage requirements may free up the processing capacity of the system for other uses. Any other suitable memory or memory unit could be used.
Any one or more features from one embodiment or aspect of the present invention as described herein may be incorporated into any other embodiment or aspect of the present invention, as appropriate and applicable.
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a block diagram overview of a system for processing display data;
Figure 2 shows a block diagram overview of a system for processing display data wherein the display device is a headset;
Figure 3 shows a block diagram overview of a system for processing display data, wherein the host device and display device are contained within a single casing, for example in a smartphone or other such mobile computing device;
Figure 4 shows a method for processing display data in accordance with an embodiment of the present invention; and
Figure 5 shows an exemplary Haar encoding process.
Figure 1 shows a block diagram overview of a system according to the current art. A host computer [11] is connected to a display control device [12], which is in turn connected to a display device [13]. The host [11] contains an application [14], which produces display data. The display data may be produced and sent for compression either as complete frames or as canvasses, which may, for example, be separate application windows. In either case, they are made up of tiles of pixels, where each tile is a geometrically-shaped collection of one or more pixels.
The display data is sent to a compression engine [15], which may comprise software running in a processor or an appropriate hardware engine. The compression engine [15] may perform an encoding of the data, to convert the data into a format that may then be further compressed, minimising data loss.
The compression engine [15] may then further compress the data and thereafter send the compressed data to an output engine [16]. The output engine [16] manages the connection with the display control device [12] and may, for example, include a socket for a cable to be plugged into for a wired connection or a radio transmitter for a wireless connection. In either case, it is connected to a corresponding input engine [17] on the
display control device [12].
The input engine [17] is connected to a decompression engine [18]. When it receives compressed data, it sends it to the decompression engine [18] or to a memory from which the decompression engine [18] can fetch it according to the operation of a decompression algorithm. In any case, the decompression engine [18] may decompress the data, if necessary, and performs a decoding operation. In the illustrated system, the decompressed data is then sent to a scaler [19]. In the case where the display data was produced and compressed as multiple canvasses, it may be composed into a frame at this point.
If scaling is necessary, it is preferable for it to be carried out on the display control device [12], as this minimises the volume of data to be transmitted from the host [11] to the display control device [12]; the scaler [19] operates to convert the received display data to the correct dimensions for display on the display device [13]. In some embodiments, the scaler may be omitted or may be implemented as part of the decompression engine. The data is then sent to an output engine [110] for transmission to the display device [13]. This may include, for example, converting the display data to a display-specific format such as VGA, HDMI, etc.
In one example, the display device is a virtual reality headset [21], as illustrated in Figure 2, connected to a host device [22], which may be a computing device, gaming station, etc. The virtual reality headset [21] incorporates two display panels [23], which may be embodied as a single panel split by optical elements. In use, one display is presented to each of a viewer’s eyes. The host device [22] generates image data for display on these panels [23] and transmits the image data to the virtual reality headset [21].
In another example, the headset is a set of augmented reality glasses. As in the virtual reality headset [21] shown in Figure 2, there are two display panels, each associated with one of the user’s eyes, but in this example the display panels are translucent.
The host device [22] may be a static computing device such as a computer, gaming console, etc., or may be a mobile computing device such as a smartphone or smartwatch. As previously described, it generates image data and transmits it to the augmented reality glasses or virtual reality headset [21] for display.
The display device may be connected to the host device [11, 22] or display
control device [12], if one is present, by a wired or wireless connection. While a wired connection minimises latency in transmission of data from the host to the display, wireless connections give the user much greater freedom of movement within range of the wireless connection and are therefore preferable. A balance must be struck between high compression of data, in particular video data, which can be used to enable larger amounts of data (e.g. higher resolution video) to be transmitted between the host and display, and the latency that will be introduced by processing of the data.
Ideally, the end-to-end latency between sensing a user’s head movement, generating the pixels in the next frame of the VR (virtual reality) scene and streaming the video should be kept below 20ms, preferably below 10ms, further preferably below 5ms.
The wireless link should be implemented as a high bandwidth short-range wireless link, for example at least 1 Gbit/s, preferably at least 2 Gbit/s, preferably at least 3 Gbit/s. An "extremely high frequency" (EHF) radio connection, such as a 60 GHz radio connection, is suitable for providing such high-bandwidth connections over short-range links. Such a radio connection can implement the WiFi standard IEEE 802.11ad. The 71-76, 81-86 and 92-95 GHz bands may also be used in some implementations.
The wireless links described above can provide transmission between the host and the display of more than 50 frames per second, preferably more than 60 frames per second, further preferably more than 90 frames per second, or even as high as 120 frames per second.
Figure 3 shows a system which is similar in operation to the example shown in Figure 2. In this case, however, there is no separate host device [22]. The entire system is contained in a single casing [31], for example in a smartphone or other such mobile computing device. The device contains a processor [33], which generates display data for display on the integral display panel [32]. The mobile computing device may be mounted such that the screen is held in front of the user's eyes as if it were the screen of a virtual reality headset.
In accordance with an embodiment of the present invention, for example with reference to Figure 4, there is a system and method for processing display data. This system and method may comprise any or all of the features of the systems described above in relation to Figures 1-3. In this system, a computing device, or host, contains an application, which generates image, or display, data, for playing back to a user as video.
When such display data is generated by the host, it comprises frames of display
data for consecutive viewing by the user, and each frame comprises a plurality of pixels, wherein each pixel may comprise values for the levels of red (R), green (G), and blue (B) therein. This is known as RGB. The pixel data can be processed as separate, or as a combination of, R, G, and B values, and/or the pixel data can be converted to luma (Y) and chroma (α, β) values, where luma indicates the luminance of the pixel and chroma indicates its colour. Luma may be calculated as follows: Y = k(R+G+B), where k is a constant to appropriately scale the Y value. The two chroma values may comprise parts of the original RGB value as follows: α = aR+bG+cB; and β = a'R+b'G+c'B, where a, b, c and a', b' and c' are constants. A simple transform often used makes a=0, b=1, c=1 and a'=1, b'=1, c'=0, resulting in: α = G+B; and β = G+R. These constants are generally pre-programmed and are not changed to adapt to different circumstances. Where there is a preponderance of a colour, corrections can be made before the display data is displayed to a user.
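As a rough illustration, the conversion above can be sketched in code; the function name and the choice of scaling constant k are assumptions made for this example only, not part of the described system:

```python
def rgb_to_luma_chroma(r, g, b, k=1.0 / 3.0):
    """Convert an RGB pixel to luma (Y) and two chroma values,
    using the simple transform above: a=0, b=1, c=1 and
    a'=1, b'=1, c'=0.  The scaling constant k is an assumption."""
    y = k * (r + g + b)   # luma: scaled sum of the components
    alpha = g + b         # first chroma value (a=0, b=1, c=1)
    beta = g + r          # second chroma value (a'=1, b'=1, c'=0)
    return y, alpha, beta
```

Because the chosen constants simply sum pairs of components, the transform is cheap to compute per pixel, at the cost of not matching any standard colour space.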
The display data is sent for processing/compression as complete frames, or as tiles of pixels; the frames are made up of tiles of pixels, where each tile is a geometrically-shaped collection of one or more pixels. Processing/compression of a frame (or tile) in one embodiment involves performing a transform on the pixel values for that frame. This is preferably a Haar encoding process, but may also be a Discrete Cosine Transformation (DCT). This results in a single DC value being generated for a frame (or tile), associated with a particular pixel for that frame (or tile), which provides a colour signature for that frame (or tile) and represents the average colour of the pixels in that frame (or tile). It will be appreciated that this could also be performed on individual tiles in the frames, as well as, or instead of, on whole frames. For a frame size of N pixels, performing such a transform also results in generation of N-1 AC values, one for each of the remaining pixels in the frame, these AC values representing the colour change between the pixels, across the frame. This may be performed on the display data in the RGB colour space and/or in a colour space related to luma and chroma.
These and/or other display data values may then be used in further processing of the image data, and for sending to an output engine in the host, which manages the connection with a display control device, and so display device. As such, the output engine is connected to a corresponding input engine on the display control device, and so display device.
Before being sent to the output engine, the DC value for a frame, and in some
cases the corresponding AC values, are stored in a memory unit of the host. The memory unit may be a short-term or temporary memory, for instance a buffer.
This allows the DC value of a frame to be compared to the DC value of the previous frame (and in some cases similarly comparing the AC values too). Whilst the above and foregoing describes the process in relation to use of the DC values for frames, the process could also use the DC value for tiles of frames, and/or the corresponding AC values for frames or tiles of frames in a similar manner.
The DC value of the current frame being processed is compared to the DC value of the previous frame that is stored in the buffer. This could be a DC value for each colour component, or for a combination. Either way, when the current DC value falls within a predetermined range of DC (or colour) values centred on the corresponding DC value for the previous frame, it can be assumed that the frame is unchanged (at least according to the perception of the human eye), and so the frame is identified as a candidate for dropping. In some embodiments, the current frame can simply be dropped at this stage. If the frame is dropped, it is not forwarded to the display control device or display device for displaying to a user. The DC value corresponding to the dropped frame may be tagged to indicate that the frame was dropped. This current DC value is then stored in the buffer for comparison with the corresponding DC value for the next frame. In some embodiments, a further check is performed on the candidate before deciding whether the frame should be dropped.
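In code, the comparison might look like the sketch below; the function name and the use of a simple absolute difference to express "within a predetermined range centred on the previous value" are illustrative assumptions:

```python
def is_drop_candidate(current_dc, previous_dc, first_threshold):
    """Identify the current frame (or tile) as a candidate for
    dropping when its DC value lies within a predetermined range,
    of half-width first_threshold, centred on the previous frame's
    DC value."""
    return abs(current_dc - previous_dc) < first_threshold
```

Where a DC value is kept per colour component, the same check could be applied per component, with the frame a candidate only if every component passes.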
The buffer may store a predetermined number of the DC values, for instance x DC values (where x may for instance be 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or more). As mentioned, the DC values may be tagged or marked with whether the associated frames were dropped. Therefore, once a candidate for dropping is determined, before dropping the frame associated with the current DC value, it is determined whether a predetermined number of the previous frames preceding the current frame, for instance the previous y frames (where y may for instance be 5, 6, 7, 8, 9, 10 or more), were sent for further processing and to the display device, using the previous y DC values stored in the buffer. If the previous y frames were not sent for further processing and to the display device, then the current frame is nonetheless sent for further processing for sending to the display device. This may be carried out by determining the number of DC values in the buffer, preceding the DC value associated with the current
frame, that are tagged to indicate that their associated frame was dropped, and if the number of previous frames that were dropped exceeds a predetermined threshold value, then the current frame is nonetheless sent for further processing and/or for sending to the display device. This prevents the propagation of errors in the system. Otherwise, the current frame is dropped: it is not sent for further processing and not sent to the display device. Being able to process the data and determine whether to drop frames quickly in this way enables a reduction in the required bandwidth, while still providing the user of the display device with an acceptable level of visual quality.
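The check on the tagged buffer might be sketched as follows, with the buffer holding (DC value, dropped) pairs; the names and data layout are assumptions for illustration:

```python
def send_despite_candidate(tagged_buffer, y):
    """Return True if the y entries immediately preceding the current
    candidate are all tagged as dropped, in which case the candidate
    frame is sent anyway to prevent errors propagating."""
    recent = tagged_buffer[-y:]          # last y (dc, dropped) pairs
    return len(recent) == y and all(dropped for _, dropped in recent)
```

If the buffer holds fewer than y entries, the sketch conservatively reports False, i.e. the normal drop decision stands.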
Rather than tagging the DC values in the buffer with whether the associated frames were dropped, it may be preferable to make use of a counter. There could be a counter that counts the frames as they are dropped, and then resets when a frame is not dropped. This counter could therefore instead be compared to a threshold value to determine whether a candidate for dropping should be dropped. In that case, the memory may only need to store a single colour signature, colour value or DC value.
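One plausible reading of this counter variant, with class and method names assumed for illustration:

```python
class ConsecutiveDropCounter:
    """Counts consecutive dropped frames; a candidate is dropped only
    while the run of drops stays below the threshold, after which the
    frame is sent anyway and the counter resets."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def decide(self):
        """Return True to drop the current candidate, False to send it."""
        if self.count >= self.threshold:
            self.count = 0      # send the frame, reset the run
            return False
        self.count += 1         # drop the frame, extend the run
        return True
```

This needs only an integer of state alongside the single stored colour signature, rather than a buffer of tagged values.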
In some embodiments, the display device may comprise an accelerometer, or other means for detecting and/or measuring motion of the display device. The host device may be configured to monitor movement of the display device, in particular whether the display device is moving and how fast the display device may be moving. In some embodiments, when the display device is moving, the frame may be sent regardless. In some embodiments, the similarity requirements for the DC values, or other colour values, may be stricter when it is determined that the display device is moving and/or when it is determined that the display device is moving at a velocity above a particular threshold. For instance, when the DC value of the current frame is compared to the DC value of the previous frame that is stored in the buffer, the predetermined range of colour values may be narrower when the display device is moving. This is to ensure that it can still be assumed that the frame is unchanged, at least according to the perception of the human eye, and so the user.
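A sketch of the motion-dependent adjustment; the function name and scale factors are arbitrary assumptions for this example:

```python
def adjusted_first_threshold(base, is_moving, speed=0.0,
                             moving_scale=0.5, speed_scale=0.05):
    """Narrow the predetermined range of colour values when the
    display device is moving, and narrow it further at higher speed,
    so that a stricter similarity is required before a frame may be
    identified as a candidate for dropping."""
    if not is_moving:
        return base
    narrowed = base * moving_scale - speed * speed_scale * base
    return max(narrowed, 0.0)  # never allow a negative range
```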
The above method and system of an embodiment of the present invention has been described in relation to the use of a transform generally, but particular methods of compression and/or encoding may be preferable in certain situations. It may be preferable to use a Haar encoding/transformation process. This may produce suitable colour signatures or DC/AC values for use as described above.
A Haar transformation process that may be implemented in conjunction with the systems described herein will now be explained with reference to Figure 5, and Figures 1-3. The Haar transform takes place on the host [11], specifically in the compression engine [15]. Decompression takes place on the display control device [12], specifically in the decompression engine [18], where the data is put through an inverse Haar transform to return it to its original form.
In the example shown, a group of four tiles [41] has been produced by the application [14] and passed to the compression engine [15]. In this example, each tile [41] comprises one pixel, but may of course be much larger. Each pixel [41] has a value indicating its colour, here represented by the pattern of hatching. The first pixel [41A] is marked with dots and considered to have the lightest colour. The second pixel [41B] is marked with diagonal hatching and is considered to have the darkest colour. The third pixel [41C] is marked with vertical hatching and is considered to have a light colour, and the fourth pixel [41D] is marked with horizontal hatching and is considered to have a dark colour. The values of the four pixels [41] are combined using the formulae [44] shown to the right of Figure 5 to produce a single pixel value [42], referred to as "W", which is shaded in grey to indicate that its value is derived from the original four pixels [41], as well as a set of coefficients [43] referred to in Figure 5 as "x, y, z". The pixel value [42] is generated from a sum of the values of all four pixels: ((A+B)+(C+D)). The three coefficients [43] are generated using the other three formulae [44] as follows:
• x: (A-B)+(C-D)
• y: (A+B)-(C+D)
• z: (A-B)-(C-D)
Any or all of these values may then be quantised: divided by a constant and rounded, in order to produce a smaller number which will be less accurate but can be more effectively compressed. "W" may be used as a colour value for the tile, or frame where appropriate, alone and/or in combination with any or all of the coefficients.
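The transform of Figure 5 and its inverse can be written directly from the formulae above; the function names are illustrative, and quantisation is omitted:

```python
def haar_2x2(a, b, c, d):
    """Forward 2x2 Haar step: combine four pixel values into the
    single value W and the three coefficients x, y, z of Figure 5."""
    w = (a + b) + (c + d)   # W: sum of all four pixel values
    x = (a - b) + (c - d)
    y = (a + b) - (c + d)
    z = (a - b) - (c - d)
    return w, x, y, z

def inverse_haar_2x2(w, x, y, z):
    """Inverse step, as performed in the decompression engine,
    recovering the original four pixel values."""
    a = (w + x + y + z) / 4
    b = (w - x + y - z) / 4
    c = (w + x - y - z) / 4
    d = (w - x - y + z) / 4
    return a, b, c, d
```

Note that W is four times the average pixel value, which is why it can serve as the colour signature described earlier; dividing it by the number of pixels gives the average colour directly.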
The above embodiments and examples are described by way of example only, and are in no way intended to limit the scope of the present invention as defined by the appended claims.

Claims (40)

1. A method of processing display data for a system, the system comprising a host device for generating display data, and a display device for displaying the generated display data to a user, wherein the generated display data comprises frames of display data for displaying consecutively to a user, the method comprising:
determining a first colour signature for a first frame, or a portion of a first frame, of generated display data;
storing the first colour signature in a memory;
determining a second colour signature for a second frame, or a portion of a second frame, of generated display data, wherein the second frame or portion of the second frame is for consecutively displaying to the user following the first frame, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame;
comparing the second colour signature to the first colour signature to determine a difference in the colour signatures; and comparing the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the method further comprises identifying the second frame, or portion of the second frame, as a candidate for dropping.
2. A method as claimed in claim 1, wherein determining a colour signature for a frame, or portion of a frame, comprises compressing the frame, or portion of the frame, respectively.
3. A method as claimed in either of claims 1 or 2, wherein determining a colour signature for a frame, or portion of a frame, comprises determining an average colour value for the frame, or portion of the frame, respectively.
4. A method as claimed in any preceding claim, wherein determining a colour signature for a frame, or portion of a frame, comprises applying a transform to the frame, or portion of the frame, respectively, and wherein the colour signature corresponds to a generated DC value for the frame, or portion of the frame, respectively, following the transform.
5. A method as claimed in any preceding claim, wherein the colour signature for a frame, or portion of a frame, corresponds to any one or more of a red, green, blue, luma and chroma signature.
6. A method as claimed in any preceding claim, wherein a frame, or portion of a frame, comprises at least 1000 pixels.
7. A method as claimed in any preceding claim, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method further comprises dropping the second frame, or portion of the second frame.
8. A method as claimed in any of claims 1 to 6, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method further comprises determining whether to drop the second frame, or portion of the second frame, or whether to instead send the second frame, or portion of the second frame, to the display device.
9. A method as claimed in any preceding claim, wherein the memory is configured to store a predetermined number of colour signatures, and wherein the method further comprises tagging the colour signatures stored in the memory to indicate whether the corresponding frames, or portions of frames, were dropped.
10. A method as claimed in any preceding claim, wherein a counter is increased when a frame, or portion of a frame, is dropped, and wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method further comprises comparing the counter value to a second threshold value to determine whether to drop the second frame, or portion of the second frame, wherein if the counter value exceeds the second threshold value, the method further comprises dropping the second frame, or portion of the second frame.
11. A method as claimed in any preceding claim, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method further comprises determining a number of previous frames, or portions of frames, immediately preceding the second frame, or portion of the second frame, that were dropped, and comparing this number to a third threshold value, wherein if this number does not exceed this threshold value, the method comprises dropping the second frame, or portion of the second frame, and wherein if this number of previous frames, or portions of frames, does exceed this third threshold value, the method comprises sending the second frame, or portion of the second frame, to the display device for displaying to the user.
12. A method as claimed in claim 11, wherein the third threshold value is any one of 5, 6, 7, 8, 9 or 10.
13. A method as claimed in any preceding claim, wherein the method further comprises detecting movement of the display device, and if movement of the display device is detected, the first threshold is reduced to provide for a higher similarity requirement.
14. A method as claimed in any preceding claim, wherein the generated display data is video data and a frame of the video data corresponds to image data.
15. A method as claimed in any preceding claim, wherein the frames of display data are generated, processed and sent to the display device at a rate of approximately at least 50 frames per second, or at least 60 frames per second, or at least 90 frames per second, or at least 120 frames per second for displaying to a user.
16. A method as claimed in any preceding claim, wherein the system is a virtual reality system.
17. A method as claimed in any preceding claim, wherein the host device and the display device are wirelessly connected.
18. A method as claimed in any preceding claim, wherein the display device is a head mounted display.
19. A method as claimed in any preceding claim, wherein the host device and the display device are contained within a housing.
20. A method as claimed in claim 19, wherein the housing is the casing for any one of a mobile phone, a PDA, a tablet or any other handheld portable device.
21. A system for processing display data, the system comprising:
a host device for generating and processing display data, wherein the generated display data comprises frames of display data for displaying consecutively to a user, the host device comprising a processor and a memory; and a display device connected to the host device configured to receive generated display data from the host device and to display the generated display data to a user;
wherein the processor is configured to:
determine a first colour signature for a first frame, or portion of a first frame, of generated display data and to store the first colour signature in the memory;
determine a second colour signature for a second frame, or portion of a second frame, of generated display data, wherein the second frame, or portion of the second frame, is for consecutively displaying to the user following the first frame of generated display data, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame;
compare the second colour signature to the first colour signature to determine a difference in the colour signatures; and compare the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the processor is further configured to identify the second frame, or portion of the second frame, as a candidate for dropping.
22. A system as claimed in claim 21, wherein to determine a colour signature for a frame or portion of a frame, the processor is configured to compress the frame or portion of the frame, respectively.
23. A system as claimed in claim 21 or 22, wherein to determine a colour signature for a frame or portion of a frame, the processor is configured to determine an average colour value for the frame, or portion of the frame, respectively.
24. A system as claimed in any of claims 21 to 23, wherein to determine a colour signature for a frame or portion of a frame, the processor is configured to apply a transform to the frame, or portion of the frame, respectively, and wherein the colour signature corresponds to a generated DC value for the frame, or portion of the frame, respectively, following the transform.
25. A system as claimed in any of claims 21 to 24, wherein the colour signature for a frame, or portion of a frame, corresponds to any one or more of a red, green, blue, luma and chroma signature.
26. A system as claimed in any of claims 21 to 25, wherein a frame, or portion of a frame, comprises at least 1000 pixels.
27. A system as claimed in any of claims 21 to 26, wherein on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor is further configured to drop the second frame, or portion of the second frame.
28. A system as claimed in any of claims 21 to 26, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor is further configured to determine whether to drop the second frame, or portion of the second frame, or whether to instead send the second frame, or portion of the second frame, to the display device.
29. A system as claimed in any of claims 21 to 28, wherein the memory is configured to store a predetermined number of colour signatures, and wherein the processor is configured to tag the colour signatures stored in the memory to indicate whether the corresponding frames, or portions of frames, were dropped.
30. A system as claimed in any of claims 21 to 29, wherein the system includes a counter, and wherein the processor is configured to increase the counter when a frame, or portion of a frame, is dropped, and wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor is further configured to compare the counter value to a second threshold value to determine whether to drop the second frame, or portion of the second frame, wherein if the counter value exceeds the second threshold value, the processor is further configured to drop the second frame, or portion of the second frame.
31. A system as claimed in any of claims 21 to 30, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor is configured to determine the number of previous frames, or portions of frames, immediately preceding the second frame, or portion of the second frame, that were dropped, and to compare this number to a third threshold value, wherein if this number does not exceed this threshold value, the processor is configured to drop the second frame, or portion of the second frame, and wherein if this number does exceed this third threshold value, the processor is configured to send the second frame, or portion of the second frame, to the display device for displaying to the user.
32. A system as claimed in claim 31, wherein the third threshold value is any one of 5, 6, 7, 8, 9 or 10.
33. A system as claimed in any of claims 21 to 32, wherein the system comprises means for detecting movement of the display device, wherein the means are in communication with the processor, and wherein when movement of the display device is detected, the processor is configured to reduce the first threshold to provide for a higher similarity requirement.
34. A system as claimed in any of claims 21 to 33, wherein the generated display data is video data and a frame of the video data corresponds to image data.
35. A system as claimed in any of claims 21 to 34, wherein the frames of display data are generated, processed and sent to the display device at a rate of approximately at least 50 frames per second, or at least 60 frames per second, or at least 90 frames per second, or at least 120 frames per second for displaying to a user.
36. A system as claimed in any of claims 21 to 35, wherein the system is a virtual reality system.
37. A system as claimed in any of claims 21 to 36, wherein the host device and the display device are wirelessly connected.
38. A system as claimed in any of claims 21 to 37, wherein the display device is a head mounted display.
39. A system as claimed in any of claims 21 to 38, wherein the host device and the display device are contained within a housing.
40. A system as claimed in claim 39, wherein the housing is the casing for any one of a mobile phone, a PDA, a tablet or any other handheld portable device.
GB1718421.9A 2017-11-07 2017-11-07 Method and system for processing display data Active GB2568112B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1718421.9A GB2568112B (en) 2017-11-07 2017-11-07 Method and system for processing display data
PCT/GB2018/052966 WO2019092392A1 (en) 2017-11-07 2018-10-15 Method and system for processing display data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1718421.9A GB2568112B (en) 2017-11-07 2017-11-07 Method and system for processing display data

Publications (3)

Publication Number Publication Date
GB201718421D0 GB201718421D0 (en) 2017-12-20
GB2568112A true GB2568112A (en) 2019-05-08
GB2568112B GB2568112B (en) 2022-06-29

Family

ID=60664900

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1718421.9A Active GB2568112B (en) 2017-11-07 2017-11-07 Method and system for processing display data

Country Status (2)

Country Link
GB (1) GB2568112B (en)
WO (1) WO2019092392A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110032984A1 (en) * 2008-07-17 2011-02-10 Guy Dorman Methods circuits and systems for transmission of video
GB2489798A (en) * 2011-04-04 2012-10-10 Advanced Risc Mach Ltd Reducing Write Transactions in a Windows Compositing System
WO2014088707A1 (en) * 2012-12-05 2014-06-12 Silicon Image, Inc. Method and apparatus for reducing digital video image data
GB2528265A (en) * 2014-07-15 2016-01-20 Advanced Risc Mach Ltd Method of and apparatus for generating an output frame
GB2531358A (en) * 2014-10-17 2016-04-20 Advanced Risc Mach Ltd Method of and apparatus for processing a frame

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9154749B2 (en) * 2012-04-08 2015-10-06 Broadcom Corporation Power saving techniques for wireless delivery of video


Also Published As

Publication number Publication date
GB201718421D0 (en) 2017-12-20
WO2019092392A1 (en) 2019-05-16
GB2568112B (en) 2022-06-29

Similar Documents

Publication Publication Date Title
US11151749B2 (en) Image compression method and apparatus
US10720124B2 (en) Variable pixel rate display interfaces
AU2018280337B2 (en) Digital content stream compression
US11615734B2 (en) Method and apparatus for colour imaging
TWI757303B (en) Image compression method and apparatus
KR102617258B1 (en) Image processing method and apparatus
WO2015167313A1 (en) Method and device for adaptively compressing image data
US20180308458A1 (en) Data compression method and apparatus
EP4046382A1 (en) Method and apparatus in video coding for machines
US20220382053A1 (en) Image processing method and apparatus for head-mounted display device as well as electronic device
WO2020098624A1 (en) Display method and apparatus, vr display apparatus, device, and storage medium
US9123090B2 (en) Image data compression device, image data decompression device, display device, image processing system, image data compression method, and image data decompression method
US9165538B2 (en) Image generation
US20240054623A1 (en) Image processing method and system, and device
CN109413445B (en) Video transmission method and device
GB2568112A (en) Method and system for processing display data
US11233999B2 (en) Transmission of a reverse video feed
US10652539B1 (en) In-band signaling for display luminance control
US20210058616A1 (en) Systems and Methods for Selective Transmission of Media Content
CN112929703A (en) Method and device for processing code stream data
US20090167757A1 (en) Apparatus and method for converting color of 3-dimensional image
US9571844B2 (en) Image processor
WO2022246653A1 (en) Image processing system, cloud serving end, and method
US20230395041A1 (en) Content Display Process