AU2015243117A1 - Method, apparatus and system for encoding and decoding image data - Google Patents


Info

Publication number
AU2015243117A1
Authority
AU
Australia
Prior art keywords
colour
volume
ycbcr
values
container
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2015243117A
Inventor
Jonathan GAN
Volodymyr KOLESNIKOV
Christopher James ROSEWARNE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2015243117A priority Critical patent/AU2015243117A1/en
Publication of AU2015243117A1 publication Critical patent/AU2015243117A1/en
Abandoned legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method of encoding colour values of an image. A container colour volume determined from a colour primaries parameter of a video usability information message associated with the image is received. A signal colour volume is received. A boundary of a transformed signal colour volume in a YCbCr colour space is determined, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume. A position of a codeword range of chroma values is determined, in the YCbCr colour space, within the container colour volume according to the determined boundary. The colour values of the image are encoded using the codeword range of the YCbCr chroma values.

(Fig. 1: schematic block diagram showing the encoding device, comprising a matrix coefficients generator, R’G’B’ to YCbCr conversion, video encoder and storage, connected to a display device comprising a matrix coefficients generator, video decoder, YCbCr to R’G’B’ conversion and panel device.)

Description

METHOD, APPARATUS AND SYSTEM FOR ENCODING AND DECODING
IMAGE DATA
TECHNICAL FIELD
The present invention relates generally to digital video signal processing and, in particular, to encoding and decoding video data whose signal occupies a colour volume that is smaller than a container colour volume. The present invention also relates to a method, apparatus and system for encoding and decoding colour values of an image. The present invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for encoding and decoding colour values of an image.
BACKGROUND
To reproduce the sensation of colour in the human visual system, the majority of consumer display devices are driven by three light sources, known as colour primaries, generally corresponding to the colours of red, green and blue. Such a tri-stimulus approach to reproducing the sensation of colour requires three colour channels throughout a video processing system. Moreover, due to the wide range of luminance perceptible to the human visual system, instead of coding values uniformly spaced in the physical space of linear light intensities, values are coded in a space providing a degree of perceptual uniformity, such as gamma. Accordingly, in a display the signals driving the intensity of the colour primaries are named R’, G’ and B’. Digital video transmission systems encode video data representing the above driving signals, subject to a quality/bitrate trade-off.
Video data in a R’G’B’ colour space has considerable signal correlation across each colour channel. Although capture devices, such as digital cameras, and display devices typically operate in an R’G’B’ colour space, video encoders and video decoders typically operate on data that has a greater degree of de-correlation between the colour channels. A video capture system, prior to performing video compression, typically converts the R’G’B’ signals into a different representation known as an opponent colour space, such as YCbCr. Y is the luma component, and Cb and Cr are the blue-difference and red difference chroma components, respectively.
One reason for converting from R’G’B’ colour components to an opponent colour space is that the R’G’B’ colour components are highly correlated, meaning that there is redundant information shared between the colour components. Conversion from R’G’B’ to YCbCr reduces the redundancy. Moreover, conversion to the YCbCr colour space concentrates the luminance information into the Y (luma) component, and the colour information into the Cb and Cr (chroma) components. The human visual system is known to be less sensitive spatially to detail in colour than in brightness. Spatially subsampling the chroma components (i.e. reducing the sampling rate in the chroma components), results in fewer chroma samples to be encoded and decoded, while leaving the luma component sample rate unchanged.
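A minimal sketch of one such subsampling scheme (simple 2x2 block averaging to produce 4:2:0 chroma; deployed systems typically use standardised downsampling filters) is:

```python
import numpy as np

def subsample_420(cb, cr):
    """Halve the chroma resolution in each dimension by 2x2 averaging.

    cb and cr are 2-D arrays with even dimensions; the luma plane is
    left at full resolution.
    """
    def avg2x2(plane):
        h, w = plane.shape
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return avg2x2(cb), avg2x2(cr)
```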
The YCbCr video data is supplied to a video encoder in a quantised form, due to representation using fixed bit-width paths for each colour component. In order to allow for propagating high quality colour information through a video processing system, it is necessary to allow for the use of fine quantisation steps in each chroma component, and thus make efficient use of the space of ‘codewords’ afforded by the fixed bit-depth paths present in a video processing system.
SUMMARY
It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
According to one aspect of the present disclosure, there is provided a method of encoding colour values of an image, the method comprising: receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; receiving a signal colour volume; determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; and determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and encoding the colour values of the image using the codeword range of the YCbCr chroma values.
According to another aspect of the present disclosure, there is provided an apparatus for encoding colour values of an image, the apparatus comprising: module for receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; module for receiving a signal colour volume; module for determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; and module for determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and module for encoding the colour values of the image using the codeword range of the YCbCr chroma values.
According to still another aspect of the present disclosure, there is provided a system for encoding colour values of an image, the system comprising: a memory for storing data and a computer program; a processor coupled to the memory for executing the computer program, the computer program comprising instructions for: receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; receiving a signal colour volume; determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; and determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and encoding the colour values of the image using the codeword range of the YCbCr chroma values.
According to still another aspect of the present disclosure, there is provided a computer readable medium comprising a program for encoding colour values of an image, the program comprising: code for receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; code for receiving a signal colour volume; code for determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; and code for determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and code for encoding the colour values of the image using the codeword range of the YCbCr chroma values.
According to still another aspect of the present disclosure, there is provided a method of decoding R’G’B’ colour values of an image, the method comprising: receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; receiving a signal colour volume; determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; and determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and decoding the R’G’B’ colour values of the image using the codeword range of the YCbCr chroma values.
According to still another aspect of the present disclosure, there is provided an apparatus for decoding R’G’B’ colour values of an image, the apparatus comprising: module for receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; module for receiving a signal colour volume; module for determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; and module for determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and module for decoding the R’G’B ’ colour values of the image using the codeword range of the YCbCr chroma values.
According to still another aspect of the present disclosure, there is provided a system for decoding R’G’B’ colour values of an image, the system comprising: a memory for storing data and a computer program; a processor coupled to the memory for executing the computer program, the computer program comprising instructions for: receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; receiving a signal colour volume; determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; and determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and decoding the R’G’B’ colour values of the image using the codeword range of the YCbCr chroma values.
According to still another aspect of the present disclosure, there is provided a computer readable medium comprising a program for decoding R’G’B’ colour values of an image, the program comprising: code for receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; code for receiving a signal colour volume; code for determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; and code for determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and code for decoding the R’G’B’ colour values of the image using the codeword range of the YCbCr chroma values.
Other aspects are also disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
At least one embodiment of the present invention will now be described with reference to the following drawings and appendices, in which:
Fig. 1 is a schematic block diagram showing an encoding device and a display device;
Figs. 2A and 2B form a schematic block diagram of a general purpose computer system upon which one or both of the encoding device and display device of Fig. 1 may be practiced;
Fig. 3 is a schematic block diagram of a matrix coefficients generator;
Fig. 4 is a schematic flow diagram showing a method for encoding wide colour gamut (WCG) and/or high dynamic range (HDR) video data from R’G’B’ to a YCbCr opponent colour space;
Fig. 5 is a schematic flow diagram showing a method for decoding WCG and/or HDR video data from a YCbCr opponent colour space to R’G’B’;
Fig. 6 is a schematic diagram showing the CIE xy chromaticity colour space, with example colour gamuts;
Fig. 7 shows an example BT.709 signal colour volume, transformed to the coordinate system of an example BT.2020 container colour volume;
Fig. 8 shows an example BT.709 signal colour volume, transformed to the coordinate system of an example BT.2020 YCbCr container colour volume, with unity chroma divisors;
Fig. 9 shows the colour volume of Fig. 8 rotated to demonstrate chroma boundaries, and the mapping of the region within the chroma boundaries to quantised codewords for a single chroma divisor arrangement;
Fig. 10 shows the colour volume of Fig. 8 rotated to demonstrate chroma boundaries, and the mapping of the region within the chroma boundaries to quantised codewords for an arrangement with separate positive and negative chroma divisors;
Fig. 11 shows the colour volume of Fig. 8 rotated to demonstrate chroma boundaries, and the mapping of the region within the chroma boundaries to quantised codewords for an arrangement with a single chroma divisor and offset; and
Fig. 12 is a schematic flow diagram showing a method of mapping the colour values of a video signal to a range of quantised YCbCr codewords.
DETAILED DESCRIPTION INCLUDING BEST MODE
Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
The set of all colours that a driven display device can reproduce is known as a colour gamut. As the majority of display devices are driven by three primary colour sources, the associated colour gamut is contained in a triangular region when represented in the CIE1931 xy chromaticity colour space defined by the International Commission on Illumination (CIE). Consumer displays have typically complied with the BT.709 recommendation issued by the International Telecommunication Union - Radiocommunication Sector (ITU-R), which defines a colour gamut that is relatively narrow compared to the full range of colours perceivable by the human visual system. However, contemporary displays are capable of reproducing a wider range of colours than specified in the BT.709 recommendation. To support such a wider range of colours, there are a number of standardised wide colour gamuts (WCG) that may be used, such as the BT.2020 recommendation issued by the ITU-R, or the RP 431-2:2011 standard issued by the Society of Motion Picture and Television Engineers (SMPTE). The RP 431-2:2011 standard is also known as the “DCI-P3” colour gamut, as the RP 431-2:2011 standard was adopted by Digital Cinema Initiatives.
Fig. 6 shows examples of various colour gamuts within a CIE xy chromaticity space 600 with an x axis 601 and y axis 602. A gamut 603 of a standard observer indicates the limits of colour perception of the observer. A narrow colour gamut 604 provides an example of a colour gamut such as BT.709. Colour gamut 604 is described by a red colour primary 605, a green colour primary 606, a blue colour primary 607, and a white point 608. A wide colour gamut 609 provides an example of a colour gamut such as BT.2020. The wide colour gamut 609 is described by a red colour primary 610, a green colour primary 611, a blue colour primary 612, and the white point 608. In the example of Fig. 6 the same white point is used for both colour gamuts 604 and 609. However, it should be appreciated that the chromaticity coordinate of the white point may be any arbitrary location within a corresponding colour gamut.
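Since a tri-primary gamut forms a triangle in the xy plane, testing whether a chromaticity coordinate lies within a given gamut reduces to a point-in-triangle test. A minimal sketch using the published BT.709 and BT.2020 primaries:

```python
# Published xy chromaticities of the R, G and B primaries.
BT709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def in_gamut(xy, primaries):
    """True if the xy chromaticity lies inside the gamut triangle."""
    def cross(o, p, q):
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
    a, b, c = primaries
    s1, s2, s3 = cross(a, b, xy), cross(b, c, xy), cross(c, a, xy)
    # Inside when the point is on the same side of all three edges.
    return (s1 >= 0) == (s2 >= 0) == (s3 >= 0)

# Example: the D65 white point lies inside both gamuts.
assert in_gamut((0.3127, 0.3290), BT709) and in_gamut((0.3127, 0.3290), BT2020)
```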
The dynamic range of a display device is a ratio between the smallest and the largest possible luminance that the display device can reproduce. A display device with a dynamic range of less than one to one thousand (1:1000) is generally considered to have a standard dynamic range (SDR). For example, consumer cathode ray tube displays typically have a dynamic range of one to three hundred (1:300). Consumer liquid crystal displays, backlit by a single instant fluorescent light emitter, typically have a dynamic range of one to eight hundred (1:800).
In recent times, higher dynamic ranges are possible in display devices. Liquid crystal displays backlit by a grid of independently modulated light emitting diodes are capable of a dynamic range of one to twenty thousand (1:20000). Organic light-emitting diode (OLED) displays, having no liquid crystal light modulator, may be considered to have an infinite dynamic range, since individual OLEDs can be switched off.
Physical light levels that are to be reproduced on a display are mapped to quantised codewords for digital transmission before being mapped back to physical light levels. The mapping to quantised codewords is defined by an opto-electrical transfer function (OETF), while the mapping to physical light levels is defined by an electro-optical transfer function (EOTF). Traditional EOTFs and OETFs, such as ITU-R BT.1886 and ITU-R BT.709, are suitable for standard dynamic range (SDR) applications. For HDR applications, different transfer functions are required. The increased dynamic range requires transfer functions that correspond more closely to perceptual Just Noticeable Differences (JNDs). For example, the SMPTE standard, ST 2084, defines an EOTF that maps codewords to absolute luminance levels. The Association of Radio Industries and Businesses (ARIB) standard, STD-B67, defines an OETF that maps relative luminance levels to codewords.
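For reference, the ST 2084 (PQ) EOTF maps a normalised codeword to an absolute luminance of up to 10,000 cd/m²; a minimal sketch using the constants published in the standard:

```python
import numpy as np

# ST 2084 (PQ) constants as published in the SMPTE standard.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(codeword):
    """Map a normalised [0, 1] PQ codeword to luminance in cd/m^2."""
    e = np.power(np.clip(codeword, 0.0, 1.0), 1.0 / M2)
    y = np.power(np.maximum(e - C1, 0.0) / (C2 - C3 * e), 1.0 / M1)
    return 10000.0 * y
```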
Together, the colour gamut and the dynamic range of a display device determine the colour volume of the considered display device. To support the enhanced capabilities of contemporary displays, digital video data should be transmitted using a high dynamic range (HDR) and wide colour gamut (WCG) format. Furthermore, the format of the video data should be known to both capture devices and display devices, to avoid confusion with legacy video content that has a standard dynamic range and narrow colour gamut. The dynamic range of the video data may be explicitly signalled by metadata, such as the Transfer Function Video Usability Information (VUI) message in the High Efficiency Video Coding (HEVC) standard. The colour gamut of the video data may be explicitly signalled by metadata, such as the Colour Primaries VUI message in the HEVC standard. Herein, the combination of the dynamic range and the colour gamut of the video data format will be referred to as the ‘container colour volume’. The container colour volume indicates the full range of colours and full range of luminance that the video signal data may exercise.
The container colour volume may be standardised, in order to facilitate the transmission of HDR/WCG video data from a multiplicity of heterogeneous capture devices to reproduction of the physical light levels on a multiplicity of heterogeneous display devices. Furthermore, the container colour volume may be sufficiently large to encompass the capabilities of HDR/WCG displays.
Accordingly, it is likely in the near future that video data signals mastered for transmission and reproduction on HDR/WCG displays will not exercise the full range afforded by the container colour volume. For example, at present there are no displays that can reproduce the full range of colours possible with the BT.2020 recommendation, with the full dynamic range of luminances possible with SMPTE ST2084.
As discussed in the background section, it is advantageous to transmit video data in an opponent colour space such as YCbCr rather than in the original R’G’B’ colour space. There are currently two methods for converting R’G’B’ values to YCbCr, namely constant luminance (CL), and non-constant luminance (NCL). For non-constant luminance (NCL), the luma component Y is determined according to Equation (1), as follows:
Y = rR’ + gG’ + bB’ (1)

where R’, G’ and B’ are expressed in normalised form, that is, within the range [0,1], and r, g and b are coefficients determined by the relative strengths with which the colour primaries of the considered displays contribute to the perceptual sensation of brightness. For example, if the display complies with the International Telecommunication Union - Radiocommunication Sector (ITU-R) standard BT.709, the coefficients take the values r = 0.2126, g = 0.7152 and b = 0.0722.
The chroma components are then determined in accordance with Equations (2), as follows:

Cb = (B’ − Y) / Cbdiv
Cr = (R’ − Y) / Crdiv (2)

where the divisors Cbdiv and Crdiv are selected to ensure that Cb and Cr fill the range [-0.5, 0.5]. Based upon the limits on R’, G’ and B’, the divisors are determined to be Cbdiv = 2(1 − b) and Crdiv = 2(1 − r); for the BT.709 coefficients above, Cbdiv = 1.8556 and Crdiv = 1.5748.
The normalised YCbCr values may be mapped to quantised codewords, such that the signal may be passed as input to a digital encoder.
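By way of a minimal worked sketch of Equations (1) and (2) with the BT.709 coefficients, together with one common codeword mapping (the 10-bit narrow-range quantisation used by BT.709/BT.2020-style systems is assumed here for illustration):

```python
import numpy as np

R_COEF, G_COEF, B_COEF = 0.2126, 0.7152, 0.0722      # BT.709 luma coefficients
CB_DIV, CR_DIV = 2 * (1 - B_COEF), 2 * (1 - R_COEF)  # 1.8556, 1.5748

def rgb_to_ycbcr_ncl(rp, gp, bp):
    """Non-constant-luminance conversion of normalised R'G'B' to YCbCr."""
    y = R_COEF * rp + G_COEF * gp + B_COEF * bp       # Equation (1)
    cb = (bp - y) / CB_DIV                            # Equation (2), in [-0.5, 0.5]
    cr = (rp - y) / CR_DIV
    return y, cb, cr

def quantise_10bit_narrow(y, cb, cr):
    """Map normalised YCbCr onto 10-bit narrow-range codewords."""
    yq = np.round(219 * 4 * y + 64)       # luma codewords 64..940
    cbq = np.round(224 * 4 * cb + 512)    # chroma codewords 64..960
    crq = np.round(224 * 4 * cr + 512)
    return int(yq), int(cbq), int(crq)
```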
The YCbCr transform described above is useful for colour decorrelation when samples present in the video signal fully exercise the range of the normalised R’G’B’ components. However, the YCbCr transform as described is not optimal when the video signal data does not exercise the full range of the normalised R’G’B’ range. In such a case, quantised codewords are unnecessarily reserved for R’G’B’ sample values that never occur in the video data. The quantisation in a pre-processing step constitutes an early loss of quality which the encoder cannot control.
Fig. 1 is a schematic block diagram showing functional modules of a video transmission system 100. The video transmission system 100 comprises an encoding device 110, a display device 140, and a communication channel 130 interconnecting the encoding device 110 and display device 140. A camera input in the form of a video signal may be received by the encoding device 110 and be transmitted to the display device 140 where the video signal is decoded for display on a panel device 146 as seen in Fig. 1.
The encoding device 110 receives two inputs, R’G’B’ video data 120 and metadata 122. The metadata 122 is received and used by an encoder matrix coefficients generator 310 to produce a R’G’B’ to YCbCr matrix. Further details of how the R’G’B’ to YCbCr matrix is determined are described below with reference to Fig. 4. The R’G’B’ to YCbCr matrix coefficients are passed to R’G’B’ to YCbCr block 112, which then converts the R’G’B’ video data 120 to YCbCr video data 124. Video encoder 114 then takes both the YCbCr video data 124 and the metadata 122, and encodes the information to an encoded bitstream 126.
The encoded bitstream 126 may be stored in a storage unit 116, before being transmitted via communication channel 130 to display device 140. The communication channel 130 may take the form of a portable storage medium such as a Blu-Ray disc or a communication link such as an Ethernet link or wide area network.
The encoded bitstream 126 is received and decoded by video decoder 142. The video decoder 142 produces decoded YCbCr video data 150 and metadata 152 by unpacking the encoded bitstream 126. The metadata 152 is read and used by a decoder matrix coefficients generator 310, which operates in a similar manner to the encoder matrix coefficients generator 310, to produce a R’G’B’ to YCbCr matrix. The R’G’B’ to YCbCr matrix coefficients are passed to YCbCr to R’G’B’ block 144, which inverts the received matrix to convert decoded YCbCr video data 150 to decoded R’G’B’ video data 154. The decoded R’G’B’ video data 154 is then passed to a panel device 146 for display.
Notwithstanding the example devices mentioned above, each of the encoding device 110 and display device 140 may be configured within a general purpose computing system, typically through a combination of hardware and software components. Fig. 2A illustrates such a computer system 200, which includes: a computer module 201; input devices such as a keyboard 202, a mouse pointer device 203, a scanner 226, a camera 227, which may be configured as the video data 120, and a microphone 280; and output devices including a printer 215, a display device 214, which may be configured as the display device 140, and loudspeakers 217. An external Modulator-Demodulator (Modem) transceiver device 216 may be used by the computer module 201 for communicating to and from a communications network 220 via a connection 221. The communications network 220, which may represent the communication channel 130, may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 221 is a telephone line, the modem 216 may be a traditional “dial-up” modem. Alternatively, where the connection 221 is a high capacity (e.g., cable) connection, the modem 216 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 220. The transceiver device 216 may additionally be provided in the encoding device 110 and the display device 140 and the communication channel 130 may be embodied in the connection 221.
The computer module 201 typically includes at least one processor unit 205, and a memory unit 206. For example, the memory unit 206 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 201 also includes a number of input/output (I/O) interfaces including: an audio-video interface 207 that couples to the video display 214, loudspeakers 217 and microphone 280; an I/O interface 213 that couples to the keyboard 202, mouse 203, scanner 226, camera 227 and optionally a joystick or other human interface device (not illustrated); and an interface 208 for the external modem 216 and printer 215. The signal from the audio-video interface 207 to the display 214 is generally the output of a computer graphics card and provides an example of ‘screen content’. In some implementations, the modem 216 may be incorporated within the computer module 201, for example within the interface 208. The computer module 201 also has a local network interface 211, which permits coupling of the computer system 200 via a connection 223 to a local-area communications network 222, known as a Local Area Network (LAN). As illustrated in Fig. 2A, the local communications network 222 may also couple to the wide network 220 via a connection 224, which would typically include a so-called “firewall” device or device of similar functionality. The local network interface 211 may comprise an Ethernet™ circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 211. The local network interface 211 may also provide the functionality of the communication channel 130, which may also be embodied in the local communications network 222.
The I/O interfaces 208 and 213 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 209 are provided and typically include a hard disk drive (HDD) 210. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 212 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the computer system 200. Typically, any of the HDD 210, optical drive 212, networks 220 and 222 may also be configured to operate as the source of the video data 120, or as a destination for decoded video data to be stored for reproduction via the display 214. The HDD 210 may also represent a bulk storage whereby an encoded bitstream 126 for a video sequence may be stored for subsequent broadcast, distribution and/or reproduction. The encoding device 110 and the display device 140 of the system 100 may be embodied in the computer system 200.
The components 205 to 213 of the computer module 201 typically communicate via an interconnected bus 204 and in a manner that results in a conventional mode of operation of the computer system 200 known to those in the relevant art. For example, the processor 205 is coupled to the system bus 204 using a connection 218. Likewise, the memory 206 and optical disk drive 212 are coupled to the system bus 204 by connections 219. Examples of computers on which the described arrangements can be practised include IBM-PC’s and compatibles, Sun SPARCstations, Apple Mac™ or alike computer systems.
Where appropriate or desired, the video encoder 114 and the video decoder 142, as well as methods described below, may be implemented using the computer system 200 wherein the video encoder 114, the video decoder 142 and methods to be described, may be implemented as one or more software application programs 233 executable within the computer system 200. In particular, the video encoder 114, the video decoder 142 and the steps of the described methods are effected by instructions 231 (see Fig. 2B) in the software 233 that are carried out within the computer system 200. The software instructions 231 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 200 from the computer readable medium, and then executed by the computer system 200. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 200 preferably effects an advantageous apparatus for implementing the video encoder 114, the video decoder 142 and the described methods.
The software 233 is typically stored in the HDD 210 or the memory 206. The software is loaded into the computer system 200 from a computer readable medium, and executed by the computer system 200. Thus, for example, the software 233 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 225 that is read by the optical disk drive 212.
In some instances, the application programs 233 may be supplied to the user encoded on one or more CD-ROMs 225 and read via the corresponding drive 212, or alternatively may be read by the user from the networks 220 or 222. Still further, the software can also be loaded into the computer system 200 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 200 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray Disc™, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 201. Examples of transitory or nontangible computer readable transmission media that may also participate in the provision of the software, application programs, instructions and/or video data or encoded video data to the computer module 201 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
The second part of the application programs 233 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 214. Through manipulation of typically the keyboard 202 and the mouse 203, a user of the computer system 200 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 217 and user voice commands input via the microphone 280.
Fig. 2B is a detailed schematic block diagram of the processor 205 and a “memory” 234. The memory 234 represents a logical aggregation of all the memory modules (including the HDD 210 and semiconductor memory 206) that can be accessed by the computer module 201 in Fig. 2A.
When the computer module 201 is initially powered up, a power-on self-test (POST) program 250 executes. The POST program 250 is typically stored in a ROM 249 of the semiconductor memory 206 of Fig. 2A. A hardware device such as the ROM 249 storing software is sometimes referred to as firmware. The POST program 250 examines hardware within the computer module 201 to ensure proper functioning and typically checks the processor 205, the memory 234 (209, 206), and a basic input-output systems software (BIOS) module 251, also typically stored in the ROM 249, for correct operation. Once the POST program 250 has run successfully, the BIOS 251 activates the hard disk drive 210 of Fig. 2A. Activation of the hard disk drive 210 causes a bootstrap loader program 252 that is resident on the hard disk drive 210 to execute via the processor 205. This loads an operating system 253 into the RAM memory 206, upon which the operating system 253 commences operation. The operating system 253 is a system level application, executable by the processor 205, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
The operating system 253 manages the memory 234 (209, 206) to ensure that each process or application running on the computer module 201 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the computer system 200 of Fig. 2A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 234 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 200 and how such is used.
As shown in Fig. 2B, the processor 205 includes a number of functional modules including a control unit 239, an arithmetic logic unit (ALU) 240, and a local or internal memory 248, sometimes called a cache memory. The cache memory 248 typically includes a number of storage registers 244-246 in a register section. One or more internal busses 241 functionally interconnect these functional modules. The processor 205 typically also has one or more interfaces 242 for communicating with external devices via the system bus 204, using a connection 218. The memory 234 is coupled to the bus 204 using a connection 219.
The application program 233 includes a sequence of instructions 231 that may include conditional branch and loop instructions. The program 233 may also include data 232 which is used in execution of the program 233. The instructions 231 and the data 232 are stored in memory locations 228, 229, 230 and 235, 236, 237, respectively. Depending upon the relative size of the instructions 231 and the memory locations 228-230, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 230. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 228 and 229.
In general, the processor 205 is given a set of instructions which are executed therein. The processor 205 waits for a subsequent input, to which the processor 205 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 202, 203, data received from an external source across one of the networks 220, 222, data retrieved from one of the storage devices 206, 209 or data retrieved from a storage medium 225 inserted into the corresponding reader 212, all depicted in Fig. 2A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 234.
The video encoder 114, the video decoder 142 and the described methods may use input variables 254, which are stored in the memory 234 in corresponding memory locations 255, 256, 257. The video encoder 114, the video decoder 142 and the described methods produce output variables 261, which are stored in the memory 234 in corresponding memory locations 262, 263, 264. Intermediate variables 258 may be stored in memory locations 259, 260, 266 and 267.
Referring to the processor 205 of Fig. 2B, the registers 244, 245, 246, the arithmetic logic unit (ALU) 240, and the control unit 239 work together to perform sequences of micro-operations needed to perform “fetch, decode, and execute” cycles for every instruction in the instruction set making up the program 233. Each fetch, decode, and execute cycle comprises: (a) a fetch operation, which fetches or reads an instruction 231 from a memory location 228, 229, 230; (b) a decode operation in which the control unit 239 determines which instruction has been fetched; and (c) an execute operation in which the control unit 239 and/or the ALU 240 execute the instruction.
Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 239 stores or writes a value to a memory location 232.
Fig. 12 is a schematic flow diagram showing a method 1200 of mapping the colour values of a video signal to a range of quantised YCbCr codewords. The video signal does not fully exercise the range of the R’G’B’ colour space. Further, the YCbCr codewords utilise more of the YCbCr codeword range than would be achieved by conventional methods. The method 1200 will be described with reference to the method 1200 being executed by encoding device 110 as described above. The method 1200 may be implemented as one or more software code modules of the software application program 233 resident in the hard disk drive 210 and being controlled in its execution by the processor 205 of the system 200 implementing the encoding device 110. In another arrangement, the method 1200 may be performed by display device 140.
The method 1200 begins at a determine signal colour volume step 1202. At the determine signal colour volume step 1202, the signal colour volume is received, under execution of the processor 205, from metadata associated with the video signal data. The signal colour volume indicates the range of colours and luminances that are exercised by the video signal data. The signal colour volume may be described by the CIE1931 chromaticity coordinates of the colour primaries and the white point. The signal colour volume may be stored by the processor 205 in the memory 206.
At a determine container colour volume step 1204, the container colour volume is received, under execution of the processor 205, from metadata associated with the video signal data. The container colour volume indicates the range of colours and range of luminance that may be represented by the video signal format. The container colour volume may be described by the CIE1931 chromaticity coordinates of the colour primaries and the white point. The received container colour volume may be stored by the processor 205 in the memory 206.
At a generate R’G’B’ points step 1206, a number of R’G’B’ samples spaced in the normalised [0,1] cube are generated, under execution of the processor 205. Details of the number of points and locations of the points will be described below in relation to Fig. 4. The generated R’G’B’ samples may be stored by the processor 205 in the memory 206.
At a transform to container YCbCr coordinate space step 1208, the generated R’G’B’ samples are converted, under execution of the processor 205, to a YCbCr space corresponding to the container colour volume, using the signal colour volume and the container colour volume. At step 1208, the R’G’B’ samples, which are generated to fully exercise the signal colour volume, are remapped to the coordinate space of the YCbCr container colour volume. The coordinate space is appropriate for determining the chroma boundaries of the signal colour volume, which will be further described below in relation to Fig. 4. The Y component may be determined as described above. However, the Cb and Cr samples are determined using unity chroma divisors in accordance with Equations (3), as follows:
Cb = B’ − Y
Cr = R’ − Y (3)
At a determine chroma boundaries step 1210, the maximum and minimum values of Cb and Cr are determined, under execution of the processor 205, from the YCbCr samples corresponding to the container colour volume. The maximum and minimum values of Cb and Cr are identified as chroma boundaries.
At an optional precision check step 1212, each of the YCbCr samples corresponding to the chroma boundaries are compared with neighbouring YCbCr samples. If the difference in chroma value between a selected sample and neighbours of the selected sample is too large, then the method 1200 returns to generate R’G’B’ step 1206 to insert more points in the neighbourhood of the selected sample. Step 1212 will be described further below.
At a determine codeword range step 1214, an optimised mapping of normalised chroma values to chroma codewords is determined from the chroma boundaries, under execution of the processor 205. The optimised mapping of normalised chroma values may be stored by the processor 205 in the memory 206. The optimised mapping uses more of the possible dynamic range of the chroma values, which reduces the amount of error introduced during quantisation to chroma codewords. The optimised mapping may be extended to map normalised R’G’B’ values to chroma codewords. Further, the optimised mapping may be extended to map R’G’B’ codewords to chroma codewords. Several arrangements for the mapping of normalised chroma values to chroma codewords are further described below, with reference to Figs. 9, 10 and 11.
At an encode to codewords step 1216, the determined mapping from R’G’B’ codewords to YCbCr codewords is performed under execution of the processor 205. The mapping is performed on the video signal data, and the mapped YCbCr codewords are encoded into a video bitstream using any suitable standard video coding method. The method 1200 then terminates following step 1216.
While the method 1200 of Fig. 12 has been described in relation to the encoding device 110, the method 1200 may also be used as part of a video decoder such as the display device 140. When used as part of the display device 140, the encode to codewords step 1216 is replaced with a decode from codewords step, where the decode from codewords step performs an inverse mapping from YCbCr codewords to R’G’B’ codewords. The mapping from YCbCr codewords to R’G’B’ codewords is performed on the decoded video signal data.
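The following minimal sketch illustrates steps 1206 to 1210 of the method 1200. The helper name signal_to_container is hypothetical, standing in for the signal-to-container remapping (steps 412 to 416 of the method 400 described below), and a regular sample grid is assumed:

```python
import numpy as np

def chroma_boundaries(signal_to_container, luma_coefs, n=33):
    """Sketch of steps 1206-1210 of the method 1200.

    signal_to_container: caller-supplied function mapping an (N, 3)
    array of normalised signal R'G'B' samples to container R'G'B'
    samples. luma_coefs: (r, g, b) of the container colour volume.
    """
    # Step 1206: generate an n x n x n grid of R'G'B' points in [0,1]^3.
    axis = np.linspace(0.0, 1.0, n)
    rp, gp, bp = np.meshgrid(axis, axis, axis, indexing='ij')
    rgb = np.stack([rp.ravel(), gp.ravel(), bp.ravel()], axis=1)

    # Step 1208: transform to the container YCbCr coordinate space,
    # using unity chroma divisors as in Equations (3).
    rgb_c = signal_to_container(rgb)
    r, g, b = luma_coefs
    y = r * rgb_c[:, 0] + g * rgb_c[:, 1] + b * rgb_c[:, 2]
    cb = rgb_c[:, 2] - y
    cr = rgb_c[:, 0] - y

    # Step 1210: the chroma boundaries are the Cb/Cr extrema.
    return cb.min(), cb.max(), cr.min(), cr.max()
```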
Fig. 4 is a schematic flow diagram showing a method 400 of encoding WCG and/or HDR video data from R’G’B’ to a YCbCr opponent colour space. The method 400 implements the function of the matrix coefficients generator 310 and the function of the R’G’B’ to YCbCr block 112 of the encoding device 110. The method 400 may be implemented as one or more software code modules of the software application program 233 resident in the hard disk drive 210 and being controlled in execution by the processor 205 of the system 200 implementing the encoding device 110.
The method 400 begins at a determine signal colour volume step 402. At the determine signal colour volume step 402, the signal colour volume is received, under execution of the processor 205, from metadata associated with the video signal data. For example, the signal colour volume may be received from an effective colour volume supplemental enhancement information (SEI) message, where the metadata directly signals the constraints on the signal colour volume. The signal colour volume may be described by the CIE1931 chromaticity coordinates of the colour primaries and the white point. The signal colour volume may be stored by the processor 205 in the memory 206.
Alternatively, the signal colour volume may be received from a mastering display colour volume SEI message, where the metadata signals the constraints on the mastering display used to grade the video signal data. While the constraints on the mastering display used to grade the video signal data are not generally equivalent to the signal colour volume, the constraints are an upper bound. In lieu of an effective colour volume SEI, a received mastering display colour volume may be used as the signal colour volume.
At a determine container colour volume step 404, the container colour volume is received, under execution of the processor 205, from metadata associated with the video signal data. For example, the container colour volume may be received from a colour primaries VUI message. The container colour volume may be described by the CIE1931 chromaticity coordinates of the colour primaries and the white point. The container colour volume may be stored by the processor 205 in the memory 206.
At a determine colour transform step 406, a colour transform appropriate for converting RGB samples in the signal colour space to RGB samples in the container colour space is determined under execution of the processor 205. Compared to R’G’B’ samples, which correspond to quantised codewords, the RGB samples correspond to physical light levels, which is the appropriate space within which to perform a colour transform. The colour transform may be expressed as a 3x3 set of matrix coefficients, labelled T. Furthermore, the determination of the colour transform at step 406 may be separated into three components a), b) and c), as follows:
a) Determine a 3x3 matrix T1 which converts samples from the RGB coordinate system corresponding to the signal colour volume, to a CIE XYZ colour space with a white point corresponding to the signal colour volume. The determination of the 3x3 matrix T1 is described below.
b) Determine a 3x3 chromatic adaptation matrix C to convert samples from a CIE XYZ colour space with a white point corresponding to the signal colour volume, to a CIE XYZ colour space with a white point corresponding to the container colour volume. If the white points of the signal colour volume and container colour volume are the same, the chromatic adaptation matrix will be an identity matrix. If the white points of the signal colour volume and container colour volume are not the same, techniques such as Bradford, Von Kries, or XYZ scaling may be used to determine the chromatic adaptation matrix.
c) Determine a 3x3 matrix T2 which converts samples from the RGB coordinate system corresponding to the container colour volume, to a CIE XYZ colour space with a white point corresponding to the container colour volume. The inverse of the 3x3 matrix T2 (i.e., matrix T2^-1) converts samples from the CIE XYZ colour space with a white point corresponding to the container colour volume, to the RGB coordinate system corresponding to the container colour volume. The determination of the 3x3 matrix T2 is described below.
The colour transform matrix T may be determined as the product of the three matrices in accordance with Equation (4), as follows:

T = T2^-1 · C · T1 (4)
The 3x3 matrix 7\ may be determined in accordance with Equation (5), as follows:
(5) where M, YR, YG and YB are determined in accordance with Equations (6) and (7), as follows:
(6) and
(7) where (xR,yR) is the chromaticity coordinate of the red colour primary associated with the signal colour volume. (xG,yG) is the chromaticity coordinate of the green colour primary associated with the signal colour volume. (xs,ys) is the chromaticity coordinate of the blue colour primary associated with the signal colour volume. (.Χ\νΎ\ν) is the chromaticity coordinate of the white point associated with the signal colour volume. zR, zG, zB and zw are determined from the chromaticity coordinates of the corresponding colour in accordance with Equations (8), as follows:
(8)
The 3x3 matrix T2 may be determined in the same manner as the method described above for determining T1, with the chromaticity coordinates of the signal colour volume being replaced by the chromaticity coordinates of the container colour volume.
The method of determining T1 is described above with reference to CIE1931 chromaticity coordinates. However, if the signal colour volume and container colour volume are represented by chromaticity coordinates in an alternative colour space, such as the CIE2006 colour space, the same method of determining T1 may still be performed.
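As a worked numerical sketch of Equations (4) to (8), assuming the published BT.709 and BT.2020 primaries with a shared D65 white point (so that the chromatic adaptation matrix C reduces to the identity):

```python
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_w):
    """Derive the 3x3 RGB-to-XYZ matrix of Equations (5)-(8) from the
    chromaticity coordinates of the primaries and the white point."""
    def col(xy):
        x, y = xy
        z = 1.0 - x - y                     # Equation (8)
        return np.array([x / y, 1.0, z / y])
    m = np.stack([col(xy_r), col(xy_g), col(xy_b)], axis=1)   # Equation (6)
    y_rgb = np.linalg.solve(m, col(xy_w))                     # Equation (7)
    return m * y_rgb                                          # Equation (5)

D65 = (0.3127, 0.3290)
T1 = rgb_to_xyz_matrix((0.640, 0.330), (0.300, 0.600), (0.150, 0.060), D65)  # BT.709
T2 = rgb_to_xyz_matrix((0.708, 0.292), (0.170, 0.797), (0.131, 0.046), D65)  # BT.2020
T = np.linalg.inv(T2) @ T1   # Equation (4) with C = identity
```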
At a determine EOTF step 408, an electro-optical transfer function (EOTF) is received, under execution of the processor 205, from metadata associated with the video signal data. For example, the EOTF may be received from a transfer characteristics VUI message. The EOTF maps normalised R’G’B’ values corresponding to codewords, to normalised RGB values corresponding to physical light levels the display should reproduce. The EOTF may be described by a value selecting a function from a predetermined set of functions, or may be described by a look-up table (LUT). The received EOTF may be stored by the processor 205 in the memory 206.
In another arrangement, at determine EOTF step 408, an opto-electrical transfer function (OETF) is instead received from metadata associated with the video signal data. The OETF maps normalised RGB values corresponding to physical light levels representing the scene captured, to normalised R’G’B’ values corresponding to codewords. The OETF may be described by a value selecting a function from a predetermined set of functions, or may be described by a look-up table (LUT). A complete video transmission system should consist of both an OETF and an EOTF, where in general the OETF and EOTF are not inverses of each other. The concatenation of the OETF and EOTF is an opto-optical transfer function (OOTF), which represents rendering from physical light levels of scene luminance to physical light levels of display luminance. The video transmission system 100 excludes the rendering step, and is thus defined by only a single transfer function. When the transfer function is an EOTF, the video data signal is display-referred, meaning that the video data signal represents physical light levels of the display luminance. When the transfer function is an OETF, the video data signal is scene-referred, meaning that the video data signal represents physical light levels of the scene luminance.
At a generate R’G’B’ points step 410, a number of R’G’B’ samples spaced in the normalised [0,1] cube are generated under execution of the processor 205. The R’G’B’ samples should exercise the full range of values afforded by the signal colour volume. In one arrangement, the R’G’B’ samples correspond to the full range of quantised R’G’B’ codewords. For example, with an 8-bit quantisation, 2^(8×3) = 2^24 samples will be generated. Alternate arrangements are described below. The R’G’B’ samples spaced in the normalised [0,1] cube may be stored by the processor 205 in the memory 206.
At an apply inverse EOTF step 412, each of the generated R’G’B’ samples are mapped to a RGB sample, under execution of the processor 205, using the inverse of the EOTF received from the determine EOTF step 408.
In another arrangement, if the determine EOTF step 408 resulted in receiving an OETF, each of the generated R’G’B’ samples are instead mapped to a RGB sample using the OETF received from the determine EOTF step 408.
At an apply colour transform step 414, each of the RGB samples are mapped, under execution of the processor 205, from a coordinate space defined by the signal colour volume, to a coordinate space defined by the container colour volume. Step 414 may be performed by applying a 3x3 colour matrix T as described above, to each RGB sample, where each RGB sample is arranged as a 3x1 vector.
At an apply EOTF step 416, each of the RGB samples represented in a coordinate space defined by the container colour volume, are mapped to a R’G’B’ sample in a coordinate space defined by the container colour volume determined at step 404. The mapping is performed at step 416 using the EOTF received from the determine EOTF step 408.
In another arrangement, if the determine EOTF step 408 resulted in receiving an OETF, each of the RGB samples represented in a coordinate space defined by the container colour volume, are instead mapped to a R’G’B’ sample in a coordinate space defined by the container colour volume, using the inverse OETF received from the determine EOTF step 408.
An example of the output of the apply EOTF step 416 is shown in Fig. 7. In the example of Fig. 7, an example BT.709 signal colour volume is transformed to the coordinate system of an example BT.2020 container colour volume. The signal colour volume of Fig. 7 is the same as the gamut defined by BT.709, and the container colour volume is the same as the gamut defined by BT.2020.
At an apply YCbCr transform with unity divisors step 418, each of the R’G’B’ samples in a coordinate space defined by the container colour volume is converted to a YCbCr sample under execution of the processor 205. The Y component may be determined as described above in accordance with Equation (9), as follows:

Y = rR’ + gG’ + bB’ (9)

where r, g and b are coefficients expressing the relative strengths with which the colour primaries of the container colour volume contribute to the perceptual sensation of brightness, and are determined from the container colour volume.
However, the Cb and Cr samples are determined with unity chroma divisors, in accordance with Equations (10), as follows:

Cb = B' - Y
Cr = R' - Y (10)
An example of the output of the apply YCbCr transform with unity divisors step 418 is shown in Fig. 8. In the example of Fig. 8, an example BT.709 signal colour volume is transformed to the coordinate system of an example BT.2020 YCbCr container colour volume, with unity chroma divisors. The signal colour volume of Fig. 8 is the same as the gamut defined by BT.709, and the container colour volume is the same as the gamut defined by BT.2020.
At a determine chroma boundaries step 420, the maximum and minimum values of Cb, and the maximum and minimum values of Cr are determined from the YCbCr samples determined with unity chroma divisors. The maximum value of Cb is labelled the Cb positive boundary. The minimum value of Cb is labelled the Cb negative boundary. The maximum value of Cr is labelled the Cr positive boundary. The minimum value of Cr is labelled the Cr negative boundary.
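A minimal sketch of steps 418 and 420 follows, using the BT.2020 luma coefficients as example values of r, g and b (in the method they are determined from the container colour volume):

```python
import numpy as np

r, g, b = 0.2627, 0.6780, 0.0593  # example: BT.2020 luma coefficients

def ycbcr_unity_divisors(rgb_prime):
    """Step 418: Y per Equation (9); Cb, Cr per Equations (10)."""
    R, G, B = rgb_prime[..., 0], rgb_prime[..., 1], rgb_prime[..., 2]
    Y = r * R + g * G + b * B
    return Y, B - Y, R - Y

def chroma_boundaries(Cb, Cr):
    """Step 420: positive and negative boundaries of Cb and Cr."""
    return Cb.max(), Cb.min(), Cr.max(), Cr.min()
```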
At a determine codeword range step 422, an optimised mapping is determined from Cb and Cr chroma values determined with unity chroma divisors, to Cb and Cr chroma codewords. Fig. 9 shows the signal colour volume of Fig. 8 rotated to demonstrate chroma boundaries, and the mapping of the region within the chroma boundaries to quantised codewords for a single chroma divisor. In Fig. 9, a Cb negative boundary 912 and a Cb positive boundary 914 are received from the determine chroma boundaries step 420. A Cb codeword range 916 is indicated, with a codeword L corresponding to the minimum codeword, a codeword C corresponding to the median codeword, and a codeword U corresponding to the maximum codeword.
At step 422, it is determined which of the Cb negative boundary 912 and Cb positive boundary 914 is larger in magnitude. In the example of Fig. 9, the Cb positive boundary 914 is larger in magnitude. When the Cb positive boundary 914 is larger in magnitude, or when the Cb positive boundary is equal to the Cb negative boundary, the chroma value equal to the Cb positive boundary 914 is mapped to codeword U, the chroma value equal to the negative of the Cb positive boundary 914 is mapped to codeword L, and the chroma value equal to zero is mapped to codeword C. The range of chroma values between the Cb positive boundary and the negative of the Cb positive boundary are uniformly divided by the number of codewords, and mapped to the corresponding codewords between U and L. If the Cb negative boundary is larger in magnitude, then the chroma value equal to the Cb negative boundary is mapped to codeword L, the chroma value equal to the negative of the Cb negative boundary is mapped to codeword U, and the chroma value equal to zero is mapped to C. The range of chroma values between the Cb negative boundary and the negative of the Cb negative boundary are uniformly divided by the number of codewords, and mapped to the corresponding codewords between L and U. An equivalent process to that described above may be applied to the Cr boundaries to determine a mapping to Cr codewords.
Alternative arrangements for the determine codeword range step 422 are described below in relation to Figs. 10 and 11.
At a determine chroma divisors step 424, the chroma divisors that implement the optimised mapping are determined, under execution of the processor 205, from the determine codeword range step 422. In the arrangement described above with reference to Fig. 9, a common divisor is determined for Cb, and a common divisor is determined for Cr. If the Cb positive boundary is larger than or equal in magnitude to the Cb negative boundary, then the Cb divisor is set to the Cb positive boundary multiplied by two (2). Otherwise, if the Cb negative boundary is larger in magnitude than the Cb positive boundary, the Cb divisor is set to the negative of the Cb negative boundary multiplied by two (2). The same process described above may be used to determine the Cr divisor.
The selected boundary values used for determining the chroma divisors are multiplied by two (2) in order to scale the (B’-Y) and (R’-Y) components to the range [-0.5,0.5].
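A sketch of the single-divisor selection just described (function and variable names are illustrative):

```python
# Fig. 9 arrangement of step 424: the boundary larger in magnitude is
# doubled so that (B'-Y) and (R'-Y) scale into [-0.5, 0.5].
def single_chroma_divisor(pos_boundary, neg_boundary):
    if abs(pos_boundary) >= abs(neg_boundary):
        return 2.0 * pos_boundary
    return -2.0 * neg_boundary

# e.g. db = single_chroma_divisor(cb_pos, cb_neg); dr likewise for Cr.
```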
Alternative arrangements for the determine chroma divisors step 424 are described below in relation to Figs. 10 and 11.
At a determine R’G’B’ to YCbCr matrix step 426, the Cb divisor (labelled db) and the Cr divisor (labelled dr) from the determine chroma divisors step 424 are used to determine an optimised R’G’B’ to YCbCr matrix Q that implements the optimised mapping from the determine codeword range step 422. The matrix Q may be determined at step 426 in accordance with Equation (11) as follows:
$$Q = \begin{bmatrix} r & g & b \\ -\frac{r}{d_b} & -\frac{g}{d_b} & \frac{1-b}{d_b} \\ \frac{1-r}{d_r} & -\frac{g}{d_r} & -\frac{b}{d_r} \end{bmatrix} \qquad (11)$$

where r, g and b are the coefficients expressing the relative strengths with which the colour primaries of the container colour volume contribute to the perceptual sensation of brightness, and are determined from the container colour volume.
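A sketch of constructing Q as reconstructed in Equation (11); the Y row is Equation (9), and the chroma rows are (B'-Y)/db and (R'-Y)/dr expanded into matrix form (the function name is illustrative):

```python
import numpy as np

def build_q_matrix(r, g, b, db, dr):
    return np.array([
        [r,             g,        b           ],  # Y  = rR' + gG' + bB'
        [-r / db,      -g / db,   (1 - b) / db],  # Cb = (B' - Y) / db
        [(1 - r) / dr, -g / dr,  -b / dr      ],  # Cr = (R' - Y) / dr
    ])

# Step 428 (Equation (12)): ycbcr = build_q_matrix(r, g, b, db, dr) @ rgb_prime
```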
At an apply R’G’B’ to YCbCr matrix step 428, the matrix Q from the determine R’G’B’ to YCbCr step 426 is used to convert the R’G’B’ video data 120 to an optimised YCbCr opponent colour space that exercises a greater range of the quantised chroma codeword space. The conversion may be implemented as a matrix multiplication in accordance with Equation (12), as follows:
$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = Q \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \qquad (12)$$
In another arrangement of the apply R’G’B’ to YCbCr matrix step 428, the matrix Q may be modified to convert quantised R’G’B’ codewords to optimised YCbCr codewords. The coefficients of the matrix Q may each be multiplied by 2^N, where N is a positive integer, and then the coefficients of the matrix Q may be rounded off to integers. The R’G’B’ codewords may be multiplied by the modified Q matrix using fixed precision calculations. The result then has 2^(N-1) added, before downshifting by N bits, to produce the output YCbCr codewords. The method 400 then terminates.

A method 500 of decoding WCG and/or HDR video data from a YCbCr opponent colour space to an R’G’B’ colour space will now be described with reference to Fig. 5. The method 500 implements the function of the matrix coefficients generator 310 of the display device 140 and the function of the YCbCr to R’G’B’ block 144. The method 500 may be implemented as one or more software code modules of the software application program 233 resident in the hard disk drive 210 and being controlled in execution by the processor 205 of the system 200 implementing the display device 140.
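Returning to the fixed-point arrangement of step 428 just described, a minimal sketch follows; the value N = 14 and the integer types are assumptions, and handling of the chroma codeword offset and clipping is omitted:

```python
import numpy as np

N = 14  # fixed-point precision (illustrative)

def fixed_point_rgb_to_ycbcr(rgb_prime_codewords, Q):
    """Apply a modified integer Q to quantised R'G'B' codewords."""
    Qi = np.round(Q * (1 << N)).astype(np.int64)       # scale coefficients by 2^N
    acc = rgb_prime_codewords.astype(np.int64) @ Qi.T  # fixed-precision multiply
    return (acc + (1 << (N - 1))) >> N                 # add 2^(N-1), shift right by N
```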
The method 500 is used to convert decoded YCbCr video data that was produced by the optimised mapping of R’G’B’ to YCbCr samples determined in the method 400. The majority of the steps of the method 500 of Fig. 5 correspond to steps of the method 400 of Fig. 4. In particular, steps 502 through to 526 correspond to steps 402 through to 426. Each of the steps of the method 500 of Fig. 5 operate as described in relation to the corresponding steps of the method 400 of Fig. 4 above (i.e., including the alternative arrangements described above), with the exception of an apply YCbCr to R’G’B’ matrix step 528.
The method 500 begins at a determine signal colour volume step 502. At step 502, the signal colour volume is received, under execution of the processor 205, from metadata associated with the video signal data. Alternatively, as at step 402, the signal colour volume may be received from a mastering display colour volume SEI message. The signal colour volume may be stored by the processor 205 in the memory 206.
At a determine container colour volume step 504, the container colour volume is received, under execution of the processor 205, from metadata associated with the video signal data as with step 404. The container colour volume may be stored by the processor 205 in the memory 206.
At a determine colour transform step 506, a colour transform appropriate for converting from RGB samples in the signal colour space to RGB samples in the container colour space is determined under execution of the processor 205 as at step 406 of the method 400.
At a determine EOTF step 508, an EOTF is received, under execution of the processor 205, from metadata associated with the video signal data as above for step 408. The received EOTF may be stored by the processor 205 in the memory 206.
At a generate R’G’B’ points step 510, a number of R’G’B’ samples spaced in the normalised [0,1] cube are generated under execution of the processor 205. The R’G’B’ samples spaced in the normalised [0,1] cube may be stored by the processor 205 in the memory 206.
At an apply inverse EOTF step 512, each of the generated R’G’B’ samples are mapped to a RGB sample, under execution of the processor 205, using the inverse of the EOTF received from the determine EOTF step 508.
At an apply colour transform step 514, each of the RGB samples are mapped, under execution of the processor 205, from a coordinate space defined by the signal colour volume, to a coordinate space defined by the container colour volume as at step 414.
At an apply EOTF step 516, each of the RGB samples represented in a coordinate space defined by the container colour volume, are mapped to a R’G’B’ sample in a coordinate space defined by the container colour volume determined at step 504.
At an apply YCbCr transform with unity divisors step 518, each of the R’G’B’ samples in a coordinate space defined by the container colour volume, is converted to a YCbCr sample under execution of the processor 205 in accordance with Equation (9) as at step 418.
At a determine chroma boundaries step 520, the maximum and minimum values of Cb, and the maximum and minimum values of Cr are determined from the YCbCr samples determined with unity chroma divisors, as at step 420.
At a determine codeword range step 522, an optimised mapping is determined from Cb and Cr chroma values determined with unity chroma divisors, to Cb and Cr chroma codewords.
At a determine chroma divisors step 524, the chroma divisors that implement the optimised mapping are determined, under execution of the processor 205, from the determine codeword range step 522.
At a determine R’G’B’ to YCbCr matrix step 526, the Cb divisor (labelled db) and the Cr divisor (labelled dr) from the determine chroma divisors step 524 are used to determine an optimised R’G’B’ to YCbCr matrix Q that implements the optimised mapping from the determine codeword range step 522.
The method 500 then proceeds to the apply YCbCr to R’G’B’ matrix step 528. At step 528, the matrix Q from the determine R’G’B’ to YCbCr matrix step 526 is used to convert the decoded YCbCr video data produced by the optimised mapping determined in method 400, back to decoded R’G’B’ video data. The conversion performed at step 528 may be implemented as a matrix multiplication in accordance with Equation (13), as follows:
$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = Q^{-1} \begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} \qquad (13)$$
As described above in the alternate arrangement of the apply R’G’B’ to YCbCr matrix step 428, the matrix Q⁻¹ may be modified to convert optimised YCbCr codewords to quantised R’G’B’ codewords.
It should be noted that the decoded YCbCr video data is not identical to the YCbCr data produced by method 400, due to losses introduced by quantisation in the encoding process.
The method 500 then terminates.
Fig. 3 is a schematic block diagram showing a detailed breakdown of the matrix coefficients generator 310, which implements a portion of the method 400 of Fig. 4. The generator 310 and each of the modules (i.e., 314, 330, 334, 338, 342, 346, 350, 354 and 358) thereof may be implemented as one or more software code modules of the software application program 233. The matrix coefficients generator 310 receives metadata 312 associated with video data. The metadata 312 is equivalent to the metadata 122 in the encoding device 110, or equivalent to the metadata 152 in the display device 140. The metadata extractor 314 executes the determine signal colour volume step 402 of Fig. 4 to extract a signal colour volume 324. The metadata extractor 314 executes the determine container colour volume step 404 to extract a container colour volume 326. The metadata extractor 314 executes the determine EOTF step 408 to extract an EOTF 322. The metadata extractor 314 may optionally extract luminance information 320, which is used in an arrangement of the generate R’G’B’ points step 410 described below.
The generate R’G’B’ points module 330 executes the generate R’G’B’ points step 410, producing R’G’B’ samples (or “R’G’B’ points”) 332. The R’G’B’ samples 332 are input to the inverse EOTF module 334, which executes the apply inverse EOTF step 412, producing RGB samples (or “RGB points”) 336. The RGB samples 336 are input to the colour transform module 338, which executes the apply colour transform step 414, producing RGB samples 340 in the coordinate space of the container colour volume. The RGB samples 340 are input to the EOTF module 342, which executes the apply EOTF step 416, producing R’G’B’ samples 344 in the coordinate space of the container colour volume. Modules 330 through to 342 use the output of the metadata extractor 314. The R’G’B’ samples 344 are received by the unity divisor module 346, which executes the apply YCbCr transform with unity divisors step 418, producing intermediate YCbCr samples (or “YCbCr points”) 348. The intermediate YCbCr samples 348 are input to a chroma boundary determiner module 350, which executes the determine chroma boundaries step 420, producing chroma boundaries 352. The chroma boundaries 352 are input to a chroma divisor selectors module 354, which executes the determine codeword range step 422 and the determine chroma divisors step 424, producing chroma divisors 356. The chroma divisors 356 are input to a matrix calculation unit 358, which executes the determine R’G’B’ to YCbCr matrix step 426, producing a matrix Q 360.
Fig. 10 shows an alternative arrangement of the determine codeword range step 422. In Fig. 10, the signal colour volume of Fig. 8 is rotated to demonstrate chroma boundaries, and the mapping of the region within the chroma boundaries to quantised codewords for an arrangement with separate positive and negative chroma divisors. In Fig. 10, a Cb negative boundary 1012 and a Cb positive boundary 1014 are received from the determine chroma boundaries step 420. A Cb codeword range 1016 is indicated, with a codeword L corresponding to the minimum codeword, a codeword C corresponding to the median codeword, and a codeword U corresponding to the maximum codeword.
The chroma value equal to the Cb positive boundary 1014 is mapped to codeword U, the chroma value equal to zero is mapped to codeword C, and the range of chroma values between zero and the Cb positive boundary 1014 is uniformly divided by the number of codewords between C and U, and mapped to the corresponding codewords between C and U. The chroma value equal to the Cb negative boundary 1012 is mapped to codeword L, and the range of chroma values between the Cb negative boundary 1012 and zero is uniformly divided by the number of codewords between L and C, and mapped to the corresponding codewords between L and C. The equivalent process described above may be applied to the Cr boundaries to determine a mapping to Cr codewords.
The arrangement of Fig. 10 has the advantage of using the full codeword range for Cb and Cr compared to the arrangement of Fig. 9. However, the arrangement of Fig. 9 has one advantage over the arrangement of Fig. 10, in that the arrangement of Fig. 9 applies the same degree of scaling to chroma samples regardless of the position of the chroma samples in the chroma space. As a result, the perceptual uniformity of the chroma space is maintained.
In the arrangement of Fig. 10, the determine chroma divisors step 424 is also modified, to determine positive and negative divisors for Cb and Cr respectively. The positive Cb divisor is set to the Cb positive boundary multiplied by two (2). The negative Cb divisor is set to the negative of the Cb negative boundary multiplied by two (2). The positive and negative Cr divisors may be calculated in the same manner as the Cb divisors.
In the arrangement of Fig. 10, the determine R’G’B’ to YCbCr matrix step 426 is also modified. As there are separate positive and negative divisors for Cb and Cr respectively, there are four different matrices that may be applied. The matrix is selected according to the quadrant in which each R’G’B’ sample to be converted resides. The axes of the quadrants are (B’-Y) and (R’-Y), with the centre of the quadrants located at the origin. The positive Cb divisor (labelled pb) and the positive Cr divisor (labelled pr) from the determine chroma divisors step 424 are used to determine an optimised R’G’B’ to YCbCr matrix Q1 that partially implements the optimised mapping from the determine codeword range step 422. The negative Cb divisor (labelled nb) and the negative Cr divisor (labelled nr) are similarly used to determine a matrix Q2; pb and nr are used to determine a matrix Q3; and nb and pr are used to determine a matrix Q4.
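A sketch of the per-sample behaviour of the Fig. 10 arrangement; rather than materialising the four matrices Q1 to Q4, the equivalent per-sample divisor selection is shown (names are illustrative):

```python
# pb/nb and pr/nr are the positive/negative Cb and Cr divisors of step 424;
# the quadrant of (B'-Y, R'-Y) selects which divisor applies to each sample.
def ycbcr_quadrant_divisors(R, G, B, r, g, b, pb, nb, pr, nr):
    Y = r * R + g * G + b * B
    cb_num, cr_num = B - Y, R - Y
    Cb = cb_num / (pb if cb_num >= 0 else nb)  # Q1/Q3 versus Q2/Q4 behaviour
    Cr = cr_num / (pr if cr_num >= 0 else nr)
    return Y, Cb, Cr
```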
Fig. 11 shows an alternative arrangement of the determine codeword range step 422. In Fig. 11, the colour volume of Fig. 8 is rotated to demonstrate chroma boundaries, and the mapping of the region within the chroma boundaries to quantised codewords for an arrangement with a single chroma divisor and offset. In Fig. 11, a Cb negative boundary 1112 and a Cb positive boundary 1114 are received from the determine chroma boundaries step 420. A Cb codeword range 1116 is indicated, with a codeword L corresponding to the minimum codeword, a codeword C corresponding to the median codeword, and a codeword U corresponding to the maximum codeword.
The chroma value equal to the Cb negative boundary 1112 is mapped to codeword L, and the chroma value equal to the Cb positive boundary 1114 is mapped to codeword U. The range of chroma values between the Cb negative boundary 1112 and the Cb positive boundary 1114 is uniformly divided by the number of codewords between L and U, and mapped to the corresponding codewords between L and U.
The arrangement of Fig. 11 has the advantage of using the full codeword range for Cb and Cr, as well as maintaining perceptual uniformity of the chroma space, compared to the arrangements of Figs. 9 and 10. However, the arrangements of Figs. 9 and 10 have one advantage over the arrangement of Fig. 11, in that the arrangements of Figs. 9 and 10 map the white point of the video data to the codeword C.
In the arrangement of Fig. 11, the determine chroma divisors step 424 is also modified, to determine a common divisor and an offset for Cb and Cr respectively. The Cb offset is set to the midpoint of the Cb positive boundary and the Cb negative boundary, which is determined by taking the sum of the Cb positive boundary and the Cb negative boundary, divided by two. The Cb divisor is set to twice the distance of the Cb positive boundary from the midpoint, which is determined by subtracting the Cb offset from the Cb positive boundary, and multiplying by two. The Cr divisor and offset may be determined in the same manner as the Cb divisor and offset.
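A sketch of the divisor and offset computation just described (names are illustrative):

```python
# Fig. 11 arrangement of step 424.
def divisor_and_offset(pos_boundary, neg_boundary):
    offset = (pos_boundary + neg_boundary) / 2.0  # midpoint of the boundaries
    divisor = 2.0 * (pos_boundary - offset)       # twice the half-range
    return divisor, offset

# e.g. db, ob = divisor_and_offset(cb_pos, cb_neg); dr, or likewise for Cr.
```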
In the arrangement of Fig. 11, the determine R’G’B’ to YCbCr matrix step 426 is also modified. The Cb divisor (labelled db), the Cb offset (labelled ob), the Cr divisor (labelled dr), and the Cr offset (labelled or) from the determine chroma divisors step 424 are used to determine an optimised R’G’B’ to YCbCr matrix Q and offset vector O that implement the optimised mapping from the determine codeword range step 422. The matrix and offset may be determined in accordance with Equation (14) as follows:

$$Q = \begin{bmatrix} r & g & b \\ -\frac{r}{d_b} & -\frac{g}{d_b} & \frac{1-b}{d_b} \\ \frac{1-r}{d_r} & -\frac{g}{d_r} & -\frac{b}{d_r} \end{bmatrix}, \qquad O = \begin{bmatrix} 0 \\ -\frac{o_b}{d_b} \\ -\frac{o_r}{d_r} \end{bmatrix} \qquad (14)$$
In the arrangement of Fig. 11, the apply R’G’B’ to YCbCr matrix step 428 is also modified to use both the matrix Q and the offset vector O from the determine R’G’B’ to YCbCr matrix step 426. The matrix Q and the offset vector O are used to convert the R’G’B’ video data 120 to an optimised YCbCr opponent colour space that exercises a greater range of the quantised chroma codeword space. The conversion of the R’G’B’ video data 120 to an optimised YCbCr opponent colour space may be implemented as a matrix multiplication and addition, in accordance with Equation (15), as follows:

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = Q \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} + O \qquad (15)$$
In an alternate arrangement of the determine codeword range step 422, the codewords L and U are not mapped to the precise values of the chroma boundaries. Instead, the codewords L and U may be mapped to values slightly smaller, or slightly larger, in magnitude than the chroma boundaries, to improve the alignment of the codeword range [L, U] with the R’G’B’ codewords. Improving the alignment between the R’G’B’ codewords and the YCbCr codewords defined by the codeword range [L, U] reduces the amount of requantisation error introduced by the mapping from R’G’B’ codewords to YCbCr codewords.
The alignment between the R’G’B’ codewords and the codeword range [L, U] may be determined by modifying the generate R’G’B’ codewords step 410 to generate R’G’B’ samples that correspond with R’G’B’ codewords.
In an alternate arrangement of the apply YCbCr transform with unity divisors step 418, a constant luminance YCbCr transform with unity divisors is applied instead of the non-constant luminance YCbCr transform with unity divisors. In the case where an EOTF has been received by the determine EOTF step 408, the constant luminance Y component is determined in accordance with Equation (16) as follows:

$$Y = \mathrm{EOTF}^{-1}\big(r \cdot \mathrm{EOTF}(R') + g \cdot \mathrm{EOTF}(G') + b \cdot \mathrm{EOTF}(B')\big) \qquad (16)$$
Otherwise, in the case where an OETF has been received by the determine EOTF step 408, the constant luminance Y component is determined in accordance with Equation (17) as follows:

$$Y = \mathrm{OETF}\big(r \cdot \mathrm{OETF}^{-1}(R') + g \cdot \mathrm{OETF}^{-1}(G') + b \cdot \mathrm{OETF}^{-1}(B')\big) \qquad (17)$$

where r, g and b are coefficients expressing the relative strengths with which the colour primaries of the container colour volume contribute to the perceptual sensation of brightness, and are determined from the container colour volume.
The constant luminance transform determines the constant luminance Y component by mixing the relative strengths of the red, green and blue colours in the physical light domain. The Cb and Cr components with unity divisors are determined using the constant luminance Y component, in accordance with Equation (18), as follows:
Cb = B' - Y
Cr = R' - Y (18)
The constant luminance Y component is a more accurate measure of the perceived brightness of the video data. However, an advantage of the non-constant luminance determination of Y described in the method 400 is that the R’G’B’ to YCbCr conversion may be implemented with a matrix multiplication. In the arrangement using the constant luminance Y component, the chroma divisors are applied directly to the video data signal. For example, for the arrangement of Fig. 9, the Cb and Cr samples are determined in accordance with Equations (19) as follows:
$$Cb = \frac{B' - Y}{d_b}, \qquad Cr = \frac{R' - Y}{d_r} \qquad (19)$$
In an alternate arrangement of the generate R’G’B’ points step 410, a peak luminance value of the transfer function is received from metadata associated with the video signal data. For example, if the received metadata is a transfer characteristics VUI message set to a codeword value signifying that the transfer function is in accordance with the SMPTE standard ST2084, the peak luminance value associated with the transfer function is 10,000 candelas per square metre as defined in ST2084. Alternatively, if the received metadata is a transfer characteristics VUI message set to a codeword value signifying that the transfer function is in accordance with the SMPTE standard ST428-1, the peak luminance value associated with the transfer function is forty eight (48) candelas per square metre as defined in ST428-1.

A maximum luminance value is then received from metadata associated with the video signal data. The maximum luminance value represents the maximum physical light level present in the video signal, expressed in units of candelas per square metre, or nits. For example, the maximum luminance value may be received from a maximum content light level parameter from a Content Light Level Information SEI message. Alternatively, the maximum luminance value may be received as a maximum display master luminance parameter from a Mastering Display Colour Volume SEI message. A minimum luminance value may then optionally be received from metadata associated with the video signal data. The minimum luminance value represents the minimum physical light level present in the video signal, expressed in the same units. For example, the minimum luminance value may be received from a minimum display master luminance parameter from a Mastering Display Colour Volume SEI message.

R’G’B’ samples spaced in a subrange [rL, rU] of the normalised [0,1] cube may then be generated, where rU is determined from the maximum luminance value, and rL is either determined from the optional minimum luminance value, or set to zero. rU is determined by normalising the maximum luminance value (labelled lU) by the peak luminance value of the transfer function (labelled lp), then applying the inverse EOTF, in accordance with Equation (20), as follows:
$$r_U = \mathrm{EOTF}^{-1}\left(\frac{l_U}{l_p}\right) \qquad (20)$$

if an EOTF was received by the determine EOTF step 408.
Alternatively, rU is determined by normalising the maximum luminance value lU by the peak luminance value of the transfer function lp, and then applying the OETF, in accordance with Equation (21) below, if an OETF was received by the determine EOTF step 408:

$$r_U = \mathrm{OETF}\left(\frac{l_U}{l_p}\right) \qquad (21)$$
Further, rL is determined from the minimum luminance value (labelled lL) in accordance with Equation (22), as follows:

$$r_L = \mathrm{EOTF}^{-1}\left(\frac{l_L}{l_p}\right) \qquad (22)$$

if an EOTF was received by the determine EOTF step 408.
Alternatively, rL is determined in accordance with Equation (23), as follows:

$$r_L = \mathrm{OETF}\left(\frac{l_L}{l_p}\right) \qquad (23)$$

if an OETF was received by the determine EOTF step 408.
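A sketch of Equations (20) and (22) follows, assuming the ST 2084 (PQ) transfer function as the received EOTF (peak luminance 10,000 cd/m²); the mastering luminance values used below are illustrative:

```python
# ST 2084 (PQ) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def st2084_inverse_eotf(l):
    """Map normalised linear light l in [0, 1] to a PQ codeword in [0, 1]."""
    lm = l ** M1
    return ((C1 + C2 * lm) / (1 + C3 * lm)) ** M2

l_p = 10000.0                          # peak luminance of ST 2084 (cd/m2)
l_u, l_l = 1000.0, 0.005               # illustrative content metadata (cd/m2)
r_u = st2084_inverse_eotf(l_u / l_p)   # Equation (20)
r_l = st2084_inverse_eotf(l_l / l_p)   # Equation (22)
```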
In an alternate arrangement relating to the optional precision check step 1212, the generate R’G’B’ points step 410 is modified to generate a small number of R’G’B’ samples. For example, the R’G’B’ samples may correspond to the corner points, and the midpoints of edges between the corner points, of the normalised [0,1] cube. In the case where the EOTF received from the determine EOTF step 408 is a linear function, generating only the corner points of the normalised [0,1] cube may be sufficient to determine the chroma boundaries. However, in general the EOTF received from the determine EOTF step 408 will be a nonlinear function, and thus the YCbCr samples output by the apply YCbCr transform with unity divisors step 418 will be representative of a complex colour volume. In this arrangement, the optional precision check step 1212 may return execution to the generate R’G’B’ points step 410, with the instruction of generating R’G’B’ samples in the neighbourhood of the R’G’B’ samples that correspond to the currently determined chroma boundaries. Over multiple iterations of the optional precision check step 1212, the effect is to progressively refine the accuracy of the determined chroma boundaries.
In an alternate arrangement of the determine colour transform step 406, an additional check is made to verify that the signal colour volume is completely contained within the container colour volume. The check is performed by examining the coefficients of the determined colour transform matrix T: if any coefficient of the matrix T is negative, then at least some part of the signal colour volume lies outside of the container colour volume. If the check indicates that some part of the signal colour volume lies outside of the container colour volume, then the apply colour transform step 414 may be modified such that the output RGB samples in a coordinate space defined by the container colour volume are clipped to the range [0,1].
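A sketch of the containment check and the modified apply colour transform step 414 (names are illustrative):

```python
import numpy as np

def transform_with_containment_check(rgb_signal, T):
    contained = bool(np.all(T >= 0.0))  # any negative coefficient means part of
    rgb_container = rgb_signal @ T.T    # the signal volume lies outside the
    if not contained:                   # container volume, so clip to [0, 1]
        rgb_container = np.clip(rgb_container, 0.0, 1.0)
    return contained, rgb_container
```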
In an alternate arrangement, the matrix coefficients generator 310 may have pre-calculated R’G’B’ to YCbCr matrices Q stored. If the received signal colour volume, the received EOTF, and the received container colour volume correspond to a pre-calculated matrix, then the matrix coefficients generator 310, under the control of the method 400, skips steps 406 and 410-424. The determine R’G’B’ to YCbCr matrix step 426 is modified to fetch the corresponding pre-calculated matrix from memory.
In an alternate arrangement, the output of the encoder matrix coefficients generator 310 may be added to the metadata 122. For example, the R’G’B’ to YCbCr matrix Q may be added to the metadata 122 as an R’G’B’ to YCbCr matrix coefficients SEI message. Alternatively, the corresponding chroma divisors db and dr may be added to the metadata 122. In the arrangement where the output of the encoder matrix coefficients generator 310 is added to the metadata 122, the decoder matrix coefficients generator 310 is replaced by a read R’G’B’ to YCbCr matrix coefficients step. The matrix coefficients, or the corresponding chroma divisors, are received from the metadata 152 at the display device 140 by the read R’G’B’ to YCbCr matrix coefficients step, and passed to the apply YCbCr to R’G’B’ matrix step 528.
In an alternate arrangement, the apply R’G’B’ to YCbCr matrix step 428 does not receive the R’G’B’ to YCbCr matrix Q. Instead, the apply R’G’B’ to YCbCr matrix step 428 is modified to use the matrix Q in accordance with Equation (24), as follows:
$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} r & g & b \\ \frac{-r}{2(1-b)} & \frac{-g}{2(1-b)} & \frac{1}{2} \\ \frac{1}{2} & \frac{-g}{2(1-r)} & \frac{-b}{2(1-r)} \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \qquad (24)$$

where r, g and b are the coefficients expressing the relative strengths with which the colour primaries of the container colour volume contribute to the perceptual sensation of brightness, and are determined from the container colour volume. That is, the matrix of Equation (24) uses the standard chroma divisors 2(1-b) and 2(1-r) in place of the optimised divisors db and dr.
In the arrangement of Equation (24), an additional dynamic range expansion step is performed on the YCbCr output of the modified apply R’G’B’ to YCbCr matrix step 428. The dynamic range expansion step applies a scaling factor to the Cb component, and a scaling factor to the Cr component, in accordance with Equations (25) as follows:
$$Cb_{\mathrm{rescaled}} = \frac{2(1-b)}{d_b}\,Cb, \qquad Cr_{\mathrm{rescaled}} = \frac{2(1-r)}{d_r}\,Cr \qquad (25)$$

where db and dr are the chroma divisors produced by the determine chroma divisors step 424. The scaling factors 2(1-b)/db and 2(1-r)/dr are added to the metadata 122. For example, the scaling factors may be added as scaling factor parameters in a dynamic range adjustment SEI message. Alternatively, the scaling factors may be added as an equivalent lookup table of Cb to Cb rescaled values in a tone-mapping SEI, or a colour remapping SEI.
The Cb rescaled and Cr rescaled components may then be passed, along with the unchanged Y component, and the metadata 122, to the video encoder 114. The scaling factors, or the equivalent lookup table, may be unpacked from the metadata 152 at the display device 140; the steps relating to the decoder matrix coefficients generator 310 are not performed in the arrangement of Equation (24). The decoded YCbCr video data 150 is then rescaled according to the inverse scaling factors, or the equivalent lookup table.
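A sketch of the rescaling of Equations (25) and its inverse at the display; the factor forms follow the reconstruction above and are therefore an assumption:

```python
# Scaling factors relating the standard divisors 2(1-b), 2(1-r) to the
# optimised divisors db, dr (assumed forms; see Equations (24)-(25) above).
def scaling_factors(b, r, db, dr):
    return 2 * (1 - b) / db, 2 * (1 - r) / dr

def expand_chroma(Cb, Cr, sb, sr):
    return Cb * sb, Cr * sr    # encoder-side dynamic range expansion

def compress_chroma(Cb, Cr, sb, sr):
    return Cb / sb, Cr / sr    # display-side inverse scaling
```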
INDUSTRIAL APPLICABILITY
The arrangements described are applicable to the computer and data processing industries and particularly for image processing.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word “comprising” means “including principally but not necessarily solely” or “having” or “including”, and not “consisting only of”. Variations of the word “comprising”, such as “comprise” and “comprises”, have correspondingly varied meanings.

Claims (22)

CLAIMS:
1. A method of encoding colour values of an image, the method comprising: receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; receiving a signal colour volume; determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and encoding the colour values of the image using the codeword range of the YCbCr chroma values.
2. The method according to claim 1, wherein the signal colour volume is determined from a mastering display colour volume supplemental enhancement information message associated with the image.
3. The method according to claim 1, wherein the signal colour volume is determined from an effective colour volume supplemental enhancement information message.
4. The method according to claim 1, further comprising determining maximum and minimum values of Cb.
5. The method according to claim 1, further comprising determining maximum and minimum values of Cr.
6. The method according to claim 1, wherein the codeword range is determined based on a single chroma divisor.
7. The method according to claim 1, wherein the codeword range is determined based on separate positive and negative chroma divisors.
8. The method according to claim 1, wherein the codeword range is determined based on a single chroma divisor and an offset.
9. The method according to claim 1, wherein the method is performed by a display device.
10. An apparatus for encoding colour values of an image, the apparatus comprising: module for receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; module for receiving a signal colour volume; module for determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; module for determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and module for encoding the colour values of the image using the codeword range of the YCbCr chroma values.
11. A system for encoding colour values of an image, the system comprising: a memory for storing data and a computer program; a processor coupled to the memory for executing the computer program, the computer program comprising instructions for: receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; receiving a signal colour volume; determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and encoding the colour values of the image using the codeword range of the YCbCr chroma values.
11. A computer readable medium comprising a program for encoding colour values of an image, the program comprising: code for receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; code for receiving a signal colour volume; code for determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; code for determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and code for encoding the colour values of the image using the codeword range of the YCbCr chroma values.
12. A method of decoding R’G’B’ colour values of an image, the method comprising: receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; receiving a signal colour volume; determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and decoding the R’G’B’ colour values of the image using the codeword range of the YCbCr chroma values.
13. The method according to claim 12, wherein the signal colour volume is determined from a mastering display colour volume supplemental enhancement information message associated with the image.
14. The method according to claim 12, wherein the signal colour volume is determined from an effective colour volume supplemental enhancement information message.
15. The method according to claim 12, further comprising determining maximum and minimum values of Cb.
16. The method according to claim 12, further comprising determining maximum and minimum values of Cr.
17. The method according to claim 12, wherein the codeword range is determined based on a single chroma divisor.
18. The method according to claim 12, wherein the codeword range is determined based on separate positive and negative chroma divisors.
19. The method according to claim 12, wherein the codeword range is determined based on a single chroma divisor and an offset.
20. An apparatus for decoding R’G’B’ colour values of an image, the apparatus comprising: module for receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; module for receiving a signal colour volume; module for determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; module for determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and module for decoding the R’G’B’ colour values of the image using the codeword range of the YCbCr chroma values.
21. A system for decoding R’G’B’ colour values of an image, the system comprising: a memory for storing data and a computer program; a processor coupled to the memory for executing the computer program, the computer program comprising instructions for: receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; receiving a signal colour volume; determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and decoding the R’G’B’ colour values of the image using the codeword range of the YCbCr chroma values.
22. A computer readable medium comprising a program for decoding R’G’B’ colour values of an image, the program comprising: code for receiving a container colour volume determined from a colour primaries parameter of a video usability information message associated with the image; code for receiving a signal colour volume; code for determining a boundary of a transformed signal colour volume in a YCbCr colour space, the transformed signal colour volume representing the signal colour volume according to the colour primaries of the container colour volume; code for determining a position of a codeword range of chroma values, in the YCbCr colour space, within the container colour volume according to the determined boundary; and code for decoding the R’G’B’ colour values of the image using the codeword range of the YCbCr chroma values.
AU2015243117A 2015-10-19 2015-10-19 Method, apparatus and system for encoding and decoding image data Abandoned AU2015243117A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2015243117A AU2015243117A1 (en) 2015-10-19 2015-10-19 Method, apparatus and system for encoding and decoding image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2015243117A AU2015243117A1 (en) 2015-10-19 2015-10-19 Method, apparatus and system for encoding and decoding image data

Publications (1)

Publication Number Publication Date
AU2015243117A1 true AU2015243117A1 (en) 2017-05-04

Family

ID=58633981

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2015243117A Abandoned AU2015243117A1 (en) 2015-10-19 2015-10-19 Method, apparatus and system for encoding and decoding image data

Country Status (1)

Country Link
AU (1) AU2015243117A1 (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801922A (en) * 2021-04-01 2021-05-14 暨南大学 Color image-gray image-color image conversion method
CN112801922B (en) * 2021-04-01 2021-07-27 暨南大学 Color image-gray image-color image conversion method
CN114860986A (en) * 2022-07-06 2022-08-05 西安工业大学 Computer unstructured data storage method


Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period