GB2625218A - Method and apparatus for conversion of HDR signals - Google Patents


Publication number
GB2625218A
GB2625218A (Application GB2402030.7A)
Authority
GB
United Kingdom
Prior art keywords
colour
gamut
pixel
range
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2402030.7A
Other versions
GB202402030D0 (en)
Inventor
Cotton Andrew
Thompson Simon
Dunne Andrew
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Broadcasting Corp
Original Assignee
British Broadcasting Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Broadcasting Corp filed Critical British Broadcasting Corp
Priority to GB2402030.7A priority Critical patent/GB2625218A/en
Priority claimed from GB2109857.9A external-priority patent/GB2608990A/en
Publication of GB202402030D0 publication Critical patent/GB202402030D0/en
Publication of GB2625218A publication Critical patent/GB2625218A/en

Landscapes

  • Processing Of Color Television Signals (AREA)

Abstract

Video signal processing of source colour gamut (1201 defined by corners 1202 – 1204) to target device colour gamut (1205 defined by corners 1206 – 1208), comprising: intermediate gamut (1310 defined by corners 1202, 1203, and 1311) where primary colour 1311 lies on the boundary of source colour gamut and where hue lies between that of corresponding primaries of source gamut and target colour gamut, such that portion of source gamut lies outside intermediate colour gamut; providing colour component of pixel, indicating position on chromaticity diagram relative to source gamut; processing colour component to indicate position relative to intermediate colour gamut; if pixel is chromatic and processed colour lies inside source colour gamut, but outside intermediate colour gamut, then its position is adjusted to a point on the boundary of intermediate colour gamut which is between original position in source colour gamut and nearest boundary of target colour gamut. Further, converting pixel to provide output signal within target colour gamut, comprising: if pixel position outside target colour gamut, adjusting position of pixel to point on boundary, or inside, target colour gamut along constant line of hue 1312. Video signal processing further relating to: white point in colour space; changing source gamut from High Dynamic Range (HDR) to Standard Dynamic Range (SDR); generating luminance ranges using logs and exponents of original luminance range.

Description

Method and Apparatus for Conversion of HDR Signals
BACKGROUND OF THE INVENTION
This invention relates to processing a video signal from a source, to convert to a signal usable by target devices. The source may have a high dynamic range (HDR) for luminance values and the target may have a lower dynamic range. Alternatively or in addition the source may have a source colour gamut and the target may have a target colour gamut.
HDR TV signals, such as the "HLG" and "PQ" formats specified in ITU-R Recommendation BT.2100, represent a wider range of luminances and a wider range of colours than is possible with standard dynamic range (SDR) TV formats, such as BT.709.
The ability of the HDR formats, such as HLG and PQ formats, to represent a wider range of luminances than SDR TV, is by virtue of their "efficient" non-linear transfer functions, which provide a better match to the human visual system than the conventional "gamma" curves specified in BT.709 (the camera opto-electrical transfer function, OETF) and BT.1886 (the display electro-optical transfer function, EOTF).
HDR video has a dynamic range for luminance, i.e. the ratio between the brightest and darkest parts of the image, of 10000:1 or more. Dynamic range for luminance is sometimes expressed as "stops" which is logarithm to the base 2 of the dynamic range. A dynamic range of 10000:1 therefore equates to 13.29 stops.
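The "stops" figure quoted above is simply the base-2 logarithm of the contrast ratio; a minimal Python sketch:

```python
import math

def stops(dynamic_range: float) -> float:
    """Express a luminance dynamic range as photographic 'stops' (log base 2)."""
    return math.log2(dynamic_range)

# A dynamic range of 10000:1 equates to about 13.29 stops.
print(round(stops(10000), 2))  # 13.29
```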
The best modern cameras can capture a dynamic range of 13.5 stops and this is improving as technology develops.
Conventional televisions (and computer displays) have a restricted dynamic range for luminance in the range of around 100:1 to 1000:1, depending on the circumstances. This is sometimes referred to as standard dynamic range (SDR).
The HDR formats -e.g. BT.2100 PQ and HLG formats -are also able to represent a wider range of colours (colour gamut) than SDR formats such as BT.709. This is by virtue of their wider and purer red, green and blue colour primaries. BT.2100 adopts the same colour primaries as ITU-R Recommendation BT.2020.
Figure 1 plots the BT.709 and BT.2020/BT.2100 colour gamuts on a CIE u'v' chromaticity chart. u'v' is shown here rather than the more usual CIE xy as the u'v' representation is more visually uniform than xy. The red, green and blue colour primaries are represented by the three corners of the colour gamut triangles, with the corners in figure 1 denoted as R, G, B according to the colour of the primary. BT.709 is able to represent all colours within the BT.709 gamut triangle.
BT.2100 and BT.2020 are able to represent all colours within the BT.2020 gamut triangle.
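The u'v' coordinates used in these charts are obtained from CIE 1931 xy coordinates by the standard CIE 1976 transformation; a minimal sketch:

```python
def xy_to_uv(x: float, y: float) -> tuple[float, float]:
    """Convert CIE 1931 xy chromaticity to CIE 1976 u'v' coordinates."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

# The D65 white point (x=0.3127, y=0.3290) maps to roughly u'=0.1978, v'=0.4683.
u, v = xy_to_uv(0.3127, 0.3290)
```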
It is often helpful to consider the colour volume, which adds a luminance dimension to the chromaticity chart of Figure 1. As the intensity of all three colour components is increased towards maximum or decreased towards zero, the gamut triangles shrink, converging on the D65 white point shown in figure 1.
HDR video provides a subjectively improved viewing experience. It is sometime described as an increased sense of "being there" or alternatively as providing a more "immersive" experience. For this reason many producers of video would like to produce HDR video rather than SDR video. Furthermore since the industry worldwide is moving to HDR video, productions are already being made with high dynamic range, so that they are more likely to retain their value in a future HDR world.
However, as many legacy television displays only have SDR capabilities, a great deal of effort has been devoted to developing techniques for converting high dynamic range (HDR) TV signals to standard dynamic range (SDR) TV. Such conversions are required whenever a high dynamic range TV service is received in the home and needs to be displayed on an SDR (BT.709) TV, or when a programme is produced in HDR and needs to be distributed through standard dynamic range (BT.709 or similar) channels or media (e.g. DVD disc). Having effective conversion techniques allows signals to be produced in HDR, whilst still servicing both HDR and SDR displays.
Various attempts have been made to convert between HDR video signals and signals useable by devices using lower dynamic ranges (for simplicity referred to as standard dynamic range (SDR)). One such approach is to modify an opto-electronic transfer function (OETF).
Figure 2 shows an example system in which a modified OETF may be used to attempt to provide such conversion. An OETF is a function defining conversion of a luminance value from a camera, sometimes referred to as a "scene-light" signal, to a "voltage" signal value for subsequent processing. For many years, a power law with exponent 0.5 (i.e. square root) has ubiquitously been used in cameras to convert from luminance to voltage. This opto-electronic transfer function (OETF) is defined in standard ITU Recommendation BT.709 (hereafter "Rec 709") as:

V = 4.5L                  for 0 ≤ L < 0.018
V = 1.099L^0.45 − 0.099   for 0.018 ≤ L ≤ 1

where:
L is luminance of the image, 0 ≤ L ≤ 1
V is the corresponding electrical signal

Note that although the Rec 709 characteristic is defined in terms of the power 0.45, overall, including the linear portion of the characteristic, the characteristic is closely approximated by a pure power law with exponent 0.5.
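The Rec 709 piecewise characteristic can be written directly from its definition; a minimal sketch:

```python
def bt709_oetf(L: float) -> float:
    """Rec 709 OETF: linear scene luminance L (0..1) to electrical signal V."""
    if L < 0.018:
        # Linear segment near black avoids infinite gain of the pure power law.
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

# Peak white (L=1.0) maps to V=1.0; the two segments join near L=0.018.
```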
Combined with a display gamma of 2.4 this gives an overall system gamma of 1.2. This deliberate overall system non-linearity is designed to compensate for the subjective effects of viewing pictures in a dark surround and at relatively low brightness. This compensation is sometimes known as "rendering intent". The power law of approximately 0.5 is specified in Rec 709 and the display gamma of 2.4 is specified in ITU Recommendation BT.1886 (hereafter Rec 1886). Whilst the above processing performs well in many systems, improvements are desirable for signals with extended dynamic range.
The arrangement shown in Figure 2 comprises an HDR OETF 10 arranged to convert linear light from a scene into R'G'B' signals. Note the prime symbol is used to denote a non-linear signal. This will typically be provided in a camera. The R'G'B' signals may be converted to Y'CBCR signals in a converter 12 for transmission and then converted from Y'CBCR back to R'G'B' at converters 14 and 16 at a receiver. The R'G'B' signals may then be provided to either an HDR display or an SDR display. If the receiver is an HDR display then it will display the full dynamic range of the signal using the HDR EOTF 18 to accurately represent the original signal created by the HDR OETF. However, if an SDR display is used, the EOTF 20 within that display is unable to present the full dynamic range and so will necessarily provide some approximation to the appropriate luminance level for the upper luminance values of the signal. The way in which a standard dynamic range display approximates an HDR signal depends upon the relationship between the HDR OETF used at the transmitter side and the standard dynamic range EOTF used at the receiver side.
Figure 3 shows various modifications to OETFs including the OETF of Rec 709 for comparison. These include a known "knee" arrangement favoured by camera makers who modify the OETF by adding a third section near white, by using a "knee", to increase dynamic range and avoid clipping the signal. Also shown is the ITU-R BT.2100 PQ arrangement ("perceptual quantizer"). Lastly, a form of the ITU-R BT.2100 HLG arrangement ("Example 400%"), using a curve that includes a power law portion and a log law portion, is also shown. The way in which an SDR display using the matched Rec 709 EOTF represents images produced using one of the HDR OETFs depends upon the OETF selected. In the example of the knee function, the OETF is exactly the same as the Rec 709 OETF for most of the curve, and only departs therefrom for upper luminance values. The effect for upper luminance values at an SDR receiver or display will be some inaccuracy which will probably be acceptable. The "Perceptual Quantizer", on the other hand, differs greatly from the conventional SDR Rec 709 OETF, so the results are likely to be unacceptable.
Unfortunately simply applying a modified HDR OETF to each of the colour components in the manner of Figures 2 and 3 and feeding the non-linear HDR signals into an SDR display changes their relative amplitudes and, consequently, changes the colour. Moreover, this cannot address the changes in colour gamut necessary for converting an HDR BT.2100 wide colour gamut signal for display on an SDR display with conventional BT.709 colour primaries. Thus techniques have been developed that introduce additional processing stages into the conversion process in an attempt to address these issues. Such a processing stage is shown in figure 4 in which an additional processing stage "processor 40" is placed between the HDR OETF 10 and the SDR EOTF 20. Figure 5 shows a simplified processing chain between a BT.2100 (HDR) input signal and a BT.709 (SDR) output signal.
As HDR formats typically have both a wider luminance range and a wider colour range than SDR formats, both the luminance range and the colour range may require compression when converting from HDR to SDR TV. The conversion may therefore involve converting from a source colour gamut, for example the colour gamut of an HDR source such as BT.2100, to a target colour gamut, for example the colour gamut of an SDR format such as BT.709, as well as converting from an HDR range of luminance values to an SDR range of luminance values.
As the wider range of luminances and the wider range of colours represented by HDR TV formats are each achieved through different signal properties, their conversion to the SDR TV ranges can be separated. Most often the luminance range is compressed first, followed by the colour range.
Several techniques exist for compressing the input signal's luminance range. The most basic technique is to "hard" clip the luminance range to a level approximating "diffuse white", the level at which conventional SDR cameras are usually adjusted to deliver their maximum signal level. Highlights above diffuse white are discarded by the clipping process. More often a "soft" clipping function, known as a "tone-mapping" curve, is used to compress the highlights captured by the HDR signal, to fit within an upper SDR signal range (e.g. 90% to 100%) of the non-linear SDR output. In some instances the tone-mapping is applied directly to the non-linear red, green and blue (R'G'B') components of the HDR signal, or even the non-linear luma (Y') component, but that usually results in hue shifts in the compressed parts of the picture. An alternative approach, that avoids the hue shifts, is to apply the tone-mapping curve to the linear luminance component of the HDR signal.
Once the input signal's luminance has been compressed, the colour range is compressed to complete the conversion from HDR to SDR format.
Figure 6 depicts a display-light conversion performed during a processing stage, such as by a processor 40, as an illustrative example of the separate treatment of luminance and colour mapping.
Here, the tone-mapping is applied to the luminance component of the image only, whilst leaving, as far as possible, the colours unchanged. This can be achieved by converting the input signal, whether in R'G'B', Y'CBCR or another format, into a subjective colour space that separates the luminance and colour aspects of the image. A suitable colour space is Yu'v', which is strongly related to the CIE 1976 L*u*v* colour space. The Y component in Yu'v' is simply the Y component from the CIE 1931 XYZ colour space, from which L* is derived in CIE 1976 L*u*v*. The u'v' components, which represent colour information independent of luminance, are simply the u' and v' components defined in CIE 1976 L*u*v* as part of the conversion from CIE 1931 XYZ. Other similar colour spaces are known in the literature and might also be used.
Figure 6 shows the main functional components of the processing which takes as an input a signal such as R'G'B' that has been provided using an HDR OETF, and provides as an output a signal such as R'G'B' capable of being viewed on an SDR display or which can be processed using a reverse process to generate a full HDR signal for presentation on an HDR display. The received R'G'B' signal may have been provided using any appropriate HDR camera OETF; in this example the BT.2100 HLG OETF has been used. However, in the case of computer-generated signals such as graphics, these may be created using linear "display-light" processing, so an inverse HDR EOTF is sometimes used to derive the non-linear R'G'B' input.
Because the input signal is derived from linear light via an OETF (non-linear) or inverse EOTF, the R'G'B' components are first transformed by converter 61 back to linear red, green and blue (RGB) display light using a model of the HLG display EOTF. If a different HDR OETF had been used, then converter 61 would be replaced by its corresponding EOTF. Linear display-light processing is used in this case as it preserves the displayed colours and tones after conversion. In some instances, for example when the goal is to match HDR cameras with SDR cameras after conversion, scene-light processing is preferred.
In that case, the input EOTF is replaced by an inverse OETF to create linear "scene light", and the output inverse EOTF is replaced by an OETF.
The conversion to XYZ may then simply be performed, as is well known in the literature, by pre-multiplying the RGB components (as a vector) by a 3x3 conversion matrix. The RGB to XYZ converter 60 receives the linear wide colour gamut RGB signals and converts to CIE 1931 XYZ format. At this stage, the XYZ signals represent the full dynamic range of the linear RGB HDR signals. An XYZ to u'v' converter 62 receives the XYZ signals and provides an output in u'v' colour space. Separately the luminance component Y is provided to a compressor 63 which applies a "tone-mapping" curve to compress to the SDR signal range, before being recombined with the u'v' colour components.
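These two conversion steps can be sketched in a few lines. Note the 3x3 matrix below is the well-known one for BT.709 primaries, used purely for illustration; a BT.2100 wide-gamut input would use the matrix for the BT.2020 primaries instead:

```python
def rgb_to_xyz(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Linear BT.709 RGB to CIE 1931 XYZ via the standard 3x3 matrix (illustrative)."""
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

def xyz_to_uv(x: float, y: float, z: float) -> tuple[float, float]:
    """CIE 1931 XYZ to CIE 1976 u'v' chromaticity."""
    d = x + 15.0 * y + 3.0 * z
    return 4.0 * x / d, 9.0 * y / d
```

Reference white (R=G=B=1) should land on the D65 chromaticity of about u'=0.1978, v'=0.4683.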
One of the tone-mapping curves illustrated in ITU-R Report BT.2446 is shown in Figure 7 as an example of how the tone-mapping may be performed.
Figure 7 shows a tone-mapping, or compression, function that maps HDR signal values to SDR signal values. The "diffuse white" signal level for HLG is specified in ITU-R Report BT.2408 as 75% of the non-linear R"G"B" signals. In the example, that represents a display light level of around 80 cd/m2. The figure illustrates how HDR input signals below approximately 60 cd/m2 are passed through to the output without any compression. Above that breakpoint a logarithmic curve applies an increasing amount of compression, such that all specular highlights above "diffuse white" fit within the BT.709 signal range. Note, in this example the output luminance is shown to extend above 100 cd/m2, often thought to be the limit of a reference SDR display, as this particular conversion exploits "headroom" available above the "nominal" BT.709 signal range, known as "super-whites". Other compression curves may be used.
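A curve of the shape described, linear pass-through below a breakpoint and logarithmic compression above it, can be sketched as follows. The breakpoint and peak values are hypothetical stand-ins, not the parameters of the BT.2446 curve:

```python
import math

def tone_map(L: float, knee: float = 60.0,
             peak_in: float = 1000.0, peak_out: float = 140.0) -> float:
    """Illustrative 'soft clip': identity below the knee, logarithmic above.

    All values are display luminances in cd/m2. The constants are assumptions
    chosen for illustration; BT.2446 specifies its own parameters.
    """
    if L <= knee:
        return L  # pass low luminances through unchanged
    # Logarithmic roll-off, continuous at the knee, mapping peak_in to peak_out.
    k = (peak_out - knee) / math.log(peak_in / knee)
    return knee + k * math.log(L / knee)
```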
After the compression function, the tone-mapping, has been applied to the luminance component, colour mapping of the Yu'v' signals needs to be performed when converting back to XYZ and then to BT.709 (standard gamut) red, green and blue signals. In other words, separately to tone-mapping with respect to luminance, the mapping between colour spaces is performed during conversion from HDR to SDR. This is performed by colour mapping module 64 of figure 6.
In particular, negative signals arise when colours are outside of the target output (e.g. BT.709) gamut triangle. Consider that an RGB signal representing a blue element, can be made more saturated by subtracting any remaining red and green light from the signal. Even though it is not possible for a display to emit negative light, mathematically the linear colour component signals can go negative.
When that occurs, the colour of an element has exceeded the colour gamut usually supported by the given RGB colour system.
Such negative linear light signals may be converted to the non-linear electrical signals by reflecting the ITU-R defined transfer functions around the zero light and zero signal axes. However, the video signal formats specified by the ITU-R usually only allow R'G'B' signals to extend to a minimum of -7% and a maximum of +109%. The EBU specifies a tighter limit for their "preferred signal range" in R.103 of -5%/+105%. If signals extend beyond the EBU limits they are likely to suffer disproportionate distortion as a result of downstream clipping or compression artefacts. If signals exceed the ITU-R specification limits, colour mapping module 64 may clip the signals. Clipping will usually change the ratio of red, green and blue as one component is likely to be clipped first. The hue of the signal will, therefore, be changed. The eye can be very sensitive to such hue shifts, so as an alternative, the colour mapping module 64 may desaturate a signal towards the white point by adding white light. For television systems the ITU-R defines the white point as D65 (per ISO 11664-2:2007), with chromaticity co-ordinates of x=0.3127, y=0.3290 or u'=0.1978, v'=0.4683. The eye is more accommodating of desaturation as it is something that occurs in nature, for example on a misty day.
Thus, simply scaling the chrominance information towards the signal's white point until all RGB colour components are within the required output signal range can be sufficient in some cases. The desaturation can be performed using a variety of colour representations, such as CIE Yxy, CIE L*a*b*, CIE Yu'v' or even the non-linear Y'CBCR representation.
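Scaling towards the white point in u'v' is a one-line operation; a minimal sketch, where the scale factor s would in practice be chosen just large enough to bring all RGB components back into the legal output range:

```python
D65_UV = (0.1978, 0.4683)  # D65 white point in CIE 1976 u'v'

def desaturate_towards_white(u: float, v: float, s: float) -> tuple[float, float]:
    """Scale a u'v' chromaticity towards D65 by factor s (0 = white, 1 = unchanged)."""
    uw, vw = D65_UV
    return uw + s * (u - uw), vw + s * (v - vw)
```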
Unfortunately, however, lines of constant hue are not straight in any of the aforementioned colour representations. So as colours are desaturated towards the D65 White Point their hue is also shifted. Figure 8 illustrates the trajectory along which colours would need to be desaturated to maintain their hue. Thus, existing colour gamut conversions often distort the hue of the input signal when converting to the target gamut.
Thus, whilst the legacy conversion techniques described above often ensure that an achromatic HDR source signal fits entirely within the SDR colour volume, they do not adequately deal with the chrominance: * Colours outside of the BT.709 colour gamut result in negative red, green or blue signals, which are often simply clipped to zero during the conversion. Such clipping usually results in visible hue distortions for source colours outside of the BT.709 gamut triangle.
* Bright saturated colours, in particular blue, may result in over-range output signals, as their luminance value can often be relatively low, and therefore unaffected by the luminance based "tone-mapping". Over-range signals may also be simply clipped, which again causes visible hue distortions.
Furthermore, * Many tone-mapping conversions cannot fully account for the effects of glare that might occur in certain HDR scenes and the existing compression functions often lead to a loss of detail in image highlights.
* As the trials of live HDR production expand the requirements on HDR to SDR format conversion are becoming more exacting. For example, engineering test signals such as Colour Bars and PLUGE are expected to pass through the HDR to SDR format conversion process in a way that still allows them to be used to assess the integrity of the end-to-end signal chain.
SUMMARY OF THE INVENTION
The inventors of the invention described herein have appreciated the need to improve upon the existing techniques for converting between colour gamuts and between ranges of luminance values. The techniques described herein have been especially developed to deliver both excellent standard dynamic range pictures from an HDR TV source, and to handle engineering test signals in a useful and predictable manner. In more detail, aspects of the invention described herein improve conversion of a video signal from a source having a source colour gamut to produce a signal usable by target devices having a target colour gamut, and/or improve conversion of a video signal from a higher dynamic range source to produce a signal usable by target devices of a lower dynamic range. We have further appreciated the need to maintain usability of video signals produced by HDR devices with equipment having lower than HDR dynamic range. We have particularly appreciated the need to avoid undesired colour changes when processing an HDR signal to provide usability with existing standards.
The invention is defined in the claims to which reference is directed.
According to a first aspect embodiments provide a method of processing a video signal from a source having a source colour gamut to produce a signal usable by target devices having a target colour gamut. The source colour gamut defines a region that encompasses the target colour gamut. The boundaries of the source colour gamut are defined on a chromaticity diagram by straight lines connecting source colour primaries and the boundaries of the target colour gamut are defined by straight lines connecting target colour primaries. The method comprises receiving the video signal from the source, the video signal comprising a pixel, and converting using a converter that implements the following or an equivalent function. An intermediate colour gamut defining a region in colour space that encompasses the target gamut is provided, in which at least one of the primaries of the intermediate colour gamut lies on a boundary of the source colour gamut and has a hue that is between the hue of a corresponding primary of the source colour gamut and the hue of a corresponding primary of the target gamut. The position of the at least one primary is such that a portion of the source colour gamut lies outside of the intermediate colour gamut. Further, the colour components of the pixel are provided, the colour components indicating a position of the pixel on the chromaticity diagram relative to the source colour gamut. The colour components are processed so as to indicate the position of the pixel on the chromaticity diagram relative to the intermediate colour gamut.
If the pixel is a chromatic pixel and the processed colour components indicate a position of the pixel in the source colour gamut that lies in the portion of the source colour gamut that is outside of the intermediate colour gamut, the position of the pixel is adjusted to a position on the boundary of the intermediate colour gamut that is between the original position in the source gamut and the boundary of the target colour gamut. The pixel is then converted to provide an output signal in the target colour gamut.
The converting the pixel to the target colour gamut comprises, if the pixel is positioned outside of the target colour gamut, adjusting the position of the pixel to a position on the boundary or inside the target colour gamut along a line of constant hue. By providing the intermediate colour gamut, adjusting the position of the pixel in colour space to a position on the boundary of the intermediate colour gamut, and converting the pixel, e.g. from the position on the boundary of the intermediate colour gamut, pixels that lie on or near the colour primaries of the source, e.g. an HDR source, are converted to the target colour gamut in a manner that advantageously conserves both the hue and the saturation of the input signal. In particular, it allows conversion along a line of constant hue that intersects the target gamut near to the corresponding colour primary of the target gamut. This results in an output signal that has full or nearly full saturation in the target gamut, resulting in less colour distortion as a result of the conversion and an improved final image.
Optionally, the chromaticity diagram may be a CIE 1931 xy or a CIE 1976 u'v' chromaticity diagram.
Adjusting the position of the pixel to a position on the boundary or inside the target colour gamut along a line of constant hue may comprise providing the pixel in a hue linear colour representation and equally scaling the components of the pixel in the hue linear colour representation in the direction of the white point of the hue linear colour representation. The hue linear colour representation may comprise a hue linear chromaticity diagram in which the lines of constant hue extending from the boundaries of a gamut defined on the hue linear chromaticity diagram to the white point of the hue linear colour representation are straight lines.
A second primary of the intermediate colour gamut may be at the same position in colour space as a corresponding primary in the source colour gamut. A boundary between two primaries of a colour gamut may form a line of maximum saturation for varying hue.
Adjusting the position of the pixel to a position on the boundary of the intermediate colour gamut may comprise moving the pixel towards the nearest boundary of the intermediate colour gamut in a direction that is substantially perpendicular to the nearest boundary of the intermediate colour gamut. Adjusting the position of the pixel in colour space may comprise providing red, green and blue colour components of the pixel and clipping any negative red, green and blue colour components of the pixel to zero which places the pixel on a boundary position in the intermediate colour gamut.
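The clipping route described in the paragraph above, zeroing any negative linear component, can be sketched as follows; the function name is an assumption for illustration:

```python
def clip_negative(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Clip negative linear RGB components to zero.

    A negative component means the colour lies outside the gamut of the RGB
    system; clipping it to zero pulls the pixel onto the gamut boundary
    (at the cost of a possible hue shift, as discussed in the background).
    """
    return max(r, 0.0), max(g, 0.0), max(b, 0.0)
```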
In some embodiments, if the pixel is an achromatic pixel, indicated by the processed colour components falling within a pre-defined achromaticity threshold, the clipping process is skipped.
The line of constant hue may be a line running from the pixel position outside the target colour space to a white point in which every point along the line has the same hue.
The step of providing the intermediate colour gamut may comprise clipping the source gamut to form the intermediate gamut.
Providing the intermediate colour gamut may comprise determining a point that is along a line of increasing saturation and constant hue of the corresponding primary of the target colour gamut that intersects the boundary of the source gamut; and defining the position of the primary of the intermediate colour gamut as the point between the point of intersection and the corresponding source primary. The at least one primary of the intermediate colour gamut may be the blue primary.
The blue primary of the intermediate gamut may be defined as the point of intersection. Alternatively, the at least one primary may be the green primary.
Providing the intermediate colour gamut may comprise determining the second and third remaining colour primaries of the intermediate colour gamut to be in the same position as the corresponding primaries of the source colour gamut. The method may further comprise adjusting the position of the second primary of the intermediate colour gamut by selecting a position at the boundary of the source colour gamut that is between the hue of a corresponding primary of the source colour gamut and the hue of a corresponding primary of the target colour gamut; and defining an area that is bounded by the source colour gamut and a clipping boundary line between the primary of the intermediate colour gamut and a position outside the source colour gamut; and clipping pixel values in the defined area to the clipping boundary line.
Each colour gamut may have a red, green and blue colour primary, wherein the primary for a given colour represents the position of maximal saturation for that colour.
The position of a pixel relative to a colour gamut, as indicated by the colour components of the pixel, may provide the hue and saturation for the pixel.
According to the first aspect embodiments provide a converter for processing a video signal from a source having a source colour gamut to produce a signal usable by target devices having a target colour gamut, wherein the source colour gamut defines a region that encompasses the target colour gamut, wherein the boundaries of the source colour gamut are defined on a chromaticity diagram by source colour primaries and the boundaries of the target colour gamut in colour space are defined by target colour primaries. The converter is configured to receive the video signal from the source, the video signal comprising a pixel, and wherein the converter is configured to implement the following or an equivalent function. An intermediate colour gamut defining a region that encompasses the target colour gamut is provided, in which at least one primary of the intermediate colour gamut lies on a boundary of the source colour gamut and has a hue that is between the hue of a corresponding primary of the source colour gamut and the hue of a corresponding primary of the target colour gamut. The position of the at least one primary of the intermediate colour gamut is such that a portion of the source colour gamut lies outside of the intermediate colour gamut. The colour components of the pixel are provided, the colour components indicating a position of the pixel on the chromaticity diagram relative to the source colour gamut. 
The colour components are processed so as to indicate the position of the pixel on the chromaticity diagram relative to the intermediate colour gamut, wherein if the pixel is a chromatic pixel and the processed colour components indicate a position of the pixel in the source colour gamut that lies in the portion of the source colour gamut that is outside of the intermediate colour gamut, the position of the pixel is adjusted to a position on the boundary of the intermediate colour gamut that is between the original position in the source colour gamut and the nearest boundary of the target colour gamut. The pixel is converted to provide an output signal within the target colour gamut, wherein converting the pixel to the target colour gamut comprises, if the pixel is positioned outside of the target colour gamut, adjusting the position of the pixel to a position on the boundary or inside the target colour gamut along a line of constant hue.
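The constant-hue clipping step can be sketched geometrically: move an out-of-gamut chromaticity towards the white point (the straight-line "constant hue" approximation) until it meets the gamut triangle's boundary. The coordinates, primaries and white point below are illustrative assumptions, not real colorimetric values.

```python
# Sketch: clip a chromaticity point to a gamut triangle's boundary by moving
# it along the straight line towards the white point. Illustrative only.

def _intersect(p, q, a, b):
    """Return t in [0, 1] where segment p->q crosses segment a->b, or None."""
    (px, py), (qx, qy) = p, q
    (ax, ay), (bx, by) = a, b
    r = (qx - px, qy - py)
    s = (bx - ax, by - ay)
    denom = r[0] * s[1] - r[1] * s[0]
    if abs(denom) < 1e-12:
        return None  # parallel segments never cross
    t = ((ax - px) * s[1] - (ay - py) * s[0]) / denom
    u = ((ax - px) * r[1] - (ay - py) * r[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return t
    return None

def clip_towards_white(point, white, triangle):
    """Move `point` towards `white` until it reaches the triangle boundary."""
    hits = []
    for i in range(3):
        a, b = triangle[i], triangle[(i + 1) % 3]
        t = _intersect(point, white, a, b)
        if t is not None:
            hits.append(t)
    if not hits:
        return point  # already inside the gamut: leave untouched
    t = min(hits)  # first boundary crossing on the way to the white point
    return (point[0] + t * (white[0] - point[0]),
            point[1] + t * (white[1] - point[1]))
```

Because the triangle is convex, a segment from an outside point to the (interior) white point crosses the boundary exactly once, so the minimum crossing parameter lands the pixel on the boundary nearest its original hue line.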
According to the first aspect embodiments provide a method of generating a 3D-LUT having values obtained by performing the method of the first aspect. The method may comprise receiving sample RGB values for sample pixels that lie within the source colour gamut, converting using the function, receiving pixel values in the target colour gamut, converting the pixel values in the target colour gamut to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
According to the first aspect embodiments provide a converter comprising a 3D-LUT generated according to the method of generating a 3D-LUT of the first aspect.
According to the first aspect embodiments provide a 3D-LUT, having values generated according to the method of generating a 3D-LUT of the first aspect.
According to a second aspect embodiments provide a method of processing a video signal for glare compensation. The method comprises receiving the video signal from a source, the video signal comprising pixels defined in a colour space, and converting using a converter that implements the following or an equivalent function. The colour space is provided having a white point defined at a particular position in the colour space. The received signal is provided as a luminance component and separate colour components for each pixel. The pixel values are processed by multiplication so as to shift the position of the white point in the colour space in the direction of a particular colour so as to increase the luminance value of pixels of the particular colour. The luminance component of each pixel is processed by applying a compression function to the luminance component of each pixel to produce a processed luminance component. The pixel values are processed by multiplication so as to shift the position of the white point back to the particular position in the colour space. The processed pixels are provided in the colour space having a white point defined at the particular position in the colour space so as to provide a processed video signal compensated for glare.
By shifting the white point, the luminance values can be adjusted such that they better conform with the compression function, such that the adjusted luminance values produce a superior result when operated on by the compression function than the original values. For example, in certain cases, it is advantageous that pixels of certain values map to a certain portion of the compression function (a lower portion, an upper portion etc.). The white point can be shifted such that the adjusted luminance values fall within the correct portion of the compression function. This may optimise the output signal.
The colour space may be one of a Yuv, an XYZ, or an RGB colour representation. The multiplication may be matrix multiplication in the colour space.
The white point may be shifted in a blue colour direction. For a lower range of luminance values, the compression function may compress the luminance component, wherein shifting the position of the white point in the colour space in the direction of the particular colour shifts the luminance value of pixels of the particular colour outside of the lower range of luminance values. The luminance value of pixels of the particular colour may be shifted to an upper range of luminance values to which compression is not applied.
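A toy sketch of the white-point-shift idea follows: multiply to push the white point blue-wards, compress the luminance, and undo the shift. The luminance weights, gain, knee position and compressor shape are all illustrative assumptions, not values from this disclosure.

```python
# Toy glare-compensation sketch: shift white point towards blue, compress
# luminance, shift back. All constants are illustrative assumptions.

# BT.2100-style luminance weights for linear R, G, B
W = (0.2627, 0.6780, 0.0593)

def compress(y, knee=0.1):
    """Toy compressor acting on the *lower* luminance range only."""
    return 0.25 * y if y < knee else y - 0.75 * knee  # continuous at the knee

def glare_compensate(rgb, blue_gain=4.0):
    shift = (1.0, 1.0, blue_gain)
    shifted = [c * s for c, s in zip(rgb, shift)]    # white point moved blue-wards
    y_in = sum(c * w for c, w in zip(shifted, W))
    ratio = compress(y_in) / y_in if y_in > 0 else 1.0
    scaled = [c * ratio for c in shifted]            # luminance compressed
    return [c / s for c, s in zip(scaled, shift)]    # white point restored
```

With the shift applied, a saturated blue pixel's luminance rises above the knee and escapes the lower-range compression, so it survives the round trip brighter than it would under plain compression (blue_gain=1.0).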
According to the second aspect embodiments provide a converter for glare compensation configured to receive the video signal from a source, the video signal comprising pixels defined in a colour space. The converter is configured to implement the following or an equivalent function. The colour space is provided having a white point defined at a particular position in the colour space. The received signal is provided as a luminance component and separate colour components for each pixel. The pixel values are processed by multiplication so as to shift the position of the white point in the colour space in the direction of a particular colour so as to increase the luminance value of pixels of the particular colour. The luminance component of each pixel is processed by applying a compression function to the luminance component of each pixel to produce a processed luminance component. The pixel values are processed by multiplication so as to shift the position of the white point back to the particular position in the colour space. The processed pixels are provided in the colour space having a white point defined at the particular position in the colour space so as to provide a processed video signal compensated for glare.
According to the second aspect embodiments provide a method of generating a 3D-LUT having values obtained by performing the method of the second aspect. The method may comprise receiving sample RGB values for sample pixels that lie within the colour space, converting using the function, converting the processed pixel values to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
According to the second aspect embodiments provide a converter comprising a 3D-LUT generated according to the method of generating a 3D-LUT of the second aspect.
According to the second aspect embodiments provide a 3D-LUT, having values generated according to the method of generating a 3D-LUT of the second aspect.
According to a third aspect embodiments provide a method of processing a video signal from a higher dynamic range source provided in a source colour space with a source colour gamut to produce a signal usable by target devices of a lower dynamic range and having a target colour space with a target colour gamut.
The method comprises receiving the video signal from the source, the video signal comprising pixels, and converting using a converter that implements the following or an equivalent function. The received signal is provided as separate colour components for each pixel. A scale factor is provided for compressing the colour components of each pixel whereby the dynamic range of the luminance of the pixel is compressed, wherein the scale factor is based on the values of the colour components when provided in the target colour space. The dynamic range of the luminance of each pixel is compressed using the scale factor operable on the colour components to provide an output signal of the lower dynamic range. Here, the colour components, when provided in the target colour space, are defined relative to the target colour gamut. In other words, the coordinate values of the pixel are defined relative to the target colour gamut. Traditional methods, whereby the compression function is based on the value of the luminance signal rather than on the individual colour components, may fail to apply any compression to blue pixels, which have a disproportionately low luminance value, so luminance-based approaches often result in over-range blue signals on the lower dynamic range output. Other approaches, whereby the compression function is based on the values of the colour components, have used the source colour gamut primaries; they are not suitable for situations where the source and target colour gamuts are different.
The scale factor for each pixel may depend upon the largest value out of the separate colour components of the pixel when provided in the target colour space. The scale factor may be operable on the colour components by multiplying each colour component by the scale factor. Alternatively, the scale factor for each pixel may depend upon the norm of the colour components of the pixel when provided in the target colour space.
The scale factor may be the ratio of the output value of a compression function divided by the input value to the compression function. The compression function may be a non-linear function that reduces the range of values from input to output. The input value for each pixel may be either i) the largest value out of the separate colour components of the pixel when provided in the target colour space, or ii) the norm of the colour components of the pixel when provided in the target colour space.
Providing the scale factor for each pixel may comprise providing the separate colour components of the pixel in the source colour gamut, the values of the colour components indicating the position of the pixel in the source colour space; processing the colour components so as to indicate the position of the pixel relative to the target colour gamut; determining an input value for a compression function, the input value being either i) the largest value out of the processed colour components of the pixel, or ii) the norm of the processed colour components of the pixel; processing the input value with the compression function to determine an output value; determining a ratio of the output value and the input value to determine the scale factor; and providing the scale factor for the pixel.
The scale factor may be operable on the colour components by multiplying each colour component by the scale factor.
The colour components may be red, green and blue colour components.
The colour components when provided in the target colour space may represent red, green and blue colour primaries of the target colour gamut. Here, the red, green and blue colour components represent the amount of the red, green and blue colour primaries that define the target colour gamut. The proportions of those colour primary signals, i.e. the relative values of the R, G and B components, define the position of a pixel on the chromaticity chart relative to those primaries and the target gamut. These are also the colour primaries of the target colour space.
According to a third aspect embodiments provide a converter for processing a video signal from a higher dynamic range source having a source colour space with a source colour gamut to produce a signal usable by target devices of a lower dynamic range and having a target colour space with a target colour gamut, and wherein the converter is configured to implement the following or an equivalent function.
The received signal is provided as separate colour components for each pixel. A scale factor is provided for compressing the colour components of each pixel whereby the dynamic range of the luminance of the pixel is compressed, wherein the scale factor is based on the values of the colour components when provided in the target colour space. The dynamic range of the luminance of each pixel is compressed using the scale factor operable on the colour components to provide an output signal of the lower dynamic range.
According to the third aspect embodiments provide a method of generating a 3D-LUT having values obtained by performing the method of the third aspect.
The method may comprise receiving sample RGB values for sample pixels that lie within the colour space, converting using the function, converting the processed pixel values to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
According to the third aspect embodiments provide a converter comprising a 3D-LUT generated according to the method of generating a 3D-LUT of the third aspect.
According to the third aspect embodiments provide a 3D-LUT, having values generated according to the method of generating a 3D-LUT of the third aspect.
According to a fourth aspect embodiments provide a method of processing a video signal from a source to produce an output signal. The method comprises converting between a luminance value and a signal value using a converter that implements the following or an equivalent function. For a first range of luminance values the signal value is derived using a first function that includes a power of the luminance value. For a second range of luminance values the signal value is derived using a second function that includes a log of the luminance value, the second range being a higher range than the first range. For a third range of luminance values the signal value is derived using a third function that includes an exponent of the luminance value, the third range being a higher range than the second range. For a fourth range of luminance values the signal value is derived using a fourth function that includes a log of the luminance value, the fourth range being a higher range than the third range.
Using these multiple functions -a "Multi-step" tone-mapping curve -greater control over the preservation of highlight detail is achieved.
The first, second, third and fourth functions may be respectively joined together at pre-determined breakpoints. The gradients of the first, second, third and fourth functions may be respectively matched at the breakpoints. The converter may comprise a 3D-LUT having values to provide the conversion.
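A sketch of such a four-segment curve is given below. The breakpoints b1, b2, b3, the power-segment exponent and the free exponential constant q3 are illustrative assumptions; the remaining constants are solved so that values and gradients match at each breakpoint, as the text describes.

```python
import math

# "Multi-step" tone curve sketch: power, log, exponential, log segments
# joined with matched value and gradient at illustrative breakpoints.

b1, b2, b3 = 0.25, 1.0, 2.0
g = 0.5                                       # power-segment exponent (assumed)

# Segment 1: s = y**g.  Value and gradient at b1:
v1, d1 = b1 ** g, g * b1 ** (g - 1)
# Segment 2: s = c2*ln(y) + k2, matched at b1 (its gradient is c2/y):
c2 = d1 * b1
k2 = v1 - c2 * math.log(b1)
v2, d2 = c2 * math.log(b2) + k2, c2 / b2
# Segment 3: s = p3*exp(q3*y) + r3, matched at b2 (q3 chosen freely):
q3 = 0.2
p3 = d2 / (q3 * math.exp(q3 * b2))
r3 = v2 - p3 * math.exp(q3 * b2)
v3 = p3 * math.exp(q3 * b3) + r3
d3 = p3 * q3 * math.exp(q3 * b3)
# Segment 4: s = c4*ln(y) + k4, matched at b3:
c4 = d3 * b3
k4 = v3 - c4 * math.log(b3)

def tone_curve(y):
    if y <= b1:
        return y ** g
    if y <= b2:
        return c2 * math.log(y) + k2
    if y <= b3:
        return p3 * math.exp(q3 * y) + r3
    return c4 * math.log(y) + k4
```

Because each segment's constants are derived from the previous segment's value and gradient at the breakpoint, the composite curve is continuous and smooth, and every segment has a positive gradient, so the curve is monotonic.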
According to the fourth aspect embodiments provide a converter for processing a video signal from a source to produce an output signal. The converter converts between a luminance value and signal value by implementing the following or equivalent function. For a first range of luminance values the signal value is derived using a first function that includes a power of the luminance value.
For a second range of luminance values the signal value is derived using a second function that includes a log of the luminance value, the second range being a higher range than the first range. For a third range of luminance values the signal value is derived using a third function that includes an exponent of the luminance value, the third range being a higher range than the second range. For a fourth range of luminance values the signal value is derived using a fourth function that includes a log of the luminance value, the fourth range being a higher range than the third range.
According to the fourth aspect embodiments provide a method of generating a 3D-LUT having values obtained by performing the method of the fourth aspect. The method may comprise receiving sample RGB values for sample pixels that lie within the colour space, converting using the function, converting the processed pixel values to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
According to the fourth aspect embodiments provide a converter comprising a 3D-LUT generated according to the method of generating a 3D-LUT of the fourth aspect. According to the fourth aspect embodiments provide a 3D-LUT, having values generated according to the method of generating a 3D-LUT of the fourth aspect.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be described in more detail by way of example with reference to the accompanying drawings, in which:
Fig. 1 shows schematically the arrangement of colour gamuts on a chromaticity chart;
Fig. 2 shows the functional components of a typical image processing chain;
Fig. 3 is a graph showing a comparison of opto-electronic transfer functions;
Fig. 4 shows the functional components of a typical image processing chain with additional processing units included;
Fig. 5 shows the functional components of another typical image processing chain with additional processing units included;
Fig. 6 shows the functional components of a processing module;
Fig. 7 is a graph showing an exemplary compression function;
Fig. 8 shows schematically a chromaticity chart showing curved lines of constant hue moving from the boundaries of the chromaticity chart to the white point;
Fig. 9 shows the functional components of the processing modules according to the invention;
Fig. 10 shows the functional components of a compression unit of the processing module according to an aspect of the invention;
Fig. 11 shows schematically a chromaticity chart showing straight lines of constant hue moving from the boundaries of the chromaticity chart to the white point;
Fig. 12 shows schematically the arrangement of colour gamuts on a chromaticity chart showing an SDR colour gamut within an HDR colour gamut;
Fig. 13 shows schematically the arrangement of colour gamuts on a chromaticity chart according to an aspect of the invention showing an SDR colour gamut within an HDR colour gamut and an intermediate colour gamut;
Fig. 14 shows the functional components of a colour conversion unit of the processing module according to an aspect of the invention;
Fig. 15 shows schematically the arrangement of colour gamuts on a chromaticity chart according to an aspect of the invention showing an SDR colour gamut within an HDR colour gamut and an intermediate colour gamut;
Fig. 16 is a graph showing an exemplary compression function according to an aspect of the invention;
Fig. 17 shows the functional components of a compression unit of the processing module according to an aspect of the invention;
Fig. 18 is a graph showing an exemplary compression function according to an aspect of the invention;
Fig. 19 shows the functional components of a compression unit of the processing module according to an aspect of the invention;
Fig. 20 shows an example Waveform Monitor;
Fig. 21 shows an illustration of the EBU Tech 3373 HLG Colour Bars;
Fig. 22 shows a description of the EBU Tech 3373 HLG Colour Bars;
Fig. 23 shows an example of Colour Bar Conversion Desaturating in ICTCp;
Fig. 24 shows desaturation with "gamut mapping" to improve Colour Bars;
Fig. 25 shows a pathological Test Image without colour conversion;
Fig. 26 shows a pathological Test Image after conversion with complex gamut mapping;
Fig. 27 shows colour bars after conversion with green and blue gamut clipping;
Fig. 28 shows a pathological Test Image after conversion with green and blue gamut clipping;
Fig. 29 shows schematically the arrangement of colour gamuts on a chromaticity chart according to an aspect of the invention showing an SDR colour gamut within an HDR colour gamut and an intermediate colour gamut;
Fig. 30 shows the functional components of a colour conversion unit of the processing module according to an aspect of the invention; and
Fig. 31 is a graph showing an exemplary compression function according to an aspect of the invention.
DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION
The present disclosure will discuss a number of aspects of the invention. These will be discussed in detail below under separate headings. Each aspect relates to conversion between colour gamuts and/or conversion from an HDR range of luminance values to a lower range of luminance values.
Firstly, however, an exemplary general processing unit in which these conversions will be carried out will be described in relation to figure 9. This processing unit may be referred to as a processor 90 for ease of discussion, and may perform the function of processor 40 in the end-to-end system chain shown in figure 4. Aspects of the invention broadly relate to improvements to the operation of legacy processing units such as that shown in figure 6, and may relate to portions of this overall unit, such as converters or conversion units that sit within the processing unit, or may relate to the processing unit as a whole.
As noted above, the processor 90 addresses two impediments to the wider adoption of high dynamic range (HDR) video. Firstly it is necessary to convert HDR video to signals recognisable as standard dynamic range (SDR) so that they may be distributed via conventional video channels using conventional video technology. Secondly a video format is needed that will allow video to be produced using existing infrastructure, video processing algorithms, and working practices. To address both these requirements, and others, it is necessary to convert HDR video into SDR video algorithmically, hence allowing automatic conversion.
The processor 90 is to be understood as a functional module that may be implemented in hardware or software within another device or as a standalone component. The function may be implemented as one or more look-up tables, such as a 3D-LUT, but the separate functional blocks are described for clarity. In other words, the entire processing chain shown in figure 9 may be implemented in a 3D-LUT generated to have values that perform the conversions described in relation to each separate functional block. Alternatively, each functional block may be implemented in its own individual 3D-LUT having values to perform the relevant conversion. In this sense, the various functional steps described hereinafter in relation to these blocks may be considered equivalent to the operation of a 3D-LUT. For example, in the below it is described that functional block 91 receives an input signal and performs a number of steps on the input signal to produce an output signal. The operation of a 3D-LUT may be considered an "equivalent function" to these various steps if the same input signal to the 3D-LUT would produce the same output signal when operated on by the 3D-LUT, i.e. the operation of the 3D-LUT on the input signal, and the operation of the various steps of functional unit 91, both result in the same output.
The values of the 3D-LUT may be pre-determined by performing the processing steps described herein on a range of input pixel values to determine the output values. The mapping between the input and output can then be provided by the 3D-LUT. The input and output values are preferably RGB values.
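A minimal sketch of this pre-computation, assuming a cubic grid of RGB nodes and nearest-node lookup (the grid size and example conversion function are illustrative; real converters would typically use finer grids and trilinear interpolation):

```python
# Sketch: pre-compute a 3D-LUT from an arbitrary per-pixel conversion
# function, then look values up by snapping to the nearest grid node.

N = 17  # LUT nodes per axis (17 or 33 are common choices)

def build_lut(convert):
    """Tabulate `convert` over an N x N x N grid of RGB values in [0, 1]."""
    step = 1.0 / (N - 1)
    return {
        (i, j, k): convert((i * step, j * step, k * step))
        for i in range(N) for j in range(N) for k in range(N)
    }

def apply_lut(lut, rgb):
    """Nearest-node lookup of an RGB triple in [0, 1]."""
    idx = tuple(min(N - 1, round(c * (N - 1))) for c in rgb)
    return lut[idx]
```

The `convert` callable stands in for the whole processing chain of figure 9: once the table is built, the chain's cost is replaced by a single lookup per pixel.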
Turning now to figure 9, first the non-linear HDR red, green, blue signals are provided to converter 91 to be converted to linear light using a model of either the HDR signal's display EOTF or the HDR signal's inverse OETF. This is to reverse the effect of the HDR OETF 10 applied at the beginning of the processing chain shown in figure 4. In this example the prime symbol, ', at the input indicates a non-linear signal and the "BT2100" suffix indicates the BT.2100 RGB colour primaries and an HDR signal range.
An EOTF is used for a "display-light" conversion that preserves the appearance of content after conversion.
An inverse OETF is used for a "scene-light" conversion that is used to match the appearance of HDR and SDR cameras.
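As a concrete example of the scene-light linearisation step, a sketch of the HLG inverse OETF is given below, using the constants published in ITU-R BT.2100. Treat this as illustrative rather than a normative implementation; it maps the normalised non-linear signal back to normalised scene light and ignores system gamma and display scaling.

```python
import math

# HLG inverse OETF constants from ITU-R BT.2100
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_inverse_oetf(e_prime):
    """Map a non-linear HLG signal E' in [0, 1] back to linear scene light E."""
    if e_prime <= 0.5:
        return e_prime * e_prime / 3.0          # inverse of E' = sqrt(3E)
    return (math.exp((e_prime - C) / A) + B) / 12.0
```

The two branches meet at E' = 0.5 (E = 1/12), and E' = 1.0 maps back to (approximately) E = 1.0, which is what makes the curve usable as a reversible camera-side encoding.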
Typically, the luminance range is then compressed to a standard dynamic range, indicated by the "BT2020" suffix using the compression unit 92. This will involve compressing the luminance of the signal using a compression function (a tone-mapping function). The red, green and blue signals are then converted from the wide colour gamut BT.2020 container to the standard colour gamut BT.709 container, indicated by the BT.709 suffix, by colour converter 93. Lastly the linear light signal is converted back to a non-linear SDR output signal by converter 94.
For display-light conversions the inverse of the reference BT.1886 EOTF is used. For scene-light conversions the BT.709 OETF is used. It will be appreciated that the RGB signal format has been used for simplicity of explanation. Any suitable colour format may be used and conversion between colour formats, such as those discussed in relation to figure 6, may be performed where appropriate depending on the use case. The broad functionality shown is to illustrate the general stages in the conversion process. Taken overall, the pre-processor shown in Figure 9 has received an HDR signal provided from a camera that used an HDR OETF, applied an inverse of that OETF and then the subsequent processing steps described above, and then at the output applied a Rec 709 OETF for an SDR display. The signal at the output is therefore similar to that which would have been provided from an SDR camera using a Rec 709 OETF, but importantly the signal still contains much of the information that was provided by the HDR camera.
When discussing the concepts at a high level, the conversion may be referred to broadly as an HDR to SDR conversion for ease of discussion. This should be understood to mean the general conversion of a signal suitable for an HDR display such as BT.2100 to a signal suitable for an SDR display such as BT.709. It is a shorthand for the overall conversion process. However, as noted above, such a conversion may involve both the conversion of luminance from HDR to SDR and conversion from a wide colour gamut to a narrower colour gamut. When discussing the details of these conversions, i.e. when discussing the details of the particular aspects of the invention, references to "HDR" and "SDR" will generally be used to refer to the dynamic range of luminance, and other notation, such as references to BT.2100 or BT.709 will be used to refer to the colour gamuts. For example, the notation HDR RGBBT2100 can be used to refer to a signal having an HDR in luminance and the wide BT.2100 colour gamut -the "HDR" referring to luminance range and the "BT.2100" referring to the colour gamut.
At an SDR receiver, the RGB signals may be used directly using a Rec 1886 EOTF.
The path from the HDR camera to an SDR display will now be considered. Recall that the RGB signal provided from the HDR device 61 has been provided according to a particular OETF. The first stage of the processor reversed the camera OETF to generate linear HDR RGBBT.2100. The luminance and colour of this input signal could go beyond those displayable on an SDR display, and so the signal will be processed to a) bring the luminance of the pixel to SDR, and b) convert the colour of the pixel into the range of the BT.709 colour gamut. This is provided by the various units of figure 9. At the output, an inverse BT.1886 EOTF is used so the signal provided looks to a receiver like SDR Rec 709 and can be displayed at the receiver using a normal SDR EOTF.
The choice of OETF or EOTF does not particularly impact the operation of the processor because, whatever the input, the first step is effectively conversion to linear light (i.e. no OETF) with sufficient precision (i.e. enough bits) to avoid artefacts. The arrangement may operate with any OETF (or inverse EOTF) that encodes HDR into a limited number of bits (e.g. 10 bits). The simplest LUT implementation would be a single 3D-LUT or a combination of 1D LUTs, multipliers and a 3D-LUT. Both the 1D LUTs and the 3D-LUT might reasonably be implemented in the camera or set-top box.
With the above as background, the remaining discussion will focus on improvements to aspects of this processing chain.
In the following description, for ease of understanding, the concept of colour gamut will be used as a visual way of explaining the process steps undertaken. A colour gamut is typically represented with two colour axes such as u'v' defining the colour and a further axis, not typically shown, for luminance. The concept of clipping pixel values within colour gamuts can be represented visually in such diagrams. The computational steps undertaken, though, may actually be performed in individual colour channels such as RGB. Thus, for a given value in a colour gamut that is to be shifted or clipped, that value may be altered in a signal such as Yu'v' by altering the equivalent values in RGB. It is for this reason that computational steps that could be undertaken directly in a colour space such as Yu'v' may be performed by some equivalent function, either by processing RGB values as individual steps or by implementing a 3D-LUT to convert directly from input RGB values to output RGB values.
The term colour space will also be used variously in the below. Depending on the aspect, the usage may simply refer to a region defined by a colour chart, or may refer to a particular colour representation such as XYZ or Yu'v'. If referring to a region defined by the colour chart, a gamut is simply a limited region on that chart. When discussing red, green and blue colour components, the term colour components refers to pixel values that represent the amount of the red, green and blue colour primaries of a working colour gamut that the pixel contains. The proportions of those colour primary signals, i.e. the relative values of the R, G and B components, define the position of a pixel on the chromaticity chart relative to those primaries and the target gamut. RGB colour components can be easily converted to different colour representations (XYZ and Yu'v') by appropriate processing.
In the below, four aspects are described separately. However, it will be appreciated that the aspects may be combined with one another as appropriate. For example, aspects two, three and four relate to processing unit 92. Aspect two, relating to glare compensation, may be performed by unit 92 first, before aspect three performs the compression from an HDR of luminance to an SDR of luminance. It is noted that aspect four may provide an alternative way of compressing luminance from HDR to SDR, or may be integrated into aspect three, wherein the tone-mapping function of the fourth aspect is used as the third aspect's tone-mapping function. Thus, aspects two and four may be used together, or all three aspects may be utilised by unit 92 to provide a conversion from HDR luminance to SDR luminance. The first aspect of the invention relates to unit 93 of figure 9 and therefore can be used in isolation or combined with any one of the other three aspects. If more than one aspect is used in the end-to-end signal chain, the aspects may be implemented by a processor performing the steps described below in relation to those aspects, or a 3D-LUT may be provided with values that provide the end-to-end conversion. In other words, if multiple aspects are being utilised, a single 3D-LUT may still be used to equivalently provide the conversion. The values of the 3D-LUT may be pre-determined by processing input values according to the steps of each aspect and determining the output values, and then providing the 3D-LUT to provide such a conversion.
Luminance enhancement based on output colour gamut

For ease of discussion, we will start by discussing the third aspect of the invention. This relates to the stage in the HDR to SDR conversion chain at which the luminance of the HDR input signal is compressed, that is, this aspect relates to the operation of compression unit 92.
One of the problems highlighted with the luminance-based tone-mapping is that, although it ensures that the output luminance signal is within the desired SDR signal range, it does not guarantee that the individual red, green and blue components will also be within range, even if those colours are within the output gamut triangle.
The difficulty in compressing colour signals arises because the video signal's luminance is calculated as a weighted sum of the linear red, green and blue components. For BT.2100 wide colour gamut HDR signals the equation is given below:

Y2100 = 0.2627 × R2100 + 0.6780 × G2100 + 0.0593 × B2100

Straightaway it can be seen that a 100% red signal or a 100% blue signal will result in a luminance signal of just 26% or 6% respectively. When dealing with compression curves such as the one shown in figure 7 and discussed above, those primary colour components pass through the tone-mapping process without being subject to any compression.
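As a quick check on the weighted sum, the coefficients can be evaluated directly; a 100% red or 100% blue input yields only about 26% or 6% luminance, which is why a luminance-based tone curve barely touches these pixels:

```python
# BT.2100 luminance as a weighted sum of linear R, G, B components.

def luminance_bt2100(r, g, b):
    return 0.2627 * r + 0.6780 * g + 0.0593 * b
```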
Moreover, as the BT.2100 red primary is different from the BT.709 red primary, it requires some negative green and negative blue to be faithfully reproduced in BT.709. Similarly, as the BT.2100 blue primary is different from the BT.709 blue primary, it requires a small amount of red and some negative green to be faithfully reproduced in BT.709. However, without any compression, one or more of the colour components will be crudely clipped on the converter's output, distorting the hue.
We have appreciated that there are techniques that apply the dynamic range compression indirectly, based on the maximum of the red, green and blue components (MaxRGB) of the HDR input signal, rather than a weighted sum of red, green and blue such as the luminance. Here, the RGB components are HDR components and are defined in relation to the BT.2100 colour gamut. A compression ratio is calculated by applying a "tone-mapping" function, similar to that of Figure 7, to the MaxRGB signal and dividing the output value by the input, thus:

Min = Max(HDR RGBBT.2100)

where HDR RGBBT.2100 are the HDR RGB values the input pixel takes in the BT.2100 colour gamut (they are display-light pixel output values), and where Min represents the input for the tone mapping function,

Mout = F[Min]

where F[] is a tone-mapping function.

(SDR RGBBT.2100) = (Mout/Min) × (HDR RGBBT.2100)

The compression ratio, Mout/Min, is then applied as a single scale factor to the red, green and blue components, which is equivalent to scaling the signal luminance. Here, SDR RGBBT.2100 are components that have been compressed to the SDR range of signal values - i.e. the luminance of the pixel has been scaled to the SDR range by the scaling factor Mout/Min. The signal hue is therefore preserved through the scaling, and the signal is still defined in relation to the BT.2100 colour gamut. As the compression ratio is based on red, green and blue input signal components, it is straightforward to ensure that the "tone-mapping" function, F[], also ensures that the corresponding red, green and blue output signals lie within the desired SDR signal range.
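The MaxRGB scaling can be sketched as below. The tone-mapping function F[] here is an illustrative stand-in of our own devising, not the curve of Figure 7 or Figure 31:

```python
import math

# Illustrative tone-mapping function F[]: identity below a knee,
# soft exponential roll-off above it (an assumed curve for this sketch).
def tone_map(m, knee=0.5, max_out=1.0):
    if m <= knee:
        return m
    return knee + (max_out - knee) * (1.0 - math.exp(-(m - knee) / (max_out - knee)))

# Apply Mout/Min as a single scale factor to all three components,
# which scales the luminance while preserving the hue.
def compress_pixel(rgb):
    m_in = max(rgb)
    if m_in <= 0.0:
        return rgb
    ratio = tone_map(m_in) / m_in
    return tuple(c * ratio for c in rgb)
```

Because every component is multiplied by the same ratio, the ratios between R, G and B (and hence the hue) are unchanged, and the maximum output component never exceeds the curve's maximum.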
One disadvantage of basing the compression ratio on the maximum of red, green and blue is that the blue channel is often quite noisy. This can make the compression ratio noisy when blue is the dominant component. That will then propagate noise from the blue channel into red and green, where it will be much more visible. So, it is often better to base the compression ratio on a "norm" function rather than a simple "MaxRGB", such as:

(Red³ + Green³ + Blue³) / (Red² + Green² + Blue²)

The exponents may be selected to balance the noise sensitivity of the compression algorithm against its ability to fill the SDR colour volume. Lower powers will be less susceptible to noise, but may apply too great a compression factor, thereby failing to fill the available SDR colour volume.
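With the exponents 3 and 2 given above, the norm reduces to the dominant channel's value for a pure primary and to the common value for a grey pixel, as this sketch shows:

```python
# "Norm" alternative to MaxRGB: less sensitive to noise in a single
# (often blue) channel. Exponents p and q are tunable.
def norm(r, g, b, p=3, q=2):
    den = r**q + g**q + b**q
    if den == 0.0:
        return 0.0
    return (r**p + g**p + b**p) / den
```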
Previous approaches have not, however, considered indirect MaxRGB or Norm based compression combined with colour space conversion, where the primaries of the input and output RGB signals differ. This is exactly the case when converting from BT.2100 HDR signals to BT.709. Where that is the case, it is appreciated by embodiments of this aspect of the invention that the MaxRGB signal or "Norm" signal should be calculated using RGB signal values defined in relation to the output colour gamut rather than the input colour gamut. Moreover, this approach can also be used with CAM based down-mapping, whereby the input "lightness" may be scaled by a compression ratio derived through "tone-mapping" of the MaxRGB or Norm signal. This is illustrated in Figure 31, which represents a compression function mapping the value of a given Max(HDR RGBBT.709') to its corresponding SDR value. The exact form of the curve can be chosen based on a given use case, as would be clear to one skilled in the art. An exemplary curve is shown in figure 31.
This distinguishes this technique from those such as that shown in figure 6, in which the signal is separated into separate chrominance and luminance components and the luminance component is compressed according to its magnitude. Rather, the compression is performed based on the magnitude of the individual red, green and blue components. Further, it is distinguished from the above described techniques that based their conversion on input RGB values. Here, the compression is performed by calculating the MaxRGB signal or "Norm" signal using RGB signals in the output SDR colour space.
Thus, the operation of compression unit 92 is altered such that it processes a video signal from a higher dynamic range source having a source colour gamut to produce a signal usable by target devices of a lower dynamic range and having a target colour gamut. The processing, in broad terms, comprises receiving the video signal from the source, the video signal comprising pixels, and converting using a converter that implements the following steps or a function that is equivalent to these steps, such as via a 3D-LUT. The received signal is provided as separate colour components for each pixel. A scale factor for compressing the dynamic range of the colour components of each pixel is also provided, wherein the scale factor is based on the values of the colour components when provided in the target colour space - i.e. the colour components defined relative to the target colour gamut. The dynamic range of the colour components of each pixel is compressed using the scale factor to provide an output signal of the lower dynamic range. The scale factor is operable on the colour components by multiplying each colour component by the scale factor.
Figure 10 shows an arrangement of compression unit 92 according to embodiments of this aspect of the invention. A linear HDR RGBBT.2100 input pixel is passed to the compression unit 92 from converter 91 of figure 9.
The input pixel is provided in the HDR BT.2100 representation. In other words, the luminance of the pixel has a high dynamic range of values and the pixel has a wide colour gamut. To illustrate this, the colour components of the input pixel are given the following notation: HDR RGBBT.2100, with "HDR" indicating that the input pixels have an HDR range of luminance, and "BT.2100" indicating that the input RGB values are defined in relation to the wide BT.2100 colour gamut.
The values of the input pixel are then defined in relation to the BT.709 colour gamut. This conversion may be provided by a converter (e.g. a processor or a suitable LUT) that applies a suitable matrix multiplication in order to perform the conversion. Here, functional unit 1003 receives the linear HDR RGBBT.2100 signal and converts the values to define them relative to the BT.709 colour gamut.
Note here that the pixel values are not shifted in colour space, nor is the luminance of the pixel affected. The coordinates of the pixel are simply being expressed in relation to the BT.709 gamut rather than the BT.2100 gamut. For example, if the pixel has values that put it outside the BT.709 gamut, those values that are outside the BT.709 gamut will take on negative values when defined in relation to the BT.709 gamut. In other words, the colour components are provided in the target colour space (the target colour space being the area of a chromaticity chart, for example), with their values being defined relative to the target gamut. The result of this conversion is the values HDR RGBBT.709'. Note that the ' symbol is being used to illustrate that the RGB values are defined relative to the BT.709 colour gamut, not that the values themselves have been adjusted in colour space to bring them within the BT.709 colour gamut. This notation is used for ease of discussion and it is noted that in other aspects of the invention, the ' symbol is omitted to illustrate a signal whose values have been adjusted to bring them into the BT.709 colour gamut.
These values are then provided to processor 1005 that determines Max(HDR RGBBT.709') for the input signal. Max(RGB) is the maximum of each of the values of R, G and B - e.g. if R=0.5, G=0.6, B=0.7, Max(RGB) = 0.7. Here, the important difference from prior arrangements can be appreciated. The Max(HDR RGBBT.709') is determined based on the component values defined relative to the BT.709 colour gamut - the maximum value out of the R, G and B components, when defined relative to the BT.709 colour gamut, is determined.
Once Max(HDR RGBBT.709') is determined, Mout/Min can then be found as follows:

Min = Max(HDR RGBBT.709')

Mout = F[Min]

where F[] is a tone-mapping function that performs a compression function, for example as shown in figure 31.
This can be performed either by processor 1005, where Mout/Min is passed to the compressor 1004, or this can be performed by the compressor 1004, in which case processor 1005 passes Max(HDR RGBBT.709') to the compressor 1004 and the compressor then calculates Mout/Min.
In parallel to this, the original input pixel HDR RGBBT.2100 is provided to the compressor 1004. The compressor therefore has both the input pixel and Mout/Min.
From this, the compressor applies the scale factor to the input pixel to determine the signal value SDR RGBBT.2100:

SDR RGBBT.2100 = (Mout/Min) × (HDR RGBBT.2100)
The compression ratio, Mout/Min, is applied as a single scale factor to the red, green and blue components of the HDR BT.2100 input signal, which is equivalent to scaling the signal luminance. Here, SDR RGBBT.2100 are components that have been compressed to the SDR range of signal values - i.e. the luminance of the pixel - but remain defined according to the BT.2100 colour gamut. The input signal has been scaled to the SDR range of luminance by the scaling factor Mout/Min while leaving colour unchanged; the signal hue is preserved through the scaling.
The same processing may be performed for "norm", where (Red³ + Green³ + Blue³)/(Red² + Green² + Blue²) is calculated for an input HDR RGBBT.2100 signal. Again, here, the Red, Green and Blue values used in the norm are obtained by translating the HDR RGB input values to their values in the BT.709 output colour format, and calculating the norm based on those values.
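The steps of this aspect can be sketched end to end as below. The BT.2100-to-BT.709 matrix values are the rounded figures commonly published for this linear conversion (e.g. in ITU-R Report BT.2407); they are an assumption of this sketch rather than values stated in this document, and `tone_map` stands in for any F[]:

```python
# Rounded linear BT.2100/BT.2020 RGB -> BT.709 RGB matrix (assumed values).
BT2100_TO_BT709 = (
    ( 1.6605, -0.5876, -0.0728),
    (-0.1246,  1.1329, -0.0083),
    (-0.0182, -0.1006,  1.1187),
)

def to_bt709(rgb2100):
    # Express the pixel's components relative to the BT.709 gamut
    # (values outside that gamut become negative).
    return tuple(sum(m * c for m, c in zip(row, rgb2100)) for row in BT2100_TO_BT709)

def compress_in_output_gamut(rgb2100, tone_map):
    # MaxRGB is measured in the *output* gamut, but the resulting
    # scale factor is applied to the BT.2100 components.
    m_in = max(to_bt709(rgb2100))
    if m_in <= 0.0:
        return rgb2100
    ratio = tone_map(m_in) / m_in
    return tuple(ratio * c for c in rgb2100)
```

The same structure applies for the "norm": replace `max(...)` with the norm of the BT.709-relative components.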
The scaling of RGB signals based on MaxRGB or a Norm function in this way can limit over-range signals on the SDR output in a manner that minimises colour distortions of the output signal.
Gamut Clipping

We will now discuss the first aspect of the invention, which relates to the colour conversion/colour mapping stage in the HDR to SDR conversion chain, that is, colour converter 93 of figure 9.
Even though the scaling of RGB signals based on MaxRGB or a Norm function can limit over-range signals on the SDR output, it does not address the issue of negative RGB signals discussed above in relation to figure 6. As discussed above, existing colour gamut conversions often distort the hue of the input signal when converting to the target gamut.
For that reason, interest is growing in using other colour representations for the desaturation, such as ICTCp and JzAzBz, which have been optimised to improve their perceptual uniformity and hue linearity. Desaturating the "colourfulness" signal obtained through the CAM based down-mapping approach also works well. Figure 11 illustrates the improvement in hue linearity using the ICTCp colour representation.
Desaturating colours towards the white point using a "hue linear" colour representation or CAM results in more natural images, with fewer visible hue distortions. However, this desaturating results in extreme desaturation of colour signals that are on, or near to, the BT.2100/BT.2020 blue and green colour primaries when converted to BT.709.
As with Figure 1, the outer triangle 1201 in Figure 12 represents the source colour gamut, which defines a region of a colour chart or diagram, here the BT.2100/BT.2020 colour gamut in the u'v' colour chart, with each tip of the triangle 1201 at a colour primary co-ordinate in the u'v' colour representation. Corner 1202 represents the HDR red colour primary in u'v' colour format, corner 1203 represents the HDR green colour primary in u'v' colour format, and corner 1204 represents the HDR blue colour primary in u'v' colour format. The boundaries of the source colour gamut are defined by source colour primaries, wherein a boundary between two primaries of a colour gamut forms a line of maximum saturation for varying hue. Here, colour primary means the red, green and blue light sources that can be mixed together in varying amounts to produce the desired colour pixel.
The source colour gamut defines a region of colour that encompasses the target colour gamut, here the BT.709 colour gamut, represented by the inner triangle 1205. Each tip of the triangle 1205 is at a colour primary co-ordinate in the u'v' colour representation. Corner 1206 represents the SDR red colour primary in u'v' colour format, corner 1207 represents the SDR green colour primary in u'v' colour format, and corner 1208 represents the SDR blue colour primary in u'v' colour format.
The thin blue curved line 1209 illustrates the line of constant hue when plotted in the u'v' colour representation. It is the path taken when desaturating the BT.2020/BT.2100 blue primary using the JzAzBz colour representation, i.e. simply scaling the Az and Bz components equally towards zero. These would be straight lines from the colour primary co-ordinates to the origin if plotted on an AzBz chromaticity chart. We will, however, continue to use the u'v' colour representation, as the BT.2100/BT.2020 and BT.709 colour gamuts have an unusual and uneven shape when plotted in AzBz, making it harder to interpret the colour gamuts. As will be appreciated, these lines of constant hue can be determined regardless of the colour space, via appropriate transformations and/or mathematical operations.
Comparing the AzBz desaturation traces against the lines of constant hue depicted in Figure 8 shows a close match. The maximum saturation at which the BT.2100 blue colour primary can be represented, without a hue distortion, is given by the point at which the blue curve 1209 crosses the BT.709 gamut triangle 1205. The distance between that intersection and the D65 white point is quite short compared with the distance between the D65 white point and the BT.709 blue primary. This means that the hue of the BT.2100 (the HDR) blue colour primary is a great deal less saturated than the BT.709 colour primary itself, when limited to the BT.709 colour gamut. It is simply not possible to reproduce the hue of the BT.2100 blue primary at a reasonable saturation within that gamut.
This results in signals on or near the colour of the BT.2100 blue primary appearing very washed out in the SDR colour gamut.
One option for dealing with this is to distort the hue of the BT.2100/BT.2020 colour primaries, to a hue that is easier to represent in BT.709, but to do that in a very controlled manner. If care is taken, the hue of the colour primary (and nearby colours) need only be changed by a small amount to give what appears to be a more visually correct representation for signals such as those near the BT.2100 blue colour primary (e.g. bright blue LED or neon lights).
However, this requires complex processing and the complex gamut mapping algorithm may have difficulty with images having colours near the BT.2100 colour primaries (e.g. bright blue LED or neon lights). Further, there are concerns regarding how such algorithms handle saturated but slowly varying coloured surfaces, which may result in sharp transitions from saturated to non-saturated output colours. Although not generally apparent in natural images, when they do occur, the effect of the sudden changes in output colour saturation may appear as "speckled" noise patterns as the input signal traverses the saturation boundaries.
A far more elegant approach has been appreciated. This approach is based on shaping the input colour gamut prior to conversion through a form of clipping.
This has been found to give better and more predictable results with both engineering test signals and natural images.
Each of the wide colour gamut input primaries is treated differently, as their relationship to the corresponding output colour primary differs, but the principles for each colour primary are the same.
As we saw previously, the BT.2100 blue primary cannot be represented at high saturation within the BT.709 colour volume. The most saturated blue that can be reproduced with BT.709 is of course the BT.709 blue primary colour itself. So, in order to deliver saturated colour bars after conversion, the hue of the BT.2100 blue primary is shifted to the same hue as the BT.709 output primary. But most importantly, it is shifted to that hue, at a much higher saturation than the BT.709 blue primary itself. The exact position of the shifted blue primary can be chosen to balance the degree of hue distortion and saturation of the output, but in practice moving it to exactly match the most saturated BT.709 blue hue that can be represented in the input BT.2100 colour gamut has been found to work well.
This is shown in figure 13, which is a recreation of figure 12 with like features shown with like reference numbers, but showing an intermediate gamut 1310 formed by the BT.2100 blue colour primary 1204 being moved to position 1311.
If the BT.2100 blue primary 1204 were to be shifted to exactly the colour of the BT.709 blue primary 1208, for example by hard clipping the input gamut 1201 to the target BT.709 gamut 1205, unacceptable hue distortions are unnecessarily introduced into many other colours that are just outside of the BT.709 gamut 1205. Blue skies and deep blue underwater scenes can be particularly adversely affected.
So, rather than clip the input gamut to exactly the BT.709 output gamut 1205, it is clipped to the intermediate gamut triangle 1310, with its blue primary 1311 exactly the same hue as the BT.709 colour primary 1208, but the most saturated version of that blue hue that can be achieved within the BT.2100 gamut 1201. Figure 13 illustrates how this is achieved in its simplest form.
Using a "hue linear" colour representation, in this case JzAzBz, the output BT.709 blue primary 1208 is increased in saturation until it lies on the input colour gamut boundary - BT.2100 in the case of our example. In other words, the saturation of the corresponding blue primary of the target colour gamut is increased until the corresponding blue primary intersects the boundary of the source gamut. This is achieved by working in the JzAzBz colour space and equally scaling its Az and Bz components. The point at which the BT.2100 gamut 1201 boundary is reached can easily be determined by converting the JzAzBz values back to red, green and blue signals with BT.2100/BT.2020 colour primaries. As discussed previously, the BT.2100 gamut boundary is exceeded when one or more of the RGB components starts to go negative. Once this point is found, the position of the corresponding blue primary of the source colour gamut is adjusted to the point of intersection to form the intermediate colour gamut. This may be done by a matrix multiplication and clipping operation, for example, as described in more detail below.
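The boundary search described above can be sketched as a bisection on the chroma scale factor. Here `to_rgb2100` (a JzAzBz-to-linear-BT.2100-RGB conversion) is an assumed helper, not a function defined in this document:

```python
# Sketch: scale the chroma (Az, Bz) of a target-gamut primary outward,
# keeping Jz fixed, until a BT.2100 RGB component would go negative,
# then bisect onto the source gamut boundary.
def extend_to_source_boundary(jz, az, bz, to_rgb2100, iters=48):
    def inside(k):
        # The gamut boundary is exceeded when any component goes negative.
        return min(to_rgb2100(jz, k * az, k * bz)) >= 0.0
    hi = 1.0
    while inside(hi):          # grow until we leave the source gamut
        hi *= 2.0
    lo = hi / 2.0
    for _ in range(iters):     # bisect onto the boundary
        k = 0.5 * (lo + hi)
        if inside(k):
            lo = k
        else:
            hi = k
    return jz, lo * az, lo * bz
```

Because the same factor scales Az and Bz, the hue is held constant while only the saturation increases, matching the Table 1 construction.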
The most saturated BT.709 red and green primary hues that can be accommodated within the BT.2100/BT.2020 gamut are calculated in the same way.
Their u'v' values, along with those of the original BT.709 R, G and B colour primaries, are shown in Table 1. The values will differ slightly for different colour representations or CAMs.
Table 1 - Increasing the saturation of BT.709 primaries in JzAzBz

        BT.709 Colour Primaries          Most saturated versions in BT.2100
        x       y       u'      v'       u'      v'
Red     0.640   0.330   0.451   0.523    0.498   0.525
Green   0.300   0.600   0.125   0.563    0.108   0.579
Blue    0.150   0.060   0.175   0.158    0.177   0.144

The determination of the intermediate gamut 1310 may be performed by the colour converter 93, or may be pre-determined outside of the signal processing chain of figure 9 by a suitable pre-processor, and pre-programmed into the converter 93 to use when processing input signals.
Either way, the intermediate colour gamut defining a region of colour that encompasses the target gamut is provided. At least one of the primaries of the intermediate colour gamut, here the blue primary, lies on a boundary of the source colour gamut and has a hue that is between a corresponding primary of the source colour gamut and a corresponding primary of the target gamut, the position of the at least one primary being such that a portion 1305 of the source colour gamut lies outside of the intermediate colour gamut.
Once the new saturated blue primary has been provided, the input signals can be processed according to the new intermediate colour gamut, bounded by the most saturated BT.709 primary, and the BT.2100 red and green primaries. The new gamut 1310 is illustrated by the dashed "wide blue" and solid BT.2100 blue lines in Figure 13. The input signal may represent one or more pixels of an image or video.
Referring now to Figure 14, an embodiment of the colour converter 93 according to this aspect of the invention is shown. The input signals are linear signals that have undergone luminance compression in converter 92 of figure 9, for example following the method of the third aspect of the invention discussed above. The input signals thus have their luminance values in the target -SDR -range, but are still in the source colour gamut, here the BT.2100 gamut. It will be appreciated that the processing may be performed at a different location in the signal chain. For example, the colour mapping may be performed as a pre-processing step between blocks 91 and 92 of figure 9.
The input signals are input into converter 1401, where they are converted to the CIE 1931 XYZ colour representation using the standard matrix transformation, thus:

[X]   [0.6370  0.1446  0.1689] [R_BT.2100]
[Y] = [0.2627  0.6780  0.0593] [G_BT.2100]
[Z]   [0.0000  0.0281  1.0610] [B_BT.2100]

The XYZ signals are then fed into converter 1402, where they are converted to RGB signals with the new intermediate colour gamut primaries, i.e. the BT.2100 red and green and the saturated BT.709 blue given in Table 1 and shown in Figure 13.
These are referred to as RGBINT. The conversion matrix can again be calculated using a standard transformation technique. An example of the calculation can be found in Annex G of ITU-R Report BT.2408. For the "wide-blue" intermediate gamut 1310 that uses the blue primary given in the rightmost columns of Table 1, and illustrated in Figure 13, the matrix conversion is shown below:

[R_INT]   [ 1.7953  -0.3701  -0.3087] [X]
[G_INT] = [-0.6667   1.6165   0.0158] [Y]
[B_INT]   [ 0.0176  -0.0428   0.9421] [Z]

It is noted here that the RGBINT colour components indicate a position of the pixel in colour space relative to the intermediate colour gamut; their position on the colour chart has not yet been modified. Rather, their original coordinates relative to the BT.2100 gamut have been translated to the values of those same coordinates in the intermediate gamut (this is akin to the RGBBT.709' signal discussed above). Hence, the negative values indicate a coordinate position outside of the intermediate gamut.
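The two matrix stages above can be chained as a simple sketch, using the coefficient values from the text (function names are ours):

```python
# Forward matrix: linear BT.2100 RGB -> CIE XYZ (values from the text).
RGB2100_TO_XYZ = (
    (0.6370, 0.1446, 0.1689),
    (0.2627, 0.6780, 0.0593),
    (0.0000, 0.0281, 1.0610),
)

# XYZ -> "wide blue" intermediate-gamut RGB (values from the text).
XYZ_TO_RGBINT = (
    ( 1.7953, -0.3701, -0.3087),
    (-0.6667,  1.6165,  0.0158),
    ( 0.0176, -0.0428,  0.9421),
)

def matvec(m, v):
    return tuple(sum(a * b for a, b in zip(row, v)) for row in m)

def bt2100_to_intermediate(rgb):
    # Express a BT.2100 pixel's coordinates relative to the intermediate
    # gamut; negative components indicate a position outside that gamut.
    return matvec(XYZ_TO_RGBINT, matvec(RGB2100_TO_XYZ, rgb))
```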
Once the XYZ input signal is converted to the new RGB intermediate primaries, RGBINT is passed to clipping unit 1403. Here, negative RGB signals are clipped to zero; the clipping to 0 is simply a process of replacing any negative value of each of R, G and B with the value zero, thereby ensuring that values that fall outside the intermediate colour gamut fall on the boundary of that colour gamut.
An exception to the clipping process may be provided for achromatic signals, where red = green = blue. This is because negative achromatic signals frequently appear in engineering test signals, so should be preserved if at all possible. This is done by defining a range of RGB values considered as achromatic - e.g. by defining thresholds within which achromatic pixels fall - and then skipping the clipping processing for any pixels falling within the thresholds - i.e. any pixels defined as achromatic (here meaning any pixels having values falling within the predefined thresholds). These thresholds can simply be pre-set for a certain use case. For example, value ranges that result in substantially white or substantially black pixels can be found during initial calibration, and thresholds then set such that pixels with values falling within the value ranges fall within the thresholds and are thus determined as achromatic pixels.
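The clipping with its achromatic exception can be sketched as follows; the tolerance value is an assumed, use-case-dependent threshold:

```python
# Clip negative intermediate-gamut components to zero, but pass
# achromatic (R ≈ G ≈ B) signals through unchanged, even if negative.
def clip_to_intermediate(rgb, achromatic_tol=0.001):
    if max(rgb) - min(rgb) <= achromatic_tol:
        return rgb                       # achromatic: preserve as-is
    return tuple(max(0.0, c) for c in rgb)
```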
The result of the clipping process is a clipped signal RGBclip in which the negative RGBINT values have been shifted such that the equivalent values represented in the colour gamut of figure 13 are shifted from outside the intermediate gamut 1310 onto the boundary of the intermediate gamut 1310. In other words, any values lying between the dotted line of gamut 1310 shown in figure 13 and the adjacent boundary of the source gamut 1201 of figure 13 - i.e. in region 1305 - are clipped onto the boundary of the gamut 1310. This means that the BT.2100 signals are adjusted to lie within the intermediate colour gamut 1310.
The result of this clipping in u'v' colour representation is shown in figure 29. It is noted that the operations of this aspect are, in this embodiment, performed in RGB and XYZ colour representations. The effect of these operations have been shown in u'v' space as this most clearly shows, visually, the effect of the operations.
It can be seen that pixels falling in region 1305, between the source colour boundary and the intermediate gamut boundary, are clipped to the intermediate gamut boundary. The clipping direction - i.e. the direction the pixels are translated in u'v' colour space - depends on the position of the pixel in region 1305. As is shown in figure 29, pixels falling halfway between colour primaries 1203 and 1204 of the source gamut are shifted in a direction perpendicular to the boundary line between primaries 1203 and 1204, as shown by arrow 2902. As pixel locations move closer to primary 1203 or primary 1204, the direction in which the pixel is shifted tends towards a direction parallel to the boundary line joining primaries 1203 and 1202 (as shown by arrow 2903) and a direction parallel to the boundary line joining primaries 1204 and 1202 (as shown by arrow 2904) respectively. The direction of movement is a result of the clipping of the negative values in RGB space to 0.
After clipping, the clipped signals RGBclip are converted back to XYZ by converter 1404, using the inverse of the forward transformation matrix.
This signal is then passed to desaturation unit 1405, which then desaturates the signal such that if the pixel is positioned in colour space outside of the target colour gamut, the position of the pixel in colour space is adjusted to a position inside or on the boundary of the target colour gamut along a line of constant hue. Thus, even though unit 1405 is shown with inputs and outputs in the XYZ colour representation for ease of discussion, its internal processing will use a hue linear colour representation such as JzAzBz or ICTCp and will equally scale the components of the signal in the hue linear representation directly towards - i.e. in the direction of - the white point. Again, we have shown this processing graphically in u'v' space in the figures for ease of illustration. As can be seen from figure 13, a point near the intermediate blue primary will be converted to a position in the target gamut with much less desaturation than if the original source primary were used. This is because the intermediate blue primary is in line with the hue of the target gamut. This means that, when a pixel is desaturated along a line of constant hue to bring the pixel inside the target gamut, it crosses the target gamut at a point of far higher saturation than that of the original source gamut. This can be seen by comparing the point of intersection of line 1209 with that of 1312 in figure 13. The result of this is a signal XYZBT.709 that lies within the target gamut (note the lack of the ' to signify this).
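The desaturation towards the white point can be sketched as a bisection on the chroma scale, the mirror image of the saturation-extension search. Here `in_target_gamut` (a test of whether a point lies within the target gamut, e.g. all BT.709 components non-negative and in range) is an assumed predicate:

```python
# Sketch: desaturate along a line of constant hue by equally scaling
# the chroma components (e.g. Az, Bz) towards the white point (zero
# chroma) until the pixel lies inside the target gamut.
def desaturate_to_target(j, a, b, in_target_gamut, iters=48):
    if in_target_gamut(j, a, b):
        return j, a, b                 # already inside: leave untouched
    lo, hi = 0.0, 1.0                  # chroma scale factor
    for _ in range(iters):
        k = 0.5 * (lo + hi)
        if in_target_gamut(j, k * a, k * b):
            lo = k
        else:
            hi = k
    return j, lo * a, lo * b
```

Scaling a and b by the same factor keeps the hue angle constant, so the pixel moves directly towards the white point.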
Finally, the signal XYZBT.709 is passed to converter 1406 to convert the signal to an output RGBBT.709 signal to complete the conversion from HDR to SDR.
This completes the processing of unit 93 of Figure 9.
Whilst the BT.2100 blue primary is the most problematic in terms of saturation, BT.2100 green primaries also appear desaturated after conversion.
So, a similar gamut clip can be applied to green signals. However, whilst the hues of the BT.2100 and BT.709 blue primaries are similar, the BT.2100 and BT.709 greens are quite different. The BT.2100 green primary is much more cyan in appearance than the BT.709 green. So rather than clip the BT.2100 green to exactly the same hue as the BT.709 green, better results are obtained by clipping to an intermediate green primary that lies partway between the BT.2100 green and the most saturated BT.709 green given in Table 1. This is shown in figure 15 by point 1502. Its precise position is chosen to give the desired balance of hue distortion and saturation. The closer the primary is to the hue of the BT.709 primary, the greater its saturation but also the greater the hue distortion.
To avoid excessive clipping (and therefore hue distortions) of cyans and blues during the green clipping, Figure 15 shows how the clipping process for the green primary may use a position outside the source colour gamut, namely point 1503. This point, a corner of the "wide green" intermediate gamut shown in dotted green lines, is outside of the visible spectrum. But this "virtual" primary can still be used to define the mathematical gamut clipping triangle for saturated greens.
The transformation between the various XYZ signals and the RGB signals for the green intermediate gamut are performed by colour conversion unit 93 in a similar manner as described for the blue primary transformation described with respect to Figure 14, and will not be repeated in full here. The matrices used for the transformation may be calculated in a similar manner to the transformation matrices described above.
This green primary clipping can be implemented instead of the blue primary clipping, or in addition to the blue primary clipping. If implemented instead of blue primary clipping, then the same processing is performed as shown in figure 14. The main difference is that the region within which pixels are clipped is defined by the dotted line 1501. In particular, any pixel falling between the source colour gamut and the dotted line 1501 -i.e. region 1504 -will be clipped to the green dotted line. The clipped pixels are then shifted along lines of constant hue as described in relation to the blue primary clipping.
A processing chain which applies both the blue primary clipping and the green primary clipping is shown in figure 16. Here, the final intermediate gamut produced as a result of the two clipping operations will no longer have a triangular shape but will in fact have a shape carved out by the two consecutive clipping operations.
In this example, the order of the clipping is blue and then green; however, it will be appreciated that the order may be reversed if appropriate. First, the pixel is processed according to the blue primary clipping up until processing unit 1403. Up until this point, the processing is identical to the processing of figure 14. The result is that any pixel falling in region 1305 (defined by the line between primary 1203 and primary 1204 and the line between primary 1203 and primary 1311 of figure 13) is clipped to the wide blue dotted line 1506 between primary 1203 and primary 1311. This is the first clipping operation. This outputs a signal RGBclip1 as shown in figure 30.
Next, the coordinates of the pixel - RGBclip1 - are defined in relation to the "wide green" intermediate colour gamut. This is the gamut defined by boundary lines joining the primaries 1502, 1503 and 1202. This may be done by appropriate matrix multiplication as described for the blue primary clipping above. For example, in figure 30, RGBclip1 is converted back into XYZ as per conversion unit 1404 of figure 14 and then from XYZ to the wide green colour primaries - RGBINT2 - by a matrix multiplication. This is performed by conversion unit 3002 in figure 30. Any pixels falling within region 1507 will have negative values in RGB space when defined in relation to the wide green intermediate colour gamut. If the pixel has a negative value, it is clipped to zero to form RGBclip2. This is performed by clipping unit 3003 of figure 30. This brings any pixels falling within region 1507 onto the wide green intermediate colour boundary that runs between point 1502 and the intersection between the dotted lines of the wide blue and wide green gamuts, shown as point 1508.
The result of this double clipping operation is that pixels that fall within the region 1305 but not 1507 are affected by the first clipping operation and not the second. Pixels that fall within region 1507 but not 1305 are not affected by the first clip operation but are affected by the second. The pixels falling within region 1305 which fall within region 1507 after the first clip, are then clipped again by the second clip (they will have a negative value of one of RGB when defined in relation to the wide green gamut). As it is generally advantageous to avoid double clipping where possible, the wide green gamut primaries are chosen so that the region in which pixels are clipped twice is as small as possible. This is why primary 1503 is extended beyond the corresponding source primary.
The net result of this process is that pixels falling in the regions 1305 and 1507 are ensured to be on the boundary of a final intermediate gamut that has boundary lines running between point 1502 and the intersection point 1508, the line running between point 1508 and the colour primary 1311, the line running between colour primary 1311 and colour primary 1202 and, finally, the line running between colour primary 1202 and colour primary 1502. In particular, the clipped pixels will lie somewhere on the boundary line running between point 1502 and the intersection point 1508, or the line running between point 1508 and the colour primary 1311.
From there, the pixel is processed in the same way as shown in figure 14, units 1404, 1405 and 1406, i.e. converted to XYZ, then desaturated along a line of constant hue to a boundary of the target gamut, and finally converted back to RGB in the BT.709 colour space.
The steps of this method may be performed by a CPU or by a GPU implementing an equivalent function (in which any input signal results in the same output as the CPU performing the steps). The steps may also be performed in order to generate the 3D-LUT for subsequent use.
As previously explained, the clipping function may be performed by implementing the process steps described in relation to the figures in a colour space such as Yu'v'. Equally, though, the clipping function may be performed by defining the intermediate colour gamut as described above and then computing the equivalent RGB values in relation to the intermediate gamut and performing the clipping by setting any of the negative RGB values to 0. In other words, rather than converting between colour spaces such as between RGB and XYZ, the colour gamuts can be provided as described above, and then the pixels are provided just in RGB format, with the pixel values being defined first relative to the intermediate gamut and clipping any negative values to 0, and then shifting RGB values to the target gamut along a line of constant hue. Such a process achieves the same result. It is equivalent to the process described above (same input values give the same output values), there are simply no conversions between colour representations. Similarly, as described in relation to deriving a 3D-LUT, sample colour values may be provided across a colour space and processed according to the clipping arrangement described to produce output RGB values. Such values then can be used as a 3D-LUT. Conversion direct from RGB values to output RGB values may then be performed by using a look up on the 3D-LUT including performing interpolation between any points in the 3D-LUT as necessary.
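As a sketch of the 3D-LUT approach, the snippet below builds a small LUT by sampling a conversion function over an RGB grid and then looks up arbitrary pixels with trilinear interpolation between the eight surrounding grid points. The `convert` function and the grid size are hypothetical stand-ins for the full clipping-and-mapping pipeline described above.

```python
import numpy as np

def convert(rgb):
    # Hypothetical stand-in for the full clipping/mapping pipeline.
    return np.asarray(rgb, dtype=float) ** 1.2

N = 17                                  # a common 3D-LUT grid size
grid = np.linspace(0.0, 1.0, N)
lut = np.zeros((N, N, N, 3))
for i, r in enumerate(grid):
    for j, g in enumerate(grid):
        for k, b in enumerate(grid):
            lut[i, j, k] = convert([r, g, b])

def lut_lookup(rgb):
    """Trilinear interpolation between the 8 surrounding LUT entries."""
    p = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (N - 1)
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, N - 1)
    t = p - i0
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((t[0] if dr else 1 - t[0]) *
                     (t[1] if dg else 1 - t[1]) *
                     (t[2] if db else 1 - t[2]))
                idx = (i1[0] if dr else i0[0],
                       i1[1] if dg else i0[1],
                       i1[2] if db else i0[2])
                out += w * lut[idx]
    return out
```

At grid points the lookup reproduces `convert` exactly; between them the interpolation error depends on the grid density, which is why finer LUTs are used when the underlying function has sharp transitions.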
To summarise then the operations involved in this aspect, a method is provided to convert a video signal from a source having a source colour gamut to produce a signal usable by target devices having a target colour gamut. The source colour gamut defines a region on a chromaticity chart that encompasses the target colour gamut. The method comprises providing an intermediate colour gamut defining a region in colour space that encompasses the target colour gamut. A portion of the source colour gamut lies outside of the intermediate colour gamut.
Colour components of a pixel of a video signal are provided. The colour components indicate a position of the pixel in colour space relative to the source colour gamut. The colour components are processed so as to indicate the position of the pixel relative to the intermediate colour gamut. If the pixel is a chromatic pixel and the processed colour components indicate a position of the pixel in colour space that lies in the portion of the source colour gamut that is outside of the intermediate colour gamut, the position of the pixel is adjusted to a position inside the intermediate colour gamut that is between the original position in the source gamut and the nearest boundary of the target colour gamut.
Finally, the method involves converting the pixel to provide an output signal in the target colour gamut. Here, converting the pixel to the target colour gamut comprises: if the pixel is positioned in colour space outside of the target colour gamut, adjusting the position of the pixel in colour space to a position on the boundary or inside the target colour gamut along a line of constant hue.

White point shifting

We now turn to the second aspect of the invention. This relates to the stage in the HDR to SDR conversion chain of compression of the luminance of the HDR input signal; that is, this aspect relates to the operation of compression unit 92. For ease of discussion, this aspect will be discussed as a separate compression technique to that discussed in relation to the third aspect of the invention above. However, it will be appreciated that it will generally be utilised in addition to the third aspect, i.e. as an additional step in the compression process shown in Figure 10, to further enhance the HDR to SDR conversion.
A static "tone-mapping" conversion from HDR to SDR cannot fully account for the effects of glare that might occur in certain HDR scenes. If a bright light such as sunlight or an electric light bulb appears in a scene, the eye will adapt to the light source and our ability to see detail in the shadows within the scene will be compromised. That is because the dynamic range that the eye can see without adaptation is less than that reproduced by a typical HDR display. If the bright light source is masked, perhaps by covering it with a hand in front of the screen, detail in the shadows immediately becomes more visible.
When down-mapping to SDR, the luminance of that bright light source will be greatly reduced and lie closer to the average luminance of the scene. It will, therefore, have a smaller effect on the adaptation state of the eye. Detail in the shadows is therefore likely to be more visible in the SDR output than in the HDR original.
A dynamic converter can take account of a bright light source appearing within a scene, and adjust the tone-mapping curve accordingly. However, a static tone-mapper cannot. A single tone-mapping curve must be chosen to give good results across a broad range of material.
A typical static tone-mapping curve 1601 with "glare" compensation is shown in Figure 16, plotted on logarithmic axes. Logarithmic axes are chosen as the eye's sensitivity to light intensity is approximately logarithmic. The HDR input signal is normalised such that 1.0 represents "diffuse white". The SDR output signal is normalised such that 1.0 represents the nominal SDR peak luminance.
Over the upper signal ranges representing image highlights, high luminance values within the input signal are compressed to fit within the SDR signal range. Over the mid-tone range 1602, there is a linear relationship between the input and output signals. However, over the lower signal range 1604 (in this example, below 0.05) the output signal is slightly compressed to better match the appearance of shadow detail in HDR scenes containing a light source. The amount of compression is chosen to give a good visual match between the input and output images across a broad range of content.
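A minimal sketch of the lower portion of such a static curve follows. The 0.05 breakpoint matches the text, but the power-function shape and its exponent are illustrative assumptions only; the actual curve shape and gradient matching are not specified here.

```python
BREAKPOINT = 0.05           # breakpoint 1603 from the text

def shadow_compress(y):
    if y >= BREAKPOINT:
        return y            # linear mid-tone region 1602
    # Slight compression of shadow detail (region 1604): a power
    # function chosen to be continuous with the linear section at
    # the breakpoint. The exponent 1.1 is an illustrative assumption.
    return BREAKPOINT * (y / BREAKPOINT) ** 1.1
```

Values at or above the breakpoint pass through unchanged, while shadow values are pulled slightly downwards, mimicking the reduced shadow visibility caused by glare in the HDR original.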
Unfortunately, however, compressing the shadow detail in such a way will also affect the luminance of at least one of the SDR colour primaries, for example, the BT.709 equivalent blue colour bars within the EBU Tech 3373 test patterns. In this normalised signal representation, the luminance of that colour bar is just 0.036, well below the 0.05 breakpoint 1603 shown in Figure 16 for the "glare correction", which is chosen to give the best-looking natural images. This means the luminance of the BT.709 equivalent blue bar will be reduced by the glare compensation, as it will fall within the region of the curve of Figure 16 for which compression is applied (the lower range of values 1604). This will distort the measured output signal.
The luminance of other colour bars is unaffected, provided they are above the 0.05 breakpoint 1603, which they typically are. However, the luminance of blue objects is always surprisingly low, due to the CIE 1931 "2 degree observer" colour matching functions (CMFs) on which all TV systems are based. The more recent CIE 1964 "10 degree observer" CMFs give a higher value of luminance for blue.
The "glare compensation" could be applied to the "luminance" or "intensity" signals in a different colour representation, such as JzAzBz or ICTCp, where blue signals have a higher equivalent luminance, allowing the breakpoint to be set below the blue colour bar so that it falls within the linear portion of the compression curve of Figure 16 and thus passes through the compensation unaffected. However, compressing the Jz or intensity, I, components can have unforeseen side-effects as it may change their chromaticity. Similarly, it may be possible to convert video signals created with the 1931 CM Fs to signals based on the 1964 CMFs, but that conversion can be very complex.
A simpler and better approach appreciated according to this second aspect of the invention is to increase the "colour temperature" of the white point of the colour working space, to give signals of a certain colour, blue in this embodiment, a higher luminance. Here, what we mean by increasing the colour temperature is shifting the position of the white point in the colour space in the direction of a particular colour so as to increase the luminance of pixels of that particular colour. Here, the term colour space is used to mean a particular colour representation, such as XYZ, Yu'v' etc. Changing the colour temperature is easily achieved through a simple matrix multiplication of the pixel values in the working colour representation, e.g. XYZ.
Turning now to Figure 17, the converter 92 of the conversion chain of Figure 9 according to this second aspect of the invention is shown.
Firstly, the linear HDR format signal, i.e. having HDR luminance and the wide colour gamut, is provided to conversion unit 1703. This conversion unit shifts the position of the white point in the working colour space in the direction of a particular colour, here the blue primary, so as to increase the luminance of the input pixels of that particular colour. In particular, the position of the white point in the working colour space is shifted so that the luminance of the pixels of the particular colour is increased so as to fall within the range of luminance values that are operated on by the linear portion of the compression curve of Figure 16. Their luminance values are shifted above breakpoint 1603.
For example, by shifting the white point of the video signal from the standard D65 white point (x=0.3127, y=0.3290; u'=0.1978, v'=0.4683, as per figure 12) to a higher temperature white point with chromaticity x=0.2831, y=0.2971, the luminance of the BT.709 equivalent colour bar of the EBU Tech 3373 test signal is increased from 0.036 to 0.051: above the 0.05 breakpoint for the "glare compensation" compression function. In other words, the luminance of pixels having a blue colour that corresponds to the BT.709 equivalent colour bar of the EBU Tech 3373 test signal is shifted from the portion of the compression curve of Figure 16 within which compression is applied (portion 1604) to the linear portion 1602 within which no compression is applied. The effect of shifting the white point is thus a shift in all input pixel values; this shift in pixel values is determined by a matrix multiplication in which the shift in individual pixel values is determined by the shift in white point. It is noted that not all pixel positions will be shifted by the same amount. If the white point is shifted towards a particular colour, then the pixels in the region of the particular colour will be shifted more than pixels that are distant from the particular colour. For example, if the white point is shifted towards blue, blue pixels will be shifted more than red pixels. This can be performed, for example, by determining pixel values in XYZ and performing a matrix multiplication to shift the pixel values according to the white point shift.
Conceptually, this can be understood by considering a sheet of white paper illuminated by blue light. This illumination would cause the sheet of paper to appear blue. However, white balancing can be applied to account for the blue light to shift the colour of the sheet back to white. In other words, the white point is shifted to the hue of the blue light, which would accordingly shift the colour of the sheet of paper back to white. The same principle is being applied here.
The shifted signal -i.e. the values the input pixels take when the white point of the colour space has been shifted -is then input into compressor 1704 wherein the compression function, e.g. the compression function 1601 is applied to the luminance component of each pixel to produce a compressed luminance component. This produces a signal in which the luminance values have been compressed, but which have the luminance values resulting from the shifted white point.
Thus, following "glare compensation", the video signal is converted back to the D65 white point by converter 1705 to conform with existing TV standards.
In the example, to convert from a D65 white point to the higher temperature white point (x=0.2831, y=0.2971) the following matrix is applied by unit 1703:

[X_high temp]   [ 0.9029  -0.0438   0.1272] [X_D65]
[Y_high temp] = [-0.0530   1.0046   0.0421] [Y_D65]
[Z_high temp]   [ 0.0251  -0.0419   1.6575] [Z_D65]

To convert back from the high colour temperature colour working space to D65, the inverse matrix is applied.
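Using the matrix above, the white point shift of unit 1703 and the return to D65 of unit 1705 can be sketched as follows; the inverse is simply the matrix inverse, so a round trip recovers the original tristimulus values.

```python
import numpy as np

# Matrix from the text: D65 XYZ to the higher-temperature working space.
M_D65_TO_HIGH = np.array([
    [ 0.9029, -0.0438, 0.1272],
    [-0.0530,  1.0046, 0.0421],
    [ 0.0251, -0.0419, 1.6575],
])
# Return to D65 (unit 1705) is the inverse matrix.
M_HIGH_TO_D65 = np.linalg.inv(M_D65_TO_HIGH)

def to_high_temp(xyz):
    return M_D65_TO_HIGH @ np.asarray(xyz, dtype=float)

def to_d65(xyz):
    return M_HIGH_TO_D65 @ np.asarray(xyz, dtype=float)
```

As a check on the intended effect, applying `to_high_temp` to the standard BT.709 blue primary tristimulus values (X=0.1805, Y=0.0722, Z=0.9505) raises its Y (luminance) component, consistent with the blue bar being lifted above the glare-compensation breakpoint.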
Achromatic signals are unaffected by the change in white point, so the "glare compensation" still has the desired effect on shadow details. The BT.709 equivalent bars, on the other hand, can pass through the glare compensation untouched.
In practice, the highlight compression provided by the upper range of the curve above 1602 may be performed separately from the low-light compression provided by the lower range to compensate for glare, as highlight compression is best performed using the original D65 white point. Thus, the signal output from this processing may be passed on for further compression, i.e. tone mapping, for example the compression described in relation to aspects three and/or four of the invention discussed below. In other words, the signal input into unit 92 of figure 9 may undergo the processing shown in figure 17, but with the output signal actually retaining its HDR for the upper range of values. These will then be passed to a further processing unit within unit 92, for compression of the upper range values. Thus, although "HDR" and "SDR" labels have been used in figure 17 for ease of discussion, it will be appreciated that the upper range of values that fall above 1602 may need further compression to be converted to SDR.
As noted above, this technique can be performed using any appropriate colour representation, and the appropriate matrix multiplications can be easily determined according to the chosen representation. The technique may be performed by a processor that performs the steps described above. Alternatively, the glare compensation can be performed by a 3D-LUT that has been preprogrammed with values to perform the compensation. The values of the 3D-LUT may be predetermined by performing the above described steps for a range of input pixel values to determine the appropriate values for the 3D-LUT. The input and output values are preferably RGB.
Preserving detail in image highlights

We will now discuss the fourth aspect of the invention. This relates to the stage in the HDR to SDR conversion chain of compression of the luminance of the HDR input signal; that is, this aspect relates to the operation of compression unit 92. This aspect relates to improvements to the tone mapping process. The improvements to tone mapping can be applied in the case where the input signal is provided as a luminance component and separate colour components for each pixel, as shown in figure 6, and wherein a compression function is applied to the luminance component. Or they can be applied in the case where the tone-mapping function is applied indirectly, as shown in figure 10, based on the maximum of the red, green and blue components (MaxRGB), or the norm of the red, green and blue components.
In particular, this aspect relates to improvements to the compression functions -the tone-mapping functions -used in such processing techniques.
A typical tone-mapping curve for HDR to SDR luminance conversion was illustrated in Figure 7. In this example, low-lights and mid-tones were directly mapped from HDR to SDR with the linear portion 701 of the curve, and highlights compressed to the SDR signal range using a logarithmic curve 702. The gradients of the linear and logarithmic curves are matched at their intersection 703 to avoid artefacts.
As can be seen in the Figure, however, the logarithmic compression of high luminance values is quite severe, leading to a loss of detail in extreme image highlights. Better results can often be achieved with a multi-step highlight compression curve, such as that shown in Figure 18. The portion of the curve dealing with image highlights is reproduced for greater clarity.
In this embodiment, a logarithmic compression curve 1802 is spliced onto the linear tone-mapping curve 1801 at "Breakpoint 1" (b1). The exact location of the breakpoint is not important, but it is usually chosen to sit just above the usual signal level for light skin tones, to avoid loss of detail in facial features (see BT.2408 Table 2). In order to reduce the amount of compression applied to bright highlights and reduce the loss of detail, an exponential curve 1803 is then spliced onto the logarithmic curve 1802 at "Breakpoint 2" (b2). As before, the gradients of the logarithmic and exponential curves are matched, to avoid artefacts at their intersection. Finally, to constrain the amount of detail enhancement in the extreme highlights and help ensure the output signal lies within the permitted SDR signal range, a logarithmic curve 1804 is spliced onto the exponential curve 1803 at "Breakpoint 3" (b3).
The tone-mapping sections could be combined into a single continuous function in a number of ways, but most conveniently the four separate functions can be cascaded, thus:

Y_SDR = f(Y_HDR)                        ;Linear portion 1801
If Y_HDR > b1 Then Y_SDR = g(Y_SDR)     ;Logarithmic portion 1802
If Y_HDR > b2 Then Y_SDR = h(Y_SDR)     ;Exponential portion 1803
If Y_HDR > b3 Then Y_SDR = i(Y_SDR)     ;Logarithmic portion 1804

The linear portion of the curve, 1801, may be expressed as:

f(Y_HDR) = αY_HDR + β

where most often α = 1 and β = 0. The logarithmic portion of the curve, 1802, may be expressed as:

g(Y_SDR) = γ·LN(Y_SDR + δ) + ε

The exponential portion of the curve, 1803, may be expressed as:

h(Y_SDR) = e^(Y_SDR + θ) + η

The final logarithmic portion, 1804, takes the same form as 1802:

i(Y_SDR) = φ·LN(Y_SDR + κ) + λ

The gradients of each section are matched at the breakpoints, by ensuring that the derivatives of the sections either side of the breakpoint match. Ensuring that there are no discontinuities in output value at the breakpoints is somewhat harder, but can be solved by iteration.
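The cascade above can be sketched in code as follows. The breakpoints b1-b3 and the Greek-letter constants are illustrative placeholders only; the gradient matching at the breakpoints and the continuity iteration described in the text are omitted for brevity.

```python
import math

B1, B2, B3 = 0.8, 2.0, 6.0   # hypothetical breakpoints b1, b2, b3

def f(y):                    # linear portion 1801 (alpha=1, beta=0)
    return 1.0 * y + 0.0

def g(y):                    # logarithmic portion 1802
    gamma, delta, eps = 0.5, 0.2, 0.8
    return gamma * math.log(y + delta) + eps

def h(y):                    # exponential portion 1803
    theta, eta = -2.0, 0.9
    return math.exp(y + theta) + eta

def i_(y):                   # final logarithmic portion 1804
    phi, kappa, lam = 0.3, 0.1, 1.0
    return phi * math.log(y + kappa) + lam

def tone_map(y_hdr):
    # Cascade: each later section operates on the output of the
    # previous one, gated by the original HDR luminance.
    y = f(y_hdr)
    if y_hdr > B1:
        y = g(y)
    if y_hdr > B2:
        y = h(y)
    if y_hdr > B3:
        y = i_(y)
    return y
```

Note how the gating tests use the original Y_HDR while each function is applied to the running Y_SDR, exactly as in the cascaded formulation above.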
The locations of breakpoints 2 and 3 are chosen to balance the loss of detail across the entire range of image highlights.
"Multi-step" tone-mapping curves offer greater control over the preservation of highlight detail than the single step compression curves illustrated in Figure 7.
When integrated into the signal processing chain of Figure 9, unit 92 may take the form that is functionally similar to units 60,62 and 63 of Figure 6, but where tone-mapping unit 63 has been updated according to the present aspect of the invention. This is shown in Figure 19.
Here, linear HDR RGB signal values that are in the BT.2100 colour gamut are provided to converter 1901, where they are converted to XYZ format. They are then provided to converter 1902 where the XYZ signals are converted to Yu'v' format. Next, the Y component, the luminance component, of the HDR Yu'v' signal is input into the tone-mapping unit 1903 in which: * for a first range of luminance values the signal value is derived using a first function that includes a power of the luminance value; * for a second range of luminance values the signal value is derived using a second function that includes a log of the luminance value, the second range being a higher range than the first range; * for a third range of luminance values the signal value is derived using a third function that includes an exponent of the luminance value, the third range being a higher range than the second range; and * for a fourth range of luminance values the signal value is derived using a fourth function that includes a log of the luminance value, the fourth range being a higher range than the third range.
The result of this is a compressed luminance component for the Yu'v' signal. The SDR Y component is then passed to converter 1904 along with the u'v' chrominance components that have been untouched by the tone-mapping. The signal is then converted back to RGB (e.g. via conversion to XYZ) to output an SDR RGB signal. This signal will still be in the HDR colour gamut and thus will be passed to unit 93 for colour gamut conversion. Alternatively, the processing steps can be applied in the case where the tone-mapping function is applied to the luminance indirectly, as shown in figure 10, based on the maximum of the red, green and blue components (MaxRGB), or the norm of the red, green and blue components.
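The colour-space handling of this chain (separate the luminance, tone-map it, recombine with the untouched chrominance) can be sketched using the standard CIE 1976 u'v' formulae. The `tone_map` stub below is a hypothetical placeholder for the multi-step curve; only the conversion and recombination are the point of the sketch.

```python
def xyz_to_yuv(X, Y, Z):
    # CIE 1976 UCS: u' = 4X/(X+15Y+3Z), v' = 9Y/(X+15Y+3Z)
    d = X + 15.0 * Y + 3.0 * Z
    return Y, 4.0 * X / d, 9.0 * Y / d            # Y, u', v'

def yuv_to_xyz(Y, u, v):
    # Back via CIE 1931 chromaticities x, y.
    d = 6.0 * u - 16.0 * v + 12.0
    x, y = 9.0 * u / d, 4.0 * v / d
    return x * Y / y, Y, (1.0 - x - y) * Y / y

def tone_map(Y):                                   # hypothetical stand-in
    return min(Y, 1.0)

def compress_luminance(X, Yin, Z):
    Y, u, v = xyz_to_yuv(X, Yin, Z)
    # Only Y is compressed; u'v' pass through untouched, so the
    # chromaticity of each pixel is preserved.
    return yuv_to_xyz(tone_map(Y), u, v)
```

Because u' and v' are left untouched, a pixel whose luminance is unchanged by the tone curve round-trips to exactly its original XYZ values.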
Example Application of First Aspect of Invention - Engineering Test Signals

In any television production and distribution chain, engineering test signals are used to measure the performance of the chain and ensure the correct configuration of equipment. Typically, a test signal, most often comprising a set of primary and secondary "colour bars" with some additional elements, is inserted at the signal source, e.g. an input to the production gallery vision mixer, and technical measurements are taken of the signal as it progresses through the signal chain. Examples of such signals are SMPTE RP.219, commonly used for high definition SDR production, and the ITU-R BT.2111 and EBU Tech 3373 colour bars increasingly used for HLG HDR production.
A colour bar test pattern comprises bars of the system primary and secondary colours, most often at 100% or 75% signal amplitude. These are termed "100% bars" or "75% bars" respectively. The colour bar signal can be checked for accuracy at the end of a signal chain using a waveform monitor, which plots the R'G'B' or Y'CBCR signal waveform. An example plot can be seen in the bottom left quadrant of Figure 20.
Most often, however, a vector-scope is used to check the integrity of a signal path or processing chain. A vector-scope plots the hue angle and colour saturation on a polar display. The greater the saturation the further away the trace is from the origin. The display graticule may also show target areas for each colour on the colour bar test pattern, within which the trace is expected to land if the signal path is below acceptable distortion levels. An example of a vector-scope trace, including targets, is shown in the top right quadrant of Figure 20.
The EBU Tech 3373 HDR colour bars are illustrated in Figure 21, with Figure 22 illustrating what each portion of the colour bars of Figure 21 represents.
They are similar to the ITU-R BT.2111 HDR colour bars, in that they both comprise 100% and 75% HLG bars as the top two rows of bars of figure 21 and figure 22. The ITU-R bars also include a set of 100% BT.709 "equivalent" bars embedded within the HDR signal container. However, these are at such a high signal level that they will usually be affected by any HDR to SDR "tone-mapping", so cannot be expected to be reproduced as 100% SDR BT.709 colour bars after down-mapping to BT.709. The EBU colour bars, however, include two additional sets of 75% BT.709 "equivalent" bars (denoted by "SL" and "DL" -representing a scene referred input signal and a display referred input signal respectively) embedded within the HLG signal.
As the EBU's 75% SL and DL bars are at a lower signal amplitude, they should pass through most HDR to SDR "down-mapping" processes. If the down-mapper is correctly configured, the SL bars should be reproduced as conventional 75% BT.709 colour bars following a "scene-light" conversion to SDR BT.709, and the DL bars should be reproduced as conventional 75% BT.709 colour bars following a "display-light" conversion to SDR BT.709. Checking the output waveform after the respective conversions will quickly highlight any conversion errors or problems with intermediate equipment.
As discussed above, desaturating colours towards the white point using a "hue linear" working space or CAM results in improved natural images, with fewer visible hue distortions. However, the result of desaturating a BT.2100 HDR colour bar input, such as BT.2111 or EBU Tech 3373, looks very odd and causes concern amongst many video engineers. An example can be seen in Figure 23.
The blue, and to a lesser extent green colour bars appear extremely desaturated when converted to BT.709 for the reasons discussed in relation to the first aspect of the present invention and Figure 12.
The complex controlled hue distortion method discussed above can be used to address this. This results in an improved colour bar output, as shown in Figure 24, where in particular the saturation of the 75% blue bar is increased.
However, whilst the appearance of the colour bars is subjectively better than with the previous approach, the complex gamut mapping algorithm had difficulty with pathological test images, such as that shown in Figure 25. Note, no colour conversion has been applied to the BT.2100 input image prior to its inclusion here, so the colours in Figure 25 are distorted. It should be clear, however, that the test pattern contains "sweeps" of colours covering the entire colour gamut.
The output image following conversion to SDR BT.709 is shown in Figure 26. Of particular concern are the sharp transitions from saturated to non-saturated colours. Although not generally apparent in natural images, when they do occur the effect of the sudden changes in output colour saturation may appear as "speckled" noise patterns, as the input signal traverses the saturation boundaries.
Applying the principles of the first aspect of the invention to the EBU colour bars and the same image as that used for Figures 25 and 26 results in a marked improvement. The improvements in saturation of the blue and green colour bars after conversion can clearly be seen by comparing Figure 24 and Figure 27. The smooth colour gradation in the pathological test image shown in Figure 28 illustrates how this elegant technique of the invention also avoids artefacts introduced by the sharp colour transitions that can arise as a result of the more complex techniques.
A similar clipping triangle could be applied to the red primary, but the red trace in Figure 12 shows how the hues of the BT.709 and BT.2100 red primaries are already similar. So, saturated red colour bars can be achieved without the need for special gamut clipping.
Embodiments of the invention can be described with reference to the following numbered clauses, with preferred features laid out in the dependent clauses: 1. A method of processing a video signal from a source having a source colour gamut to produce a signal usable by target devices having a target colour gamut, wherein the source colour gamut defines a region that encompasses the target colour gamut, wherein the boundaries of the source colour gamut are defined on a chromaticity diagram by straight lines connecting source colour primaries and the boundaries of the target colour gamut are defined by straight lines connecting target colour primaries, the method comprising receiving the video signal from the source, the video signal comprising a pixel, and converting using a converter that implements the following or an equivalent function: -providing an intermediate colour gamut defining a region that encompasses the target colour gamut, in which at least one primary of the intermediate colour gamut lies on a boundary of the source colour gamut and has a hue that is between the hue of a corresponding primary of the source colour gamut and the hue of a corresponding primary of the target colour gamut, the position of the at least one primary of the intermediate colour gamut being such that a portion of the source colour gamut lies outside of the intermediate colour gamut; -providing the colour components of the pixel, the colour components indicating a position of the pixel on the chromaticity diagram relative to the source colour gamut; -processing the colour components so as to indicate the position of the pixel on the chromaticity diagram relative to the intermediate colour gamut, wherein -if the pixel is a chromatic pixel and the processed colour components indicate a position of the pixel in the source colour gamut that lies in the portion of the source colour gamut that is outside of the intermediate colour gamut, -adjusting the position of the pixel to a 
position on the boundary of the intermediate colour gamut, that is between the original position in the source colour gamut and the nearest boundary of the target colour gamut; and -converting the pixel to provide an output signal within the target colour gamut, wherein converting the pixel to the target colour gamut comprises: -if the pixel is positioned outside of the target colour gamut, adjusting the position of the pixel to a position on the boundary or inside the target colour gamut along a line of constant hue.
2. A method according to clause 1, wherein the chromaticity diagram is a CIE 1931 xy or a CIE 1976 u'v' chromaticity diagram.
3. A method according to clause 1 or 2, wherein adjusting the position of the pixel to a position on the boundary or inside the target colour gamut along a line of constant hue comprises: providing the pixel in a hue linear colour representation; and equally scaling the components of the pixel in the hue linear colour representation in the direction of the white point of the hue linear colour representation.
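A minimal sketch of the scaling in clause 3, assuming chromaticity coordinates (u', v') and the D65 white point of that representation. The scale factor s is taken as a parameter here, whereas a real implementation would compute the smallest s that brings the pixel onto the target gamut boundary.

```python
import numpy as np

D65_UV = np.array([0.1978, 0.4683])   # D65 white point in CIE 1976 u'v'

def desaturate_towards_white(uv, s, white=D65_UV):
    """Move (u', v') a fraction s of the way towards the white point.

    s = 0 leaves the pixel unchanged; s = 1 fully desaturates it to
    the white point. Because both components scale equally towards
    white, the pixel moves along a line of constant hue in a
    hue-linear representation.
    """
    uv = np.asarray(uv, dtype=float)
    return white + (1.0 - s) * (uv - white)
```

The direction of travel is fixed by the white point, so in a hue-linear chromaticity diagram (clause 4) this scaling stays on a straight line of constant hue.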
4. A method according to clause 3, wherein the hue linear colour representation comprises a hue linear chromaticity diagram in which the lines of constant hue extending from the boundaries of a gamut defined on the hue linear chromaticity diagram to the white point of the hue linear colour representation are straight lines.
5. A method according to any previous clause, wherein a second primary of the intermediate colour gamut is at the same position on the chromaticity diagram as a corresponding primary in the source colour gamut.
6. A method according to any previous clause, wherein a boundary between two primaries of a colour gamut forms a line of maximum saturation for varying hue.
7. A method according to any preceding clause, wherein adjusting the position of the pixel to a position on the boundary of the intermediate colour gamut comprises moving the pixel towards the nearest boundary of the intermediate colour gamut in a direction that is substantially perpendicular to the nearest boundary of the intermediate colour gamut.
8. A method according to any preceding clause, wherein adjusting the position of the pixel comprises providing red, green and blue colour components of the pixel and clipping any negative red, green and blue colour components of the pixel to zero which places the pixel on a boundary position in the intermediate colour gamut.
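A minimal sketch of the clipping in clause 8. The 3×3 matrix is a hypothetical source-to-intermediate conversion (its rows sum to 1 so that the white point is preserved), not one derived from any particular pair of gamuts:

```python
# Hypothetical matrix re-expressing source-gamut RGB against the
# intermediate primaries; in practice it is derived from the two sets
# of primaries and the shared white point.
SRC_TO_INTERMEDIATE = [
    [ 1.10, -0.08, -0.02],
    [-0.05,  1.07, -0.02],
    [-0.01, -0.09,  1.10],
]

def to_intermediate_and_clip(rgb_src):
    """A negative component after conversion means the pixel lies
    outside the intermediate gamut; clipping it to zero places the
    pixel on the nearest gamut boundary (clause 8)."""
    rgb_i = [sum(m * c for m, c in zip(row, rgb_src)) for row in SRC_TO_INTERMEDIATE]
    return [max(c, 0.0) for c in rgb_i]
```

Per clause 9, a real implementation would first test the components against an achromaticity threshold and skip the clip for near-neutral pixels.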
9. A method according to any preceding clause, wherein, if the pixel is an achromatic pixel, indicated by the processed colour components falling within a pre-defined achromaticity threshold, the clipping process is skipped.
10. A method according to any preceding clause, wherein the line of constant hue is a line in colour space running from the pixel position outside the target colour gamut to the white point in colour space in which every point along the line has the same hue.
11. A method according to any preceding clause, wherein the step of providing the intermediate colour gamut comprises clipping the source gamut to form the intermediate gamut.
12. A method according to any preceding clause, wherein providing the intermediate colour gamut comprises: -determining a point that is along a line of increasing saturation and constant hue of the corresponding primary of the target colour gamut that intersects the boundary of the source gamut; and -defining the position of the primary of the intermediate colour gamut as the point between the point of intersection and the corresponding source primary.
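The intersection in clause 12 reduces to a ray/segment test on the chromaticity diagram. A sketch, using illustrative CIE xy values (BT.2020 as the source triangle, the Rec. 709 blue primary as the target primary, D65 as the white point):

```python
def ray_segment_intersection(origin, direction, a, b):
    """Intersect the ray origin + t*direction (t > 0) with segment a-b;
    returns the hit point or None."""
    dx, dy = direction
    ex, ey = b[0] - a[0], b[1] - a[1]
    wx, wy = a[0] - origin[0], a[1] - origin[1]
    det = ex * dy - dx * ey
    if abs(det) < 1e-12:
        return None                    # ray parallel to this edge
    t = (ex * wy - ey * wx) / det      # distance along the ray
    s = (dx * wy - dy * wx) / det      # position along the edge [0, 1]
    if t <= 0.0 or not (0.0 <= s <= 1.0):
        return None
    return (origin[0] + t * dx, origin[1] + t * dy)

def intersect_source_boundary(white, target_primary, source_tri):
    """Clause 12: follow the line of constant hue and increasing
    saturation through the target primary until it leaves the source
    gamut; the exit point bounds the intermediate primary."""
    d = (target_primary[0] - white[0], target_primary[1] - white[1])
    for i in range(3):
        hit = ray_segment_intersection(white, d, source_tri[i], source_tri[(i + 1) % 3])
        if hit is not None:
            return hit
    return None
```

Since the white point is interior to the source triangle, the ray exits through exactly one edge, and the intermediate primary is then placed between this exit point and the corresponding source primary.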
13. A method according to any of clauses 1 to 12, wherein the at least one primary of the intermediate colour gamut is the blue primary.
14. A method according to clause 12, wherein the at least one primary of the intermediate colour gamut is the blue primary, and the blue primary of the intermediate gamut is defined as the point of intersection.
15. A method according to any of clauses 1 to 12, wherein the at least one primary is the green primary.
16. A method according to any of clauses 1 to 14, wherein providing the intermediate colour gamut comprises determining the second and third remaining colour primaries of the intermediate colour gamut to be in the same position as the corresponding primaries of the source colour gamut.
17. A method according to clause 16, further comprising adjusting the position of the second primary of the intermediate colour gamut by selecting a position at the boundary of the source colour gamut that is between the hue of a corresponding primary of the source colour gamut and the hue of a corresponding primary of the target colour gamut; and defining an area that is bounded by the source colour gamut and a clipping boundary line between the primary of the intermediate colour gamut and a position outside the source colour gamut; and -clipping pixel values in the defined area to the clipping boundary line.
18. A method according to any of clauses 1 to 12, further comprising defining a position of a second primary of the intermediate colour gamut by selecting a position at the boundary of the source colour gamut that is between the hue of a corresponding primary of the source colour gamut and the hue of a corresponding primary of the target colour gamut; and defining an area that is bounded by the source colour gamut and a clipping boundary line between the primary of the intermediate colour gamut and a position outside the source colour gamut; and -clipping pixel values in the defined area to the clipping boundary line.
19. A method according to clause 17 or 18, wherein the second primary is the green primary.
20. A method according to any preceding clause, wherein each colour gamut has a red, green and blue colour primary, wherein the primary for a given colour represents the position in colour space of maximal saturation for that colour.
21. A method according to any preceding clause, wherein a position of a pixel relative to a colour gamut, as indicated by the colour components of the pixel, provides the hue and saturation for the pixel.
22. A converter for processing a video signal from a source having a source colour gamut to produce a signal usable by target devices having a target colour gamut, wherein the source colour gamut defines a region that encompasses the target colour gamut, wherein the boundaries of the source colour gamut are defined on a chromaticity diagram by source colour primaries and the boundaries of the target colour gamut in colour space are defined by target colour primaries, the converter being configured to receive the video signal from the source, the video signal comprising a pixel, and wherein the converter is configured to implement the following or an equivalent function: -providing an intermediate colour gamut defining a region that encompasses the target colour gamut, in which at least one primary of the intermediate colour gamut lies on a boundary of the source colour gamut and has a hue that is between the hue of a corresponding primary of the source colour gamut and the hue of a corresponding primary of the target colour gamut, the position of the at least one primary of the intermediate colour gamut being such that a portion of the source colour gamut lies outside of the intermediate colour gamut; -providing the colour components of the pixel, the colour components indicating a position of the pixel on the chromaticity diagram relative to the source colour gamut; -processing the colour components so as to indicate the position of the pixel on the chromaticity diagram relative to the intermediate colour gamut, wherein -if the pixel is a chromatic pixel and the processed colour components indicate a position of the pixel in the source colour gamut that lies in the portion of the source colour gamut that is outside of the intermediate colour gamut, -adjusting the position of the pixel to a position on the boundary of the intermediate colour gamut, that is between the original position in the source colour gamut and the nearest boundary of the target colour gamut; 
and -converting the pixel to provide an output signal within the target colour gamut, wherein converting the pixel to the target colour gamut comprises: -if the pixel is positioned outside of the target colour gamut, adjusting the position of the pixel to a position on the boundary or inside the target colour gamut along a line of constant hue.
23. A method of generating a 3D-LUT having values obtained by performing the method of any of clauses 1 to 21.
24. The method according to clause 23, comprising receiving sample RGB values for sample pixels that lie within the source colour gamut, converting using the function, receiving pixel values in the target colour gamut, converting the pixel values in the target colour gamut to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
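Clauses 23 and 24 amount to tabulating the conversion function on a regular grid of sample pixels. A minimal sketch; the grid size and the pass-through mapping used in the example are purely illustrative:

```python
def build_3d_lut(convert, n=17):
    """Sample an n x n x n grid of RGB values, push each sample through
    `convert` (e.g. the clause-1 gamut conversion) and store the
    input/output pairs as the 3D-LUT."""
    step = 1.0 / (n - 1)
    lut = {}
    for i in range(n):
        for j in range(n):
            for k in range(n):
                rgb = (i * step, j * step, k * step)
                lut[rgb] = convert(rgb)
    return lut
```

A hardware LUT would interpolate between the stored entries at run time; the tabulation itself is the part the clauses describe.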
25. A converter comprising a 3D-LUT generated according to the method of clause 23 or 24.
26. A 3D-LUT, having values generated according to the method of clause 23 or 24.
27. A method of processing a video signal for glare compensation, comprising receiving the video signal from a source, the video signal comprising pixels defined in a colour space, and converting using a converter that implements the following or an equivalent function: -providing the colour space having a white point defined at a particular position in the colour space; - providing the received signal as a luminance component and separate colour components for each pixel; -processing pixel values by multiplication so as to shift the position of the white point in the colour space in the direction of a particular colour so as to increase the luminance value of pixels of the particular colour; - processing the luminance component of each pixel by applying a compression function to the luminance component of each pixel to produce a processed luminance component; - processing the pixel values by multiplication so as to shift the position of the white point back to the particular position in the colour space; and -providing the processed pixels in the colour space having a white point defined at the particular position in the colour space so as to provide a processed video signal compensated for glare.
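One way to read clauses 27 to 34, sketched below. The blue-ward gain, the luminance weights and the knee compression are all assumed illustrative values, not those of the specification:

```python
BLUE_SHIFT = (1.0, 1.0, 1.4)   # hypothetical gain shifting the white point toward blue

def compress(y, knee=0.25, gain=0.5):
    """Compress only the lower luminance range (clause 31);
    continuous at the knee."""
    return gain * y if y < knee else y - knee * (1.0 - gain)

def glare_compensate(rgb):
    # 1) shift the white point toward blue, raising the luminance of blue pixels
    shifted = [c * g for c, g in zip(rgb, BLUE_SHIFT)]
    # 2) compress the luminance of the shifted pixel (weights are illustrative)
    y = 0.2627 * shifted[0] + 0.6780 * shifted[1] + 0.0593 * shifted[2]
    if y > 0.0:
        s = compress(y) / y
        shifted = [c * s for c in shifted]
    # 3) shift the white point back to its original position
    return [c / g for c, g in zip(shifted, BLUE_SHIFT)]
```

The point of the round trip is that blue-ish pixels enter the compression step with a boosted luminance, so more of them fall in the upper, uncompressed range (clauses 31 and 32), while the final inverse shift restores the original white point.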
28. A method according to clause 27, wherein the colour space is one of a Yu'v', an XYZ, or an RGB colour representation.
29. A method according to clause 28, wherein the multiplication is matrix multiplication in the colour space.
30. A method according to any of clauses 27 to 29, wherein the white point is shifted in a blue colour direction.
31. A method according to any of clauses 27 to 30, wherein, for a lower range of luminance values, the compression function compresses the luminance component, and wherein shifting the position of the white point in the colour space in the direction of the particular colour shifts the luminance value of pixels of the particular colour outside of the lower range of luminance values.
32. A method according to clause 31, wherein the luminance values of pixels of the particular colour are shifted to an upper range of luminance values to which compression is not applied.
33. A method according to any of clauses 27 to 32, wherein shifting the white point shifts the luminance value of all input pixels, and shifts the luminance value of pixels nearer to the particular colour more than that of pixels further away from the particular colour.
34. A method according to clause 33, wherein the luminance values of pixels within a range of the particular colour are shifted to an upper range of luminance values to which compression is not applied.
35. A converter for processing a video signal for glare compensation, the converter configured to receive the video signal from a source, the video signal comprising pixels defined in a colour space, and wherein the converter is configured to implement the following or an equivalent function: -providing the colour space having a white point defined at a particular position in the colour space; - providing the received signal as a luminance component and separate colour components for each pixel; - processing pixel values by multiplication so as to shift the position of the white point in the colour space in the direction of a particular colour so as to increase the luminance value of pixels of the particular colour; -processing the luminance component of each pixel by applying a compression function to the luminance component of each pixel to produce a processed luminance component; - processing the pixel values by multiplication so as to shift the position of the white point back to the particular position in the colour space; and -providing the processed pixels in the colour space having a white point defined at the particular position in the colour space so as to provide a processed video signal compensated for glare.
36. A method of generating a 3D-LUT having values obtained by performing the method of any of clauses 27 to 34.
37. The method according to clause 36, comprising receiving sample RGB values for sample pixels that lie within the colour space, converting using the function, receiving processed pixel values to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
38. A converter comprising a 3D-LUT generated according to the method of clause 36 or 37.
39. A 3D-LUT, having values generated according to the method of clause 36 or 37.
40. A method of processing a video signal from a higher dynamic range source provided in a source colour space with a source colour gamut to produce a signal usable by target devices of a lower dynamic range and having a target colour space with a target colour gamut, comprising receiving the video signal from the source, the video signal comprising pixels, and converting using a converter that implements the following or an equivalent function: -providing the received signal as separate colour components for each pixel; -providing a scale factor for compressing the colour components of each pixel whereby the dynamic range of the luminance of the pixel is compressed, wherein the scale factor is based on the values of the colour components when provided in the target colour space; -compressing the dynamic range of the luminance of each pixel using the scale factor operable on the colour components to provide an output signal of the lower dynamic range.
41. A method according to clause 40, wherein the scale factor for each pixel depends upon the largest value of the separate colour components of the pixel when provided in the target colour space.
42. A method according to clause 40, wherein the scale factor for each pixel depends upon the norm of the colour components of the pixel when provided in the target colour space.
43. A method according to any of clauses 40 to 42, wherein the scale factor is the ratio of the output value of a compression function to the input value of the compression function.
44. A method according to clause 43, wherein the compression function is a non-linear function that reduces the range of values from input to output.
45. A method according to clause 43 or 44, wherein the input value for each pixel is either i) the largest value out of the separate colour components of the pixel when provided in the target colour space, or ii) the norm of the colour components of the pixel when provided in the target colour space.
46. A method according to clause 40, wherein providing the scale factor for each pixel comprises: providing the separate colour components of the pixel in the source colour space, the values of the colour components indicating the position of the pixel in the source colour gamut; processing the colour components so as to indicate the position of the pixel relative to the target colour gamut; determining an input value for a compression function, the input value being either i) the largest value out of the processed colour components of the pixel, or ii) the norm of the processed colour components of the pixel; processing the input value with the compression function to determine an output value; determining a ratio of the output value and the input value to determine the scale factor; and providing the scale factor for the pixel.
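A sketch of the scale-factor computation of clauses 40 to 47, using a Reinhard-style curve as a hypothetical stand-in for the compression function of clause 44 and the max-component variant of clause 41:

```python
def tone_compress(v):
    """Hypothetical non-linear, range-reducing compression function
    (clause 44)."""
    return v / (1.0 + v)

def scale_factor(rgb_target):
    """Clause 43: ratio of the compression function's output to its
    input, evaluated on the largest target-space component (clause 41)."""
    m = max(rgb_target)
    return tone_compress(m) / m if m > 0.0 else 1.0

def compress_pixel(rgb_target):
    """Multiplying every component by the same scale factor (clause 47)
    compresses luminance while preserving the component ratios."""
    s = scale_factor(rgb_target)
    return [c * s for c in rgb_target]
```

Because one scalar multiplies all three components, the ratios between them are preserved, which is why scaling compresses dynamic range without shifting hue or saturation.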
47. A method according to any of clauses 40 to 46, wherein the scale factor is operable on the colour components by multiplying each colour component by the scale factor.
48. A method according to any of clauses 40 to 47, wherein the colour components are red, green and blue colour components.
49. A method according to clause 48, wherein the colour components when provided in the target colour space represent red, green and blue colour primaries of the target colour gamut.
50. A converter for processing a video signal from a higher dynamic range source having a source colour space with a source colour gamut to produce a signal usable by target devices of a lower dynamic range and having a target colour space with a target colour gamut, and wherein the converter is configured to implement the following or an equivalent function: -providing the received signal as separate colour components for each pixel; -providing a scale factor for compressing the colour components of each pixel whereby the dynamic range of the luminance of the pixel is compressed, wherein the scale factor is based on the values of the colour components when provided in the target colour space; -compressing the dynamic range of the luminance of each pixel using the scale factor operable on the colour components to provide an output signal of the lower dynamic range.
51. A method of generating a 3D-LUT having values obtained by performing the method of any of clauses 40 to 49.
52. The method according to clause 51, comprising receiving sample RGB values for sample pixels that lie within the source colour gamut, converting using the function, receiving processed pixel values to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
53. A converter comprising a 3D-LUT generated according to the method of clause 51 or 52.
54. A 3D-LUT, having values generated according to the method of clause 51 or 52.
55. A method of processing a video signal from a source to produce an output signal, comprising converting between a luminance value and a signal value using a converter that implements the following or an equivalent function: for a first range of luminance values the signal value is derived using a first function that includes a power of the luminance value; for a second range of luminance values the signal value is derived using a second function that includes a log of the luminance value, the second range being a higher range than the first range; for a third range of luminance values the signal value is derived using a third function that includes an exponent of the luminance value, the third range being a higher range than the second range; and for a fourth range of luminance values the signal value is derived using a fourth function that includes a log of the luminance value, the fourth range being a higher range than the third range.
56. A method according to clause 55, wherein the first, second, third and fourth functions are respectively joined together at pre-determined breakpoints.
57. A method according to clause 56, wherein the gradients of the first, second, third and fourth functions are respectively matched at the breakpoints.
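Clauses 55 to 57 describe a four-segment luminance-to-signal curve. The breakpoints, exponent and growth rate below are assumed for illustration; the free constants are then solved so that each pair of neighbouring segments matches in both value and gradient at its breakpoint (clauses 56 and 57):

```python
import math

# Hypothetical breakpoints and shape parameters
B1, B2, B3 = 1.0, 8.0, 64.0
GAMMA = 0.5     # first segment: power law
K = 0.02        # growth rate of the exponential third segment

# Constants chosen so value and gradient match at each breakpoint
A2 = GAMMA * B1 ** GAMMA                     # from f1'(B1) = A2 / B1
C2 = B1 ** GAMMA - A2 * math.log(B1)
A3 = (A2 / B2) / (K * math.exp(K * B2))      # from f2'(B2) = A3 * K * e^(K*B2)
C3 = (A2 * math.log(B2) + C2) - A3 * math.exp(K * B2)
A4 = B3 * A3 * K * math.exp(K * B3)          # from f3'(B3) = A4 / B3
C4 = (A3 * math.exp(K * B3) + C3) - A4 * math.log(B3)

def oetf(y):
    """Luminance -> signal: power, then log, then exponential, then log."""
    if y <= B1:
        return y ** GAMMA
    if y <= B2:
        return A2 * math.log(y) + C2
    if y <= B3:
        return A3 * math.exp(K * y) + C3
    return A4 * math.log(y) + C4
```

Matching value and gradient at each breakpoint leaves one free shape parameter per segment (here GAMMA and K); everything else follows from the continuity constraints.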
58. A converter for processing a video signal from a source to produce an output signal, and wherein the converter converts between a luminance value and a signal value by implementing the following or an equivalent function: for a first range of luminance values the signal value is derived using a first function that includes a power of the luminance value; for a second range of luminance values the signal value is derived using a second function that includes a log of the luminance value, the second range being a higher range than the first range; for a third range of luminance values the signal value is derived using a third function that includes an exponent of the luminance value, the third range being a higher range than the second range; and for a fourth range of luminance values the signal value is derived using a fourth function that includes a log of the luminance value, the fourth range being a higher range than the third range.
59. A method of generating a 3D-LUT having values obtained by performing the method of any of clauses 55 to 57.
60. The method according to clause 59, comprising receiving sample RGB values for sample pixels, converting using the function, receiving processed pixel values to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
61. A converter comprising a 3D-LUT generated according to the method of clause 59 or 60.
62. A 3D-LUT, having values generated according to the method of clause 59 or 60.

Claims (8)

  1. A method of processing a video signal from a source to produce an output signal, comprising converting between a luminance value and a signal value using a converter that implements the following or an equivalent function: for a first range of luminance values the signal value is derived using a first function that includes a power of the luminance value; for a second range of luminance values the signal value is derived using a second function that includes a log of the luminance value, the second range being a higher range than the first range; for a third range of luminance values the signal value is derived using a third function that includes an exponent of the luminance value, the third range being a higher range than the second range; and for a fourth range of luminance values the signal value is derived using a fourth function that includes a log of the luminance value, the fourth range being a higher range than the third range.
  2. A method according to claim 1, wherein the first, second, third and fourth functions are respectively joined together at pre-determined breakpoints.
  3. A method according to claim 2, wherein the gradients of the first, second, third and fourth functions are respectively matched at the breakpoints.
  4. A converter for processing a video signal from a source to produce an output signal, and wherein the converter converts between a luminance value and a signal value by implementing the following or an equivalent function: for a first range of luminance values the signal value is derived using a first function that includes a power of the luminance value; for a second range of luminance values the signal value is derived using a second function that includes a log of the luminance value, the second range being a higher range than the first range; for a third range of luminance values the signal value is derived using a third function that includes an exponent of the luminance value, the third range being a higher range than the second range; and for a fourth range of luminance values the signal value is derived using a fourth function that includes a log of the luminance value, the fourth range being a higher range than the third range.
  5. A method of generating a 3D-LUT having values obtained by performing the method of any of claims 1 to 3.
  6. The method according to claim 5, comprising receiving sample RGB values for sample pixels, converting using the function, receiving processed pixel values to output RGB values, and storing the sample RGB values and output RGB values to provide the 3D-LUT.
  7. A converter comprising a 3D-LUT generated according to the method of claim 5 or 6.
  8. A 3D-LUT, having values generated according to the method of claim 5 or 6.
GB2402030.7A 2021-07-08 2021-07-08 Method and apparatus for conversion of HDR signals Pending GB2625218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2402030.7A GB2625218A (en) 2021-07-08 2021-07-08 Method and apparatus for conversion of HDR signals

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2109857.9A GB2608990A (en) 2021-07-08 2021-07-08 Method and apparatus for conversion of HDR signals
GB2402030.7A GB2625218A (en) 2021-07-08 2021-07-08 Method and apparatus for conversion of HDR signals

Publications (2)

Publication Number Publication Date
GB202402030D0 GB202402030D0 (en) 2024-03-27
GB2625218A true GB2625218A (en) 2024-06-12

Family

ID=91079174

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2402030.7A Pending GB2625218A (en) 2021-07-08 2021-07-08 Method and apparatus for conversion of HDR signals

Country Status (1)

Country Link
GB (1) GB2625218A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2539917A (en) * 2015-06-30 2017-01-04 British Broadcasting Corp Method and apparatus for conversion of HDR signals
US20180047141A1 (en) * 2016-08-11 2018-02-15 Intel Corporation Brightness control for spatially adaptive tone mapping of high dynamic range (hdr) images
WO2018152063A1 (en) * 2017-02-15 2018-08-23 Dolby Laboratories Licensing Corporation Tone curve mapping for high dynamic range images

Also Published As

Publication number Publication date
GB202402030D0 (en) 2024-03-27

Similar Documents

Publication Publication Date Title
JP7101288B2 (en) Methods and devices for converting HDR signals
US10255879B2 (en) Method and apparatus for image data transformation
CN107079078B (en) Mapping image/video content to a target display device having variable brightness levels and/or viewing conditions
US20180367778A1 (en) Method And Apparatus For Conversion Of HDR Signals
KR100834762B1 (en) Method and apparatus for gamut mapping for cross medias
Saravanan Color image to grayscale image conversion
US7599551B2 (en) Color correction device and color correction method
US8525933B2 (en) System and method of creating or approving multiple video streams
JP6396596B2 (en) Luminance modified image processing with color constancy
EP3446284B1 (en) Method and apparatus for conversion of dynamic range of video signals
Šikudová et al. A gamut-mapping framework for color-accurate reproduction of HDR images
GB2625218A (en) Method and apparatus for conversion of HDR signals
GB2625216A (en) Method and apparatus for conversion of HDR signals
GB2625217A (en) Method and apparatus for conversion of HDR signals
GB2608990A (en) Method and apparatus for conversion of HDR signals
JP2003244458A (en) Image display device and color conversion method
CN116167950B (en) Image processing method, device, electronic equipment and storage medium
KR100461018B1 (en) Natural color reproduction method and apparatus on DTV
Vandenberg et al. A survey on 3D-LUT performance in 10-bit and 12-bit HDR BT.2100 PQ
RU2782432C2 (en) Improved repeated video color display with high dynamic range
JP3641402B2 (en) Color correction circuit and color correction method
Kim et al. Wide color gamut five channel multi-primary display for HDTV application
KR20220143932A (en) Improved HDR color handling for saturated colors
JP2002271645A (en) Image processing method and image processor
Kim New display concept for realistic reproduction of high-luminance colors