CN110855912A - Suppressing pixel shading errors in HDR video systems - Google Patents

Suppressing pixel shading errors in HDR video systems

Info

Publication number
CN110855912A
CN110855912A (application CN201910453404.9A)
Authority
CN
China
Prior art keywords
noise
color component
value
hdr
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910453404.9A
Other languages
Chinese (zh)
Inventor
孙艳波
G·彼得罗夫
L·海沃瑞恩
刘伟万
柳东韩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nvidia Corp
Original Assignee
Nvidia Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 16/156,890 (US10681321B2)
Application filed by Nvidia Corp filed Critical Nvidia Corp
Publication of CN110855912A publication Critical patent/CN110855912A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Of Color Television Signals (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a process for partially or fully suppressing, or limiting, pixel shading errors in HDR video systems. These pixel shading errors, represented by small noise values, may be introduced during signal processing of High Dynamic Range (HDR) video signals. Converting visual content to a half-precision floating-point representation (e.g., FP16) may introduce a small amount of signal noise due to value rounding. During HDR signal processing, this noise can be multiplied and accumulated, creating visual artifacts and reducing image quality. The present disclosure can detect these small amounts of noise in the pixel color component values and suppress or partially suppress the noise to prevent it from accumulating during subsequent HDR signal processing.

Description

Suppressing pixel shading errors in HDR video systems
Cross Reference to Related Applications
The present application claims the benefit of U.S. provisional application serial No. 62/720,635, entitled "IMPROVING HDR QUALITY WITH HDR QUANTIZATION NOISE SUPPRESSION," filed by Yanbo Sun, et al. on August 21, 2018, which is commonly assigned with the present application and incorporated herein by reference.
Technical Field
The present application relates generally to pixel color component correction methods and, more particularly, to suppressing shading errors in High Dynamic Range (HDR) video systems.
Background
In computer graphics and video systems, HDR graphics and video content may be generated in a linearly extended red-green-blue (scRGB) color space, using a half-precision floating-point format (FP16, as defined by the IEEE 754 standard). To drive an HDR display, the FP16 pixels may be converted into a Perceptual Quantizer (PQ) encoded RGB or YCbCr HDR signal (as defined by the ITU-R BT.2100 standard). During this encoding process, quantization errors present in the original FP16 content may be amplified by the combination of color space conversion, PQ encoding, chroma filtering, and sub-sampling. These errors can be further magnified by the HDR display, which processes the PQ encoded HDR signal by performing, for example, chroma upsampling, color space conversion matrix operations, and PQ decoding. The magnified errors result in artifacts that are visible to a user viewing the content on the HDR display, thereby reducing the quality of the displayed content.
Disclosure of Invention
In one aspect, a method of suppressing pixel color component noise in a High Dynamic Range (HDR) signal is described. In one embodiment, the method comprises: (1) converting a color value representation of a pixel to a color value representation having a lower precision, wherein the converting creates a noise level in one or more color component values of the pixel, (2) comparing the color component values to a noise threshold for the pixel, and (3) reducing one or more of the color component values of the pixel using the comparison, wherein the noise threshold is calculated from the noise level.
In another aspect, a computer program product is described having a series of operational instructions stored on a non-transitory computer readable medium which, when executed, instruct a data processing apparatus to perform operations to suppress noise in color component values for pixels of an HDR signal. In one embodiment, the operations comprise: (1) converting a color value representation of a pixel to a color value representation having a lower precision, wherein the converting creates a noise level in one or more color component values of the pixel, (2) comparing the color component values to a noise threshold for the pixel, and (3) reducing one or more of the color component values of the pixel using the comparison, wherein the noise threshold is calculated from the noise level.
In another aspect, a noise suppression system for pixel color component values is described. In one embodiment, the noise suppression system includes: (1) a receiver operable to receive HDR content as an HDR signal; and (2) an HDR signal processor operable to perform HDR signal processing on the HDR signal to generate an adjusted HDR signal, wherein one of the HDR signal processes is a noise suppression process for one or more pixels of the HDR signal, wherein the noise suppression process compares a calculated noise threshold with the color component values of the pixels to clamp one or more of the color component values of the pixels to a zero value, and wherein the calculated noise threshold is calculated using a noise level and a noise factor of the pixel.
Drawings
Reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIG. 1 is an illustration of a block diagram of an example High Dynamic Range (HDR) signal processing system;
FIG. 2 is an illustration of a block diagram of an example HDR signal processing flow;
FIG. 3 is an illustration of a flow chart of an example method of suppressing noise in an HDR signal;
FIG. 4 is an illustration of a flow chart of an example method of calculating and utilizing a noise threshold to suppress noise in an HDR signal, building on the method of FIG. 3; and
FIG. 5 is an illustration of a flow chart of an example method of using negative noise and a pixel color component matrix to suppress noise in an HDR signal, building on the method of FIG. 3.
Detailed Description
Users through their actions may cause content to be displayed on viewing systems such as televisions, monitors, smart phones, and other devices. The content may be various types of content, such as visual output from an application executing on a computing system or video playback from a server or data center. Content can be provided to viewing systems in different formats, such as Standard Dynamic Range (SDR) (a type typically used for some web content and older television programs) and High Dynamic Range (HDR) (a type typically used for gaming applications, newer television and movie content, and other high definition content).
During display of graphics, images, or video content on an HDR viewing system, the content may be processed through one or more HDR signal processes before being displayed for a user. HDR signals representing content can be processed in different ways. The processing steps may be selected to reduce storage and transmission bandwidth requirements or to improve the HDR content, for example by adjusting color or brightness to account for the ambient lighting of the room in which the viewing system is located. One processing step may alter the HDR signal to reduce its memory requirements by using a half-precision floating-point format (e.g., FP16). Other floating-point representations, such as 24-bit and 12-bit formats, may also be used; the present disclosure uses FP16 as the representative format. Encoding an HDR signal using Perceptual Quantizer (PQ) encoding may reduce the bit width of the HDR signal to allow for more efficient storage and transmission of the encoded signal. Another process that may be applied to HDR signals is chroma sub-sampling, which further reduces storage and transmission bandwidth requirements. Other techniques and processes for HDR signals may also be applied, and these techniques and processes may be applied in various combinations, for example to reduce storage requirements while also improving the HDR signal.
One potential side effect of applying these techniques and processes to an HDR signal is that small amounts of noise, i.e., errors, in the signal may be amplified or accumulated during the various processing steps. Noise that is imperceptible to the user at the beginning of the processing flow can become a noticeable disturbance (a visual artifact) by the end of it, because these processes may inadvertently increase the noise contribution to the signal. Presented herein are additional processing steps to identify noise in the HDR signal and to reduce or eliminate that noise from the signal, so that, at the end of the processing flow, the signal does not carry amplified or accumulated noise that results in visual artifacts for the user. The additional processing steps may be included in the HDR signal processing flow, near its beginning or at various other points in the flow.
In more detail, in computer graphics systems, HDR pixels are typically represented in the form of a linear-light signal or a PQ encoded signal (i.e., HDR10 or another standard). Color Space Conversion (CSC) and other light-to-light processes require a linear-light signal. Because of the high dynamic range of the signal, typically 0.0 to 10,000 nits, pixels represented as a linear-light signal can be stored using a 32-bit integer or FP16.
Some quantization error may be introduced when the content is transformed into the FP16 scRGB representation of the HDR signal. This quantization error (i.e., noise) is typically below the ability of an average user to detect and therefore produces no noticeable visual artifacts. After CSC, the quantization error in each scRGB component value may be distributed to, and contribute to, the other components. A small absolute noise value may produce a large noise value in the PQ domain, because the PQ inverse electro-optic transfer function (EOTF⁻¹) is sensitive to small values. In addition, the RGB to YCbCr conversion and sub-sampling steps, and the corresponding inverse processing in video processing, distribution, and transmission, change the noise distribution. Noise may accumulate through these processes and become a large part of each color component value. Because the PQ EOTF⁻¹ may be sensitive to larger color component values, of which the noise is now a part, the PQ EOTF⁻¹ can generate significant color errors in linear light space when a pixel is displayed, thereby creating visual artifacts that can be noticed by the user.
Since FP16 quantization error may be a source of noise, one solution is to use full single-precision (FP32) or double-precision (FP64) floating-point data values. This solution would require more frame buffer storage and memory bandwidth, and may require additional processing resources, increasing hardware and power requirements. Another option is to introduce dithering techniques to reduce quantization noise, for example by introducing spatially random noise to cancel repetitive noise patterns. This approach may result in a loss of content quality. A third method, clamping color component values that fall below an absolute noise threshold (i.e., a global clamp value), may prevent noise from being carried through the remaining processing steps. For some signals, however, very dark colors that should not be clamped may be clamped, possibly preventing the displayed content from meeting HDR low-luminance specifications and standards.
Accordingly, the present disclosure describes additional steps or methods that provide quantization noise suppression for HDR signals using pixel-level clamping, while maintaining storage and transmission bandwidth requirements and preserving the visual quality of the content. Typically, the quantization noise suppression step may be included between the CSC step and the PQ encoding step, but quantization noise suppression may also be included at other points in the HDR signal processing flow. In an alternative aspect, quantization noise suppression may be included in the processing if spatial pixel processing or temporal pixel processing is to be performed on the PQ signal.
Quantization noise suppression may be a separate step, or it may be included in an existing step, e.g., combined with CSC or combined with PQ encoding. Various methods may be used to implement quantization noise suppression. For example, for each pixel in the HDR signal, the process may determine the maximum of the three color component values (i.e., the maximum component value), as shown in Equation 1.
Equation 1: example for finding maximum color component value of pixel
maxComponentValue=max(R,G,B)
where R, G, and B represent the red, green, and blue component values.
Alternatively, the luminance value Y of the pixel may be calculated from the color component values as shown in equation 2.
Equation 2: example for calculating luminance values of pixels
Y=(C1*R)+(C2*G)+(C3*B)
where C1, C2, and C3 are constants for converting the RGB values into the luminance value Y.
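For illustration, both quantities map directly onto built-in GLSL ES operations (the shading language used for the pseudo code in Table 1 below); the helper names and the BT.2020-style luminance weights are illustrative assumptions rather than values taken from this disclosure:
// Equation 1: maximum color component value of a pixel.
float maxComponentValue(vec3 rgb)
{
    return max(max(rgb.r, rgb.g), rgb.b);
}

// Equation 2: luminance Y of a pixel (C1, C2, C3 shown with illustrative weights).
float luminanceY(vec3 rgb)
{
    const vec3 C = vec3(0.2627, 0.6780, 0.0593);
    return dot(rgb, C);
}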
The quantization noise threshold may be calculated using the maximum component value (maxComponentValue) or Y, together with a noise factor. Whether to use maxComponentValue or Y may be decided based on several factors, such as the hardware used and software capabilities and limitations. In general, using the luminance Y during processing is more efficient. Luminance Y may also result in better visual display quality of the signal, since the green color component is usually the dominant component and is weighted most heavily in Y.
The noise factor may be determined by a process, an application, or another system component. For an HDR signal, the noise factor is typically initialized to 1/2048. Other values may also be used, for example 1/1024, 1/512, or 1.5/2048. A longer HDR signal processing flow increases the likelihood that small noise values accumulate or are amplified to values large enough to exceed the noise target value. Such a target value may be set at or above the point at which a user can notice the noise, i.e., at which it causes noticeable visual artifacts. In a longer HDR signal processing flow, increasing the noise factor value may therefore help clamp an appropriate amount of noise from each pixel. As the noise factor increases, however, the likelihood of affecting HDR low-luminance capability also increases, which may adversely affect the content displayed to the user.
The noise threshold may be calculated by multiplying the noise factor by the maximum component value (maxComponentValue) or Y, as shown in equations 3 and 4.
Equation 3: example of computing noise threshold using maximum component value (maxComponentValue)
noiseThreshold = noiseFactor * maxComponentValue
Equation 4: example of computing noise threshold Using Y
noiseThreshold = noiseFactor * Y
After the noise threshold is calculated, a comparison is performed for each color component value. If the absolute value of a color component value is less than the noise threshold, that color component value may be dominated by noise and may therefore be reset (i.e., clamped) to zero. This is a pixel-level clamp: each analyzed pixel may or may not have a different noise threshold for the comparison. Equation 5 shows an example algorithm for clamping the color component values.
Equation 5: examples of clamping color component values
R_NS = (abs(R) > noiseThreshold) ? R : 0
G_NS = (abs(G) > noiseThreshold) ? G : 0
B_NS = (abs(B) > noiseThreshold) ? B : 0
where R_NS, G_NS, and B_NS are the color component values after noise suppression, respectively. An example implementation in pseudo code is shown in Table 1.
Table 1: pseudo code example for HDR Signal noise suppression in GLSL ES shader with noise factor of 1/2048
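A minimal GLSL ES sketch of such a pass, using the luminance-based threshold of Equations 2 and 4 and the clamping of Equation 5 with a noise factor of 1/2048, could look as follows; the function name and luminance weights are illustrative:
// Pixel-level quantization noise suppression for a linear-light RGB pixel.
const float NOISE_FACTOR = 1.0 / 2048.0;
const vec3 LUMA_WEIGHTS = vec3(0.2627, 0.6780, 0.0593); // illustrative C1, C2, C3

vec3 suppressQuantizationNoise(vec3 rgb)
{
    float Y = dot(rgb, LUMA_WEIGHTS);            // Equation 2
    float noiseThreshold = NOISE_FACTOR * Y;     // Equation 4
    // Equation 5: components whose magnitude falls below the threshold are
    // treated as noise dominated and clamped to zero.
    vec3 keep = step(vec3(noiseThreshold), abs(rgb));
    return rgb * keep;
}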
Another method of implementing the quantization noise suppression process is to detect whether the color component values of a pixel are dominated by quantization noise and then remove, i.e., eliminate, the noise. The noise removal may be achieved by adding to each component a negative amount equal to the detected noise of the Y value of the pixel. This negative amount may be designated the negative noise. The noise threshold may be calculated as shown in Equation 4; the negative noise is the noise threshold with opposite sign.
If a resulting color component value falls below zero, it may be clamped to zero. The clamping can be done before PQ encoding, because clamping is a common operation of the PQ transfer function, whose input range is 0.0 to 1.0. This approach may utilize a fixed hardware pipeline with matrix conversion functionality. Equation 6 demonstrates two forms of example matrix operations for suppressing noise by using negative noise values.
Equation 6: example matrix for noise suppression Using negative noise
R' = R + negativeNoise = R - noiseFactor*(C1*R + C2*G + C3*B)
G' = G + negativeNoise = G - noiseFactor*(C1*R + C2*G + C3*B)
B' = B + negativeNoise = B - noiseFactor*(C1*R + C2*G + C3*B)
It can be combined into a single matrix:
| R' |   | 1 - noiseFactor*C1,     -noiseFactor*C2,     -noiseFactor*C3 |   | R |
| G' | = |     -noiseFactor*C1, 1 - noiseFactor*C2,     -noiseFactor*C3 | * | G |
| B' |   |     -noiseFactor*C1,     -noiseFactor*C2, 1 - noiseFactor*C3 |   | B |
where R', G', and B' are the resulting color component values from the above matrix operation.
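Assuming the reconstruction of Equation 6 above, the combined form maps onto a single mat3 in GLSL ES; the luminance weights remain illustrative:
// Negative-noise suppression folded into a single 3x3 matrix (Equation 6).
const float NOISE_FACTOR = 1.0 / 2048.0;
const vec3 LUMA_WEIGHTS = vec3(0.2627, 0.6780, 0.0593); // illustrative C1, C2, C3

vec3 suppressWithNegativeNoise(vec3 rgb)
{
    // outerProduct(vec3(1.0), LUMA_WEIGHTS) * rgb equals vec3(1.0) * Y, so the
    // matrix below subtracts NOISE_FACTOR * Y from every color component.
    mat3 m = mat3(1.0) - NOISE_FACTOR * outerProduct(vec3(1.0), LUMA_WEIGHTS);
    vec3 adjusted = m * rgb;
    return max(adjusted, vec3(0.0)); // components driven below zero are clamped
}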
Alternative matrix operations may be used, as shown in Equation 7. The reduced matrix may not provide the same noise threshold for the three color components as the matrix shown in Equation 6, but the calculation may be more efficient and does not cause the noise accumulation to exceed the target value. Equation 7 utilizes a negative noise value determined directly by the system, similar to the noise factor used in Equations 3 and 4. The negative noise, labeled negativeNoise' in Equation 7, may be a negative factor, such as -1.5/2048 (the negative counterpart of a noise factor listed above), or another negative value.
Equation 7: example matrix operations to suppress noise
In alternative aspects, the negative noise matrix may be applied as a separate operation or in combination with other matrix operations, as shown in the various equations above. For example, a combined matrix that includes noise suppression with the scRGB to Rec2020 RGB transfer is shown in Equation 8.
Equation 8: example combining matrix combining transfer function with noise suppression
(R', G', B') = M_combined * (R, G, B), where M_combined folds the negative noise terms into the scRGB to Rec2020 RGB conversion matrix
Equation 8 effectively adds additional negative quantization noise to the R, G, and B components, so the color noise level of the pixel may increase. As long as the color error remains below the noise accumulation target value, the noise does not affect the visual experience of the user. If a color component value is less than the negative noise of the luminance value, or the negative noise of the sum of the other two components, then that component may be dominated by noise and may be clamped to zero prior to PQ encoding, which avoids propagating the noise into the PQ signal.
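As a sketch of how noise suppression can be combined with a color space conversion into one matrix in GLSL ES, the two matrices can simply be composed; treating this composition as the exact combination of Equation 8 is an assumption, and the CSC matrix (e.g., scRGB to Rec2020 RGB) is supplied by the caller:
// Build a single matrix that applies negative-noise suppression followed by a
// color space conversion (compare Equation 8).
const float NOISE_FACTOR = 1.0 / 2048.0;

mat3 combineCscWithNoiseSuppression(mat3 cscMatrix, vec3 lumaWeights)
{
    mat3 suppress = mat3(1.0) - NOISE_FACTOR * outerProduct(vec3(1.0), lumaWeights);
    // (cscMatrix * suppress) * rgb == cscMatrix * (suppress * rgb), so noise
    // suppression runs first, then the color space conversion.
    return cscMatrix * suppress;
}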
Turning now to the drawings, FIG. 1 is a block diagram of an example HDR signal processing system 100. The HDR signal processing system 100 may be used to transform graphics, images, or video content into a format that may be transmitted to a viewing system. The HDR signal processing system 100 includes a content creation system 105, an HDR signal processor 120, and a viewing system 130.
The content creation system 105 may create various types of content, such as graphics, images, and videos. Content may be generated from applications or extracted from other sources, for example, data centers, databases, hard drives, computing systems, the internet, intranets, and other sources. The content may be sent to the HDR signal processor 120 as an HDR signal. There may be one or more processes applied to the HDR signal, and each process may be applied in various orders and combinations.
By way of the illustrated example, the received HDR signal may first be processed by the CSC process 122. The HDR signal may then be processed by an HDR noise suppression process 124. The HDR signal may then be further processed by one or more additional HDR signal processes 126. The adjusted HDR signal may be transmitted to another process or system using a communicator (not shown). In this example, the HDR signal is sent to the viewing system 130, and the viewing system 130 may be a conventional type of viewing system, display, monitor, projector, or other viewing system. In alternative aspects, the HDR signal may be transmitted to other systems, for example, over an intranet, the internet, or other transmission medium. In other aspects, the HDR signal may be transmitted to a storage medium, such as a database, a data center, a server, a system, a hard disk, a USB key, or other type of storage medium.
HDR signal processor 120 represents a logical functional description and may be implemented in various combinations. For example, HDR signal processor 120 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a general purpose processor. HDR signal processor 120 may be included in a single processor, or the functions may be performed on two or more processors. For example, some of the functions of the HDR signal processor 120 may be performed within the GPU and some of the HDR signal processing may be performed within the viewing system 130.
FIG. 2 is an illustration of an example HDR signal processing flow 200. The HDR signal processing flow 200 traces an HDR signal from the point where the content is identified to the final output to the viewing system. The processes described here illustrate one example flow; other flows may vary in which processes are included in the overall HDR signal processing flow. In general, the more processing included in the overall HDR signal processing flow, the larger the noise factor should be to compensate for the noise that may accumulate through the processing flow. In the content creation process 205, content may be created and collected. For example, an application may generate content, or the content creation process 205 may retrieve content from a source such as the internet, an intranet, or a storage medium.
In the content creation quantization process flow 206, the high precision signals (single- and double-precision floating point) from the content are quantized to the FP16 representation with rounding or truncation. Each quantized FP16 value may contain a maximum rounding error of 1/2048 relative to the ideal value. This yields a signal-to-noise ratio (SNR) greater than 2048 for each color component value of the pixel. The color of the pixel, represented by all three color component values, may have an error less than or equal to 1/2048. This color error (or noise) is well below the target value threshold for noise accumulation and is not noticeable to the user. The output of flow 206 is represented by the signal RGB1.
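As an illustration of where this rounding error comes from, a value can be round-tripped through FP16 in GLSL ES and compared with the original; this helper is a sketch for inspection only, not part of the described pipeline:
// Relative quantization error introduced by an FP16 round trip.
float fp16RelativeError(float x)
{
    // packHalf2x16 / unpackHalf2x16 convert to and from IEEE 754 half precision.
    float quantized = unpackHalf2x16(packHalf2x16(vec2(x, 0.0))).x;
    return abs(quantized - x) / max(abs(x), 1e-12); // at most about 1/2048
}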
In the CSC process flow 207, the conversion may be performed using a matrix operation, so that the quantization noise from each RGB1 color component value is spread across all of the color components of the signal RGB2. Several noise levels may be considered at this point in the process flow. One such noise level is the overall color noise relative to the color of the signal RGB2, as represented by the three color component values. Another is the noise in each RGB2 component relative to that RGB2 color component value. The color noise level may remain unchanged before and after the matrix operation, while the component noise may accumulate depending on the sign of the noise as well as on the component and matrix coefficient values. For example, the noise-to-component ratio of the color component R may be as shown in Equation 9.
Equation 9: example noise of classification scale
noiseToComponentRatio(R2) = (c1r*noise.R1 + c1g*noise.G1 + c1b*noise.B1) / R2, where R2 = c1r*R1 + c1g*G1 + c1b*B1
where R1, G1, B1, and R2 are the ideal color component values;
noise.R1, noise.G1, and noise.B1 are the quantization errors introduced when R1, G1, and B1 are quantized to FP16; and
c1r, c1g, and c1b are matrix factors for performing the CSC, for example (c1r, c1g, c1b) = (0.627403896077687, 0.329283038602093, 0.043313065649504).
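A small sketch of this ratio using the R-row coefficients above (the helper name is illustrative):
// Noise-to-component ratio for R2 after the CSC matrix row for R (Equation 9).
// rgbIdeal holds R1, G1, B1; rgbNoise holds noise.R1, noise.G1, noise.B1.
const vec3 CSC_ROW_R = vec3(0.627403896077687, 0.329283038602093, 0.043313065649504);

float noiseToComponentRatioR2(vec3 rgbIdeal, vec3 rgbNoise)
{
    float r2 = dot(CSC_ROW_R, rgbIdeal);        // ideal R2 after the conversion
    float r2Noise = dot(CSC_ROW_R, rgbNoise);   // noise contributed to R2
    // As r2 approaches 0.0 the ratio grows and R2 becomes noise dominated.
    return r2Noise / r2;
}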
After the matrix operation, the component noise-to-signal ratio may be high when the new expected color component value is close to 0.0; in some cases the component values become dominated by noise. In the HDR signal noise suppression processing flow 210, the color component values of each pixel of the HDR signal may be analyzed, adjusted, and clamped as needed, as described herein. The process flow 210 represents the present disclosure within the overall HDR signal processing flow. Its output is represented by the signal RGB2-NS (RGB2 with noise suppression).
In the PQ EOTF⁻¹ process flow 215, if 80 nits is normalized to a floating point value of 1.0f, RGB2-NS may be scaled by a factor such as 0.008 and then clamped to the range 0.0 to 1.0, with a value of 1.0 representing a 10,000 nit light signal and a value of 0.0 representing true black. Other factors may be used; a value of 1.0f conventionally represents 80 nits in, for example, the PC sRGB (standard dynamic range) standard. For example, RGB2-NS' = clamp(RGB2-NS * 0.008, 0.0, 1.0). The normalized pixel RGB2-NS' may then be further encoded by a standard PQ transfer function, for example R3 = PQ_EOTF⁻¹(R2-NS'), G3 = PQ_EOTF⁻¹(G2-NS'), and B3 = PQ_EOTF⁻¹(B2-NS'), where R2-NS', G2-NS', and B2-NS' represent the color component values of the signal RGB2-NS'. The PQ EOTF⁻¹ can be implemented using Equation 10.
Equation 10: Example of the PQ EOTF⁻¹
PQ_EOTF⁻¹(x) = ((c1 + c2*x^m1) / (1 + c3*x^m1))^m2
Wherein x represents a linear HDR optical color value;
m1=2610/(4096*4)=0.15930175;
m2=(2523*128)/4096=78.84375;
c1=3424/4096=0.8359375;
c2=(2413*32)/4096=18.8515625; and
c3=(2392*32)/4096=18.6875。
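Equation 10 together with the constants above corresponds to the standard PQ inverse EOTF (SMPTE ST 2084) and maps directly to a GLSL ES helper; this sketch assumes the input has already been normalized so that 1.0 represents 10,000 nits:
// PQ inverse EOTF (Equation 10): linear light in [0.0, 1.0] -> PQ-coded value.
const float PQ_M1 = 2610.0 / 16384.0;        // 0.1593017578125
const float PQ_M2 = 2523.0 * 128.0 / 4096.0; // 78.84375
const float PQ_C1 = 3424.0 / 4096.0;         // 0.8359375
const float PQ_C2 = 2413.0 * 32.0 / 4096.0;  // 18.8515625
const float PQ_C3 = 2392.0 * 32.0 / 4096.0;  // 18.6875

vec3 pqInverseEotf(vec3 linearLight)
{
    vec3 xm1 = pow(linearLight, vec3(PQ_M1));
    return pow((PQ_C1 + PQ_C2 * xm1) / (1.0 + PQ_C3 * xm1), vec3(PQ_M2));
}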
The inverse PQ function provides efficient encoding matched to the contrast sensitivity of the human visual system (e.g., it uses more code values for low intensity (dark) tones to minimize relative contrast error). The inverse PQ transfer function may also be very sensitive to small absolute noise, although the noise-to-signal ratio in the PQ encoded signal does not typically result in color noise that is noticeable to the user. Furthermore, if a function such as the PQ electro-optical transfer function (EOTF) is placed immediately after this process flow to reverse the PQ EOTF⁻¹, then the PQ noise is reversible. The output of the PQ EOTF⁻¹ process flow 215 is represented by the signal RGB3.
In the RGB to YUV processing flow 216, the signal RGB3 may be further converted to YCbCr color format by matrix operation, as shown in equation 11.
Equation 11: example of RGB to YUV conversion
(Y, Cb, Cr) = M_RGB→YCbCr * (R3, G3, B3)
where R3, G3, and B3 represent the color component values of the signal RGB3;
Y is the luma value;
Cb is the blue-difference chrominance component; and
Cr is the red-difference chrominance component.
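A GLSL ES sketch of this conversion, assuming the BT.2020 non-constant-luminance coefficients commonly used for PQ HDR signals (the exact matrix intended here is an assumption):
// RGB (signal RGB3, PQ coded) -> YCbCr (signal YUV1), compare Equation 11.
vec3 rgbToYcbcrBt2020(vec3 rgb)
{
    float y  = dot(rgb, vec3(0.2627, 0.6780, 0.0593)); // luma
    float cb = (rgb.b - y) / 1.8814;                   // blue-difference chroma
    float cr = (rgb.r - y) / 1.4746;                   // red-difference chroma
    return vec3(y, cb, cr);
}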
If not suppressed, the absolute value of any PQ noise in the RGB3 components of the signal may be distributed to each of the Y, Cb, and Cr components of the output signal YUV1. The signal YUV1, using, for example, standard HDR YUV444, may contain large absolute and relative component noise, which may be contributed by large noise on one or more component values of the signal RGB3.
In the YUV sub-sampling process flow 220, neighboring YCbCr pixels may be filtered and sub-sampled to generate the output signal YUV2, using, for example, standard HDR YUV420 or YUV422. YUV2 is drawn with a dashed line, indicating that at this point in the flow the signal may cross over to another system, such as a display device. If not suppressed, large absolute and relative noise in the CbCr components can be distributed and accumulated. The resulting signal YUV2 may be further processed and displayed on a viewing system.
In the YUV upsampling process flow 230, the sub-sampled YCbCr pixel groups may be filtered to reproduce the missing CbCr pixel components. If not suppressed, the PQ noise from the previous steps may be further distributed and accumulated. The output may be represented by the signal YUV3, using, for example, standard HDR YUV444.
In the YUV to RGB process flow 232, the signal YUV3 may be converted to a signal RGB4 using a conventional CSC matrix, as shown in equation 12. The signal RGB4 may be encoded using, for example, the standard BT2020 PQ.
Equation 12: example of YUV to RGB conversion
(R4, G4, B4) = M_YCbCr→RGB * (Y, Cb, Cr)
In the PQ EOTF process flow 234, the PQ-coded signal RGB4 may be transformed into a linear signal by the PQ EOTF transfer function. The output signal may be represented by the signal RGB5 and may be encoded using, for example, standard BT2020 RGB. Equation 13 shows an example PQ EOTF.
Equation 13: examples of PQ EOTF
Figure BDA0002075820360000112
Wherein x represents a nonlinear HDR electrical value;
m1=2610/(4096*4)=0.15930175;
m2=(2523*128)/4096=78.84375;
c1=3424/4096=0.8359375;
c2=(2413*32)/4096=18.8515625; and
c3=(2392*32)/4096=18.6875。
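As with Equation 10, Equation 13 with these constants corresponds to the standard PQ EOTF (SMPTE ST 2084); a GLSL ES sketch, reusing the PQ constants defined with the Equation 10 sketch above:
// PQ EOTF (Equation 13): PQ-coded value -> linear light in [0.0, 1.0],
// where 1.0 corresponds to 10,000 nits.
vec3 pqEotf(vec3 pqCoded)
{
    vec3 xp = pow(pqCoded, vec3(1.0 / PQ_M2));
    vec3 numerator = max(xp - PQ_C1, vec3(0.0));
    return pow(numerator / (PQ_C2 - PQ_C3 * xp), vec3(1.0 / PQ_M1));
}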
If HDR signal noise suppression is not included in the overall HDR signal processing flow, small errors in the input values may result in significant errors in the output values, since the PQ EOTF⁻¹ is sensitive to large component values of which the noise is a part. In addition, an unsuppressed error in one color component value may spread to other color component values through the various transformation algorithms. The resulting unsuppressed errors in linear light space can be noticeable and perceptible to the user, producing visual artifacts and reducing the visual quality of the displayed content.
Fig. 3 is an illustration of a flow chart of an example method 300 for suppressing noise in an HDR signal. The method 300 begins at step 301 and proceeds to step 305. In step 305, the received HDR signal is analyzed and a noise threshold is calculated. Proceeding to step 310, the color component values of the pixels are adjusted using the noise threshold. The adjustment may suppress detected noise and prevent noise from accumulating through the remaining process flow steps. The suppression may reduce noise or clamp color component values to zero. The method ends at step 350.
FIG. 4 is an illustration of a flow chart of an example method 400 of calculating and using a noise threshold to suppress noise in an HDR signal, building on the method of FIG. 3. The method 400 starts at step 401 and proceeds to step 405. In step 405, HDR signal processing begins with a received signal, where the signal represents visual content. Proceeding to step 410, CSC is performed on the HDR signal. Depending on the implementation, two paths are possible, one to step 420 and one to step 425.
If step 420 is selected, method 400 proceeds to step 420. A maximum color component value for the pixel is determined. Proceeding to step 422, a noise threshold may be calculated using the noise factor and the results from step 420. The method 400 proceeds to decision step 430.
If step 425 is selected from step 410, method 400 proceeds to step 425. The light emission value of the pixel is calculated. Proceeding to step 427, a noise threshold may be calculated using the noise factor and the results of step 425. The method 400 proceeds to decision step 430.
In decision step 430, the noise accumulation may be verified against the target value. If the value is exceeded, the method 400 proceeds to step 435. In step 435, the noise factor may be increased to compensate for the noise accumulation. The method 400 returns to the respective step 422 or 427 to recalculate the noise threshold. If decision step 430 results in the noise accumulation target value not being exceeded, then method 400 proceeds to step 440.
In step 440, the color component values that are less than the noise threshold are adjusted to zero. Steps 420-440 or steps 425-440 are repeated for the remaining pixels in the HDR signal. Proceeding to step 445, the HDR signal is processed by the PQ EOTF⁻¹ and any remaining HDR signal processing is completed. In alternative aspects, steps 420-440 or steps 425-440 may be combined with step 410 or step 445. The method ends at step 450.
FIG. 5 is a flow diagram of an example method 500 for suppressing noise in an HDR signal using negative noise and a pixel color component matrix, building on the method of FIG. 3. The method 500 begins at step 501 and proceeds to step 505. In step 505, a noise threshold is calculated as a negative noise value. In step 510, the noise threshold is multiplied by a matrix of the pixel color component values. In step 515, the color component values are adjusted using a matrix operation. Steps 505-515 may be repeated for the remaining pixels in the received HDR signal. The method 500 ends at step 550.
Portions of the above-described apparatus, systems, or methods may be embodied in or performed by various digital data processors or computers, which are programmed or have stored therein executable programs of sequences of software instructions to perform one or more steps of the methods. The software instructions of these programs may represent algorithms and be encoded in machine-executable form on a non-transitory digital data storage medium, such as a magnetic or optical disk, Random Access Memory (RAM), magnetic hard disk, flash memory, and/or Read Only Memory (ROM), to enable various types of digital data processors or computers to perform one or more of the above-described methods or one, more, or all of the steps of a function, system, or apparatus described herein.
Portions of the disclosed embodiments may relate to a computer storage product with a non-transitory computer-readable medium having program code thereon for performing various computer-implemented operations embodying a portion of the apparatus, devices, or steps of the methods described herein. Non-transitory as used herein refers to all computer-readable media except transitory propagating signals. Examples of non-transitory computer readable media include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical disk media such as CD-ROM disks; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and execute program code, such as ROM and RAM devices. Examples of program code include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by a computer using an interpreter.
In interpreting the disclosure, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.
Those skilled in the art to which this application relates will appreciate that other and further additions, deletions, substitutions and modifications may be made to the described embodiments. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting, since the scope of the present disclosure will be limited only by the claims. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Although any methods and materials similar or equivalent to those described herein can also be used in the practice or testing of the present disclosure, a limited number of exemplary methods and materials are described herein.
It should be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.

Claims (23)

1. A method of suppressing pixel color component noise in a High Dynamic Range (HDR) signal, comprising:
converting a color value representation of a pixel to a color value representation having a lower precision, wherein the converting creates a noise level in one or more color component values of the pixel;
comparing the color component value to a noise threshold for the pixel; and
reducing one or more of the color component values of the pixel using the comparison, wherein the noise threshold is calculated from the noise level.
2. The method according to claim 1, wherein the reducing clamps the color component value to zero when the color component value is less than the noise threshold.
3. The method of claim 1, wherein the reducing occurs during processing of the HDR signal, after Color Space Conversion (CSC) processing and before Perceptual Quantizer (PQ) inverse electro-optic transfer function (EOTF⁻¹) processing.
4. The method of claim 1, wherein spatial or temporal pixel processing is performed on the HDR signal.
5. The method of claim 1, wherein the noise threshold utilizes a noise factor and a maximum value from the color component values.
6. The method of claim 1, wherein the noise threshold utilizes a noise factor and a luminance value computed from the color component values.
7. The method of claim 6, wherein the noise factor is 1/2048.
8. The method of claim 7, further comprising:
the noise factor is increased, wherein the accumulated noise exceeds a target value.
9. The method of claim 1, wherein the noise threshold is a negative noise value, and wherein the reducing comprises multiplying the noise threshold by a matrix of the color component values.
10. The method of claim 9, wherein the reducing is combined with a scRGB to Rec2020RGB conversion process.
11. The method according to claim 9, wherein the reducing clamps the color component value to zero when the color component value is less than an absolute value of the noise threshold.
12. A computer program product having a series of operational instructions stored on a non-transitory computer readable medium, which when executed, instruct a data processing apparatus to perform operations to suppress noise in color component values of pixels for a High Dynamic Range (HDR) signal, the operations comprising:
converting a color value representation of a pixel to a color value representation having a lower precision, wherein the converting creates a noise level in one or more color component values of the pixel;
comparing the color component value to a noise threshold for the pixel; and
reducing one or more of the color component values of the pixel using the comparison, wherein the noise threshold is calculated from the noise level.
13. The computer program product of claim 12, wherein the reducing clamps the color component value to zero when the color component value is less than an absolute value of the noise threshold.
14. The computer program product of claim 12, wherein the reducing occurs during processing of the HDR signal, after Color Space Conversion (CSC) processing and before Perceptual Quantizer (PQ) inverse electro-optic transfer function (EOTF⁻¹) processing.
15. The computer program product of claim 12, wherein spatial or temporal pixel processing is performed on the HDR signal.
16. The computer program product of claim 12, wherein the noise threshold utilizes a noise factor and a maximum value from the color component values or a luminous value computed from the color component values.
17. The computer program product according to claim 12, wherein the noise threshold is a negative noise value and wherein the reducing comprises multiplying the noise threshold with a matrix of the color component values.
18. The computer program product of claim 12, further comprising:
increasing a noise factor used to calculate the noise threshold, wherein the accumulated noise exceeds a target value.
19. A noise suppression system for color component values of a pixel, comprising:
a receiver operable to receive High Dynamic Range (HDR) content as an HDR signal; and
an HDR signal processor operable to perform HDR signal processing on the HDR signal to generate an adapted HDR signal, wherein one of the HDR signal processing is noise suppression processing for one or more of the pixels of the HDR signal, and wherein the noise suppression processing compares a calculated noise threshold to the color component values of the pixels to clamp one or more of the color component values of the pixels to a zero value, and wherein the calculated noise threshold is calculated using a noise level and a noise factor of the pixels.
20. The noise suppression system of claim 19, further comprising:
a communicator operable to communicate the adjusted HDR signal.
21. The noise suppression system of claim 19, wherein the HDR signal processing comprises at least one of spatial pixel processing and temporal pixel processing.
22. The noise suppression system of claim 19, wherein the HDR signal processor is operable to analyze accumulated noise and increase the noise factor to compensate for the accumulated noise.
23. The noise suppression system of claim 19, wherein the HDR signal processor is operable to perform matrix operations and to combine one or more HDR signal processes.
CN201910453404.9A 2018-08-21 2019-05-28 Suppressing pixel shading errors in HDR video systems Pending CN110855912A (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201862720635P 2018-08-21 2018-08-21
US62/720,635 2018-08-21
US201862723625P 2018-08-28 2018-08-28
US62/723,625 2018-08-28
US16/156,890 US10681321B2 (en) 2018-08-21 2018-10-10 Suppress pixel coloration errors in HDR video systems
US16/156,890 2018-10-10

Publications (1)

Publication Number Publication Date
CN110855912A true CN110855912A (en) 2020-02-28

Family

ID=69594661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910453404.9A Pending CN110855912A (en) 2018-08-21 2019-05-28 Suppressing pixel shading errors in HDR video systems

Country Status (1)

Country Link
CN (1) CN110855912A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101466046A (en) * 2007-12-21 2009-06-24 三星Techwin株式会社 Method and apparatus for removing color noise of image signal
US20130322752A1 (en) * 2012-05-31 2013-12-05 Apple Inc. Systems and methods for chroma noise reduction
US20150078661A1 (en) * 2013-08-26 2015-03-19 Disney Enterprises, Inc. High dynamic range and tone mapping imaging techniques
CN106031143A (en) * 2014-02-21 2016-10-12 皇家飞利浦有限公司 Color space and decoder for video
US20160343115A1 (en) * 2014-03-04 2016-11-24 Canon Kabushiki Kaisha Image processing method, image processing apparatus, image capturing apparatus, image processing program and non-transitory computer-readable storage medium
US20170061576A1 (en) * 2015-08-31 2017-03-02 Apple Inc. Applying chroma suppression to image data in a scaler of an image processing pipeline
US20170118489A1 (en) * 2015-10-23 2017-04-27 Broadcom Corporation High Dynamic Range Non-Constant Luminance Video Encoding and Decoding Method and Apparatus


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination