CN115862534A - Color correction method, color correction device, electronic device and storage medium - Google Patents


Info

Publication number
CN115862534A
Authority
CN
China
Prior art keywords
reference vertex
saturation
pixel
color
value
Prior art date
Legal status
Pending
Application number
CN202211477646.XA
Other languages
Chinese (zh)
Inventor
田维军
苏晃永
Current Assignee
Beijing Eswin Computing Technology Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd filed Critical Beijing Eswin Computing Technology Co Ltd
Priority to CN202211477646.XA
Publication of CN115862534A

Landscapes

  • Image Processing (AREA)
  • Color Image Communication Systems (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The embodiment of the application provides a color correction method and device, an electronic device, and a storage medium. The color correction method includes: acquiring initial pixel data of a plurality of pixel points of an image to be displayed; determining a corrected pixel value of a reference vertex based on a display lookup table and the saturation difference between the pixel point and the reference vertex, where the reference vertex is a vertex of a color space formed based on reference colors; and performing linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point, the corrected pixel data being used for outputting the display image. The embodiment of the application is used to solve the problem in the related art that gray-scale pictures of display images shown by current display devices exhibit color cast.

Description

Color correction method, color correction device, electronic device and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a color correction method, an apparatus, an electronic device, and a storage medium.
Background
Different display devices have different color display capabilities: LCDs differ in their color filters and backlight sources, OLEDs differ in their luminescent materials, and so on, all of which affect the color range that the final display device can reproduce. As a result, the same image is often displayed with different colors on different display devices.
In order to solve this problem, standard color spaces such as sRGB, Adobe RGB, and DCI-P3 have been defined on the basis of the colorimetric system of the CIE (International Commission on Illumination), and the display device processes an input image through a color space algorithm so that the picture it displays matches the standard color space; this processing is generally called color management. One example is the color space algorithm of the 3D LUT (3-Dimension Look-Up Table, i.e., a three-dimensional display lookup table).
However, after the input image is processed by using the existing color space algorithm, the display image displayed by the display device still has the color cast problem of the gray-scale picture.
Disclosure of Invention
The present application provides a color correction method, a color correction device, an electronic device, and a storage medium, which are used to solve the color shift problem of the gray-scale image of the display image displayed by the current display device in the related art.
In a first aspect, an embodiment of the present application provides a color correction method, including:
acquiring initial pixel data of a plurality of pixel points of an image to be displayed;
determining a corrected pixel value of a reference vertex based on a display look-up table and a saturation difference between the pixel point and the reference vertex, wherein the reference vertex is a vertex of a color space formed based on reference colors;
performing linear interpolation processing on the initial pixel data based on the correction pixel value of the reference vertex to obtain correction pixel data corresponding to the pixel point;
the corrected pixel data is used to output a display image.
In one possible implementation, the determining a corrected pixel value of the reference vertex based on the display lookup table and the saturation difference between the pixel point and the reference vertex includes:
determining a first saturation of the pixel point based on the initial pixel data, determining a second saturation of the reference vertex based on an input value of the reference vertex, the input value of the reference vertex being a pixel value of the reference vertex in an original color space;
determining a saturation gap between the pixel point and the reference vertex based on the first saturation and the second saturation;
determining a corrected pixel value for the reference vertex based on the display lookup table and the saturation gap.
In one possible implementation, the determining a corrected pixel value of the reference vertex based on the display lookup table and the saturation gap includes:
inputting the input value of the reference vertex into the display lookup table to obtain the output value of the reference vertex;
determining a corrected pixel value for the reference vertex based on the input value for the reference vertex, the output value for the reference vertex, and the saturation gap.
In one possible implementation, the determining a corrected pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex, and the saturation gap includes:
weighting the saturation differences;
and determining a correction pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex and the weighted saturation difference.
In one possible implementation, the weighting the saturation gaps includes:
weighting the saturation gap based on display characteristics of the display panel.
In one possible implementation, the image to be displayed includes three color channels; the display lookup table is a three-dimensional display lookup table;
the performing linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point includes:
and performing cubic linear interpolation processing on the initial pixel data based on the correction pixel value of the reference vertex to obtain correction pixel data corresponding to the pixel point.
In a second aspect, an embodiment of the present application provides a color correction apparatus, including:
the acquisition module is used for acquiring initial pixel data of a plurality of pixel points contained in an image to be displayed;
a determining module, configured to determine a corrected pixel value of a reference vertex based on a display lookup table and a saturation difference between the pixel point and the reference vertex, where the reference vertex is a vertex of a color space formed based on a reference color; performing linear interpolation processing on the initial pixel data based on the correction pixel value of the reference vertex to obtain correction pixel data corresponding to the pixel point; the corrected pixel data is used to output a display image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the processor and the memory are connected to each other;
the memory is used for storing a computer program;
the processor is configured to perform the method as described above when the computer program is invoked.
In one possible implementation, the electronic device further includes a display panel;
the display panel is electrically connected with the processor and used for receiving the corrected pixel data of the image to be displayed output by the processor so as to output a display image.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to implement the method described above.
The beneficial technical effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the color space conversion method and the color space conversion device have the advantages that the saturation parameter is added in the color space conversion process and used for evaluating the color influence of the reference vertex on the pixel point, the correction pixel value of the reference vertex is determined specifically based on the saturation difference between the pixel point and the reference vertex, the reference vertex is the vertex of the color space formed based on the reference color, the color distance comprises the saturation difference, and the color distance between the pixel point and the reference vertex is quantitatively expressed by utilizing the saturation difference. Based on the correction pixel value of the reference vertex, linear interpolation processing is carried out on the initial pixel data, so that correction pixel data corresponding to the pixel points can be obtained, and the correction pixel data are used for outputting a display image; the color cast of the gray scale picture of the display image displayed by the display equipment can be improved, and the color accuracy of the display image displayed by the display equipment can be improved.
Moreover, different display devices can adopt different saturation differences to correct, differences of different display devices can be balanced, matching degree of correction pixel data of pixel points and the corresponding display devices can be improved, and color accuracy of display images displayed by different display devices can be improved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a diagram of input values corresponding to 8 reference vertices in an original color space of a related art 3D_LUT;
FIG. 2 is a diagram illustrating output values corresponding to 8 reference vertices in color space after tuning (color space conversion) of a related art 3D_LUT;
FIG. 3 is a diagram illustrating a non-linear correspondence between an input value and an output value of a pixel in a related art 3D_LUT;
FIG. 4 is a plot of actually measured w/r/g/b gamma curves of an OLED screen;
fig. 5 is a schematic flowchart of a color correction method according to an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart illustrating a process of determining a corrected pixel value of a reference vertex in a color correction method according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of the saturation of each reference vertex and pixel point provided in the embodiment of the present application;
FIG. 8 is a schematic flowchart illustrating a method for determining a corrected pixel value of a reference vertex in another color correction method according to an embodiment of the present disclosure;
FIG. 9 is a graph showing the relationship between the weighting coefficient weigh1 and the normalized value of the display screen loading;
FIG. 10 is a schematic diagram of a cube (i.e., a tuned color space) of corrected pixel values for 8 reference vertices;
fig. 11 is an input/output curve diagram of a pixel point obtained by an exemplary color correction method according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a color correction device according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below in conjunction with the drawings in the present application. It should be understood that the embodiments set forth below in connection with the drawings are exemplary descriptions for explaining technical solutions of the embodiments of the present application, and do not limit the technical solutions of the embodiments of the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof that may be implemented as required by the art. The term "and/or" as used herein means at least one of the items defined by the term; for example, "A and/or B" may be implemented as "A", as "B", or as "A and B".
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The existing color space algorithm of the 3D LUT (LUT is short for Look-Up Table, i.e., a display lookup table: each time a signal is input, the input is used as an address, the content stored at that address is looked up and output, and for a display this can realize color space conversion; three 1D LUTs for R, G, and B form the 3D LUT, and the input RGB three-channel color values are mapped according to the three display lookup tables of the 3D LUT to obtain the converted colors) performs color space conversion (i.e., tuning) on a certain number of reference colors and then applies the result to all colors by means of cubic linear interpolation, which can achieve good matching to a standard color space.
Linear interpolation refers to an interpolation mode in which the interpolation function is a first-order polynomial and the interpolation error at the interpolation nodes is zero. Compared with other interpolation modes, such as parabolic interpolation, linear interpolation has the advantage of simplicity. The geometric meaning of linear interpolation is that the original function is approximately represented by a straight line passing through two known points A and B. Linear interpolation can be used to approximate the original function, or to compute values that are not present in a lookup table.
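As an informal illustration of this interpolation mode (not part of the application itself), the following Python sketch interpolates between two known values; the sample numbers are arbitrary:

```python
def lerp(a, b, t):
    """Linear interpolation between two known values a and b.

    t is the normalized position in [0, 1]: t = 0 returns a, t = 1 returns b.
    """
    return a + (b - a) * t


# Approximate the value at x = 130 from known values at x = 0 and x = 256.
value_at_0, value_at_256 = 0.0, 218.0
t = 130 / 256
print(lerp(value_at_0, value_at_256, t))  # ~110.7
```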
The inventors found that, since the 3D LUT is a purely mathematical, theoretical calculation, when the color space algorithm of the 3D LUT is applied to an OLED screen for image display, a color shift problem occurs due to the display characteristics of the OLED screen, i.e., the color of a gray-scale transition picture changes. In addition, for an OLED screen, the color distribution corresponding to the same RGB (three primary optical colors) data is nonlinear, so expanding the colors by means of cubic linear interpolation inevitably causes color errors.
Specifically, as shown in FIG. 1 and FIG. 2, FIG. 1 shows the input values (original color space) of the 3D LUT, and FIG. 2 shows the output values (post-tuning color space) of the 3D LUT. For simplicity, the color space is illustrated as a cube.
Referring to FIG. 1, the reference colors are red, green, and blue, with the vector coordinates of red being (256, 0, 0), the vector coordinates of green being (0, 256, 0), and the vector coordinates of blue being (0, 0, 256). The three vectors constitute a three-dimensional space; specifically, the three reference colors red, green, and blue form the original color space (i.e., the cube shown in FIG. 1).
With continued reference to FIG. 1, the 8 vertices of the color space (i.e., the cube shown in FIG. 1) formed based on the three reference colors red, green, and blue are the reference vertices. The input values of the 8 reference vertices LUT0 to LUT7 are (0, 0, 0), (256, 0, 0), (0, 256, 0), (0, 0, 256), (256, 256, 0), (256, 0, 256), (0, 256, 256), and (256, 256, 256), respectively. The input value of the pixel point P is (R, G, B), where R, G, and B are the red, green, and blue components of the input value of the point P, respectively.
Referring to FIG. 2, the input values of the 8 reference vertices LUT0 to LUT7 are subjected to tuning (color space conversion, i.e., mapping according to the display lookup table of the 3D LUT), and the output values of the 8 reference vertices LUT0 to LUT7 are (r0, g0, b0), (r1, g1, b1), (r2, g2, b2), (r3, g3, b3), (r4, g4, b4), (r5, g5, b5), (r6, g6, b6), and (r7, g7, b7), respectively. That is, the output value of the ith reference vertex is LUTi = (ri, gi, bi), where ri, gi, and bi are the red, green, and blue components of the output value of the ith reference vertex. The output value of the point P is (R', G', B'), where R', G', and B' are the red, green, and blue components of the output value of the point P, respectively.
That is, the output values of the 8 reference vertices LUT0 to LUT7 are 8 vertices of the tuned color space.
When the input values of the 8 reference vertices LUT0 to LUT7 are known, the output values of the 8 reference vertices LUT0 to LUT7 are obtained by color space conversion (i.e., tuning).
When the P-point input value is known as (R, G, B), the output value (R ', G ', B ') of the P-point can be obtained by a trilinear interpolation method.
Table 1 shows the values of LUT0 to LUT7, i.e., the corrected pixel values of the reference vertices, for actual tuning (color space conversion) of one OLED screen.
LUT      r      g      b
LUT0     0      0      0
LUT1     218    40     33
LUT2     117    237    71
LUT3     251    246    83
LUT4     55     0      233
LUT5     227    41     237
LUT6     133    244    245
LUT7     255    246    243
TABLE 1
As shown in FIG. 3, considering a gray-scale transition picture, when the input value of the point P changes from (0, 0, 0) to (255, 255, 255), the relationship between the output value and the input value of the point P is obtained by trilinear interpolation.
A pixel is typically represented by 8 bits, so there are 256 gray levels in total (pixel values between 0 and 255), each level representing a different brightness. For a gray-scale picture, the input of the point P satisfies R = G = B (i.e., the red component equals the green component equals the blue component), so the input of the point P can be expressed as 0 to 255.
In other possible embodiments, a pixel may be represented by 10 bits; in this case, the input value of the point P varies from (0, 0, 0) to (1023, 1023, 1023), the input of the point P for a gray-scale picture again satisfies R = G = B, and the input of the point P can be expressed as 0 to 1023. Of course, other numbers of bits may be used to represent a pixel, which is not limited here.
Gray scale divides the luminance change between the brightest and the darkest into a number of levels, so as to control the screen brightness corresponding to the input signal. Each sub-pixel, with the light source behind it, can exhibit a different brightness level, and the gray levels represent the gradation of brightness from the darkest to the brightest.
For color images, each pixel of the color image contains a plurality of color components, each of which is called a Channel (Channel). The number of channels for all pixels in the image is uniform, i.e. each channel can be represented as a component image with the same content as the original image but different color. Taking an RGB format color image as an example, a complete image can be divided into three primary color images of blue (B component), green (G component) and red (R component).
The color change of each dot on the screen is actually caused by the gray scale change of the three RGB sub-pixels constituting the dot.
In FIG. 3, r, g, and b are the input/output curves plotted for the red, green, and blue components, respectively.
As can be seen from FIG. 3, as the input gray level increases, the three r/g/b curves show different variation trends, and near gray level 250 the relative positions of g and b are even exchanged.
Since the final color of the OLED screen depends on the mixture of three colors r, g, and b, when the ratio of the three colors changes and even ratio inversion occurs, the final color changes inevitably, and color shift appears visually. This phenomenon is more pronounced, especially in the case of low brightness, to which the human eye is sensitive.
In addition, FIG. 4 is a gamma curve of w/r/g/b of the OLED screen actually measured.
In FIG. 4, w_gamma, r_gamma, g_gamma, and b_gamma are respectively the white, red, green, and blue gamma curves of the OLED screen. The abscissa is the gray level input to the display, and the ordinate is the ratio of the currently output luminance to the maximum luminance of the display.
The Gamma curve is a special tone curve, and when the Gamma value is equal to 1, the curve is a straight line forming 45 degrees with the coordinate axis, which represents that the input and output densities are the same. Gamma values above 1 will cause output dimming, and Gamma values below 1 will cause output brightening.
As can be seen from FIG. 4, the gamma curves of r/g/b are not consistent, i.e., the trends with which the r/g/b data contribute to the final brightness are not consistent and are nonlinear. Therefore, if r/g/b are not differentiated in the 3D LUT solution and linear interpolation is used throughout, the final color will deviate from the ideal color.
The present application provides a color correction method, an apparatus, an electronic device, and a computer-readable storage medium, which are intended to solve the above technical problems of the related art.
The technical solutions of the embodiments of the present application and the technical effects produced by the technical solutions of the present application will be described below through descriptions of several exemplary embodiments. It should be noted that the following embodiments may be referred to, referred to or combined with each other, and the description of the same terms, similar features, similar implementation steps and the like in different embodiments is not repeated.
An embodiment of the present application provides a color correction method, as shown in fig. 5, which may be applied to a terminal or a server, and the method includes:
s101: acquiring initial pixel data of a plurality of pixel points of an image to be displayed.
Specifically, the terminal or the server for color correction downloads initial pixel data of a plurality of pixel points of the image to be displayed from the image acquisition device, the image storage device or the cloud storage.
The image acquisition device may include a camera, a video camera, a scanner, or other devices with a photographing function. The image storage device may include a hard disk, a usb disk, or the like.
Cloud storage is a model of online storage over the internet (Cloud storage) that stores data on multiple virtual servers, usually hosted by third parties, rather than on dedicated servers. The data center operator prepares the storage virtualized resources at the back end according to the needs of the customer, and provides the resources in a storage resource pool (storage pool), so that the customer can use the storage resource pool to store the files or objects by himself. In practice, these resources may be distributed over numerous server hosts. The cloud storage service is accessed through a Web services Application Program Interface (API) or through a Web-enabled user interface.
Each image has one or more color channels, and the default number of color channels in an image depends on its color mode, i.e., the color mode of an image will determine the number of color channels. Each color channel stores information of the color elements in the image. The colors in all color channels are mixed in superposition to produce the colors of the pixels in the image.
In this embodiment, the image to be displayed may be an RGB image, and the RGB image has 3 color channels, which are a red channel, a green channel, and a blue channel, respectively. That is, each pixel point includes three primary colors of blue (B component), green (G component), and red (R component). The initial pixel data of each pixel point may be represented as (R, G, B).
In this embodiment, a pixel point can be represented by 8 bits, so that 256 gray scales (pixel values between 0 and 255) are provided, and each scale represents different brightness. That is, the pixel value of each channel in the initial pixel data of each pixel point can be represented by 0 to 255.
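Step S101 can be made concrete with a short sketch. The snippet below is only an illustration, assuming the image to be displayed is read from a file using the Pillow and NumPy libraries (neither library nor the file name is prescribed by the application):

```python
import numpy as np
from PIL import Image

# Read the image to be displayed and expose its initial pixel data as an
# H x W x 3 array of 8-bit R, G, B values (0 to 255 per channel).
image = Image.open("image_to_display.png").convert("RGB")  # hypothetical file name
initial_pixel_data = np.asarray(image, dtype=np.uint8)

# Per-channel views of the initial pixel data of all pixel points.
r = initial_pixel_data[..., 0]
g = initial_pixel_data[..., 1]
b = initial_pixel_data[..., 2]
```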
S102: and determining a corrected pixel value of the reference vertex based on the display lookup table and the saturation difference between the pixel point and the reference vertex.
Wherein the reference vertex is a vertex of a color space formed based on the reference color.
Saturation of color (saturation) refers to the degree of vividness of a color. The saturation of a color is a measure of its purity. A highly saturated color will contain a very narrow set of wavelengths. Therefore, the color cast of the reference vertex can be adjusted based on the saturation difference between the pixel point and the reference vertex, so that the problem of color cast of a displayed picture due to the corrected pixel data obtained by applying the corrected pixel value of the reference vertex is avoided.
S103: and performing linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point.
The corrected pixel data is used to output a display image.
As will be understood by those skilled in the art, the "terminal" used herein may be a Mobile phone, a tablet computer, a PDA (Personal Digital Assistant), an MID (Mobile Internet Device), etc.; a "server" may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
The color space conversion method and the color space conversion device have the advantages that the saturation parameter is added in the color space conversion process and used for evaluating the color influence of the reference vertex on the pixel point, the correction pixel value of the reference vertex is determined specifically based on the saturation difference between the pixel point and the reference vertex, the reference vertex is the vertex of the color space formed based on the reference color, the color distance comprises the saturation difference, and the color distance between the pixel point and the reference vertex is quantitatively expressed by utilizing the saturation difference. Based on the correction pixel value of the reference vertex, linear interpolation processing is carried out on the initial pixel data, so that correction pixel data corresponding to the pixel point can be obtained, and the correction pixel data are used for outputting a display image; the color cast of the gray scale picture of the display image displayed by the display equipment can be improved, and the color accuracy of the display image displayed by the display equipment can be improved.
Moreover, different display devices can adopt different saturation differences to correct, differences of different display devices can be balanced, matching degree of correction pixel data of pixel points and the corresponding display devices can be improved, and color accuracy of display images displayed by different display devices can be improved.
In one possible implementation manner provided in the embodiment of the present application, as shown in fig. 6, determining a corrected pixel value of a reference vertex based on a display lookup table and a saturation difference between a pixel point and the reference vertex includes:
s201: a first saturation of the pixel point is determined based on the initial pixel data, and a second saturation of the reference vertex is determined based on the input value of the reference vertex.
Wherein the input value of the reference vertex is a pixel value of the reference vertex in an original color space.
The formula for saturation is: saturation = 1 - min(r, g, b)/max(r, g, b), where saturation is the saturation, r, g, and b are the red, green, and blue components of the pixel point, respectively, min(r, g, b) is the minimum of the red, green, and blue components, and max(r, g, b) is the maximum of the red, green, and blue components.
For the 8 reference vertices LUT0 to LUT7, the input values of LUT0 to LUT7 are as shown in FIG. 1; specifically, the input values of LUT0 to LUT7 are (0, 0, 0), (256, 0, 0), (0, 256, 0), (0, 0, 256), (256, 256, 0), (256, 0, 256), (0, 256, 256), and (256, 256, 256), respectively.
At this time, the saturation of LUT0 is S0 = 0, the saturation of LUT7 is S7 = 0, and the saturations of the other points (LUT1 to LUT6) are S1 = S2 = S3 = S4 = S5 = S6 = 1; see FIG. 7 for details.
That is, the second saturation of the reference vertices includes the saturations of the plurality of reference vertices, namely S0, S1, S2, S3, S4, S5, S6, and S7, where S0 = S7 = 0 and S1 = S2 = S3 = S4 = S5 = S6 = 1.
The input values of the 8 reference vertices LUT0 to LUT7 shown in FIG. 1 mean that color space conversion (i.e., tuning) is performed based on red (256, 0, 0), green (0, 256, 0), and blue (0, 0, 256), and the conversion results of red (256, 0, 0), green (0, 256, 0), and blue (0, 0, 256) are then applied to all colors by means of cubic linear interpolation.
It should be noted that, in other possible embodiments, the input values of the 8 reference vertices LUT0 to LUT7 may also take other values. For example, color space conversion may be performed based on red (245, 0, 0), green (0, 250, 0), and blue (0, 0, 254); in this case, the input values of the 8 reference vertices LUT0 to LUT7 are (0, 0, 0), (245, 0, 0), (0, 250, 0), (0, 0, 254), (245, 250, 0), (245, 0, 254), (0, 250, 254), and (245, 250, 254), respectively.
For the pixel point corresponding to the initial pixel data, taking the point P as an example, the input value of the point P is (R, G, B), and the saturation of the point P is Sp = 1 - min(R, G, B)/max(R, G, B).
That is, the input value of the initial pixel data is (R, G, B), and the first saturation of the pixel point is Sp = 1 - min(R, G, B)/max(R, G, B).
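A sketch of these saturation calculations in Python (the function name is illustrative; treating the saturation of pure black, where max(r, g, b) = 0, as 0 is an assumption made here because the application does not spell out that case):

```python
def saturation(r, g, b):
    """saturation = 1 - min(r, g, b) / max(r, g, b); pure black is treated as 0."""
    m = max(r, g, b)
    return 0.0 if m == 0 else 1.0 - min(r, g, b) / m


# Second saturation of the 8 reference vertices LUT0 to LUT7 (inputs as in FIG. 1).
reference_inputs = [
    (0, 0, 0), (256, 0, 0), (0, 256, 0), (0, 0, 256),
    (256, 256, 0), (256, 0, 256), (0, 256, 256), (256, 256, 256),
]
S = [saturation(*v) for v in reference_inputs]  # S0..S7 = 0, 1, 1, 1, 1, 1, 1, 0

# First saturation of a pixel point P with input value (R, G, B).
R, G, B = 120, 120, 120       # a gray-scale pixel, chosen only as an example
Sp = saturation(R, G, B)      # 0.0 for any gray input (R = G = B)
```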
S202: and determining the saturation difference between the pixel point and the reference vertex based on the first saturation and the second saturation.
In this embodiment, the saturation difference may include the difference between the first saturation of the pixel point and the second saturation of each reference vertex. Specifically, the saturation differences may include DeltaS0 to DeltaS7, where,
DeltaS0=(Sp-S0);
DeltaS1=(Sp-S1);
DeltaS2=(Sp-S2);
DeltaS3=(Sp-S3);
DeltaS4=(Sp-S4);
DeltaS5=(Sp-S5);
DeltaS6=(Sp-S6);
DeltaS7=(Sp-S7).
that is, the saturation difference DeltaSi = (Sp-Si) for any reference vertex, where Si is the saturation of the ith reference vertex.
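Continuing in the same spirit, the saturation gaps are simply element-wise differences; the values below reuse the gray-pixel example from the previous sketch:

```python
# First saturation of the pixel point P and second saturations S0..S7 of the
# reference vertices (values taken from the example above).
Sp = 0.0
S = [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0]

# Saturation gap DeltaSi = Sp - Si for each reference vertex.
delta_S = [Sp - Si for Si in S]  # [0.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, 0.0]
```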
S203: based on the display lookup table and the saturation differences, a corrected pixel value for the reference vertex is determined.
In one possible implementation, determining the corrected pixel value of the reference vertex based on the display lookup table and the saturation gap may include: inputting the input value of the reference vertex into a display lookup table to obtain the output value of the reference vertex; and determining a correction pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex and the saturation difference.
In one possible implementation, determining the corrected pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex, and the saturation gap may include: weighting the saturation difference; and determining a correction pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex and the weighted saturation difference. By weighting the saturation difference, the influence of each reference vertex on the pixel point can be better balanced.
For ease of understanding, the following description is made in conjunction with FIG. 8:
s301: and inputting the input value of the reference vertex into a display lookup table to obtain the output value of the reference vertex.
The input value of the reference vertex is the pixel value of the reference vertex in the original color space, and the output value of the reference vertex represents the pixel value of the reference vertex in the converted color space (i.e., in the color space formed by the reference color). The display lookup table includes a correspondence between input values of the reference vertices and output values of the reference vertices.
In this embodiment, the display lookup table may be a three-dimensional display lookup table, the input values of the 8 reference vertices may be as shown in fig. 1, and the output values corresponding to the 8 reference vertices through tuning may be as shown in fig. 2.
S302: the saturation differences are weighted.
Specifically, the weighted saturation differences may include DeltaS0' to DeltaS7', where,
DeltaS0’=(Sp-S0)*weigh1;
DeltaS1’=(Sp-S1)*weigh2;
DeltaS2’=(Sp-S2)*weigh3;
DeltaS3’=(Sp-S3)*weigh4;
DeltaS4’=(Sp-S4)*weigh5;
DeltaS5’=(Sp-S5)*weigh6;
DeltaS6’=(Sp-S6)*weigh7;
DeltaS7’=(Sp-S7)*weigh8.
That is, for any reference vertex, the weighted saturation difference is DeltaSi’ = (Sp - Si) × weighi.
Here, weighi is the weighting coefficient of the ith reference vertex, and weigh1 to weigh8 are the weighting coefficients of the 8 reference vertices LUT0 to LUT7, respectively.
In practical applications, weigh1 to weigh8 can be preset according to actual requirements. If the display screen to which the color correction method is applied is an LCD screen, weigh1 to weigh8 can all be equal to 1. If the display panel to which the color correction method is applied is an OLED screen, weigh1 to weigh8 may be set based on the display characteristics (e.g., voltage drop characteristics) of the OLED screen.
In this embodiment, weighting the saturation differences may include: weighting the saturation differences based on the display characteristics of the display panel. That is, different weighting coefficients weigh1 to weigh8 can be set based on the display characteristics of the display panel to better adapt to different display screens.
Further, weighting the saturation differences based on the display characteristics of the display panel may include: weighting the saturation differences based on the voltage drop characteristics of the display panel. Since sub-pixels of different colors have different influences on the voltage drop of the display screen, the loading (load) of the display screen differs when different pictures are displayed; therefore, the setting of weigh1 to weigh8 can take the voltage drop characteristic of the display screen into account to further finely correct the color deviation.
In practical applications, the weight curves of weigh1 to weigh8 can be determined from measurements of the display color of the current picture. Specifically, when a plurality of sample pictures are displayed (the loading of the display screen differs for each sample picture), the weighting coefficients weigh1 to weigh8 corresponding to each sample picture can be adjusted empirically. From the weighting coefficients weigh (which may be weigh1 to weigh8) of the plurality of sample pictures, a relationship graph between each weighting coefficient weigh and the normalized value of the display screen loading is obtained.
For example, from the weighting coefficient weigh1 of each sample picture, a graph of the relationship between the weighting coefficient weigh1 and the normalized value of the display screen loading is obtained; from the weighting coefficient weigh2 of each sample picture, a graph of the relationship between the weighting coefficient weigh2 and the normalized value of the display screen loading is obtained.
As shown in FIG. 9, FIG. 9 is a graph of the relationship between the weighting coefficient weigh1 and the normalized value of the display screen loading, where the abscissa is the normalized value of the loading (load) of the display screen and the ordinate is the value of the weighting coefficient weigh1.
When the loading is maximum (i.e., loading = 1), weigh1 is maximum; when the loading is minimum (i.e., loading = 0), weigh1 is minimum. The current value of weigh1 is obtained from the current normalized loading value of the display screen. As the displayed picture keeps changing, the normalized loading value of the display screen changes, and the value of weigh1 changes accordingly.
In addition, the weighting coefficients weigh2 to weigh8 are determined similarly to the weighting coefficient weigh1, which is not repeated here.
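The application specifies the weight-versus-loading relationship only as a measured curve (FIG. 9). The sketch below is purely illustrative: it assumes a straight-line ramp between a hypothetical minimum and maximum weight and applies the resulting coefficient to one saturation gap:

```python
def weigh_from_loading(loading, weigh_min=0.8, weigh_max=1.0):
    """Illustrative weighting coefficient for one reference vertex.

    loading is the normalized display-screen load in [0, 1]. The real curve is
    measured per panel (FIG. 9); the linear ramp and the weigh_min/weigh_max
    values are assumptions made only for this sketch.
    """
    loading = min(max(loading, 0.0), 1.0)
    return weigh_min + (weigh_max - weigh_min) * loading


# DeltaSi' = DeltaSi * weighi for one vertex, with a hypothetical frame loading.
delta_s = -1.0                 # Sp - Si for a gray pixel and a fully saturated vertex
current_loading = 0.6
delta_s_weighted = delta_s * weigh_from_loading(current_loading)
```

For an LCD screen, as noted above, all eight coefficients could simply be set to 1, in which case the weighted gaps equal the raw gaps.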
S303: and determining a correction pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex and the weighted saturation difference.
In this embodiment, the corrected pixel values of the reference vertices include LUT0_S to LUT7_S, where,
LUT0_S=(0,0,0)*DeltaS0’+(r0,g0,b0)*(1-DeltaS0’);
LUT1_S=(256,0,0)*DeltaS1’+(r1,g1,b1)*(1-DeltaS1’);
LUT2_S=(0,256,0)*DeltaS2’+(r2,g2,b2)*(1-DeltaS2’);
LUT3_S=(0,0,256)*DeltaS3’+(r3,g3,b3)*(1-DeltaS3’);
LUT4_S=(256,256,0)*DeltaS4’+(r4,g4,b4)*(1-DeltaS4’);
LUT5_S=(256,0,256)*DeltaS5’+(r5,g5,b5)*(1-DeltaS5’);
LUT6_S=(0,256,256)*DeltaS6’+(r6,g6,b6)*(1-DeltaS6’);
LUT7_S=(256,256,256)*DeltaS7’+(r7,g7,b7)*(1-DeltaS7’).
That is, the corrected pixel value of any reference vertex is LUTi_S = (input value of the ith reference vertex) × DeltaSi’ + (ri, gi, bi) × (1 - DeltaSi’), where (ri, gi, bi) is the output value of the ith reference vertex. Specifically, the input value of each reference vertex is shown in FIG. 1, and the output value of each reference vertex is shown in FIG. 2.
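A minimal sketch of this blending step for a single vertex; the vertex input and the tuned output come from FIG. 1 and Table 1, while the weighted saturation gap of 0.25 is an arbitrary value chosen only to make the arithmetic concrete:

```python
def corrected_vertex(lut_input, lut_output, delta_s_weighted):
    """LUTi_S = (input of vertex i) * DeltaSi' + (ri, gi, bi) * (1 - DeltaSi'), per channel."""
    d = delta_s_weighted
    return tuple(i_ch * d + o_ch * (1.0 - d) for i_ch, o_ch in zip(lut_input, lut_output))


# Example: LUT1 with input (256, 0, 0) and tuned output (218, 40, 33) from Table 1.
lut1_s = corrected_vertex((256, 0, 0), (218, 40, 33), 0.25)
print(lut1_s)  # (227.5, 30.0, 24.75)
```

Applied to all eight vertices, this yields LUT0_S to LUT7_S, which are then used in the interpolation described below.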
It should be noted that there is no fixed sequence between step S301 and step S302, and step S301 may be executed first and then step S302 is executed, or step S302 may be executed first and then step S301 is executed, which is not limited herein.
With weigh1 to weigh8 set, when the input value of the point P changes from (0, 0, 0) to (255, 255, 255) (i.e., R, G, and B are equal and Sp is 0), the actually calculated LUT0_S to LUT7_S can refer to Table 2:
TABLE 2 (provided as an image in the original publication; its values are not reproduced here)
The foregoing describes how to determine the corrected pixel value of the reference vertex, and the following describes how to apply the corrected pixel value of the reference vertex to all the pixel points to achieve color correction of each pixel point of the image to be displayed.
In this embodiment, the to-be-displayed image may include three color channels, the display lookup table may be a three-dimensional display lookup table, and linear interpolation processing is performed on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point, including: and performing cubic linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point.
As shown in FIG. 10, FIG. 10 shows the cube (i.e., the tuned color space) formed by the corrected pixel values of the 8 reference vertices. A left-handed coordinate system is assumed; the x-axis, y-axis, and z-axis are shown in FIG. 10.
In fig. 10, the corrected pixel values of the 8 reference vertices are C000, C100, C010, C001, C101, C011, C110, and C111, respectively.
In the x direction, the four edges parallel to the x-axis are interpolated using the x coordinate value of the initial pixel data, giving the values C00, C01, C10, and C11 of one point on each edge. Specifically, C000 and C100 are interpolated to obtain C00; C001 and C101 are interpolated to obtain C01; C010 and C110 are interpolated to obtain C10; and C011 and C111 are interpolated to obtain C11.
Then, in the y direction, the four points C00, C01, C10, and C11 are interpolated using the y coordinate value of the initial pixel data to obtain two line segments and the values C0 and C1 of the two intermediate points.
Finally, in the z direction, the z coordinate value of the initial pixel data is used to interpolate the two points C0 and C1, so as to obtain the final value of the point C (i.e. the corrected pixel data corresponding to the pixel point).
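A sketch of this cubic (trilinear) interpolation, assuming the corrected vertex values are keyed as Cxyz with x, y, z in {0, 1} and that the pixel's coordinates have already been normalized to [0, 1] within the cube; the corner values below are hypothetical:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triples a and b at position t."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))


def trilinear(corners, x, y, z):
    """Trilinear interpolation inside the cube of corrected vertex values.

    corners maps 'Cxyz' keys (x, y, z in {0, 1}) to corrected RGB triples;
    x, y, z are the pixel's normalized coordinates in [0, 1].
    """
    # Interpolate along x on the four x-parallel edges.
    c00 = lerp(corners["C000"], corners["C100"], x)
    c01 = lerp(corners["C001"], corners["C101"], x)
    c10 = lerp(corners["C010"], corners["C110"], x)
    c11 = lerp(corners["C011"], corners["C111"], x)
    # Then along y, then along z.
    c0 = lerp(c00, c10, y)
    c1 = lerp(c01, c11, y)
    return lerp(c0, c1, z)


# Hypothetical corrected vertex values (LUT0_S ... LUT7_S) and a mid-gray pixel.
corners = {
    "C000": (0, 0, 0),       "C100": (227, 30, 25),  "C010": (88, 242, 53),
    "C001": (41, 0, 239),    "C110": (252, 249, 62), "C101": (234, 31, 242),
    "C011": (100, 247, 248), "C111": (256, 256, 256),
}
print(trilinear(corners, 0.5, 0.5, 0.5))
```

Here x, y, and z would be the pixel's R, G, and B input values normalized by the cube edge length (256 in FIG. 1).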
Considering a gray-scale transition picture, when the input value of the point P changes from (0, 0, 0) to (255, 255, 255), the relationship between the output value and the input value of the point P obtained by cubic linear interpolation using the corrected pixel values LUT0_S to LUT7_S of the reference vertices is as shown in FIG. 11.
The input value of the P point represents the initial pixel data of the pixel point, and the output value of the P point represents the corrected pixel data corresponding to the pixel point.
In FIG. 11, r, g, and b are the input/output curves plotted for the red, green, and blue components, respectively.
Referring to fig. 11, it can be seen that the three r/g/b curves all exhibit a linear variation trend, and the relative ratio is fixed, so that the color cast phenomenon of the display screen finally exhibited by the OLED screen can be improved.
The color space conversion method and the color space conversion device have the advantages that the saturation parameter is added in the color space conversion process and used for evaluating the color influence of the reference vertex on the pixel point, the correction pixel value of the reference vertex is determined specifically based on the saturation difference between the pixel point and the reference vertex, the reference vertex is the vertex of the color space formed based on the reference color, the color distance comprises the saturation difference, and the color distance between the pixel point and the reference vertex is quantitatively expressed by utilizing the saturation difference. Based on the correction pixel value of the reference vertex, linear interpolation processing is carried out on the initial pixel data, so that correction pixel data corresponding to the pixel points can be obtained, and the correction pixel data are used for outputting a display image; the color cast of the gray scale picture of the display image displayed by the display equipment can be improved, and the color accuracy of the display image displayed by the display equipment can be improved.
Moreover, different display devices can adopt different saturation differences to correct, differences of different display devices can be balanced, matching degree of correction pixel data of pixel points and the corresponding display devices can be improved, and color accuracy of display images displayed by different display devices can be improved.
Based on the same inventive concept, an embodiment of the present application provides a color correction apparatus, as shown in fig. 12, including: an acquisition module 401 and a determination module 402.
The acquiring module 401 is configured to acquire initial pixel data of a plurality of pixel points included in an image to be displayed;
a determining module 402, configured to determine a corrected pixel value of a reference vertex based on the display lookup table and a saturation difference between the pixel point and the reference vertex, where the reference vertex is a vertex of a color space formed based on a reference color; performing linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point; the corrected pixel data is used to output a display image.
In one possible implementation, the determining module 402, when determining the corrected pixel value of the reference vertex based on the display look-up table and the saturation difference between the pixel point and the reference vertex, is configured to:
determining a first saturation of a pixel point based on initial pixel data, and determining a second saturation of a reference vertex based on an input value of the reference vertex, wherein the input value of the reference vertex is a pixel value of the reference vertex in an original color space;
determining a saturation difference between the pixel point and the reference vertex based on the first saturation and the second saturation;
based on the display lookup table and the saturation differences, a corrected pixel value for the reference vertex is determined.
In one possible implementation, the determining module 402, when determining the corrected pixel values of the reference vertices based on the display lookup table and the saturation differences, is configured to:
inputting the input value of the reference vertex into a display lookup table to obtain the output value of the reference vertex;
and determining a correction pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex and the saturation difference.
In one possible implementation, the determining module 402 is configured to, when determining the corrected pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex, and the saturation gap:
weighting the saturation difference;
and determining the correction pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex and the weighted saturation difference.
In one possible implementation, when the determining module 402 weights the saturation gap, it is configured to:
the saturation differences are weighted based on the display characteristics of the display panel.
In one possible implementation, the image to be displayed contains three color channels; the display lookup table is a three-dimensional display lookup table;
the determining module 402 performs linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex, and is configured to, when corrected pixel data corresponding to the pixel point is obtained:
and performing cubic linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point.
The apparatus of the embodiment of the present application may execute the method provided by the embodiment of the present application, and the implementation principle is similar, the actions executed by the modules in the apparatus of the embodiments of the present application correspond to the steps in the method of the embodiments of the present application, and for the detailed functional description of the modules of the apparatus, reference may be specifically made to the description in the corresponding method shown in the foregoing, and details are not repeated here.
The color space conversion method and the color space conversion device have the advantages that the saturation parameter is added in the color space conversion process and used for evaluating the color influence of the reference vertex on the pixel point, the correction pixel value of the reference vertex is determined specifically based on the saturation difference between the pixel point and the reference vertex, the reference vertex is the vertex of the color space formed based on the reference color, the color distance comprises the saturation difference, and the color distance between the pixel point and the reference vertex is quantitatively expressed by utilizing the saturation difference. Based on the correction pixel value of the reference vertex, linear interpolation processing is carried out on the initial pixel data, so that correction pixel data corresponding to the pixel points can be obtained, and the correction pixel data are used for outputting a display image; the color cast of the gray scale picture of the display image displayed by the display equipment can be improved, and the color accuracy of the display image displayed by the display equipment can be improved.
Moreover, different display devices can adopt different saturation differences to correct, differences of different display devices can be balanced, matching degree of correction pixel data of pixel points and the corresponding display devices can be improved, and color accuracy of display images displayed by different display devices can be improved.
Based on the same inventive concept, embodiments of the present application provide an electronic device, including a processor and a memory, where the processor and the memory are connected to each other, the memory is used for storing a computer program, and the processor is configured to, when invoking the computer program, perform the steps of the method as described above, and compared with the related art, implement: the color space conversion method comprises the steps of adding a saturation parameter in the color space conversion process for evaluating the color influence of a reference vertex on a pixel point, specifically determining a correction pixel value of the reference vertex based on a saturation difference between the pixel point and the reference vertex, wherein the reference vertex is a vertex of a color space formed based on reference colors, the color distance comprises the saturation difference, and quantizing and expressing the color distance between the pixel point and the reference vertex by using the saturation difference. Based on the correction pixel value of the reference vertex, linear interpolation processing is carried out on the initial pixel data, so that correction pixel data corresponding to the pixel point can be obtained, and the correction pixel data are used for outputting a display image; the color cast of the gray scale picture of the display image displayed by the display equipment can be improved, and the color accuracy of the display image displayed by the display equipment can be improved. Moreover, different display devices can adopt different saturation differences to correct, differences of different display devices can be balanced, matching degree of correction pixel data of pixel points and the corresponding display devices can be improved, and color accuracy of display images displayed by different display devices can be improved.
In an alternative embodiment, an electronic device is provided. As shown in FIG. 13, the electronic device 50 includes a processor 501 and a memory 503, where the processor 501 is coupled to the memory 503, for example via a bus 502.
In one possible implementation, the electronic device 50 may further include a display panel electrically connected to the processor 501, and configured to receive the corrected pixel data of the image to be displayed output by the processor 501, so as to output the display image.
Optionally, the electronic device 50 may further include a transceiver 504, and the transceiver 504 may be used for data interaction between the electronic device and other electronic devices, such as transmission of data and/or reception of data. It should be noted that the transceiver 504 is not limited to one in practical application, and the structure of the electronic device 50 is not limited to the embodiment of the present application.
The Processor 501 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 501 may also be a combination of implementing computing functionality, e.g., comprising one or more microprocessors, a combination of DSPs and microprocessors, and the like.
Bus 502 may include a path that transfers information between the above components. The bus 502 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 502 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 12, but this is not intended to represent only one bus or type of bus.
The Memory 503 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), a magnetic Disc storage medium, other magnetic storage devices, or any other medium that can be used to carry or store a computer program and that can be Read by a computer, without limitation.
The memory 503 is used for storing computer programs for executing the embodiments of the present application, and is controlled by the processor 501 to execute the computer programs. The processor 501 is adapted to execute a computer program stored in the memory 503 to implement the steps shown in the aforementioned method embodiments.
Wherein, the electronic device includes but is not limited to: mobile terminals such as mobile phones, notebook computers, PADs, etc. and fixed terminals such as digital TVs, desktop computers, etc.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps and corresponding contents of the foregoing method embodiments can be implemented.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device implements the following:
acquiring initial pixel data of a plurality of pixel points of an image to be displayed;
determining a corrected pixel value of a reference vertex based on the display lookup table and the saturation difference between the pixel point and the reference vertex, wherein the reference vertex is a vertex of a color space formed based on a reference color;
performing linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point;
the corrected pixel data is used to output a display image.
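As an illustrative aid only, and not as part of the claimed embodiments, the flow described above can be sketched in Python as follows. The helper functions saturation, corrected_vertex_value, and interpolate are hypothetical names supplied by the caller, since the embodiments do not prescribe concrete function signatures.

```python
# A minimal sketch of the described flow, for illustration only.
# All helper functions are assumptions passed in by the caller.

def correct_image(pixels, reference_vertices, saturation,
                  corrected_vertex_value, interpolate):
    """Produce corrected pixel data for each pixel point of the image to be displayed."""
    corrected = []
    for pixel in pixels:                        # initial pixel data of one pixel point
        vertex_values = []
        for vertex in reference_vertices:       # vertices of the reference color space
            gap = saturation(pixel) - saturation(vertex)            # saturation gap
            vertex_values.append(corrected_vertex_value(vertex, gap))
        # linear interpolation of the initial pixel data against the corrected
        # reference-vertex values yields the corrected pixel data
        corrected.append(interpolate(pixel, reference_vertices, vertex_values))
    return corrected
```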
The terms "first," "second," "third," "fourth," "1," "2," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than illustrated or otherwise described herein.
It should be understood that, although each operation step is indicated by an arrow in the flowchart of the embodiment of the present application, the implementation order of the steps is not limited to the order indicated by the arrow. In some implementation scenarios of the embodiments of the present application, the implementation steps in the flowcharts may be performed in other sequences as needed, unless explicitly stated otherwise herein. In addition, some or all of the steps in each flowchart may include multiple sub-steps or multiple stages based on an actual implementation scenario. Some or all of these sub-steps or stages may be performed at the same time, or each of these sub-steps or stages may be performed at different times, respectively. In a scenario where execution times are different, an execution sequence of the sub-steps or the phases may be flexibly configured according to requirements, which is not limited in the embodiment of the present application.
The foregoing describes only optional implementations of some of the implementation scenarios of the present application. It should be noted that other similar implementation means adopted by those skilled in the art without departing from the technical idea of the present application also fall within the protection scope of the embodiments of the present application.

Claims (10)

1. A color correction method, comprising:
acquiring initial pixel data of a plurality of pixel points of an image to be displayed;
determining a corrected pixel value of a reference vertex based on a display lookup table and a saturation difference between the pixel point and the reference vertex, wherein the reference vertex is a vertex of a color space formed based on a reference color;
performing linear interpolation processing on the initial pixel data based on the correction pixel value of the reference vertex to obtain correction pixel data corresponding to the pixel point;
the corrected pixel data is used to output a display image.
2. The method of claim 1, wherein determining the corrected pixel value for the reference vertex based on the display lookup table and the saturation difference between the pixel point and the reference vertex comprises:
determining a first saturation of the pixel point based on the initial pixel data, determining a second saturation of the reference vertex based on an input value of the reference vertex, the input value of the reference vertex being a pixel value of the reference vertex in an original color space;
determining a saturation gap between the pixel point and the reference vertex based on the first saturation and the second saturation;
determining a corrected pixel value for the reference vertex based on the display lookup table and the saturation gap.
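The claims do not fix a particular saturation formula. As an illustrative assumption only, an HSV-style definition (chroma divided by the maximum channel) could be used to obtain the first saturation, the second saturation, and their gap:

```python
def saturation(rgb):
    """HSV-style saturation in [0, 1] for an (r, g, b) triple (assumed 8-bit)."""
    mx, mn = max(rgb), min(rgb)
    return 0.0 if mx == 0 else (mx - mn) / mx

pixel_rgb  = (200, 120, 40)   # first saturation: from the initial pixel data
vertex_rgb = (255, 0, 0)      # second saturation: from the reference vertex input value
gap = saturation(pixel_rgb) - saturation(vertex_rgb)   # saturation gap
```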
3. The method of claim 2, wherein determining the corrected pixel value for the reference vertex based on the display lookup table and the saturation gap comprises:
inputting the input value of the reference vertex into the display lookup table to obtain the output value of the reference vertex;
determining a corrected pixel value for the reference vertex based on the input value for the reference vertex, the output value for the reference vertex, and the saturation gap.
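For the first step of claim 3, the display lookup table can be regarded as a mapping from reference-vertex input values to output values. The dictionary below is a purely hypothetical stand-in for one entry of such a table; the numeric values are illustrative only.

```python
# Hypothetical single entry of a display lookup table, for illustration only.
lut = {(255, 0, 0): (246, 10, 8)}

input_rgb = (255, 0, 0)        # input value of the reference vertex (original color space)
output_rgb = lut[input_rgb]    # output value of the reference vertex
```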
4. The method of claim 3, wherein determining the corrected pixel value for the reference vertex based on the input value for the reference vertex, the output value for the reference vertex, and the saturation gap comprises:
weighting the saturation gap;
and determining a correction pixel value of the reference vertex based on the input value of the reference vertex, the output value of the reference vertex and the weighted saturation gap.
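As a sketch of claims 3 and 4 only, one plausible reading is that the weighted saturation gap acts as a blend factor that pulls the lookup-table output value of the reference vertex back toward its input value; both the blend form and the weight below are assumptions, not fixed by the claims. The weight itself could, for example, be derived from display characteristics of the display panel, as in claim 5.

```python
def corrected_vertex_value(input_rgb, output_rgb, gap, weight=0.5):
    """Blend the LUT output value toward the input value as the weighted gap grows.

    input_rgb:  pixel value of the reference vertex in the original color space
    output_rgb: value obtained by feeding input_rgb into the display lookup table
    gap:        saturation gap between the pixel point and the reference vertex
    weight:     illustrative weighting factor (assumption)
    """
    w = max(0.0, min(1.0, abs(gap) * weight))   # weighted saturation gap, clamped to [0, 1]
    return tuple((1.0 - w) * o + w * i
                 for i, o in zip(input_rgb, output_rgb))
```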
5. The method of claim 4, wherein weighting the saturation gaps comprises:
weighting the saturation gap based on display characteristics of the display panel.
6. The method of claim 1, wherein the image to be displayed comprises three color channels; the display lookup table is a three-dimensional display lookup table;
the performing linear interpolation processing on the initial pixel data based on the corrected pixel value of the reference vertex to obtain corrected pixel data corresponding to the pixel point includes:
and performing cubic linear interpolation processing on the initial pixel data based on the correction pixel value of the reference vertex to obtain correction pixel data corresponding to the pixel point.
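The "cubic linear interpolation" of claim 6 corresponds to trilinear interpolation within one cell of the three-dimensional display lookup table. The sketch below assumes the eight corrected reference-vertex values of the enclosing cell are indexed as corners[r][g][b] with r, g, b in {0, 1}; the indexing convention is an illustrative assumption.

```python
def trilinear(corners, fr, fg, fb):
    """Trilinearly interpolate a corrected pixel value from eight corner values.

    corners[r][g][b] holds the corrected pixel value of one reference vertex;
    fr, fg, fb are the pixel's fractional positions within the cell along the
    R, G and B axes, each in [0, 1].
    """
    def lerp(a, b, t):
        return tuple((1 - t) * x + t * y for x, y in zip(a, b))

    # interpolate along R, then G, then B
    c00 = lerp(corners[0][0][0], corners[1][0][0], fr)
    c10 = lerp(corners[0][1][0], corners[1][1][0], fr)
    c01 = lerp(corners[0][0][1], corners[1][0][1], fr)
    c11 = lerp(corners[0][1][1], corners[1][1][1], fr)
    c0 = lerp(c00, c10, fg)
    c1 = lerp(c01, c11, fg)
    return lerp(c0, c1, fb)
```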
7. A color correction apparatus, characterized by comprising:
the acquisition module is used for acquiring initial pixel data of a plurality of pixel points contained in an image to be displayed;
a determining module, configured to determine a corrected pixel value of a reference vertex based on a display lookup table and a saturation difference between the pixel point and the reference vertex, where the reference vertex is a vertex of a color space formed based on a reference color; performing linear interpolation processing on the initial pixel data based on the correction pixel value of the reference vertex to obtain correction pixel data corresponding to the pixel point; the corrected pixel data is used to output a display image.
8. An electronic device comprising a processor and a memory, the processor and the memory being interconnected;
the memory is used for storing a computer program;
the processor is configured to perform the method of any of claims 1 to 6 when the computer program is invoked.
9. The electronic device of claim 8, further comprising a display panel;
the display panel is electrically connected with the processor and used for receiving the corrected pixel data of the image to be displayed output by the processor so as to output the display image.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method of any one of claims 1 to 6.
CN202211477646.XA 2022-11-23 2022-11-23 Color correction method, color correction device, electronic device and storage medium Pending CN115862534A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211477646.XA CN115862534A (en) 2022-11-23 2022-11-23 Color correction method, color correction device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211477646.XA CN115862534A (en) 2022-11-23 2022-11-23 Color correction method, color correction device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN115862534A true CN115862534A (en) 2023-03-28

Family

ID=85665589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211477646.XA Pending CN115862534A (en) 2022-11-23 2022-11-23 Color correction method, color correction device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115862534A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118098176A (en) * 2024-04-26 2024-05-28 Tcl华星光电技术有限公司 Image compensation method of display device and display device
CN118098176B (en) * 2024-04-26 2024-07-05 Tcl华星光电技术有限公司 Image compensation method of display device and display device

Similar Documents

Publication Publication Date Title
US7081899B2 (en) Image processing support system, image processing device and image display device
US10063824B2 (en) Mapping image/video content to target display devices with variable brightness levels and/or viewing conditions
US8830256B2 (en) Color correction to compensate for displays' luminance and chrominance transfer characteristics
US9076252B2 (en) Image perceptual attribute adjustment
US7554557B2 (en) Device and method for image compression and decompression
US20200098333A1 (en) Method and System for Display Color Calibration
US20130169826A1 (en) Method of viewing virtual display outputs
KR20080003737A (en) Color correction circuit, driving device, and display device
KR20190001466A (en) Display apparatus and method for processing image
CN111312141A (en) Color gamut adjusting method and device
KR20110073376A (en) Color correction to compensate for displays' luminance and chrominance transfer characteristics
CN108898987B (en) Gray scale conversion method, gray scale conversion device and display device
CN115862534A (en) Color correction method, color correction device, electronic device and storage medium
US6950111B2 (en) Image display unit
JP2005196156A (en) Color image display apparatus, color converter, color-simulating apparatus, and methods for them
CN113920927B (en) Display method, display panel and electronic equipment
CN111615714A (en) Color adjustment method for RGB data
CN113724644A (en) Method for compensating brightness and chroma of display device and related equipment
US7126591B2 (en) Image display device
JP5321089B2 (en) Image processing apparatus, image display apparatus, and image processing method
EP1262950A1 (en) Image display unit
US20050052476A1 (en) Display color adjust
CN112289274B (en) Display method and device
CN115170681B (en) Gamma lookup table generation method and device, electronic equipment and storage medium
US11557265B2 (en) Perceptual color enhancement based on properties of responses of human vision system to color stimulus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination