Detailed Description
The following description is of the best mode for carrying out the invention and is intended to illustrate the general spirit of the invention, not to limit it. Reference must be made to the following claims for the true scope of the invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of further features, integers, steps, operations, elements, components, and/or groups thereof.
FIG. 1 is a block diagram of an electronic device for performing a sub-pixel rendering method according to an embodiment of the invention. The electronic device 100 includes at least one processor 110 and a storage 120. The processor 110 may be implemented in a variety of ways, such as with dedicated hardware circuitry or general-purpose hardware (e.g., a single processor, multiple processors with parallel processing capabilities, a graphics processor, or another processor with computing capabilities), to convert a source image into a target image suitable for a display having a particular sub-pixel arrangement. In the embodiment of the present invention, the height of a sub-pixel of the display (not shown) is 2/3 of the height of a source pixel of the source image, and its width is 3/4 of the width of a source pixel of the source image. The number of source pixel lines of the source image is the same as the number of target pixel lines of the target image, and every 3 adjacent source pixels in each line of the source image are rendered by the processor 110 as 2 target pixels of the target image. Each target pixel of the target image comprises 3 sub-pixels respectively corresponding to the R, G and B channels, each source pixel of the source image likewise comprises 3 pixel values respectively corresponding to the R, G and B channels, and the pixel value of each channel's sub-pixel of a target pixel is calculated from the pixel values of the corresponding channel of the source pixels. The storage 120 may be a non-volatile memory (e.g., ROM, flash memory, etc.) for storing at least a source image and the information required for converting the source image into a target image suitable for a display having a specific sub-pixel arrangement.
For example, the information required to convert a source image into a target image suitable for a display having a particular subpixel arrangement includes the relevant algorithms to convert source pixels into subpixels, the relevant parameters for distance-based and area-based subpixel rendering methods, and the like.
In one embodiment, the electronic device 100 is a display panel controller coupled between a Graphics Processing Unit (GPU) (not shown) and a display (not shown). The electronic device 100 receives a source image from the GPU, converts the received source image into a target image, and transmits the target image to the display for display.
The processor 110 may use a distance-based subpixel rendering method (described in detail below) and/or an area-based subpixel rendering method (described in detail below) to convert a source image into a target image suitable for a display having a particular subpixel arrangement. The distance-based sub-pixel rendering method is described first below.
FIG. 2 is a flowchart illustrating a distance-based sub-pixel rendering method according to an embodiment of the invention. The distance-based sub-pixel rendering method illustrated in FIG. 2 will be described in detail with reference to FIGS. 3 to 12.
First, in step S201, the processor 110 obtains a source image from the storage 120. In one embodiment, the processor 110 receives the source image from the GPU and stores the received source image in the storage 120 before proceeding to step S201. The processor 110 then executes step S205.
In step S205, when the display displaying the target image has a notch and/or a rounded corner, the processor 110 performs anti-aliasing on the source pixels of the source image located at the notch or the rounded corner. In detail, the processor 110 first determines whether the display has a notch or a rounded corner. As shown in FIG. 3, 310 is a notch at the edge of the display and 320 is a rounded corner at the edge of the display. In an embodiment, if the display has a notch and/or a rounded corner, the storage 120 stores coordinate information of the source pixels in all source images corresponding to the notch and/or the rounded corner. If the processor 110 can retrieve such coordinate information from the storage 120, this indicates that the display has a notch and/or a rounded corner. When the display has notches and/or rounded corners, the processor 110 multiplies the pixel values of the source pixels of the source image located at the notches and/or rounded corners by an attenuation coefficient to perform the anti-aliasing processing. In an example, the processor 110 multiplies the pixel values of the source pixels at the edge that belong to notches and/or rounded corners by an attenuation coefficient to soften the jaggies exhibited by the edge pixels. The subsequent steps then calculate the pixel value of each sub-pixel of the target image from the pixel values of the softened source pixels. The attenuation coefficient is related to the area of the edge pixel cut off by the circular arc, and can be obtained by the following formula:
Area_arch = (2 * offset - 1) / (2 * step)
where Area_arch is the attenuation coefficient, offset is the (1-based) index of the source pixel's position within the jagged step, and step is the width of the jagged step.
For example, as shown in FIG. 4, region 410 is a region without light emitters, region 420 (depicted in dashed lines) is one of a plurality of jagged steps of a rounded corner or notch, and solid line 450 approximates the ideal tangent of the circular arc in region 420. For the source pixel 421, region 421a is the area located inside the tangent of the arc, and region 421b is the area located outside it. As can be seen from FIG. 4, the width of region 420 is 5 source pixels, and source pixel 421 is the 1st source pixel of region 420 (i.e., offset is 1). Therefore, the attenuation coefficient corresponding to source pixel 421 can be calculated by the above formula as Area_arch = (2 * 1 - 1) / (2 * 5) = 1/10. In other words, the area of region 421a is 1/10 of the whole area of source pixel 421, and the pixel value of softened source pixel 421 is 1/10 of its original value. In another example, source pixel 423 is the 3rd source pixel of region 420 (i.e., offset is 3), so its attenuation coefficient is Area_arch = (2 * 3 - 1) / (2 * 5) = 5/10; in other words, the area of region 423a is 5/10 of the whole area of source pixel 423, and the pixel value of softened source pixel 423 is 5/10 of its original value. And so on.
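The attenuation computation above can be sketched in a few lines (Python and the function names are ours, not the patent's):

```python
def area_arch(offset: int, step: int) -> float:
    """Attenuation coefficient for a source pixel inside one jagged step
    of a notch/rounded corner: Area_arch = (2*offset - 1) / (2*step)."""
    return (2 * offset - 1) / (2 * step)

def soften(pixel_value: float, offset: int, step: int) -> float:
    """Anti-aliasing: scale the edge pixel's value by its attenuation coefficient."""
    return pixel_value * area_arch(offset, step)

# The FIG. 4 example: a 5-pixel-wide jagged step (step = 5).
# Source pixel 421 is the 1st pixel (offset = 1) -> coefficient 1/10;
# source pixel 423 is the 3rd pixel (offset = 3) -> coefficient 5/10.
```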
In one embodiment, the processor 110 sets the pixel value of the source pixel in the source image having no corresponding sub-pixel in the target image to 0, i.e. sets the pixel value of the source pixel in the source image corresponding to the area of the display having no light emitter (area 410 in fig. 4) to 0.
In addition, in an embodiment of the invention, when the storage 120 stores information about the source pixels corresponding to a jagged region, it may store only the coordinates of the starting point of the region, the offset direction (corresponding to the x-direction or the y-direction), and the offset amount in source pixels. For example, for the jag corresponding to region 420 of FIG. 4, only the coordinates of source pixel 421, the offset direction (the x-direction), and the offset amount of 5 source pixels need be stored.
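The compact jagged-region record described above can be expanded back into per-pixel coordinates when needed; a sketch (ours, assuming "start" is the coordinate of the first source pixel in the jag):

```python
def expand_jag_record(start, direction, amount):
    """Expand a compact jagged-region record -- (start coordinate,
    offset direction 'x' or 'y', offset amount in source pixels) --
    into the coordinates of every source pixel in the jag."""
    x0, y0 = start
    if direction == 'x':
        return [(x0 + i, y0) for i in range(amount)]
    return [(x0, y0 + i) for i in range(amount)]

# Region-420 example: start at source pixel 421's coordinate,
# offset direction 'x', amount 5 -> five consecutive pixels in that row.
```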
After the processor 110 performs anti-aliasing processing on the source pixels located at the notches and/or rounded corners, the process proceeds to step S210. In step S210, the processor 110 determines the coordinates (x_spr, y) of a target pixel to be rendered in the target image, then proceeds to step S215.
In step S215, the processor 110 calculates the coordinates of the source pixel in the source image corresponding to each sub-pixel of the target pixel (x_spr, y), in order to determine the texture information around each sub-pixel of the target pixel (x_spr, y) to be rendered. In detail, the processor 110 performs edge detection on a window in the source image centered on the source pixel corresponding to each sub-pixel of the target pixel (x_spr, y) to obtain an edge code for each sub-pixel, determines the texture around each sub-pixel based on the obtained edge code, and renders sub-pixels having different textures by different rendering methods to obtain the pixel value of each sub-pixel of the target pixel (x_spr, y). When calculating the x-coordinate of the source pixel corresponding to each sub-pixel, the calculation differs depending on whether the target pixel (x_spr, y) to be rendered is located in an odd or an even row of the target image.
To better describe the target pixel (x_spr, y) later, the definitions of even rows, odd rows, even columns and odd columns are explained first. Taking the target pixel (x_spr, y) as an example: when x_spr % 2 = 0, the target pixel is located in an even column of the target image, and when x_spr % 2 = 1, it is located in an odd column, where % 2 denotes the remainder after division by 2. When the target pixel is in the 1st column, x_spr = 0; in the 2nd column, x_spr = 1; and so on. Likewise, when y % 2 = 0, the target pixel is located in an even row of the target image, and when y % 2 = 1, in an odd row; when the target pixel is in the 1st row, y = 0; in the 2nd row, y = 1; and so on. The same method of determining even/odd rows and columns applies equally to the source pixels of the source image and is not repeated here.
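The 0-based parity convention above can be stated as a pair of helpers (a trivial Python sketch, ours):

```python
def is_even_column(x_spr: int) -> bool:
    """Column 1 has x_spr = 0, column 2 has x_spr = 1, and so on,
    so x_spr % 2 == 0 marks the even columns as defined in the text."""
    return x_spr % 2 == 0

def is_even_row(y: int) -> bool:
    """Same 0-based convention for rows: row 1 has y = 0."""
    return y % 2 == 0
```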
When the target pixel (x_spr, y) to be rendered is located in an even row of the target image, the x-coordinate of the source pixel in the corresponding source image is obtained by the following formula (the source pixel corresponding to the R- and G-channel sub-pixels of the target pixel differs from the source pixel corresponding to the B-channel sub-pixel):
and, when the target pixel (x_spr, y) to be rendered is located in an odd row of the target image, the x-coordinate of the corresponding source pixel is obtained by the following formula (the source pixel corresponding to the R- and G-channel sub-pixels of the target pixel differs from the source pixel corresponding to the B-channel sub-pixel):
wherein floor () represents a downward integer. And target pixel (x)sprY) is (x, y) for each sub-pixel in the source image. The coordinates of all source pixels in the window can be obtained from the coordinates of the source pixel (x, y) by using the source pixel (x, y) as a center pixel. Taking a 3 × 3 window as an example, the coordinates of the source pixel above the source pixel (x, y) are (x, y-1), the coordinates of the source pixel to the left of the source pixel (x, y) are (x-1, y), and so on, the coordinates of all 8 source pixels around the source pixel (x, y) can be obtained. And acquiring the pixel value of the corresponding source pixel from the source image according to the coordinate of the source pixel. When the processor 110 calculates the pixel values of the sub-pixels at the edge, the processor 110 will first mirror the source pixels at the edge to obtain the pixels of the virtual source pixels outside the edgeThe value is obtained. For example, as shown in FIG. 5, the source pixels 0-15 of the shaded area in the lower right corner are the source pixels located in the source image. Since the virtual source pixels located outside the source image are used when the processor 110 calculates the target pixels located at the edge, that is, the pixel values of the virtual source pixels located at the left side of the source pixels 4 are used when calculating the pixel values of the sub-pixels located corresponding to the source pixels 4 of the source image, the pixel values of the virtual source pixels located at the left side of the source pixels 4 are mapped to the pixel values of the source pixels 5 located at the right side of the source pixels 4, and so on, so that the processor 110 can perform correlation operation according to the pixel values of the virtual source pixels when calculating the pixel values of the sub-pixels corresponding to the edge.
After obtaining the window of the target pixel (x_spr, y) from the source image, the processor 110 starts to calculate the edge code. Taking a sub-pixel of the target pixel (x_spr, y) as an example: subtract the pixel value of the source pixel corresponding to the sub-pixel from the pixel values of the adjacent source pixels in one of the window's directions to obtain a plurality of first differences; subtract the pixel values of those adjacent source pixels from the pixel value of the source pixel to obtain a plurality of second differences; obtain a first code from the comparison of the first differences with a first threshold; obtain a second code from the comparison of the second differences with a second threshold; combine the first code and the second code into the code of that direction; and finally combine the codes of all the directions to obtain the edge code.
The target pixel (x) is shown belowsprAnd y) is specifically described by taking a 3 × 3 window corresponding to the sub-pixel corresponding to the R channel as an example. The edge code may be composed of four hexadecimal bits, and the bits of the code composing the edge code correspond to the coding of the horizontal (h), top-left-right-bottom (1), vertical (v), and top-right-left-bottom (r) directions of a 3 × 3 window, respectively, from left to right, where each bit represents texture information of one direction. It should be noted that although four bits are used as an example in the above embodiment, the present invention is not limited thereto, and is determined by the number of directions that the edge code needs to represent. For example, as shown in FIG. 6, the horizontal (h) direction is encoded by the second in the Sudoku3. The 4 th and 5 th source pixels (i.e., V3 to V5 shown in the figure), the left-upper-right-lower (1) direction codes are calculated from the 0 th, 4 th and 8 th source pixels (i.e., V0, V4 and V8 shown in the figure) in the nine-grid, the vertical (V) direction codes are calculated from the 1 st, 4 th and 7 th source pixels (i.e., V1, V4 and V7 shown in the figure) in the nine-grid, and the right-upper-left-lower (r) direction codes are calculated from the 2 nd, 4 th and 6 th source pixels (i.e., V2, V4 and V6 shown in the figure) in the nine-grid. The first two bits of the code for each direction are generated by subtracting the pixel value of the central pixel from the pixel values of the surrounding pixels, and the second two bits of the code are generated by subtracting the pixel values of the surrounding pixels from the pixel value of the central pixel. For example, taking the horizontal (H) direction as an example, the codes for the horizontal (H) direction are H (f (V3-V4), f (V5-V4), f (V4-V3), f (V4-V5)). 
where f() denotes a function that outputs 1 when the value in parentheses is greater than a predetermined threshold and 0 otherwise, and H() denotes a function that converts the four binary bits in parentheses into one hexadecimal digit. For example, assume the threshold is 10. When V3 = 151, V4 = 148 and V5 = 150: V3 - V4 = 3 is less than 10, so f(V3 - V4) outputs 0; V5 - V4 = 2 is less than 10, so f(V5 - V4) outputs 0; V4 - V3 = -3 is less than 10, so 0 is output; and V4 - V5 = -2 is less than 10, so 0 is output. The code of the horizontal (h) direction is therefore 0x0 (binary 0000). Referring now to FIG. 7, FIG. 7 illustrates the 9 cases of the horizontal (h) direction code according to an embodiment of the present invention. As shown at 701 of FIG. 7, code 0x0 indicates that the luminance values (i.e., pixel values; the same applies hereinafter) of V3, V4 and V5 do not differ much; V3, V4 and V5 are all drawn with white fill. They may also all be filled with black (not shown); as long as squares V3, V4 and V5 are filled with the same color, their luminance values do not differ much. When V3 = 151, V4 = 120 and V5 = 150: V3 - V4 = 31 is greater than 10, so f(V3 - V4) outputs 1; V5 - V4 = 30 is greater than 10, so f(V5 - V4) outputs 1; V4 - V3 = -31 is less than 10, so 0 is output; and V4 - V5 = -30 is less than 10, so 0 is output. The code of the horizontal (h) direction is therefore 0xC (binary 1100); as shown at 703 in FIG. 7, code 0xC means the luminance values of V3 and V5 are both greater than that of V4. Similarly, as shown at 702 in FIG. 7, code 0x3 indicates that the luminance values of V3 and V5 are both less than that of V4; as shown at 704, code 0x1 indicates that the luminance values of V3 and V4 are both greater than that of V5; as shown at 705, code 0x4 indicates that the luminance values of V3 and V4 are both less than that of V5; as shown at 706, code 0x6 indicates that the luminance value of V3 is less than that of V4, which in turn is less than that of V5; as shown at 707, code 0x2 indicates that the luminance values of V4 and V5 are both greater than that of V3; as shown at 708, code 0x8 indicates that the luminance values of V4 and V5 are both less than that of V3; and as shown at 709, code 0x9 indicates that the luminance value of V3 is greater than that of V4, which in turn is greater than that of V5.
The codes of the upper-left-to-lower-right (l), vertical (v) and upper-right-to-lower-left (r) directions can be obtained in the same way. Arranging the codes of the horizontal (h), upper-left-to-lower-right (l), vertical (v) and upper-right-to-lower-left (r) directions from left to right yields an edge code of four hexadecimal digits, and the texture information around the R-channel sub-pixel of the target pixel (x_spr, y) is obtained from the finally output edge code. In one embodiment, when the code of the horizontal (h) direction is 0x4 or 0x8, the texture around the sub-pixel of the target pixel (x_spr, y) is weak. When the edge code is 0x0111, 0x0222, 0x0333, 0x0444, 0x0CCC, 0xCC0C, 0x1102, 0x2201, 0x3303, 0x4408, or 0x8804 (as shown in FIG. 16), the texture around the sub-pixel is strong. And when the edge code is 0x3030 (as shown in FIG. 8A) or 0xC0C0 (as shown in FIG. 8B), the texture around the sub-pixel is a specific pattern.
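As an illustrative sketch (Python and the function names are ours, not the patent's), the per-direction coding with f() and H() and the assembly of the four-digit edge code described above can be written as:

```python
def f(diff, threshold=10):
    """1 if the difference exceeds the threshold, else 0."""
    return 1 if diff > threshold else 0

def direction_code(a, c, b, threshold=10):
    """One hex digit for one direction of the 3 x 3 window, with center c
    and the two opposite neighbours a and b (e.g. V3, V4, V5 horizontally):
    H(f(a-c), f(b-c), f(c-a), f(c-b)) packed as four binary bits."""
    bits = (f(a - c, threshold), f(b - c, threshold),
            f(c - a, threshold), f(c - b, threshold))
    return bits[0] * 8 + bits[1] * 4 + bits[2] * 2 + bits[3]

def edge_code(v, threshold=10):
    """Combine the h, l, v, r direction digits of a 3 x 3 window
    (v[0]..v[8] in row-major order) into one 4-digit hex edge code."""
    h = direction_code(v[3], v[4], v[5], threshold)   # horizontal
    l = direction_code(v[0], v[4], v[8], threshold)   # upper-left to lower-right
    vv = direction_code(v[1], v[4], v[7], threshold)  # vertical
    r = direction_code(v[2], v[4], v[6], threshold)   # upper-right to lower-left
    return (h << 12) | (l << 8) | (vv << 4) | r
```

With the worked examples from the text: V3 = 151, V4 = 148, V5 = 150 yields horizontal code 0x0, and V3 = 151, V4 = 120, V5 = 150 yields 0xC.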
After the processor 110 finishes calculating the edge code of each sub-pixel of the target pixel (x_spr, y) and determining the texture around each sub-pixel, the process proceeds to step S216. In step S216, the processor 110 determines, for each sub-pixel of the target pixel (x_spr, y), whether the texture information around it is a specific pattern. In one embodiment, this is done by determining whether the edge code of the sub-pixel is 0x3030 or 0xC0C0; if not, the process proceeds to step S220 (described in detail later), and if so, to step S218.
To better describe the target pixel (x_spr, y) later, the positional relationship between the source pixels of the source image and the sub-pixels of the target pixels in the target image is explained first. In FIG. 10, "○" represents a source pixel of the source image. In FIG. 11, "Δ" represents the R-channel sub-pixel of a target pixel in the target image, "◇" the G-channel sub-pixel, and "□" the B-channel sub-pixel. As can be seen from FIG. 9, in the embodiment of the present invention the height of a sub-pixel of a target pixel is 2/3 of the height of a source pixel (the number of target pixel rows is the same as the number of source pixel rows), and the width of each channel's sub-pixel of a target pixel is 3/4 of the corresponding channel width of a source pixel (the number of target pixels in each row is 2/3 of the number of source pixels). In other words, as shown in FIG. 12, when the source image is displayed as the target image, the positions of the source pixels do not overlap the positions of the sub-pixels of the target image. Therefore, when calculating the pixel values of the R-, G- and B-channel sub-pixels of a target pixel, the processor 110 interpolates the pixel values of the left and right source pixels nearest to the sub-pixel to obtain the sub-pixel's pixel value. Step S218 is described next.
In step S218, the processor 110 directly interpolates to calculate the pixel values of the sub-pixels of the target pixel (x_spr, y). In detail, when the target pixel (x_spr, y) is located in an even row of the target image, the pixel values of its R-, G- and B-channel sub-pixels are calculated by a first set of formulas, and when it is located in an odd row, by a second set of formulas; the formulas apply factor_kep or factor_ave according to the edge code. In these formulas, R(G)_(x_spr,y) refers to the pixel value of the R-channel or G-channel sub-pixel of the target pixel with coordinates (x_spr, y) in the target image, B_(x_spr,y) refers to the pixel value of the B-channel sub-pixel of that target pixel, R'(G')_(x,y) refers to the pixel value of the R channel or G channel of the source pixel with coordinates (x, y) in the source image, B'_(x,y) refers to the pixel value of the B channel of that source pixel, each x in the formulas includes a rounding-down operation, factor_kep and factor_ave are preset values, and edgecode refers to the edge code. In an embodiment of the invention, factor_kep has a value of 1.0 and factor_ave a value of 0.5. It is noted that the values of factor_kep and factor_ave are adjustable according to the user's requirements and are not limited by the present invention. In one example, when the x_spr coordinate of the R-channel sub-pixel of the target pixel is 5, the processor 110 multiplies the pixel value of the source pixel whose x coordinate is 7 by factor_kep to obtain the pixel value of the corresponding sub-pixel.
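The interpolation formulas themselves are not reproduced in this copy of the text, so only the coordinate mapping implied by the worked example can be sketched. The function below is our assumption: it reflects only the 3-source-to-2-target ratio and the example above (reading source x = 7 for x_spr = 5); the patent's actual per-channel, per-row formulas may differ.

```python
import math

def source_x_sketch(x_spr: int) -> int:
    """Hypothetical mapping from a target sub-pixel x-coordinate to a
    source-pixel x-coordinate, consistent with 3 source pixels per
    2 target pixels and with the worked example (x_spr = 5 -> x = 7)."""
    return math.floor(3 * x_spr / 2)
```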
In step S220, the processor 110 calculates the pixel values of the sub-pixels of the target pixel (x_spr, y) to be rendered based on distance and according to the texture information. In detail, for a target pixel (x_spr, y) located in an even row, the pixel values of its R-, G- and B-channel sub-pixels are obtained by a first set of formulas, and for a target pixel located in an odd row, by a second set of formulas. In these formulas, R(G)_(x_spr,y) refers to the pixel value of the R-channel or G-channel sub-pixel of the target pixel (x_spr, y), B_(x_spr,y) refers to the pixel value of its B-channel sub-pixel, R'(G')_(x,y) refers to the pixel value of the R channel or G channel of the source pixel with coordinates (x, y), and B'_(x,y) refers to the pixel value of the B channel of that source pixel. Each x in the formulas involves a rounding-down operation (e.g., floor(3 * 3 / 2) equals 4), and % 2 denotes the remainder after division by 2, so x_spr % 2 = 0 denotes an even column and x_spr % 2 = 1 an odd column. In one embodiment, when the texture around a sub-pixel of the target pixel (x_spr, y) is weak, that is, when the code of the horizontal (h) direction of the edge code is 0x8 or 0x4, the calculation of the sub-pixel's pixel value requires smoothing. In detail, when the horizontal (h) code is 0x8 and x_spr % 2 = 0, or when the horizontal (h) code is 0x4 and x_spr % 2 = 1, factor_smooth is used to replace factor_rg(b)**, where factor_smooth is a preset value and factor_rg(b)** represents factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10 or factor_b11; factor_smooth, factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10 and factor_b11 are all preset values.
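The substitution rule above can be sketched as follows (Python and the names are ours; the default value of factor_smooth below is illustrative, as the text only says it is a preset value):

```python
def pick_factor(edge_code, x_spr, factor_default, factor_smooth=0.5):
    """Substitute factor_smooth for the per-position factor
    (factor_rg00 ... factor_b11) when the horizontal digit of the
    four-digit edge code signals a weak edge: 0x8 with x_spr in an
    even column, or 0x4 with x_spr in an odd column."""
    h_digit = (edge_code >> 12) & 0xF   # leftmost hex digit = horizontal code
    if (h_digit == 0x8 and x_spr % 2 == 0) or (h_digit == 0x4 and x_spr % 2 == 1):
        return factor_smooth
    return factor_default
```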
For example, when the processor 110 calculates the pixel value of the R channel sub-pixel of the target pixel located at (3, 1) in the target image, it can be obtained according to the pixel values of the R channels of the source pixels located at (3, 1) and (4, 1) in the source image, and so on.
In one embodiment of the present invention, factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10 and factor_b11 all have the value 0.7. Alternatively, according to another embodiment of the present invention, factor_rg00, factor_rg10, factor_rg11, factor_b00, factor_b01 and factor_b10 have the value 1.0, while factor_rg01 and factor_b11 have the value 0.7. In other words, the factor values applied to the R-, G- and B-channel sub-pixels of target pixels in different rows/columns can be changed according to the user's requirements for color display. In addition, when the texture around a sub-pixel of the target pixel is relatively smooth or there is no texture, a factor value of 0.5 can also be applied directly.
After the processor 110 has calculated the pixel values of the sub-pixels of the target pixel (x_spr, y), the process proceeds to step S225. In step S225, the processor 110 checks whether there are any unrendered target pixels in the target image. If not, all target pixels of the target image have been rendered and the process ends; the processor 110 may then send the rendered target image to the display for display. Otherwise, the process returns to step S210 to render the next unrendered target pixel.
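The overall control flow of steps S210 through S225 amounts to a single pass over the target pixels; a minimal skeleton (ours, with the per-pixel work of steps S215 to S220 abstracted behind a callback):

```python
def render_target_image(source, width_t, height_t, render_subpixels):
    """Skeleton of the S210-S225 loop: visit every target pixel once,
    render its sub-pixels from the source image, and stop when none
    remain unrendered. render_subpixels stands in for steps S215-S220."""
    target = {}
    for y in range(height_t):
        for x_spr in range(width_t):          # S210: pick next target pixel
            target[(x_spr, y)] = render_subpixels(source, x_spr, y)
    return target                             # S225: all pixels rendered
```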
The area-based sub-pixel rendering method is described below. FIG. 13 is a flowchart illustrating an area-based sub-pixel rendering method performed according to texture information, in accordance with an embodiment of the present invention. FIG. 13 will be described with reference to FIGS. 14 to 16.
FIG. 14 is a schematic diagram in which the arrangement of source pixels of a source image and the arrangement of sub-pixels of the corresponding target image are overlapped, according to another embodiment of the invention. FIGS. 15A to 15D are schematic diagrams illustrating the calculation of the pixel value of a B-channel sub-pixel of a target pixel based on area and according to texture information, according to an embodiment of the invention. FIG. 16 is a diagram illustrating the texture information corresponding to the 12 edge codes that require sharpening, according to an embodiment of the present invention.
As shown in FIG. 13, steps S1301, S1305, S1310, S1316, S1318 and S1325 are the same as steps S201, S205, S210, S216, S218 and S225 of FIG. 2, respectively, and their description is not repeated. Steps S1315 and S1320 are described below. In step S1320 of FIG. 13, the pixel values of the sub-pixels of the target pixel (x_spr, y) to be rendered are calculated based on area, whereas in step S220 of FIG. 2 they are calculated based on distance. Step S1315 of FIG. 13 therefore differs from step S215 of FIG. 2: the embodiments of FIG. 13 and FIG. 2 also differ in how the coordinates of the source pixel corresponding to each sub-pixel of the target pixel (x_spr, y) are calculated, as follows:
When the target pixel (x_spr, y) to be rendered is located in an even row of the target image, the x-coordinate of the source pixel in the corresponding source image is obtained by the following formula (the source pixel corresponding to the R- and G-channel sub-pixels of the target pixel may differ from the source pixel corresponding to the B-channel sub-pixel):
and, when the target pixel (x_spr, y) to be rendered is located in an odd row of the target image, the x-coordinate of the corresponding source pixel is obtained by the following formula (the source pixel corresponding to the R- and G-channel sub-pixels of the target pixel may differ from the source pixel corresponding to the B-channel sub-pixel):
where floor () represents a downward integer. And target pixel (x) to be renderedsprY) is (x, y), where% 2 represents the remainder to 2, so x isspr% 2-0 denotes even columns, and xspr% 2-1 represents odd columns.
Step S1320 is now described. In step S1320, the processor 110 calculates the pixel values of the sub-pixels of the target pixel (x_spr, y) to be rendered based on area and according to the texture information. In detail, as shown in FIG. 14, "Δ" represents an R- or G-channel sub-pixel of a target pixel to be rendered, "□" represents a B-channel sub-pixel of a target pixel to be rendered, and the center of each small dashed-box square is the position of one source pixel of the source image. When calculating the pixel values of the sub-pixels of a target pixel to be rendered, the processor 110 obtains a window in the source image centered on the sub-pixel of the target pixel to be rendered; it should be noted that this window differs from the window used to calculate the edge code in step S1315. The following description takes a 3 × 3 window as an example.
When the sub-pixel of the target pixel to be rendered is an R- or G-channel sub-pixel, the source pixels contained in the corresponding window in the source image for sub-pixels located at even-row even-column, even-row odd-column and odd-row even-column positions of the target image are:
and when the sub-pixel to be rendered is located in an odd-row odd-column position of the target image, the source pixels contained in the corresponding window in the source image are:
where R'(x,y) (or G'(x,y)) refers to the pixel value of the R channel (or G channel) of the source pixel with coordinates (x, y).
In addition, when the sub-pixel of the target pixel to be rendered is a B-channel sub-pixel located in an even-row even-column, odd-row even-column, or odd-row odd-column position of the target image, the source pixels contained in the corresponding window in the source image are:
and when the sub-pixel of the target pixel to be rendered is located in an even-row odd-column position of the target image, the source pixels contained in the corresponding window in the source image are:
where B'(x,y) refers to the pixel value of the B channel of the source pixel with coordinates (x, y) in the source image.
After obtaining the source pixels included in the window corresponding to the sub-pixel of the target pixel to be rendered (e.g., the 3 × 3 dashed-box squares in FIG. 14), the processor 110 obtains a diamond region based on the sub-pixels of the target pixel located above, below, to the left of, and to the right of the sub-pixel to be rendered. As shown in FIGS. 15A to 15D, "Δ" represents an R-channel/G-channel sub-pixel of the target pixel to be rendered, "□" represents a B-channel sub-pixel of the target pixel to be rendered, and the 9 small squares represent the source pixels included in the corresponding window in the source image. There are 4 different types of diamond regions: diamond 1550 in FIG. 15A is the diamond region obtained when the B-channel sub-pixel of the target pixel to be rendered is located in an even-row even-column position of the target image, diamond 1560 in FIG. 15B is the diamond region obtained when that position is an even-row odd-column position, the diamond in FIG. 15C is the diamond region obtained when that position is an odd-row even-column position, and the diamond in FIG. 15D is the diamond region obtained when that position is an odd-row odd-column position.
Then, the processor 110 calculates the pixel value of the sub-pixel of the target pixel to be rendered according to the area ratios of the diamond region to the surrounding source pixels. That is, the processor 110 determines the area ratio of each sub-region of the diamond region to the source pixel it overlaps, multiplies the pixel value of each corresponding source pixel by that ratio, and sums the products to obtain the pixel value of the sub-pixel of the target pixel to be rendered.
As shown in FIG. 15A, when the processor 110 wants to obtain the pixel value of the B-channel sub-pixel 1501 of the target pixel to be rendered, the processor 110 first obtains a diamond region 1550 based on the R-channel/G-channel sub-pixels 1502-1505 of the target pixels above, below, to the left of, and to the right of sub-pixel 1501, so as to obtain the pixel value of the sub-pixel according to the areas the diamond region occupies in the source pixels of the source image. The diamond region 1550 is formed by sub-regions 1550a-1550f, each of which occupies a portion of one of the source pixels in the two right columns of the 3 × 3 window (i.e., source pixels 1511-1516 of the source image shown in the figure). The processor 110 then obtains the area ratio of sub-region 1550a to source pixel 1511, of sub-region 1550b to source pixel 1512, of sub-region 1550c to source pixel 1513, of sub-region 1550d to source pixel 1514, of sub-region 1550e to source pixel 1515, and of sub-region 1550f to source pixel 1516, multiplies each area ratio by the B-channel pixel value of the corresponding source pixel, and adds up the resulting values, thereby obtaining the pixel value of the B-channel sub-pixel 1501 of the target pixel to be rendered.
For example, if sub-region 1550d occupies 54/144 of the area of source pixel 1514, and the B-channel pixel value of source pixel 1514 is 144, then sub-region 1550d contributes a value of 144 × 54/144 = 54, and so on.
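The area-ratio weighting described above can be sketched as follows; the function and data layout are hypothetical (the patent stores the area configurations in the storage 120 rather than computing them inline), but the 54/144 contribution reproduces the worked example:

```python
def area_weighted_value(window, ratios):
    """Area-based sub-pixel value.

    window: 3x3 list of pixel values of one channel from the source image.
    ratios: 3x3 list of area ratios (diamond sub-region area divided by
            source-pixel area); entries are 0 where the diamond region
            does not overlap the corresponding source pixel.
    Returns the sum over the window of pixel value times area ratio.
    """
    return sum(v * r
               for row_v, row_r in zip(window, ratios)
               for v, r in zip(row_v, row_r))

# Reproducing the example above: a source pixel with B-channel value 144
# overlapped by a sub-region covering 54/144 of its area contributes 54.
window = [[0, 0, 0], [0, 0, 144], [0, 0, 0]]
ratios = [[0, 0, 0], [0, 0, 54 / 144], [0, 0, 0]]
```

Here `area_weighted_value(window, ratios)` yields 54, matching the contribution of sub-region 1550d in the text.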
As shown in FIG. 15B, when the processor 110 wants to obtain the pixel value of the B-channel sub-pixel 1521 of the target pixel, the processor 110 first obtains a diamond region 1560 based on the R-channel/G-channel sub-pixels 1522-1525 above, below, to the left of, and to the right of sub-pixel 1521, so as to obtain the pixel value of sub-pixel 1521 according to the areas the diamond region occupies in the source pixels of the source image. The diamond region 1560 is composed of sub-regions 1560a-1560e, each of which occupies a portion of the 2nd, 4th to 6th, or 8th source pixel of the 3 × 3 window (i.e., source pixels 1531-1535 of the source image shown in the figure). Next, the processor 110 obtains the area ratio of sub-region 1560a to source pixel 1531, of sub-region 1560b to source pixel 1532, of sub-region 1560c to source pixel 1533, of sub-region 1560d to source pixel 1534, and of sub-region 1560e to source pixel 1535, multiplies each area ratio by the B-channel pixel value of the corresponding source pixel, and adds up the resulting values, thereby obtaining the pixel value of the B-channel sub-pixel 1521 of the target pixel to be rendered.
The manner of calculating the pixel values of the sub-pixels of the target pixel to be rendered shown in FIGS. 15C and 15D is similar to that shown in FIGS. 15A and 15B, except that the areas of the source pixels occupied by the diamond regions differ; for brevity, it is not described again here. It should be noted that, as mentioned above, the area configuration information of the source pixels of the source image and the sub-pixels of each channel of the target pixel to be rendered is stored in the storage 120 in advance, and the processor 110 may access the corresponding area configuration information according to the row and column of the sub-pixel of the target pixel and substitute in the pixel values of the 3 × 3 source pixels corresponding to each sub-region to obtain the pixel value of the corresponding sub-pixel of the target pixel to be rendered. It should also be noted that the configuration of the diamond regions in the 3 × 3 source pixels for the R-channel or G-channel sub-pixels of the target pixel to be rendered is the reverse of the configuration for the B-channel sub-pixels. In other words, when "Δ" and "□" are interchanged, that is, when "□" is an R-channel/G-channel sub-pixel of the target pixel to be rendered and "Δ" is a B-channel sub-pixel, FIG. 15A corresponds to the case in which the R-channel or G-channel sub-pixel of the target pixel to be rendered is in an odd-row odd-column position, FIG. 15B to an odd-row even-column position, FIG. 15C to an even-row odd-column position, and FIG. 15D to an even-row even-column position.
For example, the source pixels of the source image located around the R-channel sub-pixel of the target pixel to be rendered at (1, 1) are:
Since, according to the above description, (1, 1) corresponds to an odd-row odd-column position, the processor 110 calculates the pixel value of the R-channel sub-pixel of the target pixel to be rendered by multiplying the pixel values of the source pixels obtained above by the area configuration of the diamond region of FIG. 15A.
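The selection of the pre-stored configuration by channel and parity can be sketched as follows. The lookup tables are an assumption for illustration (the patent only specifies that the configurations are pre-stored in the storage 120); the table values name the figures whose diamond layouts apply, following the correspondence and the B vs. R/G reversal described above:

```python
# Hypothetical lookup tables, keyed by (row parity, column parity) of the
# sub-pixel position in the target image; values name the applicable figure.
B_CONFIG = {
    (0, 0): "15A",  # even row, even column
    (0, 1): "15B",  # even row, odd column
    (1, 0): "15C",  # odd row, even column
    (1, 1): "15D",  # odd row, odd column
}
# For R/G sub-pixels the configuration is reversed relative to B.
RG_CONFIG = {
    (1, 1): "15A",
    (1, 0): "15B",
    (0, 1): "15C",
    (0, 0): "15D",
}

def diamond_config(channel: str, x_spr: int, y_spr: int) -> str:
    """Return the figure naming the diamond configuration to use."""
    table = B_CONFIG if channel == "B" else RG_CONFIG
    return table[(y_spr % 2, x_spr % 2)]
```

For instance, `diamond_config("R", 1, 1)` selects the FIG. 15A configuration, matching the (1, 1) odd-row odd-column example above.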
In addition, according to an embodiment of the present invention, in the area-based sub-pixel rendering method, when the texture information corresponding to the sub-pixel of the target pixel to be rendered is any one of the 12 patterns shown in FIG. 16 (where white represents "1" and black represents "0"), that is, when the texture corresponding to the sub-pixel of the target pixel to be rendered is stronger, the processor 110 further sharpens the sub-pixel of the target pixel to be rendered to make the target image clearer. In detail, when the texture corresponding to the sub-pixel of the target pixel to be rendered is strong, the processor 110 performs a convolution operation with a diamond filter on the 3 × 3 source pixels corresponding to the sub-pixel in the source image to obtain a sharpening parameter. The area-based pixel value of the sub-pixel of the target pixel to be rendered is then added to the sharpening parameter, and the resulting value is the sharpened pixel value of the sub-pixel of the target pixel to be rendered. The following is an example of a diamond filter in one embodiment of the invention:
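Since the patent's filter coefficients are not reproduced in this text, the diamond-shaped high-pass kernel below is a stand-in assumption; the sketch only illustrates the convolve-and-add sharpening step described above, not the actual filter of the embodiment:

```python
# Assumed diamond-shaped high-pass kernel (illustrative only; the patent's
# coefficients are not given here). Non-zero entries form a diamond.
DIAMOND_FILTER = [
    [ 0, -1,  0],
    [-1,  4, -1],
    [ 0, -1,  0],
]

def sharpen(area_value, window):
    """Add the diamond-filter convolution of the 3x3 source window
    (the sharpening parameter) to the area-based sub-pixel value."""
    sharpening = sum(window[i][j] * DIAMOND_FILTER[i][j]
                     for i in range(3) for j in range(3))
    return area_value + sharpening
```

On a flat (textureless) window the sharpening parameter is zero, so the area-based value is unchanged; a strong center response increases it, which matches the intent of sharpening only where the texture is strong.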
according to another embodiment of the present invention, after the processor 110 obtains the pixel values of the sub-pixels of the R channel, the G channel and the B channel corresponding to the target pixel to be rendered according to the distance-based sub-pixel rendering method and the area-based sub-pixel rendering method, the processor 110 may further combine the two calculation results according to the predetermined weight to obtain the final pixel value. For example, the processor 110 may give a weight of 0.5 to each of the two calculation results to average the pixel values obtained by the distance-based sub-pixel rendering method and the area-based sub-pixel rendering method.
In summary, according to the sub-pixel rendering method and apparatus of the present invention, without degrading image quality, a number of target pixels equal to only 2/3 of the number of source pixels of the source image is obtained by interpolating two or more source pixels of the source image, so that 1/3 of the light emitters can be saved. In addition, when obtaining the pixel value of each sub-pixel of the target pixel to be rendered, the method of the present invention performs special processing on pixels having a special texture (for example, when the edge code described above matches a specific pattern) according to the texture information around the sub-pixel of the target pixel, and, when calculating the pixel value of the sub-pixel of the target pixel to be rendered by different methods (for example, based on distance and/or area), correspondingly smooths or sharpens the calculated pixel value according to the texture information (for example, whether the texture described above is weak or strong) to obtain the best image conversion effect. Moreover, for a display having a rounded corner or a notch, the method of the present invention may perform anti-aliasing in advance on the source pixels corresponding to the rounded-corner or notch area, so that the quality of the finally output image is better.
While the present invention has been described with reference to the above embodiments, it should be noted that the description is not intended to limit the invention. Rather, the invention covers modifications and similar arrangements apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.