CN110060619B - Sub-pixel rendering method and device - Google Patents

Sub-pixel rendering method and device

Info

Publication number
CN110060619B
Authority
CN
China
Prior art keywords
pixel
rendered
source
target
sub
Prior art date
Legal status
Active
Application number
CN201910332550.6A
Other languages
Chinese (zh)
Other versions
CN110060619A (en)
Inventor
李响
Current Assignee
Granfei Intelligent Technology Co ltd
Original Assignee
Glenfly Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Glenfly Tech Co Ltd
Priority to CN201910332550.6A (CN110060619B)
Publication of CN110060619A
Priority to US16/592,061 (US11030937B2)
Priority to US17/318,126 (US11158236B2)
Application granted
Publication of CN110060619B
Legal status: Active
Anticipated expiration


Classifications

    • G09G: Arrangements or circuits for control of indicating devices using static means to present variable information (Section G, Physics; Class G09, Education; Cryptography; Display; Advertising; Seals)
    • G09G3/2003: Display of colours (under G09G3/20, presentation of an assembly of characters by combination of individual elements arranged in a matrix)
    • G09G3/2074: Display of intermediate tones using sub-pixels (under G09G3/2007, display of intermediate tones)
    • G09G2300/0452: Details of colour pixel setup, e.g. pixel composed of a red, a blue and two green components (under G09G2300/0439, pixel structures)
    • G09G2340/0457: Improvement of perceived resolution by subpixel rendering (under G09G2340/04, changes in size, position or resolution of an image)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A method of subpixel rendering to generate a target image from a source image, the method comprising: acquiring the source image; determining a target pixel to be rendered in the target image; calculating edge codes of source pixels corresponding to the sub-pixels of the target pixel to be rendered in the source image; determining texture information around the sub-pixels of the target pixel to be rendered according to the edge code; and when the edge code is not a specific pattern, calculating a pixel value of the sub-pixel of the target pixel to be rendered according to the texture information and based on a distance.

Description

Sub-pixel rendering method and device
Technical Field
The present invention relates to a method and an apparatus for rendering sub-pixels, and more particularly, to a method and an apparatus for rendering sub-pixels based on distance and/or area according to texture information.
Background
In the prior art, when a display shows an image using a conventional sub-pixel driving scheme, each sub-pixel in the display corresponds to one color component of one source pixel in the source image. However, as manufacturing technology approaches its limits, the number of sub-pixels that can be placed on a display is constrained; in other words, display resolution becomes difficult to increase further. Therefore, when a high-resolution image is to be displayed on a lower-resolution display, how to retain more detail from the source image is a problem to be solved.
Disclosure of Invention
The following disclosure is illustrative only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described, other aspects, embodiments, and features will be apparent from the drawings and the following detailed description. That is, this disclosure is provided to introduce the concepts, highlights, benefits, and novel and non-obvious technical advantages described herein; selected, but not all, embodiments are described in further detail below. Accordingly, this disclosure is not intended to identify essential features of the claimed subject matter, nor to be used in determining the scope of the claimed subject matter.
In a preferred embodiment, the present invention provides a method of subpixel rendering, for generating a target image from a source image, the method comprising: acquiring the source image; determining a target pixel to be rendered in the target image; calculating edge codes of source pixels corresponding to the sub-pixels of the target pixel to be rendered in the source image; determining texture information around the sub-pixels of the target pixel to be rendered according to the edge code; and when the edge code is not a specific pattern, calculating a pixel value of the sub-pixel of the target pixel to be rendered according to the texture information and based on a distance.
In a preferred embodiment, the present invention provides a sub-pixel rendering apparatus, comprising: a storage for storing a source image and a target image; and a processor for generating the target image from the source image; wherein the processor obtains the source image from the storage, determines a target pixel to be rendered in the target image, calculates an edge code of the source pixel in the source image corresponding to a sub-pixel of the target pixel to be rendered, determines texture information around the sub-pixel of the target pixel to be rendered according to the edge code, and, when the edge code is not a specific pattern, calculates a pixel value of the sub-pixel of the target pixel to be rendered based on a distance and according to the texture information.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this disclosure. The drawings are used to illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure. It is to be understood that the figures are not necessarily to scale, some elements may be shown larger than in actual implementation to clearly illustrate the concepts of the present disclosure.
FIG. 1 is a block diagram of an electronic device for performing a sub-pixel rendering method according to an embodiment of the invention.
FIG. 2 is a flowchart illustrating a method for distance-based sub-pixel rendering according to an embodiment of the invention.
FIG. 3 is a schematic diagram illustrating a notch and a rounded corner according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating a drawing process for multiple tiles of a source image according to an embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating how to perform a mirror process on an edge pixel according to an embodiment of the invention.
FIG. 6 is a diagram illustrating the four directions of a window, namely the horizontal (h) direction, the top-left-to-bottom-right (l) direction, the vertical (v) direction, and the top-right-to-bottom-left (r) direction, according to an embodiment of the present invention.
Fig. 7 is a diagram showing 9 cases of edge codes corresponding to the horizontal (h) direction according to an embodiment of the present invention.
Fig. 8A is a schematic diagram of texture information corresponding to an edge code of 0x3030 according to an embodiment of the present invention.
Fig. 8B is a diagram illustrating texture information corresponding to an edge code of 0xC0C0 according to an embodiment of the invention.
Fig. 9 is a diagram illustrating the size of a source pixel and the size of a sub-pixel according to an embodiment of the invention.
FIG. 10 is a diagram illustrating an arrangement of source pixels in a source image according to an embodiment of the invention.
FIG. 11 is a diagram illustrating an arrangement of sub-pixels according to an embodiment of the invention.
Fig. 12 is a schematic diagram illustrating the arrangement of source pixels of a source image overlaid on the arrangement of sub-pixels of the corresponding target image according to an embodiment of the invention.
FIG. 13 is a flowchart illustrating a method for rendering sub-pixels based on area and according to texture information according to an embodiment of the present invention.
Fig. 14 is a schematic diagram illustrating the arrangement of source pixels of a source image overlaid on the arrangement of sub-pixels of the corresponding target image according to another embodiment of the invention.
Fig. 15A to 15D are schematic diagrams illustrating calculation of the pixel value of a B-channel sub-pixel of a target pixel based on area and according to texture information, according to an embodiment of the invention.
Fig. 16 is a diagram illustrating texture information corresponding to 12 edge codes requiring sharpening according to an embodiment of the present invention.
Description of the symbols
100 electronic device
110 processor
120 storage
S201, S205, S210, S215, S216, S218, S220, S225
310 recess
320 round corner
420 saw tooth
421, 423 source pixels at the rounded corner
421a, 423a sub-regions inside the arc tangent line
421b sub-region outside the arc tangent line
450 arc tangent line
h, l, v, r window directions
701 to 709 coding cases
S1301, S1305, S1310, S1315, S1316, S1318, S1320, S1325
1501, 1521 target sub-pixels
1502 to 1505, 1522 to 1525 sub-pixels forming the diamond regions
1511 to 1516, 1531 to 1535 source pixels
1550, 1560 diamond regions
1550a to 1550f, 1560a to 1560e sub-regions constituting the diamond regions
V0 to V8 pixel values corresponding to a 3x3 window
Detailed Description
The following description is of the best mode for carrying out the invention and is intended to illustrate the general spirit of the invention, not to limit it. The true scope of the invention is defined by the appended claims.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of further features, integers, steps, operations, elements, components, and/or groups thereof.
FIG. 1 is a block diagram of an electronic device for performing a sub-pixel rendering method according to an embodiment of the invention. The electronic device 100 includes at least one processor 110 and a storage 120. The processor 110 may be implemented in a variety of ways, such as with dedicated hardware circuitry or general-purpose hardware (e.g., a single processor, multiple processors with parallel processing capability, a graphics processor, or another processor with computing capability), to convert a source image into a target image suitable for a display having a particular sub-pixel arrangement. In the embodiment of the present invention, the height of a sub-pixel of the display (not shown) is 2/3 of the height of a source pixel of the source image, and its width is 3/4 of the width of a source pixel. The number of source pixel rows in the source image equals the number of target pixel rows in the target image, and every 3 adjacent source pixels in each row are rendered by the processor 110 as 2 target pixels. Each target pixel of the target image comprises 3 sub-pixels corresponding to the R, G, and B channels respectively; each source pixel of the source image likewise comprises 3 pixel values corresponding to the R, G, and B channels; and the pixel value of each channel's sub-pixel of a target pixel is calculated from the pixel values of the corresponding channel of the source pixels. The storage 120 may be a non-volatile memory (e.g., ROM, flash memory, etc.) for storing at least the source image and the information required to convert it into a target image suitable for a display having a particular sub-pixel arrangement, such as the algorithms for converting source pixels into sub-pixels and the parameters for the distance-based and area-based sub-pixel rendering methods.
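As a quick illustration of these ratios, the following sketch computes the target grid dimensions; it is a minimal example consistent with the 3-source-to-2-target mapping described above, not code from the patent.

```python
def target_dimensions(src_width: int, src_height: int) -> tuple[int, int]:
    """Target pixel grid for the display described above: the row count is
    unchanged, and every 3 adjacent source pixels in a row become 2 target pixels."""
    assert src_width % 3 == 0, "width assumed to be a multiple of 3 for this sketch"
    return (src_width * 2 // 3, src_height)

# e.g. a 1920x1080 source image yields a 1280x1080 target pixel grid
print(target_dimensions(1920, 1080))  # (1280, 1080)
```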
In one embodiment, the electronic device 100 is a display panel controller coupled between a Graphics Processing Unit (GPU) (not shown) and a display (not shown). The electronic device 100 receives a source image from the GPU, converts the received source image into a target image, and transmits the target image to the display for display.
The processor 110 may use a distance-based subpixel rendering method (described in detail below) and/or an area-based subpixel rendering method (described in detail below) to convert a source image into a target image suitable for a display having a particular subpixel arrangement. The distance-based sub-pixel rendering method is described first below.
Fig. 2 is a flowchart illustrating a distance-based sub-pixel rendering method according to an embodiment of the invention, and the distance-based sub-pixel rendering method illustrated in fig. 2 will be described in detail with reference to fig. 3 to 12.
First, in step S201, the processor 110 obtains a source image from the storage 120. In one embodiment, the processor 110 receives the source image from the GPU and stores the received source image in the storage 120 before proceeding to step S201. The processor 110 then executes step S205.
In step S205, when the display displaying the target image has a notch and/or a rounded corner, the processor 110 performs anti-aliasing on the source pixels in the source image located at the notch or the rounded corner. In detail, the processor 110 first determines whether the display has a notch or a rounded corner. As shown in fig. 3, 310 is a notch at the edge of the display and 320 is a rounded corner at the edge of the display. In an embodiment, if the display has a notch and/or a rounded corner, the storage 120 stores therein coordinate information of source pixels in all source images corresponding to the notch and/or the rounded corner. If the processor 110 can retrieve coordinate information of source pixels in the source image corresponding to the notch and/or the rounded corner from the storage 120, it indicates that the display has the notch and/or the rounded corner. When the display has notches and/or rounded corners, the processor 110 multiplies the pixel values of the source pixels in the source image at the notches and/or rounded corners by an attenuation coefficient to perform anti-aliasing processing. In an example, the processor 110 multiplies pixel values of source pixels of the source image that are at the edge and are notches and/or rounded corners by an attenuation coefficient to soften the jaggies exhibited by the edge pixels. The subsequent step calculates the pixel value of each sub-pixel of the target image according to the pixel value of the source pixel of the softened source image. The attenuation coefficient is related to the area of the edge pixel cut by the circular arc, and can be obtained by the following formula:
Area_arch = (2 * offset - 1) / (2 * step)
where Area_arch is the attenuation coefficient, offset is the index of the source pixel's position within the jagged step, and step is the width of the jagged step.
For example, as shown in FIG. 4, region 410 is a region without light emitters, region 420 (depicted in dashed lines) is one of the jagged steps of a rounded corner or notch, and solid line 450 approximates the ideal tangent of the circular arc within region 420. For source pixel 421, region 421a lies inside the arc tangent line and region 421b lies outside it. As FIG. 4 shows, the width of region 420 is 5 source pixels, and source pixel 421 is the 1st source pixel of region 420 (i.e., offset = 1). Therefore, by the formula above, the attenuation coefficient of source pixel 421 is Area_arch = (2*1-1)/(2*5) = 1/10. In other words, the area of region 421a is 1/10 of the whole area of source pixel 421, and the softened pixel value of source pixel 421 is its original value multiplied by 1/10. In another example, source pixel 423 (the 3rd source pixel of region 420, i.e., offset = 3) has an attenuation coefficient Area_arch = (2*3-1)/(2*5) = 5/10; in other words, the area of region 423a is 5/10 of the whole area of source pixel 423, and the softened pixel value of source pixel 423 is its original value multiplied by 5/10. And so on.
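The attenuation formula is simple enough to state directly in code; the following is a minimal sketch assuming a 1-based offset, matching the worked examples above.

```python
def arch_attenuation(offset: int, step: int) -> float:
    """Attenuation coefficient for a source pixel within one jagged step of a
    rounded corner or notch: Area_arch = (2*offset - 1) / (2*step)."""
    return (2 * offset - 1) / (2 * step)

# Worked examples from the text (a jagged step 5 source pixels wide):
assert arch_attenuation(offset=1, step=5) == 1 / 10   # source pixel 421
assert arch_attenuation(offset=3, step=5) == 5 / 10   # source pixel 423
```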
In one embodiment, the processor 110 sets the pixel value of the source pixel in the source image having no corresponding sub-pixel in the target image to 0, i.e. sets the pixel value of the source pixel in the source image corresponding to the area of the display having no light emitter (area 410 in fig. 4) to 0.
In addition, in an embodiment of the invention, when the storage 120 stores the related information of the source pixel corresponding to the jagged region, only the coordinates of the starting point of the jagged region, the offset direction corresponding to the x-direction or the y-direction, and the offset amount of the source pixel may be stored. For example, as shown in the area 420 of FIG. 4, when storing the saw-tooth corresponding to the area 420, only the information of the coordinates corresponding to the source pixel 421, the offset direction corresponding to the x-direction, and the offset amount of 5 source pixels may be stored.
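A compact record for one jagged step might therefore look like the following sketch; the field names are illustrative, since the patent only lists which quantities are stored.

```python
from dataclasses import dataclass

@dataclass
class JaggedStep:
    """Per-step record for a rounded corner or notch (illustrative names).
    Only the start coordinate, offset direction, and offset amount are stored."""
    start_x: int   # coordinate of the first source pixel (e.g., pixel 421)
    start_y: int
    along_x: bool  # True if the offset runs in the x-direction
    amount: int    # offset amount in source pixels (5 for region 420)
```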
After the processor 110 performs anti-aliasing processing on the source pixels located at the notches and/or rounded corners, the process proceeds to step S210. In step S210, the processor 110 determines the coordinates (x_spr, y) of a target pixel to be rendered in the target image, then proceeds to step S215.
In step S215, the processor 110 calculates, for each sub-pixel of the target pixel (x_spr, y), the edge code of the corresponding source pixel in the source image, in order to determine the texture information around each sub-pixel of the target pixel (x_spr, y) to be rendered. In detail, the processor 110 performs edge detection on a window centered on the source pixel corresponding to each sub-pixel of the target pixel (x_spr, y), obtains the edge code of that source pixel, determines the texture around the sub-pixel according to the obtained edge code, and renders sub-pixels with different textures using different rendering methods to obtain the pixel value of each sub-pixel of the target pixel (x_spr, y). The x-coordinate of the source pixel corresponding to each sub-pixel is computed from the coordinates of the target pixel, and the computation differs depending on whether the target pixel (x_spr, y) lies in an odd or an even row of the target image.
To simplify the later description of the target pixel (x_spr, y), the definitions of even rows, odd rows, even columns, and odd columns are explained first. Taking the target pixel (x_spr, y) as an example: when x_spr % 2 == 0, the target pixel lies in an even column of the target image, and when x_spr % 2 == 1, it lies in an odd column, where % 2 denotes the remainder after division by 2; coordinates are 0-based, so a target pixel in column 1 has x_spr = 0, a target pixel in column 2 has x_spr = 1, and so on. Likewise, when y % 2 == 0 the target pixel lies in an even row of the target image, and when y % 2 == 1 it lies in an odd row; a target pixel in row 1 has y = 0, a target pixel in row 2 has y = 1, and so on. The same convention applies to source pixels in the source image and is not repeated here.
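In code, the convention amounts to nothing more than the following (0-based coordinates, as above):

```python
def parity(x_spr: int, y: int) -> tuple[str, str]:
    """Row/column parity of a target pixel under the 0-based convention above:
    column 1 has x_spr = 0 (even), column 2 has x_spr = 1 (odd), and so on."""
    row = "even" if y % 2 == 0 else "odd"
    col = "even" if x_spr % 2 == 0 else "odd"
    return row, col

assert parity(0, 1) == ("odd", "even")  # column 1, row 2
```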
When the target pixel (x_spr, y) to be rendered lies in an even row of the target image, the x-coordinate of the corresponding source pixel is obtained by one formula for the sub-pixels of the R and G channels and by a different formula for the sub-pixel of the B channel; and when the target pixel (x_spr, y) lies in an odd row of the target image, a second pair of formulas is used, again with the R/G-channel case differing from the B-channel case. (All four formulas appear in the original publication only as equation images and cannot be reconstructed here; they involve the floor() function.)
Here floor() denotes rounding down. Let (x, y) denote the coordinates of the source pixel corresponding to a sub-pixel of the target pixel (x_spr, y). Taking the source pixel (x, y) as the center pixel, the coordinates of all source pixels in the window follow directly: for a 3x3 window, the source pixel above (x, y) is (x, y-1), the source pixel to its left is (x-1, y), and so on for all 8 surrounding source pixels. The pixel values of these source pixels are then fetched from the source image by coordinate. When the processor 110 calculates the pixel values of sub-pixels at the image edge, it first mirrors the source pixels at the edge to obtain pixel values for virtual source pixels outside the edge. For example, as shown in FIG. 5, source pixels 0 to 15 in the shaded lower-right area are the source pixels of the source image. Because virtual source pixels outside the source image are needed when calculating target pixels at the edge, i.e., the pixel value of the virtual source pixel to the left of source pixel 4 is needed when calculating the sub-pixel corresponding to source pixel 4 of the source image, that virtual pixel is mapped to the pixel value of source pixel 5 to the right of source pixel 4, and so on, so that the processor 110 can carry out the related operations using virtual source pixel values when computing sub-pixels at the edge.
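One way to realize this mirroring is to reflect out-of-range indices across the edge sample; the sketch below is an assumption consistent with the FIG. 5 description rather than a rule stated verbatim in the patent.

```python
def mirror_index(i: int, n: int) -> int:
    """Reflect an out-of-range index back into [0, n), mirroring across the
    edge pixel without repeating it (e.g. -1 -> 1, and n -> n - 2)."""
    if i < 0:
        return -i
    if i >= n:
        return 2 * (n - 1) - i
    return i

# The virtual pixel just left of column 0 takes the value of column 1,
# mirroring how the pixel left of source pixel 4 maps to source pixel 5.
assert mirror_index(-1, 4) == 1
```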
Having obtained the window of the target pixel (x_spr, y) from the source image, the processor 110 calculates the edge code as follows: subtract the pixel values of a plurality of adjacent source pixels along one of the directions of the window from the pixel value of the source pixel corresponding to the sub-pixel, obtaining a plurality of first differences; subtract the pixel value of that source pixel from the pixel values of the adjacent source pixels, obtaining a plurality of second differences; obtain a first code from the comparison of the first differences with a first threshold; obtain a second code from the comparison of the second differences with a second threshold; combine the first code and the second code into the code of that direction; and finally combine the codes of the multiple directions to obtain the edge code.
The computation is described below using the 3x3 window of the sub-pixel of the R channel of the target pixel (x_spr, y) as an example. The edge code may consist of four hexadecimal digits; from left to right these digits correspond to the codes of the horizontal (h), top-left-to-bottom-right (l), vertical (v), and top-right-to-bottom-left (r) directions of the 3x3 window, each digit representing the texture information of one direction. It should be noted that although four digits are used in this embodiment, the invention is not limited thereto; the count is determined by the number of directions the edge code needs to represent. As shown in FIG. 6, the code of the horizontal (h) direction is calculated from the 3rd, 4th, and 5th source pixels of the 3x3 grid (V3 to V5 in the figure), the code of the top-left-to-bottom-right (l) direction from the 0th, 4th, and 8th source pixels (V0, V4, V8), the code of the vertical (v) direction from the 1st, 4th, and 7th source pixels (V1, V4, V7), and the code of the top-right-to-bottom-left (r) direction from the 2nd, 4th, and 6th source pixels (V2, V4, V6). For each direction, the first two bits of the code are generated by subtracting the pixel value of the central pixel from the pixel values of the surrounding pixels, and the last two bits by subtracting the pixel values of the surrounding pixels from that of the central pixel. Taking the horizontal (h) direction as an example, its code is h(f(V3-V4), f(V5-V4), f(V4-V3), f(V4-V5)), where f() is a function that outputs 1 when its argument exceeds a predetermined threshold and 0 otherwise, and h() is a function that converts the four binary digits into one hexadecimal digit. For example, with a threshold of 10 and V3 = 151, V4 = 148, V5 = 150: V3-V4 = 3 < 10, so f(V3-V4) = 0; V5-V4 = 2 < 10, so f(V5-V4) = 0; V4-V3 = -3 < 10, so f(V4-V3) = 0; and V4-V5 = -2 < 10, so f(V4-V5) = 0; the code of the horizontal (h) direction is therefore 0x0 (binary 0000). Referring to FIG. 7, which illustrates the 9 encoding cases of the horizontal (h) direction: as shown in 701, 0x0 indicates that the luminance values (i.e., pixel values, likewise hereafter) of V3, V4, and V5 do not differ much; V3, V4, and V5 are all drawn with white fill. They could equally all be drawn with black fill (not shown); the point is that the fill colors of V3, V4, and V5 are the same, i.e., their luminance values are close.
When V3 = 151, V4 = 120, V5 = 150: V3-V4 = 31 > 10, so f(V3-V4) = 1; V5-V4 = 30 > 10, so f(V5-V4) = 1; V4-V3 = -31 < 10, so f(V4-V3) = 0; and V4-V5 = -30 < 10, so f(V4-V5) = 0; the code of the horizontal (h) direction is therefore 0xC (binary 1100). As shown in 703 of FIG. 7, the code 0xC means the luminance values of V3 and V5 are both greater than that of V4. Similarly, as shown in 702, the code 0x3 indicates that the luminance values of V3 and V5 are both less than that of V4; as shown in 704, the code 0x1 indicates that the luminance values of V3 and V4 are both greater than that of V5; as shown in 705, the code 0x4 indicates that the luminance values of V3 and V4 are both less than that of V5; as shown in 706, the code 0x6 indicates that the luminance value of V3 is less than that of V4 and the luminance value of V4 is less than that of V5; as shown in 707, the code 0x2 indicates that the luminance values of V4 and V5 are both greater than that of V3; as shown in 708, the code 0x8 indicates that the luminance values of V4 and V5 are both less than that of V3; and as shown in 709, the code 0x9 indicates that the luminance value of V3 is greater than that of V4 and the luminance value of V4 is greater than that of V5.
The codes of the top-left-to-bottom-right (l), vertical (v), and top-right-to-bottom-left (r) directions are obtained in the same way. Arranging the codes of the horizontal (h), top-left-to-bottom-right (l), vertical (v), and top-right-to-bottom-left (r) directions from left to right yields an edge code of four hexadecimal digits, and the final edge code gives the texture information around the sub-pixel of the R channel of the target pixel (x_spr, y). In one embodiment, when the code of the horizontal (h) direction is 0x4 or 0x8, the texture around the sub-pixel of the target pixel (x_spr, y) is weak. When the edge code is 0x0111, 0x0222, 0x0333, 0x0444, 0x0CCC, 0xCC0C, 0x1102, 0x2201, 0x3303, 0x4408, or 0x8804 (as shown in FIG. 16), the texture around the sub-pixel of the target pixel (x_spr, y) is strong. And when the edge code is 0x3030 (as shown in FIG. 8A) or 0xC0C0 (as shown in FIG. 8B), the texture around the sub-pixel of the target pixel (x_spr, y) is a specific pattern.
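Putting the pieces together, the edge-code computation can be sketched as follows; the helper names and bit packing are illustrative, but the per-direction rule follows the horizontal-direction example above.

```python
THRESHOLD = 10  # example threshold used in the text

def f(diff: int) -> int:
    """1 if the difference exceeds the threshold, else 0."""
    return 1 if diff > THRESHOLD else 0

def direction_code(p: int, c: int, q: int) -> int:
    """One hexadecimal digit for a direction through centre c: the bits are
    f(p-c), f(q-c), f(c-p), f(c-q) from high to low."""
    return (f(p - c) << 3) | (f(q - c) << 2) | (f(c - p) << 1) | f(c - q)

def edge_code(v: list[int]) -> int:
    """Combine the h, l, v, r direction codes (left to right) for a 3x3
    window given as v[0..8] in row-major order."""
    d_h = direction_code(v[3], v[4], v[5])  # horizontal: V3, V4, V5
    d_l = direction_code(v[0], v[4], v[8])  # top-left to bottom-right
    d_v = direction_code(v[1], v[4], v[7])  # vertical: V1, V4, V7
    d_r = direction_code(v[2], v[4], v[6])  # top-right to bottom-left
    return (d_h << 12) | (d_l << 8) | (d_v << 4) | d_r

# Worked example from the text: V3=151, V4=120, V5=150 gives the h digit 0xC
assert direction_code(151, 120, 150) == 0xC
```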
After the processor 110 has computed the edge codes of the sub-pixels of the target pixel (x_spr, y) and determined the texture information around them, the process proceeds to step S216. In step S216, the processor 110 determines, for each sub-pixel of the target pixel (x_spr, y), whether the surrounding texture information is a specific pattern. In one embodiment, this means determining whether the edge code of the sub-pixel is 0x3030 or 0xC0C0; if not, the process proceeds to step S220 (described in detail later), and if so, to step S218.
To simplify the later description, the positional relationship between source pixels in the source image and sub-pixels of target pixels in the target image is explained first. In FIG. 10, each "o" represents a source pixel of the source image. In FIG. 11, each "Δ" represents an R-channel sub-pixel of a target pixel, each "◇" a G-channel sub-pixel, and each "□" a B-channel sub-pixel. As FIG. 9 shows, in this embodiment the height of a sub-pixel of a target pixel is 2/3 of the height of a source pixel (the number of target pixel rows equals the number of source pixel rows), and the width of each channel of a sub-pixel is 3/4 of the channel width of a source pixel (the number of target pixels per row is 2/3 of the number of source pixels). In other words, as shown in FIG. 12, when the source image is displayed as the target image, the positions of the source pixels do not coincide with the positions of the sub-pixels of the target image; therefore, when calculating the pixel values of the R-, G-, and B-channel sub-pixels of a target pixel, the processor 110 interpolates between the pixel values of the left and right source pixels nearest to the sub-pixel to obtain the sub-pixel's value. Step S218 is described next.
In step S218, the processor 110 calculates the pixel value of the sub-pixel of the target pixel (x_spr, y) by direct interpolation, as follows. When the target pixel (x_spr, y) is located in an even row of the target image, the pixel values of the sub-pixels of the R and G channels and of the B channel are calculated by formulas that scale the corresponding source pixel values by either factor_ave or factor_kep, under case conditions that appear only in the equation images. When the target pixel (x_spr, y) is located in an odd row of the target image, the pixel values are calculated by one formula when edgecode = 0x3030 and by another when edgecode = 0xC0C0. (The formulas themselves appear in the original publication only as equation images and cannot be reconstructed here.) In these formulas, R(G)_(x_spr, y) denotes the pixel value of the sub-pixel of the R channel or G channel of the target pixel at coordinates (x_spr, y) in the target image, B_(x_spr, y) denotes the pixel value of the sub-pixel of its B channel, R'(G')_(x, y) denotes the pixel value of the R channel or G channel of the source pixel at coordinates (x, y) in the source image, and B'_(x, y) denotes the pixel value of the B channel of the source pixel at coordinates (x, y); every coordinate expression in the formulas includes a rounding-down operation; factor_kep and factor_ave are preset values; and edgecode denotes the edge code. In the embodiment of the invention, the value of factor_kep is 1.0 and the value of factor_ave is 0.5. It is noted that the values of factor_kep and factor_ave are adjustable according to the user's requirements and are not limited by the present invention. In one example, when the x_spr coordinate of the sub-pixel of the R channel of the target pixel is 5, the processor 110 multiplies the pixel value of the source pixel whose x coordinate is 7 by factor_kep to obtain the pixel value of the corresponding sub-pixel of the target pixel.
In step S220, the processor 110 calculates the pixel value of the sub-pixel of the target pixel (x_spr, y) to be rendered based on distance and according to the texture information. In detail, for a target pixel (x_spr, y) to be rendered in an even row, the pixel values of the sub-pixels of the R channel, the G channel, and the B channel are obtained by one set of formulas, and for a target pixel to be rendered in an odd row by another set. (These formulas appear in the original publication only as equation images and cannot be reconstructed here; they weight nearby source pixel values by preset factors factor_rg00, factor_rg01, factor_rg10, and factor_rg11 for the R and G channels and factor_b00, factor_b01, factor_b10, and factor_b11 for the B channel.) In these formulas, R(G)_(x_spr, y) denotes the pixel value of the sub-pixel of the R channel or G channel of the target pixel (x_spr, y), B_(x_spr, y) denotes the pixel value of the sub-pixel of its B channel, R'(G')_(x, y) denotes the pixel value of the R channel or G channel of the source pixel at coordinates (x, y), and B'_(x, y) denotes the pixel value of the B channel of the source pixel at coordinates (x, y); every coordinate expression involves a rounding-down operation (e.g., floor(3*3/2) equals 4); and % 2 denotes the remainder after division by 2, so x_spr % 2 == 0 denotes even columns and x_spr % 2 == 1 denotes odd columns. In one embodiment, when the texture around the sub-pixel of the target pixel (x_spr, y) is weak, that is, when the code of the edge code in the horizontal (h) direction is 0x8 or 0x4, the calculated pixel value of the sub-pixel of the target pixel (x_spr, y) is smoothed. In detail, when the code in the horizontal (h) direction is 0x8 and x_spr % 2 == 0, or when the code in the horizontal (h) direction is 0x4 and x_spr % 2 == 1, factor_smooth is used in place of factor_rg(b)**, where factor_rg(b)** stands for any of factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10, or factor_b11, and factor_smooth and all of these factors are preset values.
For example, when the processor 110 calculates the pixel value of the R channel sub-pixel of the target pixel located at (3, 1) in the target image, it can be obtained according to the pixel values of the R channels of the source pixels located at (3, 1) and (4, 1) in the source image, and so on.
In one embodiment of the present invention, factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10, and factor_b11 all have the value 0.7. Alternatively, according to another embodiment of the present invention, factor_rg00, factor_rg10, factor_rg11, factor_b00, factor_b01, and factor_b10 have the value 1.0, while factor_rg01 and factor_b11 have the value 0.7. In other words, the factor values applied to the R-, G-, and B-channel sub-pixels of target pixels in different rows/columns can be changed according to the user's requirements for color display. In addition, when the texture around the sub-pixel of the target pixel is relatively smooth or absent, a factor value of 0.5 can also be applied directly.
After the processor 110 has calculated the pixel values of the sub-pixels of the target pixel (x_spr, y), the process proceeds to step S225. In step S225, the processor 110 checks whether any unrendered target pixels remain in the target image. If not, all target pixels of the target image have been rendered and the process ends; the processor 110 may then send the rendered target image to a display for display. Otherwise, the process returns to step S210 to render the next unrendered target pixel.
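The overall control flow of FIG. 2 can be summarized as the sketch below; the callables are illustrative stand-ins for the per-step computations described above, not functions named in the patent.

```python
def render_loop(width, height, edge_code_at, direct_value, distance_value):
    """Skeleton of FIG. 2, steps S210-S225: visit every target sub-pixel once
    and dispatch on whether its edge code is one of the specific patterns."""
    SPECIFIC_PATTERNS = (0x3030, 0xC0C0)
    target = {}
    for y in range(height):                 # S210/S225: loop until all rendered
        for x_spr in range(width):
            for channel in "RGB":
                code = edge_code_at(x_spr, y, channel)            # S215
                if code in SPECIFIC_PATTERNS:                     # S216 -> S218
                    value = direct_value(x_spr, y, channel, code)
                else:                                             # S216 -> S220
                    value = distance_value(x_spr, y, channel, code)
                target[(x_spr, y, channel)] = value
    return target
```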
The area-based sub-pixel rendering method is described below. FIG. 13 is a flowchart illustrating a method for rendering sub-pixels based on area and according to texture information according to an embodiment of the present invention. Fig. 13 will be described with reference to fig. 14 to 16.
Fig. 14 is a schematic diagram illustrating the arrangement of source pixels of a source image overlaid on the arrangement of sub-pixels of the corresponding target image according to another embodiment of the invention. Fig. 15A to 15D are schematic diagrams illustrating calculation of the pixel value of a B-channel sub-pixel of a target pixel based on area and according to texture information, according to an embodiment of the invention. Fig. 16 is a diagram illustrating the texture information corresponding to the 12 edge codes requiring sharpening according to an embodiment of the present invention.
As shown in FIG. 13, steps S1301, S1305, S1310, S1316, S1318, and S1325 are the same as steps S201, S205, S210, S216, S218, and S225 of FIG. 2, respectively, and their description is not repeated. Steps S1315 and S1320 are described below. In step S1320 of FIG. 13, the pixel value of the sub-pixel of the target pixel (x_spr, y) to be rendered is calculated based on area, whereas in step S220 of FIG. 2 it is calculated based on distance; consequently, step S1315 of FIG. 13 also differs from step S215 of FIG. 2, i.e., the embodiments of FIG. 13 and FIG. 2 compute different coordinates for the source pixel in the source image corresponding to each sub-pixel of the target pixel (x_spr, y), as follows:
When the target pixel (x_spr, y) to be rendered is located in an even row of the target image, the x-coordinate of the corresponding source pixel is obtained by one formula for the sub-pixels of the R and G channels and by a different formula for the sub-pixel of the B channel; and when the target pixel (x_spr, y) is located in an odd row, a second pair of formulas is used, with the R/G-channel case again differing from the B-channel case. (These four formulas appear in the original publication only as equation images and cannot be reconstructed here; they involve the floor() function.) Here floor() denotes rounding down, the coordinates of the source pixel corresponding to a sub-pixel of the target pixel (x_spr, y) to be rendered are denoted (x, y), and % 2 denotes the remainder after division by 2, so x_spr % 2 == 0 denotes even columns and x_spr % 2 == 1 denotes odd columns.
Step S1320 is described next. In step S1320, the processor 110 calculates the pixel value of the sub-pixel of the target pixel (x_spr, y) to be rendered based on area and according to the texture information. In detail, as shown in FIG. 14, "Δ" represents a sub-pixel of the R or G channel of the target pixel to be rendered, "□" represents a sub-pixel of its B channel, and the center of each small dashed square is the position of one source pixel of the source image. When calculating the pixel values of the sub-pixels of a target pixel to be rendered, the processor 110 obtains a window in the source image centered on the sub-pixel of the target pixel to be rendered; note that this window is different from the window used to calculate the edge code in step S1315. A 3x3 window is taken as an example in the following description.
When the sub-pixel of the target pixel to be rendered belongs to the R channel or the G channel, the source pixels contained in the corresponding window in the source image for sub-pixels located at even-row/even-column, even-row/odd-column, and odd-row/even-column positions of the target image are given by one 3x3 matrix of R'(G')_(x, y) entries, and for sub-pixels located at odd-row/odd-column positions by another. (The two coordinate matrices appear in the original publication only as images and cannot be reconstructed here.) Here R'(G')_(x, y) denotes the pixel value of the R channel or G channel of the source pixel at coordinates (x, y).
In addition, when the sub-pixel of the target pixel to be rendered belongs to the B channel, the source pixels contained in the corresponding window for sub-pixels located at even-row/even-column, odd-row/even-column, and odd-row/odd-column positions of the target image are given by one 3x3 matrix of B'_(x, y) entries, and for sub-pixels located at even-row/odd-column positions by another. (Again, the coordinate matrices appear in the original publication only as images.) Here B'_(x, y) denotes the pixel value of the B channel of the source pixel at coordinates (x, y) in the source image.
After obtaining the source pixels contained in the window corresponding to the sub-pixel of the target pixel to be rendered (e.g., the 3x3 dashed squares in FIG. 14), the processor 110 forms a diamond region based on the sub-pixels of the target pixels above, below, to the left of, and to the right of the sub-pixel being rendered. In FIGS. 15A to 15D, "Δ" represents an R-/G-channel sub-pixel of a target pixel to be rendered, "□" represents a B-channel sub-pixel, and the 9 small squares represent the source pixels contained in the corresponding window. There are 4 different types of diamond region: diamond 1550 in FIG. 15A is obtained when the B-channel sub-pixel of the target pixel to be rendered sits at an even row and even column of the target image, diamond 1560 in FIG. 15B when it sits at an even row and odd column, the diamond in FIG. 15C when it sits at an odd row and even column, and the diamond in FIG. 15D when it sits at an odd row and odd column.
Then, the processor 110 calculates the pixel value of the sub-pixel of the target pixel to be rendered of the corresponding target image according to the area ratio of the diamond-shaped area to the surrounding source pixels. That is, the processor 110 determines the area ratio of the diamond region to the surrounding source pixels, and finally multiplies and sums up the pixel values of the corresponding source pixels according to the area ratio of each sub-region to the corresponding source pixels to obtain the pixel values of the sub-pixels of the target pixel to be rendered.
As shown in FIG. 15A, when the processor 110 is to obtain the pixel value of B-channel sub-pixel 1501 of the target pixel to be rendered, it first forms diamond region 1550 based on the R-/G-channel sub-pixels 1502 to 1505 of the target pixels above, below, to the left of, and to the right of sub-pixel 1501, so that the sub-pixel's value can be obtained from the areas the diamond region occupies within the source pixels of the source image. Diamond region 1550 consists of sub-regions 1550a to 1550f, each of which is a portion of one of the source pixels in the two right-hand columns of the 3x3 window (source pixels 1511 to 1516 in the figure). The processor 110 then obtains the area ratio of sub-region 1550a to source pixel 1511, of sub-region 1550b to source pixel 1512, of sub-region 1550c to source pixel 1513, of sub-region 1550d to source pixel 1514, of sub-region 1550e to source pixel 1515, and of sub-region 1550f to source pixel 1516, multiplies each area ratio by the B-channel pixel value of the corresponding source pixel, and sums the results to obtain the pixel value of B-channel sub-pixel 1501 of the target pixel to be rendered. For example, sub-region 1550d occupies 54/144 of the area of source pixel 1514; if the B-channel pixel value of source pixel 1514 is 144, the contribution of sub-region 1550d is 54, and so on.
As shown in FIG. 15B, when the processor 110 is to obtain the pixel value of B-channel sub-pixel 1521 of the target pixel, it first forms diamond region 1560 based on the R-/G-channel sub-pixels 1522 to 1525 above, below, to the left of, and to the right of sub-pixel 1521, so that the sub-pixel's value can be obtained from the areas the diamond region occupies within the source pixels. Diamond region 1560 consists of sub-regions 1560a to 1560e, which are portions of the 2nd, 4th to 6th, and 8th source pixels of the 3x3 window (source pixels 1531 to 1535 in the figure). The processor 110 then obtains the area ratio of sub-region 1560a to source pixel 1531, of sub-region 1560b to source pixel 1532, of sub-region 1560c to source pixel 1533, of sub-region 1560d to source pixel 1534, and of sub-region 1560e to source pixel 1535, multiplies each area ratio by the B-channel pixel value of the corresponding source pixel, and sums the results to obtain the pixel value of B-channel sub-pixel 1521 of the target pixel to be rendered.
The pixel values of the sub-pixels of the target pixels to be rendered in FIGS. 15C and 15D are calculated similarly to FIGS. 15A and 15B, differing only in which areas of the source pixels the diamond regions occupy, so the details are omitted for brevity. It should be noted that, as mentioned above, the area-configuration information relating the source pixels of the source image to the sub-pixels of each channel of the target pixel to be rendered is stored in the storage 120 in advance; the processor 110 accesses the configuration matching the row and column of the sub-pixel and substitutes the pixel values of the 3x3 source pixels for each sub-region to obtain the pixel value of the corresponding sub-pixel. Note also that the configuration of the diamond regions within the 3x3 source pixels for the R- or G-channel sub-pixels of the target pixel to be rendered is the reverse of that for the B-channel sub-pixels. In other words, when "Δ" and "□" are interchanged, i.e., when "□" denotes the R-/G-channel sub-pixel and "Δ" the B-channel sub-pixel of the target pixel to be rendered, FIG. 15A corresponds to an R- or G-channel sub-pixel at an odd row and odd column, FIG. 15B to one at an odd row and even column, FIG. 15C to one at an even row and odd column, and FIG. 15D to one at an even row and even column.
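However the diamond is configured, the per-sub-pixel computation reduces to an area-weighted sum over the 3x3 window; the following is a minimal sketch, assuming the area ratios come from the precomputed configuration tables in storage 120 (function and parameter names are illustrative).

```python
def area_based_value(window, area_ratios):
    """Area-based sub-pixel value: sum of (covered fraction) x (source value)
    over the 3x3 window. `window[r][c]` is a channel value of a source pixel;
    `area_ratios` maps (r, c) to the fraction of that source pixel covered by
    the diamond region (e.g. 54/144 for sub-region 1550d of FIG. 15A)."""
    return sum(frac * window[r][c] for (r, c), frac in area_ratios.items())

# Illustrative only: one covered source pixel with value 144 and coverage
# 54/144 contributes 54 to the sub-pixel value.
assert area_based_value([[0, 0, 0], [0, 144, 0], [0, 0, 0]], {(1, 1): 54 / 144}) == 54.0
```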
For example, the source pixels located around the R-channel sub-pixel of the target pixel to be rendered at (1, 1) form a 3x3 window. (The coordinate matrix appears in the original publication only as an image.) Since (1, 1) is an odd row and odd column by the convention above, the processor 110 calculates the pixel value of the R-channel sub-pixel of the target pixel to be rendered by multiplying the pixel values of these source pixels by the diamond-region area configuration of FIG. 15A.
In addition, according to an embodiment of the present invention, in the area-based sub-pixel rendering method, when the texture information corresponding to the sub-pixel of the target pixel to be rendered matches any of the 12 patterns shown in FIG. 16 (where white represents "1" and black represents "0"), that is, when the texture corresponding to the sub-pixel is strong, the processor 110 additionally sharpens the sub-pixel of the target pixel to be rendered so that the target image is clearer. In detail, when the texture corresponding to the sub-pixel is strong, the processor 110 performs a convolution operation on the 3x3 source pixels corresponding to the sub-pixel in the source image using a diamond filter, obtaining a sharpening parameter. The sharpening parameter is then added to the area-based pixel value of the sub-pixel, and the result is the sharpened pixel value of the sub-pixel of the target pixel to be rendered. The following is an example of a diamond filter in one embodiment of the invention:
(The coefficients of the diamond filter appear in the original publication only as an image and cannot be reconstructed here.)
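A sketch of the sharpening step is given below. The kernel coefficients are an assumption: the patent's actual filter is not recoverable, so a standard diamond-shaped Laplacian is used; only the diamond shape and the convolve-then-add structure come from the text.

```python
import numpy as np

# Hypothetical diamond-shaped 3x3 kernel (the patent's coefficients appear
# only as an image): centre positive, edge-adjacent taps negative, corners 0.
DIAMOND_FILTER = np.array([[ 0, -1,  0],
                           [-1,  4, -1],
                           [ 0, -1,  0]], dtype=float)

def sharpened_value(area_value: float, window: np.ndarray) -> float:
    """Convolve the 3x3 source window with the diamond filter to obtain the
    sharpening parameter, then add it to the area-based sub-pixel value."""
    sharpening_parameter = float((window * DIAMOND_FILTER).sum())
    return area_value + sharpening_parameter
```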
according to another embodiment of the present invention, after the processor 110 obtains the pixel values of the sub-pixels of the R channel, the G channel and the B channel corresponding to the target pixel to be rendered according to the distance-based sub-pixel rendering method and the area-based sub-pixel rendering method, the processor 110 may further combine the two calculation results according to the predetermined weight to obtain the final pixel value. For example, the processor 110 may give a weight of 0.5 to each of the two calculation results to average the pixel values obtained by the distance-based sub-pixel rendering method and the area-based sub-pixel rendering method.
In summary, with the sub-pixel rendering method and apparatus of the present invention, target pixels numbering only 2/3 of the source pixels are obtained, without degrading image quality, by interpolating between two or more source pixels of the source image, so that 1/3 of the light emitters can be saved. In addition, when obtaining the pixel value of each sub-pixel of the target pixel to be rendered, the method treats pixels with special texture specially (e.g., when the edge code is a specific pattern, as described above) according to the texture information around the sub-pixel, and, when calculating sub-pixel values by different methods (e.g., distance-based and/or area-based), correspondingly smooths or sharpens the calculated values according to the texture information (e.g., the weak- and strong-texture cases described above) to obtain the best image conversion effect. Moreover, for a display having a rounded corner or a notch, the method may anti-alias the source pixels corresponding to the rounded corner or notch in advance, improving the quality of the final output image.
While the present invention has been described with reference to the above embodiments, the description is not intended to limit the invention. Rather, the invention covers modifications and similar arrangements apparent to those skilled in the art. The scope of the appended claims should therefore be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (19)

1. A method of subpixel rendering to generate a target image from a source image, the method comprising:
acquiring the source image;
determining a target pixel to be rendered in the target image;
calculating an edge code of a source pixel in the source image corresponding to a sub-pixel of the target pixel to be rendered, wherein the edge code is obtained by performing edge detection on a window centered on the source pixel and comprises a plurality of bits, wherein each bit represents texture information in one direction;
determining texture information around the sub-pixels of the target pixel to be rendered according to the edge code; and
when the edge code is not a specific pattern, calculating a pixel value of the sub-pixel of the target pixel to be rendered according to the texture information and based on a distance.
2. The subpixel rendering method of claim 1, wherein said computing said edge code of said source pixel in said source image corresponding to said subpixel of said target pixel to be rendered comprises:
obtaining source coordinates of the source pixel corresponding to the sub-pixel of the target pixel to be rendered;
acquiring a plurality of pixel values of the window centered on the source coordinates in the source image; and
calculating the edge code according to the plurality of pixel values of the window.
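As an illustration of claims 1–2, a minimal sketch of the edge-code computation follows. The claims fix neither the edge-detection operator, the window size, nor the bit layout, so the Sobel-style operators, 3 × 3 window, threshold, and 4-bit layout below are all assumptions:

```python
import numpy as np

# Assumed directional edge detectors over a 3x3 window; the claim only
# states that edge detection is performed on a window centered on the
# source pixel and that each bit encodes texture in one direction.
EDGE_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float32)
EDGE_V = EDGE_H.T

def edge_code(window: np.ndarray, threshold: float = 32.0) -> int:
    """Build a small edge code from a 3x3 window of one channel.

    Bit layout (assumed): bit 0 = horizontal edge, bit 1 = vertical
    edge, bits 2 and 3 = the two diagonal directions.
    """
    responses = (
        float(np.sum(window * EDGE_H)),       # horizontal edge response
        float(np.sum(window * EDGE_V)),       # vertical edge response
        float(window[0, 0] - window[2, 2]),   # main-diagonal difference
        float(window[0, 2] - window[2, 0]),   # anti-diagonal difference
    )
    code = 0
    for bit, response in enumerate(responses):
        if abs(response) > threshold:
            code |= 1 << bit                  # set the bit for this direction
    return code
```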
3. The subpixel rendering method of claim 2, wherein when the target pixel to be rendered is located in an even row of the target image, the source coordinates corresponding to the sub-pixels of the R channel and/or the G channel of the target pixel to be rendered are
[formula given as an image in the original document]
and the source coordinates corresponding to the sub-pixel of the B channel of the target pixel to be rendered are
[formula given as an image in the original document]
wherein x_spr represents the x-coordinate of the target pixel to be rendered, y represents the y-coordinate of the target pixel to be rendered, and floor() represents rounding down to an integer.
4. The subpixel rendering method of claim 2, wherein when the target pixel to be rendered is located in an odd row of the target image, the source coordinates corresponding to the sub-pixels of the R channel and/or the G channel of the target pixel to be rendered are
[formula given as an image in the original document]
and the source coordinates corresponding to the sub-pixel of the B channel of the target pixel to be rendered are
[formula given as an image in the original document]
wherein x_spr represents the x-coordinate of the target pixel to be rendered, y represents the y-coordinate of the target pixel to be rendered, and floor() represents rounding down to an integer.
5. The sub-pixel rendering method of claim 1, wherein the edge code includes four hexadecimal digits, and wherein the edge code of the specific pattern is 0x3030 or 0xC0C0.
6. The sub-pixel rendering method of claim 1, wherein the calculating of the pixel value of the sub-pixel of the target pixel to be rendered based on the distance according to the texture information comprises:
when the texture information around the sub-pixel of the target pixel to be rendered is weak, performing smoothing when calculating the pixel value of the sub-pixel of the target pixel to be rendered.
7. The subpixel rendering method of claim 6, wherein when the code in the horizontal direction in the edge code is 0x8 or 0x4, the texture information around the subpixel of the target pixel to be rendered is determined to be weak.
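Claims 5 and 7 pin down the edge code's shape: four hexadecimal digits, with 0x3030 and 0xC0C0 as the specific patterns and 0x8 or 0x4 in the horizontal-direction digit marking weak texture. A sketch of those tests follows; which digit holds the horizontal code is an assumption:

```python
SPECIFIC_PATTERNS = {0x3030, 0xC0C0}  # edge codes handled by claims 10-11

def is_specific_pattern(edgecode: int) -> bool:
    """True when the edge code matches one of the specific patterns."""
    return edgecode in SPECIFIC_PATTERNS

def horizontal_code(edgecode: int) -> int:
    # Assumes the horizontal-direction code is the most significant of
    # the four hexadecimal digits; the claims do not fix its position.
    return (edgecode >> 12) & 0xF

def texture_is_weak(edgecode: int) -> bool:
    """Per claim 7: a horizontal code of 0x8 or 0x4 means weak texture."""
    return horizontal_code(edgecode) in (0x8, 0x4)
```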
8. The subpixel rendering method of claim 6, wherein when the sub-pixel of the target pixel to be rendered is located in an even row of the target image, the smoothing performed when calculating the pixel value of the sub-pixel of the target pixel to be rendered uses the following formulas:
[formulas given as images in the original document]
wherein, when the code in the horizontal direction is 0x8 and x_spr % 2 = 0, or when the code in the horizontal direction is 0x4 and x_spr % 2 = 1, factor_smooth is used to replace factor_rg(b)**, where factor_rg(b)** stands for factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10 or factor_b11;
wherein the two left-hand-side symbols (given as images in the original) refer, respectively, to the pixel value of the sub-pixel of the R channel or the G channel and to the pixel value of the sub-pixel of the B channel of the target pixel to be rendered; R'(G')_(x,y) refers to the pixel value of the R channel or the G channel of the source pixel with coordinates (x, y), and B'_(x,y) refers to the pixel value of the B channel of the source pixel with coordinates (x, y); each fractional coordinate expression in the formulas includes a rounding-down operation; x_spr % 2 = 0 indicates that the target pixel to be rendered is located in an even column of the target image, and x_spr % 2 = 1 indicates that it is located in an odd column; and factor_smooth, factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10 and factor_b11 are all preset values.
9. The subpixel rendering method of claim 6, wherein when the target pixel to be rendered is located in an odd row of the target image, the smoothing performed when calculating the pixel value of the sub-pixel of the target pixel to be rendered uses the following formulas:
[formulas given as images in the original document]
wherein, when the code in the horizontal direction is 0x8 and x_spr % 2 = 0, or when the code in the horizontal direction is 0x4 and x_spr % 2 = 1, factor_smooth is used to replace factor_rg(b)**, where factor_rg(b)** stands for factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10 or factor_b11;
wherein the two left-hand-side symbols (given as images in the original) refer, respectively, to the pixel value of the sub-pixel of the R channel or the G channel and to the pixel value of the sub-pixel of the B channel of the target pixel to be rendered; R'(G')_(x,y) refers to the pixel value of the R channel or the G channel of the source pixel with coordinates (x, y), and B'_(x,y) refers to the pixel value of the B channel of the source pixel with coordinates (x, y); each fractional coordinate expression in the formulas includes a rounding-down operation; x_spr % 2 = 0 indicates that the target pixel to be rendered is located in an even column of the target image, and x_spr % 2 = 1 indicates that it is located in an odd column; and factor_smooth, factor_rg00, factor_rg01, factor_rg10, factor_rg11, factor_b00, factor_b01, factor_b10 and factor_b11 are all preset values.
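The factor-replacement rule shared by claims 8 and 9 can be isolated as below; the interpolation formulas themselves appear only as images, so only the selection logic is shown, with names mirroring the claim text:

```python
def select_factor(horizontal_code: int, x_spr: int,
                  factor_smooth: float, factor_default: float) -> float:
    """Pick the interpolation factor for one sub-pixel.

    factor_default stands for whichever of factor_rg00..factor_rg11 or
    factor_b00..factor_b11 the formula would otherwise use; it is
    replaced by factor_smooth when the horizontal code is 0x8 on an
    even column (x_spr % 2 == 0) or 0x4 on an odd column (x_spr % 2 == 1).
    """
    if (horizontal_code == 0x8 and x_spr % 2 == 0) or \
       (horizontal_code == 0x4 and x_spr % 2 == 1):
        return factor_smooth
    return factor_default
```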
10. The method of subpixel rendering of claim 1, further comprising:
when the edge code is the specific pattern and the target pixel to be rendered is located in an even row of the target image, calculating the pixel value of the sub-pixel of the target pixel to be rendered by the following formulas:
[formulas given as images in the original document]
wherein the left-hand-side symbols (given as images in the original) refer, respectively, to the pixel value of the sub-pixel of the R channel or the G channel and to the pixel value of the sub-pixel of the B channel of the target pixel to be rendered with coordinates (x_spr, y); R'(G')_(x,y) refers to the pixel value of the R channel or the G channel of the source pixel with coordinates (x, y) in the source image, and B'_(x,y) refers to the pixel value of the B channel of the source pixel with coordinates (x, y) in the source image; each fractional coordinate expression in the formulas includes a rounding-down operation; factor_kep and factor_ave are preset values; and edgecode refers to the edge code.
11. The method of subpixel rendering of claim 1, further comprising:
when the edge code is the specific pattern and the target pixel to be rendered is located in an odd row of the target image, calculating the pixel value of the sub-pixel of the target pixel to be rendered by the following formulas:
[formulas given as images in the original document] (when edgecode is 0xC0C0)
wherein the left-hand-side symbols (given as images in the original) refer, respectively, to the pixel value of the sub-pixel of the R channel or the G channel and to the pixel value of the sub-pixel of the B channel of the target pixel to be rendered with coordinates (x_spr, y); R'(G')_(x,y) refers to the pixel value of the R channel or the G channel of the source pixel with coordinates (x, y) in the source image, and B'_(x,y) refers to the pixel value of the B channel of the source pixel with coordinates (x, y) in the source image; each fractional coordinate expression in the formulas includes a rounding-down operation; factor_kep is a preset value; and edgecode refers to the edge code.
12. The subpixel rendering method of claim 1, wherein when the target image has at least one notch and/or rounded corner, the pixel values of the source pixels in the source image corresponding to the notch and/or the rounded corner are multiplied by an attenuation coefficient.
13. The sub-pixel rendering method of claim 12, wherein the attenuation coefficient is related to the area cut off by a tangent to the circular arc.
14. The sub-pixel rendering method of claim 12, wherein the attenuation coefficient is obtained by the following equation:
Area_arch = (2 * offset - 1) / (2 * step)
wherein Area_arch is the attenuation coefficient, offset is the position index of the source pixel within a sawtooth, and step is the width of the sawtooth.
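Claim 14's expression computes directly; a short sketch (the function names are assumptions), together with the multiplication required by claim 12:

```python
def attenuation_coefficient(offset: int, step: int) -> float:
    """Area_arch = (2*offset - 1) / (2*step), where offset is the
    position index of the source pixel within a sawtooth and step is
    the width of the sawtooth."""
    return (2 * offset - 1) / (2 * step)

def attenuate(pixel_value: float, offset: int, step: int) -> float:
    # Per claim 12: the source pixel in the notch / rounded-corner
    # area is multiplied by the attenuation coefficient.
    return pixel_value * attenuation_coefficient(offset, step)
```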
15. The subpixel rendering method of claim 12, wherein coordinate information of the notch and/or the rounded corner is saved in a register, and the location of the source pixel in the source image corresponding to the notch and/or the rounded corner can be determined using the register.
16. The sub-pixel rendering method of claim 12, wherein if a source pixel in the source image has no corresponding sub-pixel of the target pixel to be rendered in the target image, the pixel value of that source pixel in the source image is set to 0.
17. A sub-pixel rendering apparatus, comprising:
a storage for storing a source image and a target image; and
a processor for generating the target image from the source image;
wherein the processor takes the source image from the storage, determines a target pixel to be rendered in the target image, calculates an edge code of a source pixel in the source image corresponding to a sub-pixel of the target pixel to be rendered, wherein the edge code is obtained by performing edge detection on a window centered on the source pixel and includes a plurality of bits, each bit representing texture information in one direction, determines texture information around the sub-pixel of the target pixel to be rendered according to the edge code, and, when the edge code is not a specific pattern, calculates a pixel value of the sub-pixel of the target pixel to be rendered according to the texture information and based on a distance.
18. The subpixel rendering apparatus of claim 17, wherein, when calculating the pixel value of the sub-pixel of the target pixel to be rendered according to the texture information and based on the distance, the processor performs smoothing if the texture information around the sub-pixel of the target pixel to be rendered is weak.
19. The sub-pixel rendering apparatus of claim 17, wherein the processor determines that the texture information around the sub-pixel of the target pixel to be rendered is weak when the code in the horizontal direction in the edge code is 0x8 or 0x4.
CN201910332550.6A 2019-04-24 2019-04-24 Sub-pixel rendering method and device Active CN110060619B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910332550.6A CN110060619B (en) 2019-04-24 2019-04-24 Sub-pixel rendering method and device
US16/592,061 US11030937B2 (en) 2019-04-24 2019-10-03 Sub-pixel rendering method and device
US17/318,126 US11158236B2 (en) 2019-04-24 2021-05-12 Sub-pixel rendering method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910332550.6A CN110060619B (en) 2019-04-24 2019-04-24 Sub-pixel rendering method and device

Publications (2)

Publication Number Publication Date
CN110060619A CN110060619A (en) 2019-07-26
CN110060619B true CN110060619B (en) 2022-05-10

Family

ID=67320423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910332550.6A Active CN110060619B (en) 2019-04-24 2019-04-24 Sub-pixel rendering method and device

Country Status (1)

Country Link
CN (1) CN110060619B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419147B (en) * 2020-04-14 2023-07-04 上海哔哩哔哩科技有限公司 Image rendering method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100818988B1 (en) * 2006-09-05 2008-04-04 삼성전자주식회사 Method and apparatus for processing image signal
JP5910529B2 (en) * 2013-02-15 2016-04-27 ソニー株式会社 Display device and electronic device
CN104766548A (en) * 2015-03-17 2015-07-08 京东方科技集团股份有限公司 Display device and display method thereof
JP2016206243A (en) * 2015-04-15 2016-12-08 株式会社ジャパンディスプレイ Display device and electronic apparatus
KR102369368B1 (en) * 2015-09-30 2022-03-02 엘지디스플레이 주식회사 Image processing circuit and display device using the same
US20180137602A1 (en) * 2016-11-14 2018-05-17 Google Inc. Low resolution rgb rendering for efficient transmission
CN108417177A (en) * 2017-02-10 2018-08-17 深圳云英谷科技有限公司 Display pixel arrangement and its driving circuit
CN109559650B (en) * 2019-01-16 2021-01-12 京东方科技集团股份有限公司 Pixel rendering method and device, image rendering method and device, and display device

Also Published As

Publication number Publication date
CN110060619A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
CN110047417B (en) Sub-pixel rendering method and device
US11158236B2 (en) Sub-pixel rendering method and device
US7570260B2 (en) Tiled view-maps for autostereoscopic interdigitation
CN102254504B (en) Image processing method and display device using the same
US20040145599A1 (en) Display apparatus, method and program
US9071835B2 (en) Method and apparatus for generating multiview image with hole filling
US20160247440A1 (en) Display method and display panel
EP3895153A1 (en) Method of driving pixel arrangement structure having plurality of subpixels, driving chip for driving pixel arrangement structure having plurality of subpixels, display apparatus, and computer-program product
JP2009075869A (en) Apparatus, method, and program for rendering multi-viewpoint image
JP2022511319A (en) Distance field color palette
MXPA03002165A (en) Hardware-enhanced graphics acceleration of pixel sub-component-oriented images.
US20120223941A1 (en) Image display apparatus, method, and recording medium
CN110992867A (en) Image processing method and display device
CN110060619B (en) Sub-pixel rendering method and device
US7050066B2 (en) Image processing apparatus and image processing program
US6731301B2 (en) System, method and program for computer graphics rendering
JP4180043B2 (en) Three-dimensional graphic drawing processing device, image display device, three-dimensional graphic drawing processing method, control program for causing computer to execute the same, and computer-readable recording medium recording the same
US7532773B2 (en) Directional interpolation method and device for increasing resolution of an image
TWI356394B (en) Image data generating device, image data generatin
US20120062559A1 (en) Method for Converting Two-Dimensional Image Into Stereo-Scopic Image, Method for Displaying Stereo-Scopic Image and Stereo-Scopic Image Display Apparatus for Performing the Method for Displaying Stereo-Scopic Image
US7532216B2 (en) Method of scaling a graphic character
KR20140061064A (en) Apparatus for processing image of vehicle display panal
EP0855682B1 (en) Scan line rendering of convolutions
US6445392B1 (en) Method and apparatus for simplified anti-aliasing in a video graphics system
US6570562B1 (en) Method for drawing patterned lines in a system supporting off-screen graphics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210311

Address after: 201203 3rd floor, building 2, No. 200, zhangheng Road, Pudong New Area pilot Free Trade Zone, Shanghai

Applicant after: Gryfield Intelligent Technology Co.,Ltd.

Address before: Room 301, 2537 Jinke Road, Zhangjiang High Tech Park, Pudong New Area, Shanghai 201203

Applicant before: VIA ALLIANCE SEMICONDUCTOR Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 201203, 11th Floor, Building 3, No. 889 Bibo Road, China (Shanghai) Pilot Free Trade Zone, Shanghai

Patentee after: Granfei Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: 201203 3rd floor, building 2, No. 200, zhangheng Road, Pudong New Area pilot Free Trade Zone, Shanghai

Patentee before: Gryfield Intelligent Technology Co.,Ltd.

Country or region before: China