CN115442573A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN115442573A
Authority
CN
China
Prior art keywords
image
pixel
pixels
white channel
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211013504.8A
Other languages
Chinese (zh)
Other versions
CN115442573B (en)
Inventor
陈涛
仇康
陈浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Goodix Technology Co Ltd
Original Assignee
Shenzhen Goodix Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Goodix Technology Co Ltd
Priority to CN202211013504.8A
Publication of CN115442573A
Application granted
Publication of CN115442573B
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The embodiments of the application provide an image processing method, an image processing device and electronic equipment, relating to the technical field of image processing, which can solve the problem that pseudo color and zipper noise are easily generated in the color image obtained by demosaicing an image from a CFA with added W pixels. The image processing method includes: acquiring an image from an image sensor; combining the two color pixels in each sub-pixel block of the image into a color channel pixel, and combining the two white pixels in each sub-pixel block of the image into a white channel pixel; splitting the image into a white channel image and a color channel image, wherein the white channel image comprises all white channel pixels and the color channel image comprises all color channel pixels; correcting the white channel image to obtain a white channel corrected image; and performing image fusion on the white channel corrected image and the color channel image to obtain a demosaiced image.

Description

Image processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
With the development of electronic devices such as mobile phones, users increasingly demand high-quality pictures and video content shot with these devices. However, due to cost, power consumption and size constraints, image sensors on electronic devices usually rely on a Color Filter Array (CFA) to obtain limited image information, so interpolation algorithms must be designed for the CFA to restore high-quality images. A common CFA adopts a Bayer pattern arrangement and is composed of three types of color filters that pass specific wavelengths: red (R), green (G) and blue (B). However, the signal-to-noise ratio of the image sensor corresponding to a Bayer-pattern CFA is low; therefore, more and more image sensors add white (W) pixel color filters to the CFA, and since a white pixel transmits visible light over a wide band, it can increase the amount of incident light, improve the signal-to-noise ratio and improve the sensitivity.
However, demosaicing an image from a CFA with added W pixels currently tends to produce pseudo color and zipper noise in the resulting color image.
Disclosure of Invention
An image processing method, an image processing device and an electronic device are provided, which can solve the problem that pseudo color and zipper noise are easily generated in the color image obtained by demosaicing an image from a CFA with added W pixels.
In a first aspect, an image processing method is provided, including: the method comprises the steps of obtaining an image from an image sensor, wherein the image comprises a plurality of repeating units which are arranged in a plurality of rows and a plurality of columns, each repeating unit comprises four pixel blocks which are arranged in 2 rows and 2 columns, each pixel block comprises sub-pixel blocks which are arranged in m rows and n columns, m is more than or equal to 1, n is more than or equal to 1, each sub-pixel block comprises two white pixels which are arranged along a first diagonal direction and two color pixels which are arranged along a second diagonal direction, the color pixels in the two pixel blocks which are arranged along the first diagonal direction in the four pixel blocks are green pixels, and the color pixels in the two pixel blocks which are arranged along the second diagonal direction in the four pixel blocks are blue pixels and red pixels respectively; combining two color pixels in each sub-pixel block in the image into a color channel pixel, and combining two white pixels in each sub-pixel block in the image into a white channel pixel; splitting an image into a white channel image and a color channel image, wherein the white channel image comprises all white channel pixels, and the color channel image comprises all color channel pixels; correcting the white channel image to obtain a white channel corrected image; correcting the white channel image includes: increasing the blur degree of at least part of white channel pixels in the white channel image in a first diagonal direction, and/or reducing the sharpness of at least part of white channel pixels in the white channel image in a second diagonal direction; and carrying out image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
In a possible embodiment, the process of increasing the blur degree of at least some white channel pixels in the white channel image in the first diagonal direction and/or the process of decreasing the sharpness degree of at least some white channel pixels in the white channel image in the second diagonal direction comprises: for each white channel pixel in the white channel image, calculating a first gradient in a first diagonal direction and a second gradient in a second diagonal direction; and for the white channel pixel of which the ratio of the first gradient to the second gradient meets the preset condition, if the first gradient is smaller than the second gradient, performing blur correction, and if the first gradient is larger than the second gradient, performing sharpening correction.
In one possible embodiment, the first gradient is a sum of absolute values of differences of every two adjacent white channel pixels in the first diagonal direction in a neighborhood centered on the current white channel pixel; the second gradient is the sum of absolute values of difference values of every two adjacent white channel pixels in a second diagonal direction in a neighborhood taking the current white channel pixel as a center.
In one possible embodiment, the neighborhood is white channel pixels arranged in 5 rows and 5 columns.
In one possible embodiment, the ratio of the first gradient and the second gradient satisfies the preset condition: the maximum value of the first ratio and the second ratio is greater than a first preset value, the first ratio is the ratio of the first gradient to the second gradient, and the second ratio is the ratio of the second gradient to the first gradient.
In a possible embodiment, the first preset value is greater than 5 and less than 8.
In one possible embodiment, for a white channel pixel whose ratio of the first gradient to the second gradient satisfies the preset condition, performing the blur correction processing if the first gradient is smaller than the second gradient and performing the sharpening correction processing if the first gradient is greater than the second gradient includes: for the current white channel pixel, if the maximum value of the first ratio and the second ratio is greater than the first preset value and the first gradient is not equal to the second gradient, modifying the pixel value of the current white channel pixel according to the following formula: w'1 = w1 + β(w6 + w8 - w2 - w4); where w'1 is the pixel value of the modified current white channel pixel, w1 is the pixel value of the current white channel pixel before modification, w2 and w4 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the first diagonal direction and respectively located at two sides of the current white channel pixel, w6 and w8 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the second diagonal direction and respectively located at two sides of the current white channel pixel, and 0 < β < 1.
In one possible embodiment, 0.3< β <0.6.
In a possible embodiment, before the process of performing image fusion on the white channel corrected image and the color channel image to obtain the demosaiced image, the method further includes: for the modified current white channel pixel, if the maximum value of the third ratio and the fourth ratio is greater than the second preset value, restoring the pixel value of the current white channel pixel to w1, where the third ratio is the ratio of w'1 to w1 and the fourth ratio is the ratio of w1 to w'1.
In a possible embodiment, the second preset value is greater than 1.4 and less than 1.6.
In a second aspect, there is provided an image processing apparatus comprising: the image acquisition unit is used for acquiring an image from the image sensor, the image comprises a plurality of repeating units which are arranged in a plurality of rows and a plurality of columns, each repeating unit comprises four pixel blocks which are arranged in 2 rows and 2 columns, each pixel block comprises sub-pixel blocks which are arranged in m rows and n columns, m is more than or equal to 1, n is more than or equal to 1, each sub-pixel block comprises two white pixels which are arranged along a first diagonal direction and two color pixels which are arranged along a second diagonal direction, the color pixels in the two pixel blocks which are arranged along the first diagonal direction in the four pixel blocks are green pixels, and the color pixels in the two pixel blocks which are arranged along the second diagonal direction in the four pixel blocks are blue pixels and red pixels respectively; the merging unit is used for merging two color pixels in each sub-pixel block in the image into a color channel pixel and merging two white pixels in each sub-pixel block in the image into a white channel pixel; the splitting unit is used for splitting the image into a white channel image and a color channel image, the white channel image comprises all white channel pixels, and the color channel image comprises all color channel pixels; the correction unit is used for correcting the white channel image to obtain a white channel correction image; correcting the white channel image includes: increasing the blurring in the first diagonal direction of at least part of the white channel pixels in the white channel image, and/or reducing the sharpness in the second diagonal direction of at least part of the white channel pixels in the white channel image; and the fusion unit is used for carrying out image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
In a third aspect, an image processing apparatus is provided, including: a processor and a memory for storing at least one instruction which is loaded and executed by the processor to implement the image processing method described above.
In a fourth aspect, an electronic device is provided, which includes the image processing apparatus.
According to the image processing method, the image processing apparatus and the electronic device of the embodiments of the application, for the image of an image sensor whose CFA adopts an RGBW arrangement, the image is split into a white channel image and a color channel image based on diagonal pixel binning, the white channel image is then corrected to improve the consistency of the white channel pixels and the color channel pixels in the diagonal directions, and image fusion is then performed based on the corrected image, thereby reducing the pseudo color and zipper noise caused by poor consistency of the white channel pixels and the color channel pixels in the diagonal directions. Because the defects of the fused image mainly exist in the diagonal direction in which the consistency of the white channel pixels and the color channel pixels is poorer, while horizontal or vertical boundaries, fringe areas and flat areas show no obvious defects, the image correction process of the embodiments of the application can target only the diagonal directions and rely only on the single type of channel obtained by diagonal binning and downsampling; the overall algorithm complexity is therefore low, the real-time performance is good, and the method can be applied to the processing of video images.
Drawings
FIG. 1 is a schematic illustration of a minimal repeating unit of an image corresponding to a CFA structure in a Bayer pattern;
FIG. 2 is a schematic diagram of a minimal repeating unit of an image corresponding to a CFA using a Hexa-deca RGBW pattern;
FIG. 3 is a flowchart illustrating an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of pixel variation of the image corresponding to FIG. 3;
FIG. 5 is a schematic diagram of a minimal repeating unit of another image in an embodiment of the present application;
FIG. 6 is a schematic view of a minimal repeating unit of another image in an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating an arrangement of a plurality of minimal repeating units of an image according to an embodiment of the present application;
FIG. 8 is a diagram illustrating a white channel correction process of an image processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a 5 × 5 neighborhood centered on the current white channel pixel in an embodiment of the present application;
FIG. 10 is a diagram illustrating a white channel correction procedure of another image processing method according to an embodiment of the present application;
FIG. 11 is a block diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The terminology used in the description of the embodiments section of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application.
Before describing the embodiments of the present application, the related art is first described. As shown in FIG. 1, FIG. 1 shows the minimal repeating unit of an image corresponding to a Bayer-pattern CFA structure, in which each smallest square may be referred to as a pixel. With the development of image sensor miniaturization, since each pixel of a Bayer-pattern CFA can only transmit incident light of a specific wavelength, the signal-to-noise ratio of the image sensor corresponding to a Bayer-pattern CFA is low. CFAs with added W pixels therefore appeared, such as the one shown in FIG. 2; FIG. 2 shows the minimal repeating unit of an image corresponding to a CFA adopting the Hexa-deca RGBW pattern. Since the sampling rate of the R, G and B components in the RGBW pattern is reduced to half of that of the Bayer pattern, the information of the W component must be fully utilized to fuse with, or guide the generation of, the final color image. However, because the W component and the R, G and B color components are located at different sampling points, directly using the W component for guided fusion is likely to generate pseudo color and zipper noise. The technical solution of the embodiments of the present application is provided to solve this technical problem and is described below.
As shown in fig. 3 and 4, an embodiment of the present application provides an image processing method, including:
step 101, acquiring an image from an image sensor, the image including a plurality of repeating units arranged in a plurality of rows and a plurality of columns, the repeating unit being, for example, as shown in fig. 2; each repeating unit includes four pixel blocks arranged in 2 rows and 2 columns, each pixel block includes sub-pixel blocks arranged in m rows and n columns, with m ≧ 1 and n ≧ 1 (for example, m = n = 2 in fig. 2), and each sub-pixel block includes two white pixels W arranged along a first diagonal direction a1 and two color pixels arranged along a second diagonal direction a2, the first diagonal direction a1 and the second diagonal direction a2 being two mutually perpendicular diagonal directions; for example, the first diagonal direction a1 is the row direction rotated 45° counterclockwise and the second diagonal direction a2 is the row direction rotated 135° counterclockwise. Among the four pixel blocks, the color pixels in the two pixel blocks arranged along the first diagonal direction a1 are both green pixels G, and the color pixels in the two pixel blocks arranged along the second diagonal direction a2 are a blue pixel B and a red pixel R, respectively. That is, each smallest rectangular frame in fig. 2 is a pixel, and four adjacent pixels arranged in 2 rows and 2 columns form a sub-pixel block. The four adjacent sub-pixel blocks arranged in 2 rows and 2 columns in the upper left corner form a pixel block composed of white pixels W and blue pixels B; the four adjacent sub-pixel blocks arranged in 2 rows and 2 columns in the upper right corner form a pixel block composed of white pixels W and green pixels G; the four adjacent sub-pixel blocks arranged in 2 rows and 2 columns in the lower right corner form a pixel block composed of white pixels W and red pixels R; and the four adjacent sub-pixel blocks arranged in 2 rows and 2 columns in the lower left corner form a pixel block composed of white pixels W and green pixels G. The repeating unit is thus composed of 8 rows and 8 columns of pixels, each pixel block is composed of 4 rows and 4 columns of pixels, each sub-pixel block is composed of 2 rows and 2 columns of pixels, and any two pixels adjacent in the row direction or the column direction are a white pixel and a color pixel, respectively. It should be noted that the pixel arrangement of the repeating unit shown in fig. 2 is only an example, and other pixel arrangements are possible in other embodiments: for example, fig. 5 shows a repeating unit with another pixel arrangement, which is the structure of fig. 2 mirror-inverted in the row direction, and fig. 6 shows a repeating unit with yet another pixel arrangement, which is the structure of fig. 2 with 1 more row and 1 more column of sub-pixel blocks added to each pixel block; the subsequent description of this embodiment, however, takes the structure shown in fig. 2 as an example. It should also be noted that, whether fig. 2, fig. 5 or fig. 6 is used, each figure illustrates only one repeating unit, and an image is obtained by arranging a plurality of such repeating units in a plurality of rows and a plurality of columns; fig. 7, for example, is a schematic diagram of the arrangement of a plurality of repeating units of an image;
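Since the drawings are not reproduced here, the following short sketch builds one 8 × 8 repeating unit from the verbal description of step 101 above, purely for illustration; the checkerboard phase chosen (white pixels where row + column is even) is an assumption, because the exact phase is fixed only by fig. 2, and the mirrored variant of fig. 5 would swap it.

```python
import numpy as np

def hexa_deca_rgbw_unit():
    """One 8x8 repeating unit as described in step 101 (illustrative only).

    Assumption: W sits where (row + col) is even; the colour of each 4x4
    pixel block is upper-left B, upper-right G, lower-left G, lower-right R.
    """
    block_color = [["B", "G"],
                   ["G", "R"]]
    unit = np.empty((8, 8), dtype="<U1")
    for r in range(8):
        for c in range(8):
            unit[r, c] = "W" if (r + c) % 2 == 0 else block_color[r // 4][c // 4]
    return unit
```

Printing the returned array shows the W/colour checkerboard and the four 4 × 4 pixel blocks of the repeating unit.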
step 102, merging two color pixels in each sub-pixel block in the image into a color channel pixel, and merging two white pixels in each sub-pixel block in the image into a white channel pixel, that is, merging and binning the diagonal pixels in each sub-pixel block, for example, in the structure shown in fig. 2, the repeating unit is composed of 8 rows and 8 columns of pixels, and after merging the diagonal pixels in each sub-pixel block, an image composed of 4 rows and 8 columns of pixels may be formed, for example;
step 103, splitting the image into a white channel image and a color channel image, wherein the white channel image comprises all white channel pixels W, the color channel image comprises all color channel pixels R, G and B, and the resolution of the white channel image and the resolution of the color channel image are both half of the resolution of the original image, so as to facilitate subsequent operations;
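As an illustration of steps 102 and 103 above, a minimal sketch of the diagonal binning and the channel split is given below; it assumes the same checkerboard phase as the unit sketch earlier (W samples on the main diagonal of every 2 × 2 sub-pixel block) and uses averaging as the combining rule, which the text does not fix.

```python
import numpy as np

def bin_and_split(raw):
    """Diagonal binning (step 102) and split into two channels (step 103).

    raw: the full-resolution RGBW mosaic.  Returns a half-resolution white
    channel image and a half-resolution colour channel image (one R, G or B
    value per original sub-pixel block).
    """
    h, w = raw.shape
    blk = raw.astype(float).reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    white = 0.5 * (blk[..., 0, 0] + blk[..., 1, 1])   # W pair, main diagonal (assumed)
    color = 0.5 * (blk[..., 0, 1] + blk[..., 1, 0])   # colour pair, anti-diagonal
    return white, color
```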
step 104, correcting the white channel image to obtain a white channel corrected image;
correcting the white channel image includes: increasing the blur degree of at least part of the white channel pixels W in the white channel image in the first diagonal direction a1, and/or reducing the sharpness of at least part of the white channel pixels W in the white channel image in the second diagonal direction a 2;
and step 105, performing image fusion on the white channel corrected image and the color channel image to obtain a demosaiced image, wherein the image fusion process may use inter-channel relationships such as color difference or color ratio to perform the fusion and obtain the final R, G and B color images.
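The text leaves the fusion of step 105 open ("color difference or color ratio between channels"), so the following is only one plausible color-difference sketch, not the claimed implementation; the color_map argument (recording which of R, G or B each color channel pixel carries) and the box-filter interpolation of the sparse color differences are assumptions introduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_color_difference(w_corr, color_img, color_map):
    """One possible colour-difference fusion for step 105 (sketch only).

    w_corr   : white channel corrected image, shape (H, W)
    color_img: colour channel image, one R/G/B sample per pixel, shape (H, W)
    color_map: array of 'R'/'G'/'B' labels saying which colour each position holds
    """
    planes = {}
    for ch in "RGB":
        mask = (color_map == ch)
        diff = np.where(mask, color_img - w_corr, 0.0)        # C - W at sample sites
        # Normalised box interpolation of the sparse colour differences.
        num = uniform_filter(diff, size=5)
        den = np.maximum(uniform_filter(mask.astype(float), size=5), 1e-6)
        planes[ch] = w_corr + num / den                       # add W back everywhere
    return planes["R"], planes["G"], planes["B"]
```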
Specifically, the image in the embodiment of the present application comes from an image sensor adopting a Hexa-deca RGBW pattern CFA. Since the white pixels and the color pixels are located at different positions, and the diagonal pixel binning of step 102 combines the white pixels and the color pixels along two different directions respectively, the consistency of the white pixels and the color pixels may differ greatly between the two diagonal directions. Therefore, in step 104, a pixel value correction based on the diagonal directions is performed for at least one white channel pixel W in the white channel image. As can be seen from fig. 2, the white pixels are sharper than the color pixels in the first diagonal direction a1 and smoother than the color pixels in the second diagonal direction a2. Therefore, performing, in step 104, the blur-increasing processing in the first diagonal direction a1 and/or the sharpness-reducing processing in the second diagonal direction a2 on the white channel pixels W can make the corrected white channel pixels more consistent with the color channel pixels in the diagonal directions, so that fewer pseudo colors and less zipper noise are generated when the white channel image and the color channel image are subsequently fused.
That is, in the embodiment of the present application, for an image from an image sensor using a CFA in an RGBW arrangement, white pixels are merged into white channel pixels and color pixels are merged into color channel pixels based on a diagonal pixel merging manner, the image is split into a white channel image composed of the white channel pixels and a color channel image composed of the color channel pixels, and then the white channel image is corrected so that the uniformity of the white channel pixels and the color channel pixels in the diagonal direction is improved after correction, and then image fusion is performed.
With the image processing method of the embodiment of the application, for the image of an image sensor whose CFA adopts an RGBW arrangement, the image is split into a white channel image and a color channel image based on diagonal pixel binning, the white channel image is then corrected to improve the consistency of the white channel pixels and the color channel pixels in the diagonal directions, and image fusion is then performed based on the corrected image, thereby reducing the pseudo color and zipper noise caused by poor consistency of the white channel pixels and the color channel pixels in the diagonal directions. Because the defects of the fused image mainly exist in the diagonal direction in which the consistency of the white channel pixels and the color channel pixels is poorer, while horizontal or vertical boundaries, fringe areas and flat areas show no obvious defects, the image correction process of the embodiment of the application can target only the diagonal directions and rely only on the single type of channel obtained by diagonal binning and downsampling; the overall algorithm complexity is therefore low, the real-time performance is good, and the method can be applied to the processing of video images.
In a possible embodiment, as shown in fig. 8, in the step 104, the process of increasing the blur degree of at least some white channel pixels in the white channel image in the first diagonal direction a1 and/or reducing the sharpness of at least some white channel pixels in the white channel image in the second diagonal direction a2 includes:
step 1041, calculating a first gradient G1 in a first diagonal direction a1 and a second gradient G2 in a second diagonal direction a2 for each white channel pixel in the white channel image;
step 1042, for the current white channel pixel, judging whether the ratio of the first gradient G1 to the second gradient G2 meets a preset condition, if so, entering step 1043, otherwise, entering step 1044, keeping the pixel value of the current white channel pixel unchanged, and then executing step 1042 based on the next white channel pixel until all white channel pixels in the white channel image are processed, namely obtaining a white channel corrected image;
step 1043, determining a magnitude relation between the first gradient G1 and the second gradient G2, if the first gradient G1 is smaller than the second gradient G2, entering step 1045, performing blur correction processing, then executing step 1042 based on the next white channel pixel until all white channel pixels in the white channel image are processed completely, that is, obtaining a white channel corrected image, if the first gradient G1 is larger than the second gradient G2, entering step 1046, performing sharpening correction processing, then executing step 1042 based on the next white channel pixel until all white channel pixels in the white channel image are processed completely, that is, obtaining a white channel corrected image, if the first gradient G1 is equal to the second gradient G2, entering step 1044, that is, for the white channel pixels whose ratio of the first gradient G1 to the second gradient G2 meets a preset condition, if the first gradient is smaller than the second gradient, performing blur correction processing, and if the first gradient is larger than the second gradient, performing sharpening processing.
In one possible implementation, the first gradient G1 is a sum of absolute values of difference values of every two adjacent white channel pixels in the first diagonal direction a1 in a neighborhood centered on the current white channel pixel; the second gradient G2 is the sum of absolute values of differences of every two adjacent white channel pixels in the second diagonal direction a2 in a neighborhood centered on the current white channel pixel. The neighborhood is for example white channel pixels arranged in 5 rows and 5 columns.
Specifically, for example, step 1042 is executed for each white channel pixel to implement the above-mentioned correction process, where for each white channel pixel, it is taken as the current white channel pixel, and a neighborhood of 5 × 5 with the current white channel pixel as the center is determined, as shown in fig. 9, fig. 9 is a schematic diagram of a neighborhood of 5 × 5 with the current white channel pixel as the center, where 5 rows and 5 columns of white channel pixels are illustrated, the white channel pixel at the center position is the current white channel pixel W1, the white channel pixels in the first diagonal direction a1 are sequentially W5, W4, W1, W2, and W3, the white channel pixels in the second diagonal direction a2 are sequentially W9, W8, W1, W6, and W7, and the first gradient G1 and the second gradient G2 corresponding to W1 may be calculated based on the following formulas:
G1 = ABS(w1 - w2) + ABS(w1 - w4) + ABS(w2 - w3) + ABS(w4 - w5) (1)
G2 = ABS(w1 - w6) + ABS(w1 - w8) + ABS(w6 - w7) + ABS(w8 - w9) (2)
where ABS denotes the absolute-value operator, and w1, w2, w3, w4, w5, w6, w7, w8 and w9 are the pixel values corresponding to W1, W2, W3, W4, W5, W6, W7, W8 and W9, respectively. The structure shown in fig. 9 is a portion of the split white channel image. In addition, a white channel pixel at the edge of the white channel image may not have a full 5 × 5 neighborhood centered on it; one way of handling this case is to add, before calculating the first gradient G1 and the second gradient G2 (for example, before step 1042), two rows or columns of white channel pixels, computed based on a preset rule, to each of the left, right, upper and lower sides of the white channel image. In this way, the gradients corresponding to the white channel pixels originally located at the edge of the image can also be calculated by the above method.
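A minimal sketch of formulas (1) and (2) is given below. Two details are not fixed by the text alone and are therefore assumptions: the first diagonal a1 is mapped to array offsets (-1, +1) and the second diagonal a2 to (+1, +1), and the border extension uses edge replication (the patent only says the extra rows and columns are generated based on a preset rule).

```python
import numpy as np

def diagonal_gradients(wimg):
    """First and second diagonal gradients G1, G2 for every white channel pixel."""
    r = 2                                           # radius of the 5x5 neighbourhood
    h, w = wimg.shape
    p = np.pad(wimg.astype(float), r, mode="edge")  # assumed padding rule

    def nb(dy, dx):                                 # neighbour at offset (dy, dx)
        return p[r + dy : r + dy + h, r + dx : r + dx + w]

    w1 = nb(0, 0)
    w2, w3, w4, w5 = nb(-1, 1), nb(-2, 2), nb(1, -1), nb(2, -2)   # along a1 (assumed)
    w6, w7, w8, w9 = nb(1, 1), nb(2, 2), nb(-1, -1), nb(-2, -2)   # along a2 (assumed)
    G1 = np.abs(w1 - w2) + np.abs(w1 - w4) + np.abs(w2 - w3) + np.abs(w4 - w5)  # (1)
    G2 = np.abs(w1 - w6) + np.abs(w1 - w8) + np.abs(w6 - w7) + np.abs(w8 - w9)  # (2)
    return G1, G2
```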
In one possible embodiment, as shown in fig. 10, the condition that the ratio of the first gradient G1 to the second gradient G2 satisfies the preset condition is: the maximum value α of a first ratio D1 and a second ratio D2 is greater than a first preset value, where the first ratio D1 is the ratio of the first gradient G1 to the second gradient G2, and the second ratio D2 is the ratio of the second gradient G2 to the first gradient G1. That is, in step 1042, α is calculated by the following formula:
α = MAX(G1/G2, G2/G1) (3)
where MAX denotes the maximum-value operator. Whether α is greater than the first preset value is then judged, and the subsequent steps are executed according to the judgment result: α greater than the first preset value indicates that the preset condition is satisfied, and α not greater than the first preset value indicates that the preset condition is not satisfied.
In a possible embodiment, the first preset value is greater than 5 and less than 8.
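For illustration, formula (3) and the preset condition of step 1042 can be evaluated element-wise as below; the eps guard against a zero gradient and the example threshold of 6.0 (picked from the 5 to 8 range above) are choices made here, not values fixed by the text.

```python
import numpy as np

def direction_ratio(G1, G2, eps=1e-6):
    """alpha of formula (3): MAX(G1/G2, G2/G1), element-wise."""
    a = np.maximum(G1, eps)
    b = np.maximum(G2, eps)
    return np.maximum(a / b, b / a)

# Preset condition of step 1042 with an example first preset value:
# satisfies_condition = direction_ratio(G1, G2) > 6.0
```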
In a possible embodiment, for a white channel pixel whose ratio of the first gradient G1 to the second gradient G2 satisfies the preset condition, performing the blur correction processing if the first gradient G1 is smaller than the second gradient G2 and performing the sharpening correction processing if the first gradient G1 is larger than the second gradient G2 is done as follows: for the current white channel pixel W1, if the maximum value α of the first ratio D1 and the second ratio D2 is greater than the first preset value and the first gradient G1 is not equal to the second gradient G2, step 201 is entered to modify the pixel value of the current white channel pixel W1 according to the following formula: w'1 = w1 + β(w6 + w8 - w2 - w4); where w'1 is the pixel value of the modified current white channel pixel W1, w1 is the pixel value of the current white channel pixel W1 before modification, w2 and w4 are the pixel values of the white channel pixels adjacent to the current white channel pixel W1 in the first diagonal direction a1 and respectively located at two sides of the current white channel pixel W1 (for example, the pixel values corresponding to W2 and W4 in fig. 9), w6 and w8 are the pixel values of the white channel pixels adjacent to the current white channel pixel W1 in the second diagonal direction a2 and respectively located at two sides of the current white channel pixel W1 (for example, the pixel values corresponding to W6 and W8 in fig. 9), and 0 < β < 1.
Specifically, the calculation formula of w'1 is obtained from the following formulas:
g1 = 2w1 - w2 - w4 (4)
g2 = 2w1 - w6 - w8 (5)
w'1 = w1 + β(g1 - g2) (6)
If the first gradient G1 corresponding to the current white channel pixel W1 is smaller than the second gradient G2, then g1 is negligible, and w'1 = w1 + β(g1 - g2) can be understood as a blurring effect obtained by subtracting the product of g2 and β; if G1 is greater than G2, then g2 is negligible, and w'1 = w1 + β(g1 - g2) can be understood as a sharpening effect obtained by adding the product of g1 and β.
In one possible embodiment, 0.3< β <0.6.
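Putting formulas (3) to (6) together, the whole correction of step 104 might be sketched as follows; G1 and G2 come from the gradient sketch above, thr1 = 6.0 and beta = 0.5 are example values from the ranges given in the embodiments, and the diagonal offsets follow the same assumption as before.

```python
import numpy as np

def correct_white(wimg, G1, G2, thr1=6.0, beta=0.5):
    """Blur/sharpen correction of the white channel image (step 104)."""
    h, w = wimg.shape
    p = np.pad(wimg.astype(float), 1, mode="edge")

    def nb(dy, dx):                                   # neighbour at offset (dy, dx)
        return p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]

    w1 = wimg.astype(float)
    g1 = 2 * w1 - nb(-1, 1) - nb(1, -1)               # formula (4): 2*w1 - w2 - w4
    g2 = 2 * w1 - nb(1, 1) - nb(-1, -1)               # formula (5): 2*w1 - w6 - w8

    eps = 1e-6
    alpha = np.maximum(np.maximum(G1, eps) / np.maximum(G2, eps),
                       np.maximum(G2, eps) / np.maximum(G1, eps))   # formula (3)
    sel = (alpha > thr1) & (G1 != G2)                 # preset condition of step 1042
    out = w1.copy()
    out[sel] = w1[sel] + beta * (g1[sel] - g2[sel])   # formula (6)
    return out
```

Called as correct_white(white, *diagonal_gradients(white)), this blurs the selected pixels whose first-diagonal gradient is the smaller of the two and sharpens those where it is the larger one, as described above.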
In a possible implementation, in order to enhance the robustness of the image processing method, whether the pixel value of the modified white channel pixel deviates too much can be determined by the following method; a pixel that deviates too much indicates that the modification is abnormal, in which case the modification is discarded and the pixel value before modification is restored. Specifically, before the process of performing image fusion on the white channel corrected image and the color channel image to obtain the demosaiced image, the method further includes: for the modified current white channel pixel, calculating the maximum value γ of the third ratio D3 and the fourth ratio D4 by the following formula:
γ = MAX(w'1/w1, w1/w'1) (7)
where the third ratio D3 is the ratio of w'1 to w1 and the fourth ratio D4 is the ratio of w1 to w'1. After step 201, step 202 is performed to judge whether the maximum value γ of the third ratio D3 and the fourth ratio D4 is greater than the second preset value θ. If so, that is, if γ is greater than the second preset value θ, the modification of the pixel value of the current white channel pixel W1 is too large; step 203 is then performed, the pixel value of the current white channel pixel W1 is restored to w1, and processing proceeds to the next white channel pixel until all white channel pixels have been processed. If not, that is, if γ is not greater than the second preset value θ, the modified pixel value of the current white channel pixel W1 is not abnormal; the pixel value of the current white channel pixel W1 is kept as w'1, and processing proceeds to the next white channel pixel until all white channel pixels have been processed.
In a possible embodiment, the second preset value θ is greater than 1.4 and less than 1.6.
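A sketch of the robustness check of steps 202 and 203 follows; thr2 = 1.5 is an example second preset value from the 1.4 to 1.6 range, and the eps guard is an addition for pixels whose value is zero.

```python
import numpy as np

def clamp_correction(w_before, w_after, thr2=1.5, eps=1e-6):
    """Restore w1 wherever MAX(w'1/w1, w1/w'1) exceeds the second preset value."""
    a = np.maximum(np.abs(w_after.astype(float)), eps)
    b = np.maximum(np.abs(w_before.astype(float)), eps)
    gamma = np.maximum(a / b, b / a)                  # formula (7)
    return np.where(gamma > thr2, w_before, w_after)
```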
It should be noted that, although the above embodiment specifically performs the pixel value correction based on the diagonal gradient, in addition, the pixel value correction may be performed by other algorithms, for example, the pixel value correction may be performed by a method based on machine learning, as long as the consistency of the white channel pixel and the color channel pixel in the diagonal direction can be improved.
As shown in fig. 11, an embodiment of the present application further provides an image processing apparatus, including: the image acquisition unit 1 is used for acquiring an image from the image sensor, the image comprises a plurality of repeating units which are arranged in a plurality of rows and a plurality of columns, each repeating unit comprises four pixel blocks which are arranged in 2 rows and 2 columns, each pixel block comprises sub-pixel blocks which are arranged in m rows and n columns, m is more than or equal to 1, n is more than or equal to 1, each sub-pixel block comprises two white pixels which are arranged along a first diagonal direction and two color pixels which are arranged along a second diagonal direction, the color pixels in the two pixel blocks which are arranged along the first diagonal direction in the four pixel blocks are both green pixels, and the color pixels in the two pixel blocks which are arranged along the second diagonal direction in the four pixel blocks are respectively blue pixels and red pixels; a merging unit 2, configured to merge two color pixels in each sub-pixel block in the image into a color channel pixel, and merge two white pixels in each sub-pixel block in the image into a white channel pixel; the splitting unit 3 is configured to split the image into a white channel image and a color channel image, where the white channel image includes all white channel pixels and the color channel image includes all color channel pixels; a correction unit 4, configured to correct the white channel image to obtain a white channel corrected image; correcting the white channel image includes: increasing the blurring in the first diagonal direction of at least part of the white channel pixels in the white channel image, and/or reducing the sharpness in the second diagonal direction of at least part of the white channel pixels in the white channel image; and the fusion unit 5 is used for carrying out image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
The image processing method in any of the above embodiments may be applied to the image processing apparatus, and the specific process and principle are the same as those in the above embodiments, and are not described herein again.
It should be understood that the above division of the image processing apparatus is only a division of logical functions, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can all be implemented in the form of software invoked by a processing element; or can be implemented in the form of hardware; and part of the modules can be realized in the form of software called by the processing element, and part of the modules can be realized in the form of hardware. For example, any one of the image obtaining unit, the merging unit, the splitting unit, the correcting unit and the fusing unit may be a processing element that is separately installed, or may be integrated in the image processing apparatus, for example, be integrated in a certain chip of the image processing apparatus, or may be stored in a memory of the image processing apparatus in the form of a program, and a certain processing element of the image processing apparatus calls and executes the functions of the above modules. The other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the modules of the image acquisition unit, the merging unit, the splitting unit, the correction unit and the fusion unit may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when some of the above modules are implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
In one possible embodiment, the process of increasing the blur in the first diagonal direction of at least some of the white channel pixels in the white channel image, and/or the process of decreasing the sharpness in the second diagonal direction of at least some of the white channel pixels in the white channel image comprises: for each white channel pixel in the white channel image, calculating a first gradient in a first diagonal direction and a second gradient in a second diagonal direction; and for the white channel pixel of which the ratio of the first gradient to the second gradient meets the preset condition, if the first gradient is smaller than the second gradient, performing blur correction, and if the first gradient is larger than the second gradient, performing sharpening correction.
In one possible implementation, the first gradient is a sum of absolute values of differences of every two adjacent white channel pixels in the first diagonal direction in a neighborhood centered on the current white channel pixel; the second gradient is the sum of the absolute values of the difference values of every two adjacent white channel pixels in the second diagonal direction in the neighborhood centered on the current white channel pixel.
In one possible embodiment, the neighborhood is white channel pixels arranged in 5 rows and 5 columns.
In a possible embodiment, the ratio of the first gradient and the second gradient satisfies the preset condition: the maximum value of the first ratio and the second ratio is greater than a first preset value, the first ratio is the ratio of the first gradient to the second gradient, and the second ratio is the ratio of the second gradient to the first gradient.
In a possible embodiment, the first preset value is greater than 5 and less than 8.
In a possible embodiment, for a white channel pixel whose ratio of the first gradient to the second gradient satisfies the preset condition, performing the blur correction processing if the first gradient is smaller than the second gradient and performing the sharpening correction processing if the first gradient is larger than the second gradient includes: for the current white channel pixel, if the maximum value of the first ratio and the second ratio is greater than the first preset value and the first gradient is not equal to the second gradient, modifying the pixel value of the current white channel pixel according to the following formula: w'1 = w1 + β(w6 + w8 - w2 - w4); where w'1 is the pixel value of the modified current white channel pixel, w1 is the pixel value of the current white channel pixel before modification, w2 and w4 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the first diagonal direction and respectively located at two sides of the current white channel pixel, w6 and w8 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the second diagonal direction and respectively located at two sides of the current white channel pixel, and 0 < β < 1.
In one possible embodiment, 0.3< β <0.6.
In a possible embodiment, before the process of performing image fusion on the white channel corrected image and the color channel image to obtain the demosaiced image, the method further includes: for the modified current white channel pixel, if the maximum value of the third ratio and the fourth ratio is greater than the second preset value, restoring the pixel value of the current white channel pixel to w1, where the third ratio is the ratio of w'1 to w1 and the fourth ratio is the ratio of w1 to w'1.
In a possible embodiment, the second preset value is greater than 1.4 and less than 1.6.
An embodiment of the present application further provides an image processing apparatus, including: a processor and a memory for storing at least one instruction which is loaded and executed by the processor to implement the image processing method of any of the embodiments described above.
The number of processors may be one or more, for example: the processor may include an Image Signal Processor (ISP). The processor and memory may be connected by a bus or other means. The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the image processing apparatus in the embodiments of the present application. The processor executes various functional applications and data processing by executing non-transitory software programs, instructions and modules stored in the memory, i.e., implementing the methods in any of the method embodiments described above. The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; and necessary data, etc. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device.
An embodiment of the present application further provides an electronic device, which includes the image processing apparatus. The electronic device related to the present application may be any product such as a mobile phone, a tablet computer, a Personal Computer (PC), a Personal Digital Assistant (PDA), a smart watch, a wearable electronic device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, an on-vehicle device, an unmanned aerial vehicle device, a smart car, a smart audio, a robot, smart glasses, and the like.
Embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the image processing method in any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the present application are generated, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk), among others.
In the embodiments of the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, and means that there may be three relationships, for example, a and/or B, and may mean that a exists alone, a and B exist simultaneously, and B exists alone. Wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" and the like, refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (13)

1. An image processing method, comprising:
the method comprises the steps of obtaining an image from an image sensor, wherein the image comprises a plurality of repeating units which are arranged in a plurality of rows and a plurality of columns, each repeating unit comprises four pixel blocks which are arranged in 2 rows and 2 columns, each pixel block comprises sub-pixel blocks which are arranged in m rows and n columns, m is more than or equal to 1, n is more than or equal to 1, each sub-pixel block comprises two white pixels which are arranged along a first diagonal direction and two color pixels which are arranged along a second diagonal direction, the color pixels in the two pixel blocks which are arranged along the first diagonal direction in the four pixel blocks are both green pixels, and the color pixels in the two pixel blocks which are arranged along the second diagonal direction in the four pixel blocks are respectively blue pixels and red pixels;
merging two color pixels in each sub-pixel block in the image into a color channel pixel, and merging two white pixels in each sub-pixel block in the image into a white channel pixel;
splitting the image into a white channel image and a color channel image, wherein the white channel image comprises all the white channel pixels, and the color channel image comprises all the color channel pixels;
correcting the white channel image to obtain a white channel corrected image;
the correcting the white channel image includes: increasing a blur of at least a portion of white channel pixels in the white channel image in the first diagonal direction and/or decreasing a sharpness of at least a portion of white channel pixels in the white channel image in the second diagonal direction;
and carrying out image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
2. The image processing method according to claim 1,
the process of increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or reducing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction comprises:
for each of the white channel pixels in the white channel image, calculating a first gradient in the first diagonal direction and a second gradient in the second diagonal direction;
and for the white channel pixel of which the ratio of the first gradient to the second gradient meets a preset condition, if the first gradient is smaller than the second gradient, performing blur correction, and if the first gradient is larger than the second gradient, performing sharpening correction.
3. The image processing method according to claim 2,
the first gradient is the sum of absolute values of difference values of every two adjacent white channel pixels in the first diagonal direction in a neighborhood taking a current white channel pixel as a center;
and the second gradient is the sum of absolute values of difference values of every two adjacent white channel pixels in the second diagonal direction in a neighborhood taking the current white channel pixel as the center.
4. The image processing method according to claim 3,
the neighborhood is white channel pixels arranged in 5 rows and 5 columns.
5. The image processing method according to claim 3,
the ratio of the first gradient to the second gradient satisfies a preset condition:
the maximum value of a first ratio and a second ratio is greater than a first preset value, the first ratio is the ratio of the first gradient to the second gradient, and the second ratio is the ratio of the second gradient to the first gradient.
6. The image processing method according to claim 5,
the first preset value is greater than 5 and less than 8.
7. The image processing method according to claim 5,
for the white channel pixel of which the ratio of the first gradient to the second gradient meets a preset condition, if the first gradient is smaller than the second gradient, performing blur correction processing, and if the first gradient is larger than the second gradient, performing sharpening correction processing includes:
for the current white channel pixel, if the maximum value of the first ratio and the second ratio is greater than a first preset value, and the first gradient is not equal to the second gradient, modifying the pixel value of the current white channel pixel according to the following formula: w' 1 =w 1 +β(w 6 +w 8 -w 2 -w 4 );
Wherein, w' 1 Is the pixel value, w, of the modified current white channel pixel 1 Is the pixel value, w, of the current white channel pixel before modification 2 And w 4 Is the pixel value, w, of the white channel pixel adjacent to and respectively located at both sides of the current white channel pixel in the first diagonal direction 6 And w 8 And beta is more than 0 and less than 1, and the pixel values of the white channel pixels are adjacent to the current white channel pixel in the second diagonal direction and respectively positioned at two sides of the current white channel pixel.
8. The image processing method according to claim 7,
0.3 < β < 0.6.
9. the image processing method according to claim 7,
before the process of performing image fusion on the white channel correction image and the color channel image to obtain a demosaiced image, the method further includes:
for the modified current white channel pixel, if the maximum value of the third ratio and the fourth ratio is greater than the second preset value, restoring the pixel value of the current white channel pixel to w1, wherein the third ratio is the ratio of w'1 to w1, and the fourth ratio is the ratio of w1 to w'1.
10. The image processing method according to claim 9,
the second preset value is greater than 1.4 and less than 1.6.
11. An image processing apparatus characterized by comprising:
the image acquisition unit is used for acquiring an image from an image sensor, the image comprises a plurality of repeating units which are arranged in a plurality of rows and a plurality of columns, each repeating unit comprises four pixel blocks which are arranged in 2 rows and 2 columns, each pixel block comprises sub-pixel blocks which are arranged in m rows and n columns, m is more than or equal to 1, n is more than or equal to 1, each sub-pixel block comprises two white pixels which are arranged along a first diagonal direction and two color pixels which are arranged along a second diagonal direction, the color pixels in the two pixel blocks which are arranged along the first diagonal direction in the four pixel blocks are both green pixels, and the color pixels in the two pixel blocks which are arranged along the second diagonal direction in the four pixel blocks are respectively blue pixels and red pixels;
a merging unit, configured to merge two color pixels in each sub-pixel block in the image into a color channel pixel, and merge two white pixels in each sub-pixel block in the image into a white channel pixel;
a splitting unit, configured to split the image into a white channel image and a color channel image, where the white channel image includes all the white channel pixels, and the color channel image includes all the color channel pixels;
the correction unit is used for correcting the white channel image to obtain a white channel correction image;
the correcting the white channel image includes: increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction, and/or reducing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction;
and the fusion unit is used for carrying out image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
12. An image processing apparatus characterized by comprising:
a processor and a memory for storing at least one instruction which is loaded and executed by the processor to implement the image processing method of any of claims 1 to 10.
13. An electronic device characterized by comprising the image processing apparatus according to claim 12.
CN202211013504.8A 2022-08-23 2022-08-23 Image processing method and device and electronic equipment Active CN115442573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211013504.8A CN115442573B (en) 2022-08-23 2022-08-23 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211013504.8A CN115442573B (en) 2022-08-23 2022-08-23 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN115442573A (en) 2022-12-06
CN115442573B CN115442573B (en) 2024-05-07

Family

ID=84243985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211013504.8A Active CN115442573B (en) 2022-08-23 2022-08-23 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115442573B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150029358A1 (en) * 2012-03-27 2015-01-29 Sony Corporation Image processing apparatus, imaging device, image processing method, and program
JP2016040874A (en) * 2014-08-12 2016-03-24 株式会社東芝 Solid state image sensor
US20170163951A1 (en) * 2015-12-07 2017-06-08 Samsung Electronics Co., Ltd. Imaging apparatus and image processing method of thereof
CN107274353A (en) * 2017-05-17 2017-10-20 上海集成电路研发中心有限公司 The bearing calibration of defect pixel in a kind of black white image
CN109285125A (en) * 2018-07-24 2019-01-29 北京交通大学 The multi-direction total Variation Image Denoising method and apparatus of anisotropy
US20220021857A1 (en) * 2020-07-17 2022-01-20 SK Hynix Inc. Demosaic operation circuit, image sensing device and operation method of the same
CN112261391A (en) * 2020-10-26 2021-01-22 Oppo广东移动通信有限公司 Image processing method, camera assembly and mobile terminal
CN113676708A (en) * 2021-07-01 2021-11-19 Oppo广东移动通信有限公司 Image generation method and device, electronic equipment and computer-readable storage medium
CN113676675A (en) * 2021-08-16 2021-11-19 Oppo广东移动通信有限公司 Image generation method and device, electronic equipment and computer-readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
董鹏宇: "Demosaicing algorithm for RGBX-format image sensors" (RGBX格式图像传感器的去马赛克算法), 集成电路应用, no. 05, 3 May 2018 (2018-05-03) *

Also Published As

Publication number Publication date
CN115442573B (en) 2024-05-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant