CN114943658A - Color edge removing method based on transverse chromatic aberration calibration
- Publication number: CN114943658A
- Application number: CN202210653837.0A
- Authority: CN (China)
- Legal status: Granted
Classifications
(All under G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
- G06T5/77: Image enhancement or restoration; retouching, inpainting, scratch removal
- G06T3/40: Geometric image transformations in the plane of the image; scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/80: Image enhancement or restoration; geometric correction
- G06T5/90: Image enhancement or restoration; dynamic range modification of images or parts thereof
- G06T7/70: Image analysis; determining position or orientation of objects or cameras
- G06T7/80: Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/90: Image analysis; determination of colour characteristics
- G06T2207/30204: Indexing scheme for image analysis or image enhancement; subject of image: marker
Abstract
The invention provides a color edge removal method based on lateral chromatic aberration calibration, comprising the following steps: providing a test checkerboard image and obtaining from it the coordinate offset relations of the R channel and the B channel relative to the G channel; constructing, from the chromatic aberration information and gradient information of the image to be corrected, an offset scaling rule suited to local image features, and using it to adjust the coordinate offset relations of the R and B channels relative to the G channel; obtaining, from the gradient information and brightness information of the image to be corrected, a marked image that distinguishes local dark-area color edges from colored objects; correcting the image to be corrected and the marked image according to the adjusted offset relations to obtain a compensation value for the image to be corrected; and adjusting the compensation value under an overcorrection limiting condition to construct the target image, thereby solving the problems of color-edge overcorrection and incomplete correction of wide color edges.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a color edge removal method based on lateral chromatic aberration calibration.
Background
Color fringing is a dispersion artifact that appears at the boundaries between highlight and shadow regions when a digital camera photographs a scene of high brightness contrast; the fringes typically appear purple, violet, yellow-green, red, or blue. Fringing arises from lens dispersion, an undersized CCD (charge coupled device) imaging area, the camera's internal signal-processing algorithms, and other causes. Even high-end digital cameras cannot eliminate it entirely, and it becomes more conspicuous as image-sensor resolution increases. Because fringing significantly degrades imaging quality, research into fringe-removal algorithms is important for obtaining higher-quality images.
At present, image de-coloring-edge algorithms fall broadly into two classes. The first detects and corrects purple edges using color information such as chromatic aberration or saturation; the second removes purple edges by calibrating the chromatic aberration of the camera lens and then compensating for and repairing it. In practice, the second class has been found to apply to color edges of different colors and to reduce false corrections of purple scene content that is not actually a purple edge. However, the second class can overcorrect color edges (for example, correcting a yellow-green edge into a blue-violet one, or a blue edge into a yellow-green one) and cannot always fully correct wide color edges, such as a wide purple edge.
Disclosure of Invention
The invention aims to provide a color edge removal method based on lateral chromatic aberration calibration that solves the problems of color-edge overcorrection and incomplete correction of wide color edges.
To solve the above problems, the present invention provides a color edge removal method based on lateral chromatic aberration calibration, comprising the following steps:
step S1: providing a test chessboard image, and obtaining coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the test chessboard image;
step S2: providing an image to be corrected, constructing an offset scaling rule suitable for local picture characteristics according to chromatic aberration information and gradient information of the image to be corrected, and adjusting coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the offset scaling rule;
step S3: acquiring a mark image for distinguishing a local dark area color edge and a color object according to the gradient information and the brightness information of the image to be corrected;
step S4: correcting the image to be corrected and the marked image according to the adjusted coordinate offset relational expression of the R channel and the B channel relative to the G channel respectively so as to obtain a compensation value of the image to be corrected; and
step S5: and adjusting the compensation value according to the overcorrection limiting condition to construct a target image.
Optionally, step S1 includes:
obtaining a test chessboard image through a camera lens to be corrected, wherein the test chessboard image is used as a calibration board;
modeling the transverse chromatic aberration of the camera lens to be corrected to obtain a distortion model of the transverse chromatic aberration of the RGB three channels;
calibrating the camera lens to be corrected through the calibration plate to obtain a distortion parameter of the lateral chromatic aberration of the camera lens to be corrected; and
and respectively calculating the displacement deviation of each pixel position of the R channel and the B channel relative to the G channel on the imaging plane by taking the G channel as a reference according to the distortion model and the distortion parameter, thereby obtaining a coordinate offset relational expression of the R channel relative to the G channel and a coordinate offset relational expression of the B channel relative to the G channel.
Further, both the test checkerboard image and the image to be corrected are in RGB format and are obtained through the same camera lens of the module to be corrected.
Optionally, step S2 includes:
providing an image to be corrected;
constructing a B-channel offset scaling rule according to the chromatic aberration information and gradient information of the image to be corrected, and adjusting the coordinate offset relation of the B channel relative to the G channel according to that rule; and
constructing an R-channel offset scaling rule from the same information, and adjusting the coordinate offset relation of the R channel relative to the G channel according to that rule.
Optionally, step S3 includes:
filtering the image to be corrected with two filters, according to its gradient information and brightness information, to obtain a marked image that distinguishes local dark-area color edges from colored objects.
Further, step S3 includes:
calculating, for the pixel to be corrected, its original coordinates, the original gray values of its three RGB channels, and its gradient values in the G channel, and establishing a gradient curve (gradient value versus original coordinate) and a gray curve (original gray value versus original coordinate);
judging, with a first filter applied to the gradient curves, whether a local area of the image to be corrected is a local dark-area color edge, so as to obtain the marked image; and
confirming, with a second filter applied to the gray curves, the correction degree for local areas marked as local dark-area color edges.
Further, the first filter determines whether a local area is a local dark-area color edge by checking whether the trough of the B-channel gradient curve and the trough of the G-channel gradient curve lie at the same coordinate position and, when they do not, whether the gradient difference between the G-channel trough and the B-channel gradient curve exceeds a preset value, so as to obtain a marked image of local dark-area color edges.
Further, the second filter determines the correction degree of a local dark-area color edge from the sum and the difference of the G-channel and B-channel original gray values of the pixel at the trough of the G-channel gray curve.
Further, when confirming the correction degree of a local dark-area color edge, the following formulas are satisfied:
T3=clip(400/(VG0+VB0),1.0,1.4);
T4=clip((VB0-VG0)/(VG0+VB0),0.2,0.3);
S1=T3*(4*T4+0.2);
wherein, G0 is the G channel corresponding point of the pixel point to be corrected; b0 is the B channel corresponding point of the pixel point to be corrected; VG0 is the original gray value of G0 in the G channel; VB0 is the original gray value of B0 in the B channel; t3 is the overall brightness of the image to be corrected; t4 is the lateral chromatic aberration of the pixel point to be corrected of the image to be corrected.
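The three formulas above translate directly into code. The sketch below is a minimal Python rendering of T3, T4 and S1; the function and variable names are illustrative, and only the constants given in the text are used.

```python
def clip(x, lo, hi):
    """Clamp x into the interval [lo, hi]."""
    return max(lo, min(hi, x))

def dark_edge_strength(vg0, vb0):
    """Correction degree S1 of a local dark-area color edge, following the
    patent's formulas; vg0/vb0 are the G/B original gray values of the
    pixel at the trough of the G-channel gray curve."""
    t3 = clip(400.0 / (vg0 + vb0), 1.0, 1.4)         # overall-brightness term
    t4 = clip((vb0 - vg0) / (vg0 + vb0), 0.2, 0.3)   # lateral-aberration term
    return t3 * (4.0 * t4 + 0.2)
```

A dark pixel with a strong B excess (small VG0+VB0, large VB0-VG0) saturates both clips and receives the largest correction degree.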
Optionally, in step S5, the compensation value is adjusted so as to satisfy:
|CB|<|VB0-VG0|*alpha;
|CR|<|VR0-VG0|*alpha;
|VB2-VG0|≤|VB0-VG0|;
|VR2-VG0|≤|VR0-VG0|;
wherein, B0 is the B channel corresponding point of the pixel point to be corrected; g0 is the G channel corresponding point of the pixel point to be corrected; r0 is the R channel corresponding point of the pixel point to be corrected; b2 is a target pixel point of a B channel obtained after the offset of B0 is adjusted; r2 is a target pixel point of an R channel obtained after the offset of R0 is adjusted; VG0 is the original gray value of G0 in the G channel; VB0 is the original gray value of B0 in the B channel; VB2 is the gray value of B2 on the B channel; VR2 is the grey scale value of R2 on the R channel; CB is a compensation value of the B channel; CR is the compensation value of the R channel.
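As a rough illustration of these limiting conditions, the Python sketch below clamps one channel's compensation value C so that it never exceeds alpha times the original channel difference and never leaves the corrected value farther from the G channel than the original value was. The fallback behaviour when the second condition fails, and the default alpha, are assumptions the patent does not spell out.

```python
def clamp_compensation(c, v0, vg0, alpha=0.5):
    """Limit the compensation value c of one channel (R or B).
    v0 is the channel's original gray value, vg0 the G-channel gray value.
    alpha=0.5 is an assumed default; the patent leaves it unspecified."""
    limit = abs(v0 - vg0) * alpha
    c = max(-limit, min(limit, c))       # enforce |C| <= |V0 - VG0| * alpha
    v2 = v0 + c                          # candidate corrected gray value
    if abs(v2 - vg0) > abs(v0 - vg0):    # enforce |V2 - VG0| <= |V0 - VG0|
        c, v2 = 0.0, v0                  # refuse an overcorrecting move
    return c, v2
```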
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a color removal method based on transverse chromatic aberration calibration, which comprises the following steps: providing a test chessboard image, and obtaining coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the test chessboard image; providing an image to be corrected, constructing an offset scaling rule suitable for local picture characteristics according to chromatic aberration information and gradient information of the image to be corrected, and adjusting coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the offset scaling rule; acquiring a marked image for distinguishing a local dark area color edge and a color object according to the gradient information and the brightness information of the image to be corrected; correcting the image to be corrected and the marked image according to the adjusted coordinate offset relational expression of the R channel and the B channel relative to the G channel respectively so as to obtain a compensation value of the image to be corrected; and adjusting the compensation value according to the overcorrection limiting condition to construct a target image so as to solve the problems of overcorrection of color edges and incomplete correction of wide color edges.
Drawings
Fig. 1 is a schematic flowchart of a color removal method based on lateral chromatic aberration calibration according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a test chessboard pattern according to an embodiment of the present invention;
fig. 3-6 are schematic diagrams of a first curve and a second curve of different suspicious regions according to an embodiment of the present invention.
Detailed Description
The color edge removal method based on lateral chromatic aberration calibration is described in further detail below with reference to the accompanying drawings, which show preferred embodiments of the invention. Those skilled in the art can modify the invention described here while still achieving its advantageous effects; the following description should therefore be understood as broadly illustrative and not as limiting the invention.
In the interest of clarity, not all features of an actual implementation are described. In the following description, well-known functions or constructions are not described in detail since they would obscure the invention in unnecessary detail. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific details must be set forth in order to achieve the developer's specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art.
To make the objects and features of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings. Note that the drawings are highly simplified and not drawn to scale; they serve only to illustrate the embodiments conveniently and clearly.
In the lens of a color camera, polychromatic light is split into a set of rays of different wavelengths. Light of different wavelengths follows slightly different paths through the optical system, and their misaligned recombination at the image plane introduces chromatic aberration. Chromatic aberration divides roughly into axial and lateral chromatic aberration; the lateral component is caused by displacement errors among the focus points of the R, G and B channels. Meanwhile, with the advent of high-resolution sensor arrays, chromatic aberration now exceeds the sub-pixel range, creating noticeable color fringes at image edges and in high-contrast areas of the photographed scene and giving an overall impression of poor quality or sharpness.
Fig. 1 is a schematic flowchart of the color edge removal method based on lateral chromatic aberration calibration according to this embodiment. As shown in fig. 1, the method compensates the lateral error of the camera lens to eliminate color edges of different widths that arise at image edges and in high-contrast areas of the photographed scene; width here is measured perpendicular to the direction in which the color stripe extends.
The method for removing the color edge based on the transverse chromatic aberration calibration comprises the following steps:
step S1: providing a test chessboard image, and obtaining coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the test chessboard image;
step S2: providing an image to be corrected, constructing an offset scaling rule suitable for local picture characteristics according to chromatic aberration information and gradient information of the image to be corrected, and adjusting coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the offset scaling rule;
step S3: acquiring a mark image for distinguishing a local dark area color edge and a color object according to the gradient information and the brightness information of the image to be corrected;
step S4: correcting the image to be corrected and the marked image according to the adjusted coordinate offset relational expression of the R channel and the B channel relative to the G channel respectively so as to obtain a compensation value of the image to be corrected; and
step S5: and adjusting the compensation value according to the overcorrection limiting condition to construct a target image.
The method for removing the color fringes based on the lateral chromatic aberration calibration provided in this embodiment is described in detail below with reference to fig. 1 to 6.
Step S1 is executed first, a test chessboard image is provided, and coordinate offset relations of the R channel and the B channel with respect to the G channel are obtained according to the test chessboard image.
The method specifically comprises the following steps:
First, a test checkerboard image 1 is obtained through the camera lens to be corrected; the image is in RGB format and serves as the calibration board. The lateral chromatic aberration of the lens is then modeled to obtain a distortion model of the lateral chromatic aberration for the three RGB channels, and the lens is calibrated against the calibration board to obtain the distortion parameters of its lateral chromatic aberration. Finally, taking the G channel as the reference, the displacement of each pixel position of the R channel and the B channel relative to the G channel on the imaging plane is computed from the distortion model and distortion parameters, yielding the coordinate offset relation of the R channel relative to the G channel and that of the B channel relative to the G channel. The G channel is chosen as the reference color because it lies in the middle of the visible spectrum and generally exhibits a lower average amplitude in the secondary spectrum.
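As a rough sketch of how such per-pixel displacement maps might be produced, the Python snippet below evaluates an assumed radial polynomial distortion model for one channel relative to G. The model form, function name, and coefficients are illustrative assumptions; the patent does not disclose its actual distortion model.

```python
import numpy as np

def channel_offsets(shape, center, k_radial):
    """Per-pixel (dx, dy) offset of the R or B channel relative to the G
    channel, under an assumed radial polynomial distortion model."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    dx_c, dy_c = xs - center[0], ys - center[1]   # distance from optical center
    r2 = dx_c ** 2 + dy_c ** 2
    # radial scale factor k1*r^2 + k2*r^4 + ..., evaluated per pixel
    scale = sum(k * r2 ** (i + 1) for i, k in enumerate(k_radial))
    return dx_c * scale, dy_c * scale             # (warp_x, warp_y) maps
```

The coefficients `k_radial` would be fitted per channel from the checkerboard corner positions found in each color plane.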
Then, step S2 is executed to provide an image to be corrected, an offset scaling criterion suitable for the local picture feature is constructed according to the color difference information and the gradient information of the image to be corrected, and the coordinate offset relations of the R channel and the B channel relative to the G channel are adjusted according to the offset scaling criterion.
The method specifically comprises the following steps:
step S21, providing an image to be corrected, where the image to be corrected is obtained through a camera lens to be corrected, the format of the image to be corrected is RGB format, and both the image to be corrected and the test checkerboard image are obtained through the camera lens obtained by the same module to be corrected (i.e. the same image sensor).
Step S22: construct a B-channel offset scaling rule according to the chromatic aberration information and gradient information of the image to be corrected, and adjust the coordinate offset relation of the B channel relative to the G channel according to that rule; likewise, construct an R-channel offset scaling rule and adjust the coordinate offset relation of the R channel relative to the G channel according to it.
In detail:
First, the offset (also called the distortion value) of the pixel point Q to be corrected is amplified until the gradient at the corrected position of the pixel approaches 0, that is, until the following formulas are satisfied:
VG1–VG0=gradG0*Dnew------------(1);
VB1–VB0=gradB0*Dnew--------------(2);
wherein G0 is the G-channel point corresponding to the pixel Q to be corrected; G1 is the transition point obtained by displacing G0 by the adjusted offset (warpBx, warpBy); B0 is the B-channel point corresponding to Q; B1 is the corresponding transition point of the B channel during the calculation; VG0 and VB0 are the original gray values of G0 and B0; VG1 and VB1 are the gray values of G1 and B1; gradG0 is the G-channel gradient at G0; gradB0 is the B-channel gradient at B0; and Dnew is the scaling multiple of the coordinate offset relation of the B channel relative to the G channel.
However, the gradients at G0 and B0 are not always close to 0, which makes VG1 and VB1 difficult to obtain directly. Because a gray transition looks better than a color edge, VG0 is assumed equal to VB0; combining equations (1) and (2) then gives:
VG1–VB1=(gradG0-gradB0)*Dnew-----------(3);
wherein, G0 is the G channel corresponding point of the pixel point Q to be corrected; g1 is a transition point of the G channel in the calculation process; b0 is the B channel corresponding point of the pixel point Q to be corrected; b1 is a transition point of the B channel in the calculation process; VG0 is the original gray value of G0 in the G channel; VB0 is the original gray value of B0 in the B channel; VG1 is the gray scale value of G1 in the G channel; VB1 is the gray value of B1 on the B channel; gradB0 is the B channel gradient of B0; gradG0 is a G channel gradient of G0.
Finally, an offset scaling rule suited to local picture features is constructed, and the coordinate offset relations of the R and B channels relative to the G channel are adjusted according to it. The offset scaling rule for the B channel satisfies the following formulas:
Dnew=|VG0-VB0|/(beta+|gradG0-gradB0|);-----------------(4)
gradG0=gradx*warpBx+grady*warpBy;------------------(5)
gradB0=bx*warpBx+by*warpBy;-----------------(6)
warpBx_new=Dnew*warpBx;-----------------(7)
warpBy_new=Dnew*warpBy;-----------------(8)
wherein G0 and B0 are the G- and B-channel points corresponding to the pixel Q to be corrected; VG0 and VB0 are their original gray values; warpBx_new and warpBy_new are the adjusted coordinate offsets of the B channel relative to the G channel in the x and y directions; warpBx and warpBy are the coordinate offsets of the B channel relative to the G channel in the x and y directions obtained from the test checkerboard image; Dnew is the scaling multiple of the coordinate offset relation of the B channel relative to the G channel; gradx and grady are the Sobel gradients of the G channel in the x and y directions; and bx and by are the Sobel gradients of the B channel in the x and y directions.
The offset scaling criteria for the R-channel construction is similar to the above formula, i.e., the following formula is satisfied:
Dnew=|VG0-VR0|/(beta+|gradG0-gradR0|);-----------------(9)
gradG0=gradx*warpRx+grady*warpRy;------------------(10)
gradR0=rx*warpRx+ry*warpRy;-----------------(11)
warpRx_new=Dnew*warpRx;-----------------(12)
warpRy_new=Dnew*warpRy;-----------------(13)
wherein G0 and R0 are the G- and R-channel points corresponding to the pixel Q to be corrected; VG0 and VR0 are their original gray values; warpRx_new and warpRy_new are the adjusted coordinate offsets of the R channel relative to the G channel in the x and y directions; warpRx and warpRy are the coordinate offsets of the R channel relative to the G channel in the x and y directions obtained from the test checkerboard image; Dnew is the scaling multiple of the coordinate offset relation of the R channel relative to the G channel; gradx and grady are the Sobel gradients of the G channel in the x and y directions; and rx and ry are the Sobel gradients of the R channel in the x and y directions.
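Formulas (4) through (13) have the same shape for the B and R channels, so they can be condensed into one helper. The Python sketch below is illustrative only; the names and the default value of beta (which guards the division) are assumptions.

```python
def scale_channel_offset(vg0, vc0, gradx, grady, cx, cy, warp_x, warp_y, beta=1e-3):
    """Adapt the calibrated offset (warp_x, warp_y) of one channel (B or R)
    to local image content. vg0/vc0 are the G and channel gray values;
    gradx/grady and cx/cy are the Sobel gradients of the G channel and of
    the channel being corrected in the x and y directions."""
    grad_g0 = gradx * warp_x + grady * warp_y   # G gradient along the offset
    grad_c0 = cx * warp_x + cy * warp_y         # channel gradient along the offset
    d_new = abs(vg0 - vc0) / (beta + abs(grad_g0 - grad_c0))
    return d_new * warp_x, d_new * warp_y       # warp_x_new, warp_y_new
```

Calling it with (VB0, bx, by, warpBx, warpBy) gives the B-channel rule of formulas (4)-(8); calling it with the R-channel quantities gives formulas (9)-(13).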
Step S3 is then executed to obtain a mark image for distinguishing a local dark color edge from a color object according to the gradient information and the brightness information of the image to be corrected.
In detail, when a color edge area in the image to be corrected differs greatly from any colored object, the two are easily distinguished. When a color edge and a colored object have similar colors (such as a purple edge in a local dark area versus a purple object), the image to be corrected is filtered with two filters (a first filter and a second filter), so that a marked image distinguishing local dark-area color edges from colored objects is obtained from the gradient information and brightness information of the image.
The specific distinguishing method for distinguishing the color edge of the local dark area from the color object in this embodiment is as follows:
First, the original coordinates of the pixel point Q to be corrected, the original gray values of its three RGB channels, and its gradient values in the G channel are calculated, and the gradient curve (gradient value versus original coordinate) and the gray curve (original gray value versus original coordinate) are established.
Then, according to the gradient curve, a first filter is adopted to judge whether a local area of the image to be corrected is a local dark-area color edge, so as to obtain a marked image. Specifically, the first filter judges whether the trough of the gradient curve of the B channel and the trough of the gradient curve of the G channel are at the same coordinate position; when they are not at the same coordinate position, it further judges whether the gradient difference between the trough of the gradient curve of the G channel and the gradient curve of the B channel exceeds a preset value, thereby determining whether the local area is a local dark-area color edge and obtaining a mark image of the local dark-area color edge.
As shown in fig. 3, fig. 3 shows a gradient curve 21 of the B channel and a gradient curve 22 of the G channel in the first local region, and when the gradient between the valley of the gradient curve of the G channel and the gradient curve of the B channel satisfies the following formula, it can be determined that the local region is a local dark color edge.
The gradient between the trough of the gradient curve of the G channel and the gradient curve of the B channel satisfies the following formula:
gradB(G1’)*gradB(G1)>10;---------------(14)
gradG(G1’)*gradG(G1)<-10;---------------(15)
wherein, G0 is the G channel corresponding point of the pixel point Q to be corrected; G1 is the pixel point obtained by offsetting G0 by the adjusted offsets (WarpBx, WarpBy); G1' is the point opposite to G1, that is, the pixel point obtained by offsetting G0 by (-1 × WarpBx, -1 × WarpBy); gradB is the B channel gradient; gradG is the G channel gradient; gradB(G1) is the B channel gradient of G1; gradB(G1') is the B channel gradient of G1'; gradG(G1) is the G channel gradient of G1; gradG(G1') is the G channel gradient of G1'.
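Formulas (14)-(15) reduce to two sign-and-magnitude tests on the gradients at G1 and G1'. A minimal sketch (the function name is illustrative; the gradient values are assumed to be precomputed, e.g. by a Sobel operator):

```python
def is_dark_fringe_type1(grad_b_g1p, grad_b_g1, grad_g_g1p, grad_g_g1):
    """Formulas (14)-(15): the B channel gradients at G1' and G1 share a
    sign with sufficient magnitude (product > 10), while the G channel
    gradients at the same two points have opposite signs (product < -10)."""
    return grad_b_g1p * grad_b_g1 > 10 and grad_g_g1p * grad_g_g1 < -10
```

For instance, gradB values of 4 at both points with gradG values of 5 and -5 satisfy both conditions, marking the first type of local dark-area color edge.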
As shown in fig. 4, fig. 4 shows the gradient curve 21 of the B channel and the gradient curve 22 of the G channel in the second local region; when the gradient between the trough of the gradient curve of the G channel and the gradient curve of the B channel satisfies the following formula, it can be determined that the local region is a local dark-area color edge.
The gradient between the trough of the gradient curve of the G channel and the gradient curve of the B channel satisfies the following formula:
gradB(G1’)*gradB(G1)<-10;---------------(16)
gradG(G1’)*gradG(G1)>10;---------------(17)
wherein, G0 is the G channel corresponding point of the pixel point to be corrected; G1 is the pixel point obtained by offsetting G0 by the adjusted offsets (WarpBx, WarpBy); G1' is the point opposite to G1, that is, the pixel point obtained by offsetting G0 by (-1 × WarpBx, -1 × WarpBy); gradB is the B channel gradient; gradG is the G channel gradient; gradB(G1) is the B channel gradient of G1; gradB(G1') is the B channel gradient of G1'; gradG(G1) is the G channel gradient of G1; gradG(G1') is the G channel gradient of G1'.
As shown in fig. 5, fig. 5 shows a gradient curve 21 of the B channel and a gradient curve 22 of the G channel in the third local region, and when the gradient between the trough of the gradient curve of the G channel and the gradient curve of the B channel satisfies the following formula, it can be determined that the local region is a local dark color edge.
The gradient between the trough of the gradient curve of the G channel and the gradient curve of the B channel satisfies the following formula:
gradB(G1’)*gradB(G1)<-10;---------------(18)
gradB(G1)*gradB(G0)<-10;---------------(19)
gradG(G1’)*gradG(G1)<-10;---------------(20)
gradG(G1)*gradG(G0)>10;---------------(21)
wherein, G0 is the G channel corresponding point of the pixel point Q to be corrected; G1 is the pixel point obtained by offsetting G0 by the adjusted offsets (WarpBx, WarpBy); G1' is the point opposite to G1, that is, the pixel point obtained by offsetting G0 by (-1 × WarpBx, -1 × WarpBy); gradB is the B channel gradient; gradG is the G channel gradient; gradB(G1) is the B channel gradient of G1; gradB(G1') is the B channel gradient of G1'; gradG(G1) is the G channel gradient of G1; gradG(G1') is the G channel gradient of G1'; gradB(G0) is the B channel gradient of G0; gradG(G0) is the G channel gradient of G0.
In the fourth local region, when the magnitude ratio of the transition parameters T1 and T2 defined below is close to 0 or greater than 10 (formula (24)), it can be determined that the local region is a local dark-area color edge.
T1=gradB(G1)*gradG(G0);------------------(22)
T2=gradG(G1)*gradB(G0);-----------------(23)
abs(T1)/(abs(T2)+1)<0.1or>10;-------------(24)
Wherein, G0 is the G channel corresponding point of the pixel point Q to be corrected; g1 is the transition point of G0 at the adjusted offset leading edge (WarpBx, WarpBy); g1' is a negative point opposite to G1, that is, a pixel point obtained by G0 at the adjusted offset leading edge (-1 × WarpBx, -1 × WarpBy); gradB is the gradient of the B channel; gradG is the G channel gradient; t1 and T2 are transition parameters; gradB (G0) is the B channel gradient of G0; gradG (G0) is a G channel gradient of G0; gradB (G1) is the B channel gradient of G1; gradG (G1) is a G channel gradient of G1.
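Formulas (22)-(24) can be sketched directly (illustrative function name; the gradient inputs are assumed precomputed):

```python
def is_dark_fringe_type4(grad_b_g0, grad_b_g1, grad_g_g0, grad_g_g1):
    """Formulas (22)-(24): the cross products T1 and T2 differ by more
    than a factor of ten in magnitude; the +1 in the denominator guards
    against division by zero when T2 vanishes."""
    t1 = grad_b_g1 * grad_g_g0            # formula (22)
    t2 = grad_g_g1 * grad_b_g0            # formula (23)
    ratio = abs(t1) / (abs(t2) + 1)       # formula (24)
    return ratio < 0.1 or ratio > 10
```

A strongly lopsided pair, e.g. T1 = 100 against T2 = 1, trips the upper branch of the test, whereas comparable cross products do not.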
According to the gray curve, a second filter is adopted to mark the local area judged to be a local dark-area color edge, so as to obtain a marked image. In detail, the second filter determines the correction degree of the color edge in the local dark area from the sum of, and the difference between, the G channel original gray value and the B channel original gray value of the pixel point at the trough of the G channel gray curve.
As shown in fig. 6, the following formula is satisfied when the degree of correction of the local dark color fringes is confirmed:
T3=clip(400/(VG0+VB0),1.0,1.4);---------------(25)
T4=clip((VB0-VG0)/(VG0+VB0),0.2,0.3);---------------(26)
S1=T3*(4*T4+0.2);---------------(27)
wherein, G0 is the G channel corresponding point of the pixel point to be corrected; b0 is the B channel corresponding point of the pixel point to be corrected; VG0 is the original gray value of G0 in the G channel; VB0 is the original gray value of B0 in the B channel; t3 is the overall brightness of the image to be corrected; t4 is the lateral chromatic aberration of the pixel to be corrected of the image to be corrected; s1 is a penalty value.
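The second filter's penalty, formulas (25)-(27), is fully specified and can be sketched as follows (illustrative function names; `clip` matches the clip() notation used in the formulas):

```python
def clip(x, lo, hi):
    """Clamp x to the interval [lo, hi], as clip() in formulas (25)-(26)."""
    return max(lo, min(hi, x))

def fringe_penalty(vg0, vb0):
    """Formulas (25)-(27): T3 reflects the overall brightness at the pixel,
    T4 the local B-G chromatic difference; S1 is the resulting penalty."""
    t3 = clip(400.0 / (vg0 + vb0), 1.0, 1.4)
    t4 = clip((vb0 - vg0) / (vg0 + vb0), 0.2, 0.3)
    return t3 * (4.0 * t4 + 0.2)          # S1
```

For example, a mid-gray pixel with VG0 = VB0 = 100 gives T3 = 1.4 and T4 = 0.2, hence S1 = 1.4, while a bright pixel with VG0 = 200, VB0 = 300 gives S1 = 1.0.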
Then, the G channel is used to scale the pixel values (namely the gray values) of the R channel and the B channel so as to obtain a target pixel point.
The B channel corresponding point B2 of the target pixel point satisfies the following formula:
VBN(B2)=Min(1.0,VG0/VG(B2)*S1)*VB2;-----------(28)
wherein, G0 is the G channel corresponding point of the pixel point Q to be corrected; G1 is the pixel point obtained by offsetting G0 by the adjusted offsets (WarpBx, WarpBy); B0 is the B channel corresponding point of the pixel point to be corrected; B1 is the transition point of B0 in the B channel calculation process; B2 is the target pixel point obtained after the offset of B0 is adjusted; VG0 is the original gray value of G0 in the G channel; VG(B2) is the gray value of B2 in the G channel; VB2 is the gray value of B2 in the B channel; VR(B2) is the gray value of B2 in the R channel; VBN(B2) is the gray value of B2 in the B channel after scaling; S1 is the penalty value applied to VG0/VG(B2).
If the pixel point to be corrected satisfies the local dark-area color edge condition, then
VB2=VBN(B2);------------------------(29)
VR2=VR(B2);------------------------(30)
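Formula (28), the G-ratio scaling of the B value at the target point, can be sketched as follows (illustrative function name; VG0, VG(B2), VB2 and S1 are assumed to be available from the preceding steps):

```python
def scaled_b_value(vg0, vg_b2, vb2, s1):
    """Formula (28): scale the B channel gray value at the target point B2
    by the G channel ratio VG0/VG(B2) weighted by the penalty S1; the min
    with 1.0 ensures the value can only be attenuated, never amplified."""
    return min(1.0, vg0 / vg_b2 * s1) * vb2
```

With VG0 = 50, VG(B2) = 100 and S1 = 1.0, a B value of 80 is halved to 40; when the ratio exceeds 1, the cap leaves the B value unchanged.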
And step S4 is executed, the image to be corrected and the marker image are corrected according to the adjusted coordinate offset relation of the R channel and the B channel relative to the G channel, respectively, to obtain a compensation value of the image to be corrected.
The compensation value in this step satisfies the following formula:
CB=VB2-VB0;------------------------(31)
CR=VR2-VR0;------------------------(32)
wherein, B0 is the B channel corresponding point of the pixel point to be corrected; r0 is the R channel corresponding point of the pixel point to be corrected; b2 is a target pixel point of a B channel obtained after the offset of B0 is adjusted; r2 is a target pixel point of an R channel obtained after the offset of R0 is adjusted; VB2 is the gray value of B2 on the B channel; VR2 is the grey scale value of R2 on the R channel; CB is a compensation value of the B channel; CR is the compensation value of the R channel.
Then, step S5 is executed to adjust the compensation value according to the overcorrection limiting condition to construct the target image. In detail, the compensation value is adjusted according to an overcorrection limiting condition, and the image to be corrected is further corrected according to the compensation value, so that a target image is constructed.
The method for adjusting the compensation value in this step is as follows:
|CB|<|VB0-VG0|*alpha;-------(33)
|CR|<|VR0-VG0|*alpha;-------(34)
|VB2-VG0|≤|VB0-VG0|;----------------(35)
|VR2-VG0|≤|VR0-VG0|;----------------(36)
wherein, B0 is a B channel corresponding point of the pixel point Q to be corrected; g0 is the G channel corresponding point of the pixel point Q to be corrected; r0 is the R channel corresponding point of the pixel point Q to be corrected; b2 is a target pixel point of a B channel obtained after the offset of B0 is adjusted; r2 is a target pixel point of an R channel obtained after the offset of R0 is adjusted; VG0 is the original gray value of G0 in the G channel; VB0 is the original gray value of B0 in the B channel; VB2 is the gray value of B2 on the B channel; VR2 is the grey scale value of R2 on the R channel; CB is a compensation value of the B channel; CR is the compensation value of the R channel.
In the above formulas, formulas (33) and (34) use the coordinate offset relational expressions of the R channel and the B channel relative to the G channel, obtained from the test chessboard image, for the pixel points to be corrected in the region to be corrected, to calculate the color inversion caused by overcorrection and convert the inverted color to gray; formulas (35) and (36) prevent the lateral chromatic aberration of the region to be corrected from being enlarged. Therefore, when the formulas are satisfied, the coordinate offset relational expressions do not need to be adjusted; if the formulas are not satisfied, the coordinate offset relational expressions of the R channel and the B channel relative to the G channel are adjusted according to the formulas.
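The overcorrection check for one channel can be sketched as a predicate (illustrative function name; only the B channel conditions (33) and (35) are shown, the R channel conditions (34) and (36) being symmetric):

```python
def within_limits(vb0, vg0, vb2, alpha):
    """Conditions (33) and (35) for the B channel: the compensation
    magnitude |CB| must stay below alpha*|VB0-VG0|, and the corrected
    value must not enlarge the chromatic difference to the G channel.
    If both hold, the offset relation needs no further adjustment."""
    cb = vb2 - vb0                       # compensation value, formula (31)
    return (abs(cb) < abs(vb0 - vg0) * alpha
            and abs(vb2 - vg0) <= abs(vb0 - vg0))
```

For example, with VB0 = 120, VG0 = 100 and alpha = 1.0, a corrected value of 110 passes both conditions, while 150 fails condition (33) and would trigger adjustment of the offset relation.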
In summary, the invention provides a color edge removing method based on transverse chromatic aberration calibration, which comprises the following steps: providing a test chessboard image, and obtaining coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the test chessboard image; providing an image to be corrected, constructing an offset scaling rule suitable for local picture characteristics according to chromatic aberration information and gradient information of the image to be corrected, and adjusting coordinate offset relational expressions of the R channel and the B channel relative to the G channel respectively according to the offset scaling rule; acquiring a mark image for distinguishing a local dark-area color edge from a color object according to the gradient information and the brightness information of the image to be corrected; correcting the image to be corrected and the marked image according to the adjusted coordinate offset relational expressions of the R channel and the B channel relative to the G channel respectively, so as to obtain a compensation value of the image to be corrected; and adjusting the compensation value according to the overcorrection limiting condition to construct a target image, thereby solving the problems of overcorrection of color edges and incomplete correction of wide color edges.
In addition, unless otherwise specified or indicated, the description of the terms "first" and "second" in the specification is only used for distinguishing various components, elements, steps and the like in the specification, and is not used for representing logical relationships or sequential relationships among the various components, elements, steps and the like.
It is to be understood that while the present invention has been described in conjunction with the preferred embodiments thereof, it is not intended to limit the invention to those embodiments. It will be apparent to those skilled in the art from this disclosure that many changes, modifications, and equivalent substitutions can be made in the embodiments of the invention without departing from the scope of the invention. Therefore, any simple modification, equivalent change, or alteration made to the above embodiments according to the technical essence of the present invention still falls within the scope of protection of the technical solution of the present invention, provided it does not depart from the content of the technical solution of the present invention.
Claims (10)
1. A method for removing color fringes based on transverse chromatic aberration calibration is characterized by comprising the following steps:
step S1: providing a test chessboard image, and obtaining coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the test chessboard image;
step S2: providing an image to be corrected, constructing an offset scaling rule suitable for local picture characteristics according to chromatic aberration information and gradient information of the image to be corrected, and adjusting coordinate offset relational expressions of an R channel and a B channel relative to a G channel respectively according to the offset scaling rule;
step S3: acquiring a mark image for distinguishing a local dark area color edge and a color object according to the gradient information and the brightness information of the image to be corrected;
step S4: correcting the image to be corrected and the marked image according to the adjusted coordinate offset relational expression of the R channel and the B channel relative to the G channel respectively so as to obtain a compensation value of the image to be corrected; and
step S5: and adjusting the compensation value according to the overcorrection limiting condition to construct a target image.
2. The method for removing color fringes based on lateral chromatic aberration calibration as claimed in claim 1, wherein step S1 comprises:
obtaining a test chessboard image through a camera lens to be corrected, wherein the test chessboard image is used as a calibration board;
modeling the transverse chromatic aberration of the camera lens to be corrected to obtain a distortion model of the transverse chromatic aberration of the RGB three channels;
calibrating the camera lens to be corrected through the calibration plate to obtain a distortion parameter of the lateral chromatic aberration of the camera lens to be corrected; and
and respectively calculating the displacement deviation of each pixel position of the R channel and the B channel relative to the G channel on the imaging plane by taking the G channel as a reference according to the distortion model and the distortion parameter, thereby obtaining a coordinate offset relational expression of the R channel relative to the G channel and a coordinate offset relational expression of the B channel relative to the G channel.
3. The method for removing a color fringe based on lateral chromatic aberration calibration of claim 2, wherein the test chessboard image and the image to be corrected are both in RGB format and are both obtained through the camera lens to be corrected of the same module to be corrected.
4. The method for removing color fringes based on transverse chromatic aberration calibration according to claim 1, wherein constructing an offset scaling rule suitable for local picture features according to the chromatic aberration information and gradient information of the image to be corrected, and adjusting the coordinate offset relations of the R channel and the B channel respectively relative to the G channel according to the offset scaling rule, comprises:
according to the chromatic aberration information and the gradient information of the image to be corrected, an offset scaling rule constructed by a channel B is constructed, and a coordinate offset relation of the channel B relative to a channel G is adjusted according to the offset scaling rule constructed by the channel B; and
and according to the chromatic aberration information and the gradient information of the image to be corrected, constructing an offset scaling rule constructed by an R channel, and adjusting a coordinate offset relation of the R channel relative to a G channel according to the offset scaling rule constructed by the R channel.
5. The method for removing color fringes based on lateral chromatic aberration calibration of claim 1, wherein step S3 comprises:
and according to the gradient information and the brightness information of the image to be corrected, filtering the image to be corrected by adopting two filters to obtain a marked image for distinguishing a local dark color edge and a color object.
6. The method for removing color fringes based on lateral chromatic aberration calibration of claim 5, wherein step S3 comprises:
calculating and obtaining an original coordinate, an original gray value and a gradient value of RGB three channels of a pixel point to be corrected of the image to be corrected in a G channel respectively, and establishing a gradient curve of the gradient value and the original coordinate and a gray curve of the original gray value and the original coordinate;
judging whether a local area of the image to be corrected is a local dark area color edge or not by adopting a first filter according to the gradient curve so as to obtain a marked image; and
and according to the gray curve, adopting a second filter to confirm the correction degree of the local area of the local dark color edge.
7. The method for removing color fringes according to claim 6, characterized in that the first filter judges whether the trough of the gradient curve of the B channel and the trough of the gradient curve of the G channel are at the same coordinate position and, when they are not at the same coordinate position, judges whether the gradient difference between the trough of the gradient curve of the G channel and the gradient curve of the B channel exceeds a preset value, so as to determine whether the local region is a local dark-area color fringe and obtain a mark image of the local dark-area color fringe.
8. The method as claimed in claim 6, wherein the second filter determines the degree of correction of the color fringe in the local dark region from the sum of, and the difference between, the G-channel original gray value and the B-channel original gray value of the pixel at the trough of the G-channel gray curve.
9. The method for removing color fringes based on lateral chromatic aberration calibration of claim 8, wherein the degree of correction of said local dark color fringes is determined when the following formulas are satisfied:
T3=clip(400/(VG0+VB0),1.0,1.4);
T4=clip((VB0-VG0)/(VG0+VB0),0.2,0.3);
S1=T3*(4*T4+0.2);
wherein, G0 is the G channel corresponding point of the pixel point to be corrected; b0 is the B channel corresponding point of the pixel point to be corrected; VG0 is the original gray value of G0 in the G channel; VB0 is the original gray value of B0 in the B channel; t3 is the overall brightness of the image to be corrected; t4 is the lateral chromatic aberration of the pixel point to be corrected of the image to be corrected; s1 is a penalty value.
10. The method for removing color fringes based on lateral chromatic aberration calibration of claim 1, wherein in step S5, the method for adjusting the compensation value is as follows:
|CB|<|VB0-VG0|*alpha;
|CR|<|VR0-VG0|*alpha;
|VB2-VG0|≤|VB0-VG0|;
|VR2-VG0|≤|VR0-VG0|;
wherein, B0 is the B channel corresponding point of the pixel point to be corrected; g0 is the G channel corresponding point of the pixel point to be corrected; r0 is the R channel corresponding point of the pixel point to be corrected; b2 is a target pixel point of a B channel obtained after the offset of B0 is adjusted; r2 is a target pixel point of an R channel obtained after the offset of R0 is adjusted; VG0 is the original gray value of G0 in the G channel; VB0 is the original gray value of B0 in the B channel; VB2 is the gray value of B2 on the B channel; VR2 is the grey scale value of R2 on the R channel; CB is a compensation value of the B channel; CR is the compensation value of the R channel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210653837.0A CN114943658B (en) | 2022-06-09 | 2022-06-09 | De-coloring method based on transverse chromatic aberration calibration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114943658A true CN114943658A (en) | 2022-08-26 |
CN114943658B CN114943658B (en) | 2024-06-14 |
Family
ID=82909711
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115499629A (en) * | 2022-08-31 | 2022-12-20 | 北京奕斯伟计算技术股份有限公司 | Transverse chromatic aberration correction method, device, equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009095422A2 (en) * | 2008-01-28 | 2009-08-06 | Fotonation Ireland Limited | Methods and apparatuses for addressing chromatic aberrations and purple fringing |
WO2009141403A1 (en) * | 2008-05-20 | 2009-11-26 | Dublin City University | Correction of optical lateral chromatic aberration in digital imaging systems |
CN111199524A (en) * | 2019-12-26 | 2020-05-26 | 浙江大学 | Purple edge correction method for image of adjustable aperture optical system |
CN111242863A (en) * | 2020-01-09 | 2020-06-05 | 上海酷芯微电子有限公司 | Method and medium for eliminating lens lateral chromatic aberration based on image processor |
CN114283077A (en) * | 2021-12-08 | 2022-04-05 | 凌云光技术股份有限公司 | Method for correcting image lateral chromatic aberration |
CN114596220A (en) * | 2022-01-28 | 2022-06-07 | 浙江大华技术股份有限公司 | Method for correcting image lateral chromatic aberration, electronic device and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||