CN115643490A - Image crosstalk compensation method and device, electronic equipment and storage medium

Info

Publication number: CN115643490A
Application number: CN202211088046.4A
Authority: CN (China)
Prior art keywords: value, block, determining, processed, area
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 童星, 孙保基, 陈飞飞
Current Assignee: Shenzhen Goodix Technology Co Ltd
Original Assignee: Shenzhen Goodix Technology Co Ltd
Application filed by Shenzhen Goodix Technology Co Ltd

Landscapes

  • Image Processing (AREA)
Abstract

The embodiments of the present application provide an image crosstalk compensation method and apparatus, an electronic device, and a storage medium. The method includes: determining, according to the grid cells in a region to be processed, a central block and a plurality of peripheral blocks surrounding the central block in the region to be processed; determining a crosstalk compensation value of the region to be processed according to the white channel values of the central block and the peripheral blocks, and determining a compensation threshold of the region to be processed according to the non-white channel values of the central block and the peripheral blocks; and selectively compensating the white channel value of the central block with the crosstalk compensation value according to the result of comparing the crosstalk compensation value with the compensation threshold. By determining the crosstalk compensation value and the compensation threshold of the region to be processed and dynamically performing image crosstalk compensation, the imaging quality can be effectively improved.

Description

Image crosstalk compensation method and device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of image crosstalk compensation, in particular to an image crosstalk compensation method and device suitable for an RGBW filter array, electronic equipment and a storage medium.
Background
With consumers' growing demands on photographic quality and the rapid development of the CMOS image-sensor industry, the RGBW filter array has been gradually replacing the Bayer filter array and has become a mainstream choice in the market.
Compared with a conventional Bayer filter array, the RGBW filter array introduces a W (white) luminance channel that passes a wider portion of the spectrum; it therefore preserves image detail better, achieves a higher signal-to-noise ratio, and performs especially well in low-light imaging environments.
However, because the RGBW filter array extends the three color channels (RGB) of the conventional Bayer array to four channels (RGBW), crosstalk between channels can be more severe than in the Bayer array. At present, the crosstalk phenomenon in Bayer filter arrays has been studied extensively at home and abroad, whereas research on RGBW filter arrays remains limited, and an effective method for reducing the influence of crosstalk in an RGBW filter array is lacking.
Disclosure of Invention
In view of this, embodiments of the present application provide an image crosstalk compensation method, an apparatus, an electronic device, and a storage medium suitable for an RGBW filter array, which can improve imaging quality after crosstalk compensation by determining a crosstalk compensation value and a compensation threshold value of a region to be processed and dynamically performing image crosstalk compensation processing.
In a first aspect of the embodiments of the present application, there is provided an image crosstalk compensation method, including: determining a central block and a plurality of peripheral blocks surrounding the central block in a region to be processed according to each grid unit in the region to be processed; determining a crosstalk compensation value of the area to be processed according to the white channel values of the central block and the peripheral blocks, and determining a compensation threshold value of the area to be processed according to the non-white channel values of the central block and the peripheral blocks; and selectively utilizing the crosstalk compensation value to compensate the white channel value of the central block according to the comparison result of the crosstalk compensation value and the compensation threshold value.
Optionally, the area to be treated is obtained by: identifying a minimum filter bank of an image capturing device for acquiring an image to be processed, and determining each grid unit in the image to be processed according to the minimum filter bank; determining a second unit number of the grid unit covered by a sliding window and a sliding step length of the sliding window according to the first unit number of the grid unit covered by the minimum filter bank; and repeatedly executing sliding processing meeting the sliding step length on each grid unit of the image to be processed by utilizing the sliding window to obtain each area to be processed corresponding to each sliding processing.
Optionally, the minimum filter bank and the sliding window are respectively a square array composed of a plurality of grid cells; the second number of units is 2.25 times the first number of units; the sliding step length of the sliding window is 1/3 of the side length of the sliding window.
Optionally, the determining, according to each grid unit in the to-be-processed region, a central block in the to-be-processed region and a plurality of peripheral blocks surrounding the central block includes: performing nine-equal-division processing on the area to be processed according to each grid unit in the area to be processed to obtain nine sub-blocks of the area to be processed; one of the subblocks located at the center position is determined as the center block, and eight subblocks surrounding the center block are determined as peripheral blocks.
Optionally, the method further comprises: identifying a channel color of each grid cell covered by the central block; and if the central block is identified as covering red or blue grid cells, determining the crosstalk compensation value of the region to be processed according to the white channel values of the central block and the peripheral blocks, and determining the compensation threshold of the region to be processed according to the non-white channel values of the central block and the peripheral blocks.
Optionally, the determining a crosstalk compensation value of the to-be-processed area according to the white channel values of the central block and the peripheral blocks includes: determining the white channel mean value of the central block and each peripheral block according to the white channel values of the central block and each peripheral block; determining adjacent edge blocks adjacent to the side edges of the central block in the peripheral blocks, and determining similarity weighted values of the adjacent edge blocks according to the white channel mean values of the central block and the adjacent edge blocks; determining a gradient weight value of the area to be processed according to the white channel mean value of each peripheral block; and determining a crosstalk compensation value of the area to be processed according to the similarity weight value of each adjacent edge block and the gradient weight value of the area to be processed.
Optionally, the determining, according to the white channel mean of the central block and each adjacent edge block, a similarity weight value of each adjacent edge block includes: determining a compensation reference value of each adjacent side block according to the difference value of the white channel mean value of the central block and the white channel mean value of each adjacent side block; and determining the similarity weight value of each adjacent edge block according to the compensation reference value of each adjacent edge block and a given similarity weight coefficient.
Optionally, the determining a gradient weight value of the to-be-processed region according to the white channel mean of each peripheral block includes: determining a horizontal gradient value and a vertical gradient value of the area to be processed according to the position distribution of each peripheral block in the area to be processed and the white channel mean value of each peripheral block; and determining the gradient weight value of the region to be processed according to the given gradient weight coefficient and the larger one of the horizontal gradient value and the vertical gradient value.
Optionally, the determining a crosstalk compensation value of the to-be-processed region according to the similarity weight value of each adjacent edge block and the gradient weight value of the to-be-processed region includes: determining a first weighted value according to the similarity weighted value of each adjacent edge block, the gradient weighted value of the area to be processed and the compensation reference value of each adjacent edge block; determining a second weighted value according to the similarity weighted value of each adjacent edge block and the gradient weighted value of the area to be processed; and determining a crosstalk compensation value of the area to be processed according to the first weighted value and the second weighted value.
Optionally, the determining a compensation threshold of the to-be-processed region according to the non-white channel values of the central block and the peripheral blocks includes: determining a non-white channel mean value of the central block and each peripheral block according to the non-white channel values of the central block and each peripheral block, wherein the non-white channel values comprise one of a red channel value, a green channel value and a blue channel value; determining a red pixel mean value, a green pixel mean value and a blue pixel mean value of the area to be processed according to the non-white channel mean values of the central block and the peripheral blocks; determining the saturation of the area to be processed according to the maximum value and the minimum value in the red pixel mean value, the green pixel mean value and the blue pixel mean value of the area to be processed; and determining a compensation threshold value of the area to be processed according to the saturation of the area to be processed and the given maximum crosstalk intensity value.
Optionally, the determining a red pixel mean value, a green pixel mean value and a blue pixel mean value of the region to be processed according to the non-white channel mean values of the central block and the peripheral blocks includes: determining, among the peripheral blocks, adjacent-edge blocks adjacent to the sides of the central block and corner blocks adjacent to the corners of the central block; and determining the channel color of each grid cell in the central block, wherein if the central block contains red grid cells, the red pixel mean of the region to be processed is determined according to the non-white channel mean of the central block, the green pixel mean of the region to be processed is determined according to the non-white channel means of the adjacent-edge blocks, and the blue pixel mean of the region to be processed is determined according to the non-white channel means of the corner blocks; or, if the central block contains blue grid cells, the blue pixel mean of the region to be processed is determined according to the non-white channel mean of the central block, the green pixel mean of the region to be processed is determined according to the non-white channel means of the adjacent-edge blocks, and the red pixel mean of the region to be processed is determined according to the non-white channel means of the corner blocks.
Optionally, the selectively compensating the white channel value of the central block by using the crosstalk compensation value according to the comparison result between the crosstalk compensation value and the compensation threshold includes: comparing the crosstalk compensation value with the compensation threshold, if the crosstalk compensation value is less than or equal to the compensation threshold, determining an adaptive weight value of each white channel value according to the crosstalk compensation value and each white channel value in the central block, and determining a compensation channel value of each white channel value in the central block according to each white channel value in the central block, the adaptive weight value of each white channel value, and the crosstalk compensation value; and if the crosstalk compensation value is larger than the compensation threshold value, not executing the compensation processing of each white channel value in the central block.
In a second aspect of the embodiments of the present application, there is provided an image crosstalk compensation apparatus, including: the block determining module is used for determining a central block in the area to be processed and a plurality of peripheral blocks surrounding the central block according to each grid unit in the area to be processed; the calculation module is used for determining a crosstalk compensation value of the area to be processed according to the white channel values of the central block and the peripheral blocks, and determining a compensation threshold value of the area to be processed according to the non-white channel values of the central block and the peripheral blocks; and the compensation module is used for selectively utilizing the crosstalk compensation value to compensate the white channel value of the central block according to the comparison result of the crosstalk compensation value and the compensation threshold value.
In a third aspect of embodiments of the present application, there is provided an electronic device, including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the image crosstalk compensation method according to the first aspect.
In a fourth aspect of the embodiments of the present application, a computer storage medium is provided, on which a computer program is stored, and when the computer program is executed by a processor, the method for compensating image crosstalk according to the first aspect can be implemented.
In summary, the image crosstalk compensation method, the apparatus, the electronic device and the storage medium provided in the embodiments of the present application divide the area to be processed into the central block and the plurality of peripheral blocks, so as to respectively determine the crosstalk compensation value and the compensation threshold value of the area to be processed according to the white channel value and the non-white channel value of the central block and each peripheral block, thereby dynamically performing the image crosstalk compensation of the central block, and effectively improving the imaging quality.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
fig. 1 is a process flow diagram of an image crosstalk compensation method according to an exemplary embodiment of the present application.
Fig. 2 is a flowchart illustrating a process of an image crosstalk compensation method according to another exemplary embodiment of the present application.
Fig. 3A to 3B are schematic diagrams of a minimum filter bank and a region to be processed determined based on the minimum filter bank according to an exemplary embodiment of the present application.
Fig. 4A to 4B are schematic diagrams of a minimum filter bank and a region to be processed determined based on the minimum filter bank according to another exemplary embodiment of the present application.
Fig. 5 is a flowchart illustrating a process of an image crosstalk compensation method according to another exemplary embodiment of the present application.
Fig. 6A is a schematic distribution diagram of a central block and a peripheral block in a to-be-processed area according to an exemplary embodiment of the present application.
Fig. 6B to 6C are schematic diagrams illustrating distribution of grid cells in a central block according to different embodiments of the present application.
Fig. 7 is a flowchart illustrating a method for compensating image crosstalk according to another exemplary embodiment of the present application.
Fig. 8 is a flowchart illustrating a method for compensating image crosstalk according to another exemplary embodiment of the present application.
Fig. 9 is a flowchart illustrating a method for compensating image crosstalk according to another exemplary embodiment of the present application.
Fig. 10 is a block diagram illustrating a configuration of an image crosstalk compensation apparatus according to an exemplary embodiment of the present application.
Fig. 11 is a block diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In this application, the directional terms "upper", "lower", "front", "rear", and the like are defined with respect to the schematically-disposed orientation of the components in the drawings, and it is to be understood that these directional terms are relative concepts that are used for descriptive and clarity purposes and that will vary accordingly depending on the orientation in which the components are disposed in the drawings.
In addition, unless a specified order is explicitly stated in the context of the present application, the process steps described herein may be performed in a different order than specified, i.e., each step may be performed in the specified order, substantially simultaneously, in a reverse order, or in a different order.
With consumers' increasing demands on imaging quality, RGBW filter arrays have become a mainstream choice in the market. Although the W (white) channel filter in an RGBW filter array transmits a wider portion of the spectrum and performs better in low-light imaging environments, crosstalk between channels is more severe than in a conventional three-channel (RGB) Bayer array.
In view of this, the present application provides an image crosstalk compensation scheme for an RGBW filter array. In particular, since imbalance due to crosstalk mainly occurs in a white channel, the present application proposes an image crosstalk compensation scheme that compensates specifically for the white channel.
Referring to fig. 1, which is a processing flow chart of an image crosstalk compensation method according to an exemplary embodiment of the present application, the method mainly includes the following steps:
step S102, according to each grid unit in the area to be processed, a central block and a plurality of peripheral blocks surrounding the central block in the area to be processed are determined.
Optionally, according to each grid unit in the region to be processed, performing an equal division process on the region to be processed to obtain a plurality of sub-blocks with equal sizes.
In this embodiment, the sub-block located at the central position in each sub-block can be determined as the central block, and each sub-block surrounding the central block can be determined as each peripheral block.
Step S104, determining the crosstalk compensation value of the area to be processed according to the white channel values of the central block and the peripheral blocks, and determining the compensation threshold value of the area to be processed according to the non-white channel values of the central block and the peripheral blocks.
Optionally, the white channel mean values of the central block and the peripheral blocks may be determined according to the white channel values of the central block and the peripheral blocks, the similarity weight value and the gradient weight value may be determined according to the white channel mean values of the central block and the peripheral blocks, and the crosstalk compensation value of the to-be-processed area may be determined according to the similarity weight value and the gradient weight value.
Optionally, the non-white channel mean value of the central block and each peripheral block may be determined according to the non-white channel values of the central block and each peripheral block, the saturation of the region to be processed may be determined according to the non-white channel mean value of the central block and each peripheral block, and the compensation threshold of the region to be processed may be determined according to the saturation and the given maximum crosstalk strength value.
In this embodiment, the non-white channel value includes one of a red channel value (R value), a green channel value (G value), and a blue channel value (B value).
And step S106, selectively using the crosstalk compensation value to compensate the white channel value of the central block according to the comparison result of the crosstalk compensation value and the compensation threshold value.
In this embodiment, if the crosstalk compensation value is greater than the compensation threshold value, which represents that the central block of the to-be-processed area is texture details, the compensation process is not performed, and if the crosstalk compensation value is less than or equal to the compensation threshold value, the compensation process is performed on the white channel value of the central block by using the crosstalk compensation value.
In summary, the image crosstalk compensation method of the present embodiment can perform compensation on the white channel value of the central block of the to-be-processed area, so as to effectively solve the problem in the prior art that imbalance caused by crosstalk mainly occurs in the white channel, and improve the imaging quality.
Furthermore, in the image crosstalk method of the present embodiment, the crosstalk compensation value and the compensation threshold are compared to selectively perform the compensation processing of the white channel, so that the texture details in the image can be effectively retained, and the image degradation caused by the crosstalk compensation is avoided.
Fig. 2 is a processing flow chart of an image crosstalk compensation method according to another exemplary embodiment of the present application, which can be implemented as a front-end implementation of S102 and mainly includes the following steps:
step S202, identifying a minimum filter bank of the image capturing device for acquiring the image to be processed, and determining each grid cell in the image to be processed according to the minimum filter bank.
Optionally, the image capture device comprises a camera.
In this embodiment, the image to be processed includes a plurality of tiled minimum repetition units, and each minimum repetition unit includes a minimum filter bank.
Optionally, the minimum filter bank is a square array (also referred to as a square matrix) composed of a plurality of grid cells.
Specifically, each grid cell in the minimum filter bank is arranged in rows and columns, and the number of rows and columns of the grid cells is the same to form a square filter array. Illustratively, the minimum filter bank may include, but is not limited to, a 6 × 6 filter bank, an 8 × 8 filter bank, and the like.
In this embodiment, each minimum filter group includes 4 different photoresponse filters, namely, a total transmittance filter (W), a red filter (R), a green filter (G), and a blue filter (B). Each photoresponse filter corresponds to one grid unit in the image to be processed respectively.
Specifically, in the case where the minimum filter bank is an 8 × 8 filter bank, the determined channel color distribution of each grid cell in the image to be processed according to the minimum filter bank is as shown in fig. 3A; in the case where the minimum filter bank is a 6 × 6 filter bank, the determined channel color distribution of each grid cell in the image to be processed according to the minimum filter bank is as shown in fig. 4A.
Step S204, according to the first unit number of the grid unit covered by the minimum filter bank, determining the second unit number of the grid unit covered by the sliding window and the sliding step length of the sliding window.
In this embodiment, the sliding window determined based on the minimum filter bank is also represented as a square array composed of a plurality of grid cells, each grid cell in the sliding window is arranged in rows and columns, and the number of rows and columns of the grid cells is the same.
In this embodiment, the number of the second units is 2.25 times that of the first units, that is, the number of rows or columns of the grid unit covered by the sliding window is 1.5 times that of the grid unit covered by the minimum filter bank, and the side length of the sliding window may also be regarded as 1.5 times that of the minimum filter bank (the size of each grid unit is the same).
For example, if the minimum filter bank is an 8 × 8 filter bank, i.e., the minimum filter bank covers 64 grid cells of 8 × 8, 144 grid cells of 12 × 12 may be covered based on the sliding window determined by the minimum filter bank; if the minimum filter bank is a 6 × 6 filter bank, i.e. the minimum filter bank covers 36 grid cells of 6 × 6, 81 grid cells of 9 × 9 may be covered based on the sliding window determined by the minimum filter bank.
In this embodiment, the number of rows and columns of the grid unit covered by the minimum filter bank is even.
In this embodiment, the sliding step of the sliding window is 1/3 of the side length of the sliding window.
Exemplarily, if the side length of the sliding window is a length of 12 grid units, the sliding step is a length of 4 grid units; if the side length of the sliding window is the length of 9 grid units, the sliding step length is the length of 3 grid units.
Step S206, using the sliding window, repeatedly executing the sliding processing satisfying the sliding step length on each grid unit of the image to be processed, and acquiring each area to be processed corresponding to each sliding processing.
Specifically, the sliding window may be utilized to perform sliding processing on each grid unit of the image to be processed for multiple times, and a sliding distance of each sliding processing performed is matched with a sliding step length, so as to obtain a region to be processed corresponding to each sliding processing, wherein each region to be processed is covered with grid units of the second number of units.
For example, if the side length of the sliding window is 12 grid units long and the sliding step is 4 grid units long, the area to be processed as shown in fig. 3B can be obtained, which is covered with 12 × 12 grid units.
For example, if the side length of the sliding window is 9 grid units long and the sliding step is 3 grid units long, the area to be processed as shown in fig. 4B, which is covered with 9 × 9 grid units, can be obtained.
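By way of illustration only, the sliding-window decomposition described above can be sketched in Python as follows; the function name, the use of NumPy, and the 24 x 24 example mosaic are illustrative assumptions rather than part of the described method.

```python
import numpy as np

def to_be_processed_regions(image, min_bank_side):
    """Yield square regions swept by a sliding window whose side is 1.5 times
    the minimum filter bank side and whose step is 1/3 of the window side."""
    win = min_bank_side * 3 // 2        # e.g. 8 -> 12, 6 -> 9
    step = win // 3                     # e.g. 12 -> 4, 9 -> 3
    h, w = image.shape[:2]
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            yield top, left, image[top:top + win, left:left + win]

# Example: a 24 x 24 mosaic tiled from 8 x 8 minimum filter banks yields 12 x 12 regions.
mosaic = np.arange(24 * 24).reshape(24, 24)
regions = list(to_be_processed_regions(mosaic, min_bank_side=8))
print(len(regions), regions[0][2].shape)    # 16 regions of shape (12, 12)
```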
In summary, in the image crosstalk compensation method of the embodiment, the window size and the sliding step length of the sliding window are determined according to the minimum filter bank of the image capturing device for acquiring the image to be processed, so that the grid unit covered in the area to be processed determined based on the sliding window can be adapted to the grid unit covered by the minimum filter bank, which is beneficial to improving the effect of image crosstalk compensation processing.
Fig. 5 is a flowchart illustrating a method for compensating image crosstalk according to another exemplary embodiment of the present application. This embodiment is a specific implementation of the step S102, which mainly includes:
step S502, the sliding window is utilized to execute sliding processing on each grid unit of the image to be processed based on the current position and the sliding step length, and the area to be processed in the image to be processed is determined.
Exemplarily, in the case that the side length of the sliding window is a length of 12 grid units and the sliding step is a length of 4 grid units, the area to be processed covered with 12 × 12 grid units may be obtained (as shown in fig. 3B).
Illustratively, in the case where the side length of the sliding window is a length of 9 grid cells and the sliding step is a length of 3 grid cells, an area to be processed covered with 9 × 9 grid cells can be obtained (as shown in fig. 4B).
Step S504, performing nine-equal division processing on the to-be-processed area according to each grid unit in the to-be-processed area, and obtaining nine sub-blocks of the to-be-processed area.
In the present embodiment, the region to be processed is divided into nine sub-blocks of the same size, arranged as a three-by-three (nine-square) grid (refer to fig. 6A).
Exemplarily, in the case where the area to be processed is covered with 12 × 12 grid cells, each sub-block is covered with 4 × 4 grid cells (refer to fig. 6B); in the case where the area to be processed is covered with 9 × 9 grid cells, each sub-block is covered with 3 × 3 grid cells (refer to fig. 6C).
In the embodiments shown in fig. 6B and 6C, W corresponds to the total transmittance (white) filter and characterizes white channel values, and C corresponds to a red (R), green (G) or blue (B) filter and characterizes non-white channel values (i.e., any one of red, green and blue).
In step S506, one of the sub-blocks located at the center is determined as a central block, and eight sub-blocks surrounding the central block are determined as peripheral blocks.
Specifically, referring to fig. 6A, the sub-block 22 located at the center position may be determined as a center block, and the eight sub-blocks 11, 21, 31, 12, 32, 13, 23, 33 surrounding the center block 22 may be determined as peripheral blocks.
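By way of illustration only, the nine-way split into one central block and eight peripheral blocks can be sketched as follows; the function name and the (row, column) block indexing are illustrative assumptions.

```python
import numpy as np

def split_nine(region):
    """Split a square region (side divisible by 3) into a 3x3 grid of equal
    sub-blocks; return the central block and the eight peripheral blocks."""
    side = region.shape[0]
    assert side % 3 == 0 and region.shape[1] == side
    s = side // 3
    blocks = {(i, j): region[i * s:(i + 1) * s, j * s:(j + 1) * s]
              for i in range(3) for j in range(3)}
    center = blocks[(1, 1)]                                   # the central block
    peripheral = {k: v for k, v in blocks.items() if k != (1, 1)}
    return center, peripheral

region = np.arange(12 * 12).reshape(12, 12)
center, peripheral = split_nine(region)
print(center.shape, len(peripheral))   # (4, 4) and 8 peripheral blocks
```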
In step S508, the channel color of each grid cell covered by the central block is identified.
In this embodiment, the number of grid cells of the white channel (W channel) and the number of grid cells of the non-white channels (R channel, G channel, B channel) in each sub-block as the central block or the peripheral block may be equal or different.
Illustratively, in the embodiment shown in fig. 6B, each sub-block contains 8 grid cells of the white channel and 8 grid cells of non-white channels; in the embodiment shown in fig. 6C, each sub-block contains 9 grid cells, of which 4 may be grid cells of the white channel and 5 grid cells of non-white channels.
Note that, within a single sub-block, the non-white grid cells necessarily share the same channel color (refer to fig. 3B or fig. 4B).
Specifically, the channel color of the grid cell in the center patch corresponding to the non-white channel may be identified.
Step S510, determining whether the central block includes a green grid cell, if yes, returning to step S502, otherwise, executing step S104.
Specifically, if the central block is determined to cover green grid cells, the image crosstalk compensation process is not performed on the current region to be processed, and the process returns to step S502 to obtain the next region to be processed in the image to be processed, until every grid cell in the image to be processed has been covered by the sliding window at least once; conversely, if the central block is determined not to cover green grid cells, i.e., it covers red or blue grid cells, step S104 is performed to carry out the image crosstalk compensation process on the current region to be processed.
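By way of illustration only, the green-cell check that decides whether step S104 is executed can be sketched as follows; the single-letter color encoding and the function name are illustrative assumptions.

```python
import numpy as np

# 'colors' holds the channel color of every grid cell of the central block,
# e.g. 'W', 'R', 'G' or 'B'; the encoding is illustrative, not the patent's.
def should_compensate(colors):
    """Skip regions whose central block carries green non-white cells;
    compensate only when the central block carries red or blue cells."""
    non_white = {c for c in np.asarray(colors).ravel() if c != 'W'}
    return bool(non_white) and non_white.issubset({'R', 'B'})

print(should_compensate([['W', 'R'], ['R', 'W']]))   # True  -> proceed to step S104
print(should_compensate([['W', 'G'], ['G', 'W']]))   # False -> slide to the next region
```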
In summary, the present embodiment can improve the effect of the image crosstalk processing by determining the central block and the peripheral blocks of the to-be-processed area, and selectively performing the image crosstalk compensation processing on the to-be-processed area by identifying the channel color of the grid cell covered by the central block.
Fig. 7 shows a process flow diagram of an image crosstalk compensation method according to another exemplary embodiment of the present application. This embodiment shows a specific implementation of determining the crosstalk compensation value of the to-be-processed area in step S104, which mainly includes:
step S702 determines the white channel mean of the central block and each peripheral block according to the white channel values of the central block and each peripheral block.
Optionally, for any one of the central block and the peripheral blocks, the white channel mean of the sub-block is determined according to each white channel value in the sub-block.
Alternatively, the white channel mean of the sub-block may be calculated using the following equation 1:
$\bar{W}_{ij} = \frac{1}{M}\sum_{z=1}^{M} W_z$ (Equation 1)

In Equation 1, $\bar{W}_{ij}$ represents the white channel mean of the sub-block in the i-th row and j-th column of the region to be processed, $W_z$ is the z-th white channel value in the sub-block, and M is the number of white channel values contained in the sub-block.
For example, in the embodiment shown in fig. 6B, each sub-block includes 8 white channel values (i.e., W1 to W8), and then M takes a value of 8; in the embodiment shown in fig. 6C, each sub-block includes 4 white channel values (i.e., W1 to W4), and then M is 4.
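By way of illustration only, Equation 1 can be sketched as follows; the boolean mask marking W cells and the checkerboard example layout are illustrative assumptions.

```python
import numpy as np

def white_channel_mean(block, w_mask):
    """Equation 1: average of the M white channel values in a sub-block.
    'w_mask' marks which grid cells of the block are W cells."""
    values = block[w_mask]
    return values.sum() / values.size      # (1/M) * sum_z W_z

# Illustrative 4x4 sub-block with a checkerboard W layout (8 W cells, M = 8).
block = np.arange(16, dtype=float).reshape(4, 4)
w_mask = (np.indices((4, 4)).sum(axis=0) % 2 == 0)
print(white_channel_mean(block, w_mask))   # 7.5 for this example block
```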
Step S704, determining adjacent edge blocks adjacent to the side edges of the central block in the peripheral blocks, and determining similarity weight values of the adjacent edge blocks according to the white channel mean values of the central block and the adjacent edge blocks.
Referring to fig. 6A, in the present embodiment, four peripheral blocks 12, 21, 23, 32 adjacent to four sides of the central block 22 may be determined as adjacent-side blocks.
Optionally, the compensation reference value of each adjacent side block may be determined according to a difference between the white channel mean value of the central block and the white channel mean value of each adjacent side block, and the similarity weight value of each adjacent side block may be determined according to the compensation reference value of each adjacent side block and a given similarity weight coefficient.
In this embodiment, the compensation reference value of each adjacent-edge block can be determined from the difference between the white channel mean of the central block and the white channel mean of that adjacent-edge block, using Equations 2 to 5:

$\delta_1 = \bar{W}_{22} - \bar{W}_{12}$ (Equation 2)

$\delta_2 = \bar{W}_{22} - \bar{W}_{21}$ (Equation 3)

$\delta_3 = \bar{W}_{22} - \bar{W}_{23}$ (Equation 4)

$\delta_4 = \bar{W}_{22} - \bar{W}_{32}$ (Equation 5)

In Equations 2 to 5, $\delta_1$, $\delta_2$, $\delta_3$ and $\delta_4$ represent the compensation reference values of the adjacent-edge blocks 12, 21, 23 and 32, respectively, and $\bar{W}_{22}$ is the white channel mean of the central block 22 (refer to fig. 6A).
In this embodiment, the similarity weight value of each adjacent-edge block can be determined from its compensation reference value and a given similarity weight coefficient according to Equation 6, in which weight1_i represents the similarity weight value of the i-th adjacent-edge block, δ_i is the compensation reference value of the i-th adjacent-edge block (i.e., δ_1, δ_2, δ_3, δ_4 above), and σ_1 is an adjustable similarity weight coefficient.
Step S706, determining a gradient weight value of the to-be-processed region according to the white channel mean value of each peripheral block.
Optionally, the horizontal gradient value and the vertical gradient value of the region to be processed may be determined according to the position distribution of each peripheral block in the region to be processed and the white channel average value of each peripheral block, and the gradient weight value of the region to be processed may be determined according to a given gradient weight coefficient and a greater one of the horizontal gradient value and the vertical gradient value.
In this embodiment, Equation 7 determines the horizontal gradient value of the region to be processed from the white channel means of the peripheral blocks located in the left column and in the right column of the region to be processed. In Equation 7, grad_hor represents the horizontal gradient value of the region to be processed, and its inputs are the white channel means of the left-column peripheral blocks and of the right-column peripheral blocks (refer to fig. 6A).

Similarly, Equation 8 determines the vertical gradient value of the region to be processed from the white channel means of the peripheral blocks located in the top row and in the bottom row of the region to be processed. In Equation 8, grad_ver represents the vertical gradient value of the region to be processed, and its inputs are the white channel means of the top-row peripheral blocks and of the bottom-row peripheral blocks (refer to fig. 6A).

In this embodiment, Equation 9 determines the gradient weight value of the region to be processed from a given gradient weight coefficient and the larger of the horizontal and vertical gradient values. In Equation 9, weight2 represents the gradient weight value of the region to be processed, grad represents the larger of the horizontal gradient value and the vertical gradient value, i.e., grad = max(grad_hor, grad_ver), and σ_2 is an adjustable gradient weight coefficient.
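By way of illustration only, Equations 7 to 9 can be sketched as follows; taking each gradient as the absolute difference of column (row) means and the Gaussian form of the gradient weight are assumptions, since the text fixes only which peripheral blocks enter each gradient and that the larger gradient is used.

```python
import numpy as np

def gradient_weight(w_bar, sigma2):
    """Equations 7-9 (sketch): w_bar is the 3x3 array of white channel means of
    the nine sub-blocks. The horizontal (vertical) gradient is taken here as the
    absolute difference between the means of the left and right columns (top and
    bottom rows) of peripheral blocks; the gradient-to-weight mapping is ASSUMED
    to be Gaussian."""
    grad_hor = abs(w_bar[:, 0].mean() - w_bar[:, 2].mean())   # left vs right column
    grad_ver = abs(w_bar[0, :].mean() - w_bar[2, :].mean())   # top vs bottom row
    grad = max(grad_hor, grad_ver)                            # grad = max(grad_hor, grad_ver)
    return np.exp(-(grad ** 2) / (sigma2 ** 2))               # assumed form of weight2

w_bar = np.array([[100., 101., 102.],
                  [100., 100., 103.],
                  [ 99., 100., 104.]])
print(gradient_weight(w_bar, sigma2=5.0))
```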
Step S708, determining a crosstalk compensation value of the to-be-processed area according to the similarity weight value of each adjacent edge block and the gradient weight value of the to-be-processed area.
Optionally, the first weighted value may be determined according to the similarity weighted value of each adjacent edge block, the gradient weighted value of the area to be processed, and the compensation reference value of each adjacent edge block, the second weighted value may be determined according to the similarity weighted value of each adjacent edge block and the gradient weighted value of the area to be processed, and the crosstalk compensation value of the area to be processed may be determined according to the first weighted value and the second weighted value.
In this embodiment, Equation 10 can be used to determine the crosstalk compensation value of the region to be processed from the similarity weight value of each adjacent-edge block, the gradient weight value of the region to be processed, and the compensation reference value of each adjacent-edge block:

$\delta = \frac{\sum_{i} \mathrm{weight1}_i \cdot \mathrm{weight2} \cdot \delta_i}{\sum_{i} \mathrm{weight1}_i \cdot \mathrm{weight2}}$ (Equation 10)

In Equation 10, δ represents the crosstalk compensation value of the region to be processed, weight1_i represents the similarity weight value of the i-th adjacent-edge block, weight2 represents the gradient weight value of the region to be processed, and δ_i is the compensation reference value of the i-th adjacent-edge block; the numerator corresponds to the first weighted value and the denominator to the second weighted value.
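By way of illustration only, the weighted-average reading of Equation 10 given above can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def crosstalk_compensation_value(deltas, weight1, weight2):
    """Equation 10 (sketch): weighted average of the compensation reference values,
    with per-block weights weight1_i * weight2; the numerator is the 'first weighted
    value' and the denominator the 'second weighted value'."""
    weights = np.asarray(weight1) * weight2
    first = float(np.sum(weights * np.asarray(deltas)))   # sum_i weight1_i * weight2 * delta_i
    second = float(np.sum(weights))                       # sum_i weight1_i * weight2
    return first / second

print(crosstalk_compensation_value([2.0, -1.0, -20.0, 1.0],
                                   [0.96, 0.99, 0.02, 0.99], weight2=0.64))
```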
In summary, in the image crosstalk compensation method of the embodiment, a manner of weighted average of the similarity weight value and the gradient weight value is adopted to determine the crosstalk compensation value of the region to be processed, so that the texture details in the image can be prevented from being compensated by mistake, rich texture details in the image can be effectively retained, and the problem of image degradation is avoided.
Fig. 8 is a flowchart illustrating a process of an image crosstalk compensation method according to another exemplary embodiment of the present application. This embodiment shows a specific implementation of determining the compensation threshold of the region to be processed in step S104, which mainly includes the following steps:
in step S802, the non-white channel mean value of the central block and each peripheral block is determined according to the non-white channel values of the central block and each peripheral block.
In this embodiment, the non-white channel values of the central block and the peripheral blocks include any one of a red channel value, a green channel value, and a blue channel value.
Optionally, for any one of the sub-blocks in the central block and the peripheral blocks, the non-white channel mean of the sub-block is determined according to each non-white channel value in the sub-block.
Alternatively, the non-white channel mean of the sub-block may be calculated using the following equation 11:
$\bar{C}_{ij} = \frac{1}{N}\sum_{z=1}^{N} C_z$ (Equation 11)

In Equation 11, $\bar{C}_{ij}$ represents the non-white channel mean of the sub-block in the i-th row and j-th column of the region to be processed, $C_z$ is the z-th non-white channel value in the sub-block, and N is the number of non-white channel values contained in the sub-block.
For example, in the embodiment shown in fig. 6B, each sub-block includes 8 non-white channel values (i.e., C1 to C8), and the value of N is 8; in the embodiment shown in fig. 6C, each sub-block includes 5 non-white channel values (i.e., C1 to C5), and N is 5.
Step S804, determining a red pixel mean value, a green pixel mean value, and a blue pixel mean value of the region to be processed according to the non-white channel mean values of the central block and the peripheral blocks.
Optionally, adjacent side blocks of the peripheral blocks adjacent to the sides of the central block may be determined, and corner blocks of the peripheral blocks adjacent to the corners of the central block may be determined.
Referring to fig. 6A, in the present embodiment, four peripheral blocks 12, 21, 23, 32 adjacent to four sides of the central block 22 may be determined as adjacent-side blocks, and four peripheral blocks 11, 13, 31, 33 adjacent to four corners of the central block 22 may be determined as corner blocks.
Alternatively, the red pixel mean, the green pixel mean, and the blue pixel mean of the region to be processed may be dynamically determined by determining the channel color of each grid cell in the central block.
In this embodiment, if the central block includes red grid cells, the red pixel mean of the region to be processed may be determined according to the non-white channel mean of the central block, the green pixel mean of the region to be processed may be determined according to the non-white channel means of the adjacent-edge blocks, and the blue pixel mean of the region to be processed may be determined according to the non-white channel means of the corner blocks.
Specifically, if the central block contains red grid cells, the red pixel mean of the region to be processed may be determined according to Equation 12, the green pixel mean according to Equation 13, and the blue pixel mean according to Equation 14:

$\bar{R} = \bar{C}_{22}$ (Equation 12)

$\bar{G} = \frac{1}{4}(\bar{C}_{12} + \bar{C}_{21} + \bar{C}_{23} + \bar{C}_{32})$ (Equation 13)

$\bar{B} = \frac{1}{4}(\bar{C}_{11} + \bar{C}_{13} + \bar{C}_{31} + \bar{C}_{33})$ (Equation 14)

In Equation 12, $\bar{R}$ represents the red pixel mean of the region to be processed and $\bar{C}_{22}$ represents the non-white channel mean of the central block. In Equation 13, $\bar{G}$ represents the green pixel mean of the region to be processed, determined from the non-white channel means of the adjacent-edge blocks. In Equation 14, $\bar{B}$ represents the blue pixel mean of the region to be processed, determined from the non-white channel means of the corner blocks (refer to fig. 6A).
In this embodiment, if the central block includes blue grid cells, the blue pixel mean of the region to be processed is determined according to the non-white channel mean of the central block, the green pixel mean of the region to be processed is determined according to the non-white channel means of the adjacent-edge blocks, and the red pixel mean of the region to be processed is determined according to the non-white channel means of the corner blocks.
Specifically, if the central block contains blue grid cells, the blue pixel mean of the region to be processed may be determined according to Equation 15, the green pixel mean according to Equation 16, and the red pixel mean according to Equation 17:

$\bar{B} = \bar{C}_{22}$ (Equation 15)

$\bar{G} = \frac{1}{4}(\bar{C}_{12} + \bar{C}_{21} + \bar{C}_{23} + \bar{C}_{32})$ (Equation 16)

$\bar{R} = \frac{1}{4}(\bar{C}_{11} + \bar{C}_{13} + \bar{C}_{31} + \bar{C}_{33})$ (Equation 17)

In Equation 15, $\bar{B}$ represents the blue pixel mean of the region to be processed and $\bar{C}_{22}$ represents the non-white channel mean of the central block. In Equation 16, $\bar{G}$ represents the green pixel mean of the region to be processed, determined from the non-white channel means of the adjacent-edge blocks. In Equation 17, $\bar{R}$ represents the red pixel mean of the region to be processed, determined from the non-white channel means of the corner blocks (refer to fig. 6A).
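By way of illustration only, Equations 12 to 17 can be sketched as follows; averaging the four adjacent-edge (corner) block means is an assumption consistent with the text, and the function name and dictionary output are illustrative.

```python
import numpy as np

def rgb_means(c_center, c_edges, c_corners, center_color):
    """Equations 12-17 (sketch): the central block's non-white channel mean gives the
    red (or blue) pixel mean, the adjacent-edge blocks give the green pixel mean, and
    the corner blocks give the blue (or red) pixel mean, depending on whether the
    central block carries red or blue cells."""
    edge_mean = float(np.mean(c_edges))      # adjacent-edge blocks 12, 21, 23, 32
    corner_mean = float(np.mean(c_corners))  # corner blocks 11, 13, 31, 33
    if center_color == 'R':
        return {'R': c_center, 'G': edge_mean, 'B': corner_mean}
    if center_color == 'B':
        return {'B': c_center, 'G': edge_mean, 'R': corner_mean}
    raise ValueError("central block must carry red or blue cells")

print(rgb_means(80.0, [120., 118., 121., 119.], [60., 61., 59., 62.], 'R'))
```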
Step S806, determining the saturation of the region to be processed according to the maximum value and the minimum value of the red pixel average value, the green pixel average value, and the blue pixel average value of the region to be processed.
Specifically, the saturation of the region to be processed may be calculated from the maximum value and the minimum value among the red pixel mean, the green pixel mean and the blue pixel mean of the region to be processed according to Equation 18. In Equation 18, sat represents the saturation of the region to be processed, and the other two quantities are the minimum and the maximum of the red pixel mean, the green pixel mean and the blue pixel mean of the region to be processed.
Step S808, determining a compensation threshold of the to-be-processed region according to the saturation of the to-be-processed region and the given maximum crosstalk intensity value.
In this embodiment, the following equation 19 can be used to determine the compensation threshold value of the region to be processed according to the saturation of the region to be processed and the given maximum crosstalk strength value:
Thre = sat · stren (Equation 19)

In Equation 19, Thre represents the compensation threshold of the region to be processed, sat represents the saturation of the region to be processed, and stren represents the maximum crosstalk strength value, which can be determined by calibration, i.e., the maximum crosstalk strength value can be adjusted according to the actually required crosstalk standard.
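By way of illustration only, Equations 18 and 19 can be sketched as follows; the (max - min) / max form of the saturation is an assumption, while the threshold follows Equation 19.

```python
def compensation_threshold(rgb, stren):
    """Equations 18-19 (sketch): saturation is taken here as (max - min) / max over
    the red/green/blue pixel means (assumed form of Eq. 18), and the compensation
    threshold is that saturation scaled by the calibrated maximum crosstalk strength."""
    c_max, c_min = max(rgb.values()), min(rgb.values())
    sat = (c_max - c_min) / c_max if c_max > 0 else 0.0   # assumed saturation form
    return sat * stren                                    # Thre = sat * stren (Eq. 19)

print(compensation_threshold({'R': 80.0, 'G': 119.5, 'B': 60.5}, stren=16.0))
```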
In summary, the image crosstalk compensation method of the present embodiment dynamically determines the compensation threshold of the to-be-processed area according to the non-white channel values of the central block and the peripheral blocks, and determines whether the compensation process needs to be performed by using the color-related dynamic threshold instead of the fixed threshold, so as to retain more color details in the image and improve the processing effect of the image crosstalk compensation.
Fig. 9 shows a process flow diagram of an image crosstalk compensation method according to another exemplary embodiment of the present application. This embodiment shows a specific implementation of the above step S106, which mainly includes the following steps:
step S902, comparing the crosstalk compensation value with a compensation threshold.
In particular, the crosstalk compensation value and the compensation threshold value can be compared in size.
In step S904, it is determined whether the crosstalk compensation value is greater than the compensation threshold value, if so, the process is terminated, and if not, step S906 is performed.
Specifically, if the crosstalk compensation value is greater than the compensation threshold (δ > Thre), it indicates that the current region to be processed is texture detail, and the image crosstalk compensation process is not required to be executed, and the process is ended; otherwise, if the crosstalk compensation value is less than or equal to the compensation threshold (δ ≦ Thre), step S906 is performed.
Step S906, determining an adaptive weight value of each white channel value according to the crosstalk compensation value and each white channel value in the central block.
In this embodiment, Equation 20 may be used to calculate the adaptive weight value of each white channel value from the crosstalk compensation value and each white channel value in the central block. In Equation 20, weight3_i represents the adaptive weight value of the i-th white channel value, δ represents the crosstalk compensation value, and W_i represents the i-th white channel value in the central block.
Step S908 is to determine a compensation channel value of each white channel value in the central block according to each white channel value in the central block, the adaptive weight value of each white channel value, and the crosstalk compensation value.
In this embodiment, the compensation channel value of each white channel value in the central block can be determined according to each white channel value in the central block, the adaptive weight value of each white channel value, and the crosstalk compensation value by using the following formula 21:
$W_i^{corr} = W_i + \mathrm{weight3}_i \cdot \delta$ (Equation 21)

In Equation 21, $W_i^{corr}$ represents the compensated channel value of the i-th white channel value, W_i denotes the i-th white channel value, weight3_i represents the adaptive weight value of the i-th white channel value, and δ represents the crosstalk compensation value.
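By way of illustration only, step S106 can be sketched as follows; the Gaussian form of the adaptive weight weight3_i is an assumption (the text fixes only that it depends on δ and W_i), while Equation 21 is applied as stated and sigma3 is an illustrative parameter.

```python
import numpy as np

def compensate_white_channel(w_values, delta, thre, sigma3=0.1):
    """Step S106 (sketch). Equation 21 is W_i_corr = W_i + weight3_i * delta; the
    per-pixel adaptive weight of Equation 20 is ASSUMED here to be a Gaussian of
    the relative compensation delta / W_i, purely for illustration."""
    w = np.asarray(w_values, dtype=float)
    if delta > thre:                        # texture detail: leave the block unchanged
        return w
    weight3 = np.exp(-((delta / np.maximum(w, 1e-6)) ** 2) / (sigma3 ** 2))  # assumed Eq. 20
    return w + weight3 * delta              # Equation 21

print(compensate_white_channel([100., 98., 102., 101.], delta=0.5, thre=7.9))
```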
In summary, in the image crosstalk compensation method of this embodiment, the adaptive weight value of each white pixel in the central block is calculated and the compensation value of each white channel value is determined adaptively, instead of applying a fixed compensation to every white channel value; this effectively avoids the influence of noise on the compensation values and improves the imaging quality.
Fig. 10 is a block diagram illustrating a configuration of an image crosstalk compensation apparatus according to an exemplary embodiment of the present application. As shown, the image crosstalk compensation apparatus 1000 of the present embodiment includes a block determining module 1002, a calculating module 1004, and a compensating module 1006.
The block determining module 1002 is configured to determine a central block in the area to be processed and a plurality of peripheral blocks surrounding the central block according to each grid unit in the area to be processed.
A calculating module 1004, configured to determine a crosstalk compensation value of the to-be-processed area according to the white channel values of the central block and each peripheral block, and determine a compensation threshold of the to-be-processed area according to the non-white channel values of the central block and each peripheral block.
A compensation module 1006, configured to selectively utilize the crosstalk compensation value to compensate the white channel value of the central block according to a comparison result between the crosstalk compensation value and the compensation threshold.
Optionally, the block determining module 1002 is further configured to: identifying a minimum filter bank of an image capturing device for acquiring an image to be processed, and determining each grid unit in the image to be processed according to the minimum filter bank; determining the second unit number of the grid units covered by a sliding window and the sliding step length of the sliding window according to the first unit number of the grid units covered by the minimum filter bank; and repeatedly executing sliding processing meeting the sliding step length on each grid unit of the image to be processed by utilizing the sliding window to obtain each area to be processed corresponding to each sliding processing.
Optionally, the minimum filter bank and the sliding window are respectively a square array composed of a plurality of grid cells; the second number of units is 2.25 times the first number of units; the sliding step length of the sliding window is 1/3 of the side length of the sliding window.
Optionally, the block determining module 1002 is further configured to: performing nine-equal-division processing on the area to be processed according to each grid unit in the area to be processed to obtain nine sub-blocks of the area to be processed; one of the subblocks located at the center position is determined as the center block, and eight subblocks surrounding the center block are determined as the peripheral blocks.
Optionally, the computing module 1004 is further configured to: identifying the channel color of each grid unit covered by the central block; and if it is identified that the central block covers a red or blue grid unit, determining the crosstalk compensation value of the area to be processed according to the white channel values of the central block and the peripheral blocks, and determining the compensation threshold value of the area to be processed according to the non-white channel values of the central block and the peripheral blocks.
Optionally, the computing module 1004 is further configured to: determining the white channel mean value of the central block and each peripheral block according to the white channel values of the central block and each peripheral block; determining adjacent edge blocks adjacent to the side edges of the central block in the peripheral blocks, and determining similarity weighted values of the adjacent edge blocks according to the white channel mean values of the central block and the adjacent edge blocks; determining the gradient weight value of the area to be processed according to the white channel mean value of each peripheral block; and determining a crosstalk compensation value of the area to be processed according to the similarity weight value of each adjacent edge block and the gradient weight value of the area to be processed.
Optionally, the computing module 1004 is further configured to: determining a compensation reference value of each adjacent side block according to the difference value of the white channel mean value of the central block and the white channel mean value of each adjacent side block; and determining the similarity weight value of each adjacent edge block according to the compensation reference value of each adjacent edge block and a given similarity weight coefficient.
Optionally, the computing module 1004 is further configured to: determining a horizontal gradient value and a vertical gradient value of the area to be processed according to the position distribution of each peripheral block in the area to be processed and the white channel mean value of each peripheral block; and determining the gradient weight value of the region to be processed according to the given gradient weight coefficient and the larger one of the horizontal gradient value and the vertical gradient value.
Optionally, the computing module 1004 is further configured to: determining a first weighted value according to the similarity weighted value of each adjacent edge block, the gradient weighted value of the area to be processed and the compensation reference value of each adjacent edge block; determining a second weighted value according to the similarity weighted value of each adjacent edge block and the gradient weighted value of the area to be processed; and determining a crosstalk compensation value of the area to be processed according to the first weighted value and the second weighted value.
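The exact weighting formulas are defined in the earlier part of the description and are not reproduced in this section; the sketch below therefore uses simple exponential forms purely as placeholders, to show how the similarity weights of the four edge-adjacent blocks, the gradient weight of the region, and the compensation reference values could be combined into a single crosstalk compensation value.

```python
import numpy as np

def crosstalk_compensation_value(center, neighbors, sim_coef, grad_coef):
    """Illustrative combination of similarity weights, a gradient weight and
    compensation reference values into the region's compensation value.

    center    : central block (2-D array of white channel values)
    neighbors : dict of the four edge-adjacent blocks: 'top', 'bottom',
                'left', 'right' (white channel values)
    sim_coef  : given similarity weight coefficient
    grad_coef : given gradient weight coefficient
    """
    c_mean = center.mean()
    means = {k: v.mean() for k, v in neighbors.items()}

    # compensation reference value: difference between the central white
    # mean and the white mean of each edge-adjacent block
    ref = {k: c_mean - m for k, m in means.items()}

    # similarity weight per edge-adjacent block (placeholder form)
    sim = {k: np.exp(-sim_coef * abs(r)) for k, r in ref.items()}

    # gradient weight from the larger of the horizontal and vertical
    # gradients of the peripheral white means (placeholder form)
    grad_h = abs(means['left'] - means['right'])
    grad_v = abs(means['top'] - means['bottom'])
    grad_w = np.exp(-grad_coef * max(grad_h, grad_v))

    # first weighted value: weights applied to the reference values;
    # second weighted value: the sum of the weights themselves
    first = sum(sim[k] * grad_w * ref[k] for k in ref)
    second = sum(sim[k] * grad_w for k in ref)
    return first / second if second > 0 else 0.0
```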
Optionally, the computing module 1004 is further configured to: determining a non-white channel mean value of the central block and each peripheral block according to the non-white channel values of the central block and each peripheral block, wherein the non-white channel values comprise one of a red channel value, a green channel value and a blue channel value; determining a red pixel mean value, a green pixel mean value and a blue pixel mean value of the area to be processed according to the non-white channel mean values of the central block and the peripheral blocks; determining the saturation of the area to be processed according to the maximum value and the minimum value in the red pixel mean value, the green pixel mean value and the blue pixel mean value of the area to be processed; and determining a compensation threshold value of the region to be processed according to the saturation of the region to be processed and the given maximum crosstalk intensity value.
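A small sketch of the threshold step. The saturation formula below, (max − min) / max over the three colour means, is a common definition assumed for the example; the description only states that the saturation is determined from the maximum and minimum of the three means, and that the threshold follows from the saturation and the given maximum crosstalk intensity value.

```python
def compensation_threshold(red_mean, green_mean, blue_mean, max_crosstalk):
    """Derive the region's compensation threshold from its colour saturation
    and the given maximum crosstalk intensity value (illustrative form)."""
    c_max = max(red_mean, green_mean, blue_mean)
    c_min = min(red_mean, green_mean, blue_mean)
    saturation = 0.0 if c_max == 0 else (c_max - c_min) / c_max
    # the more saturated the region, the smaller the allowed compensation
    return max_crosstalk * (1.0 - saturation)
```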
Optionally, the computing module 1004 is further configured to: determining adjacent edge blocks adjacent to the side edges of the central block in each peripheral block, and determining corner blocks adjacent to the corners of the central block in each peripheral block; determining the channel color of each grid unit in the central block; if the central block contains red grid units, determining the red pixel mean value of the area to be processed according to the non-white channel mean value of the central block, determining the green pixel mean value of the area to be processed according to the non-white channel mean value of each adjacent edge block, and determining the blue pixel mean value of the area to be processed according to the non-white channel mean value of each corner block; or if the central block contains blue grid units, determining the blue pixel mean value of the area to be processed according to the non-white channel mean value of the central block, determining the green pixel mean value of the area to be processed according to the non-white channel mean value of each adjacent edge block, and determining the red pixel mean value of the area to be processed according to the non-white channel mean value of each corner block.
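The assignment of the three colour means by block position can be sketched as follows; the block layout and naming are taken from the description above, while the plain-Python representation is an assumption of the example.

```python
def rgb_means(center_mean, edge_means, corner_means, center_has_red):
    """Map the non-white channel means onto red, green and blue means.

    center_mean    : non-white channel mean of the central block
    edge_means     : list of non-white means of the four edge-adjacent blocks
    corner_means   : list of non-white means of the four corner blocks
    center_has_red : True if the central block contains red grid units
    """
    green = sum(edge_means) / len(edge_means)
    corner = sum(corner_means) / len(corner_means)
    if center_has_red:
        # red sits in the central block, blue in the corner blocks
        return center_mean, green, corner
    # otherwise the central block is blue and the corner blocks carry red
    return corner, green, center_mean
```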
Optionally, the compensation module 1006 is further configured to: comparing the crosstalk compensation value with the compensation threshold, if the crosstalk compensation value is less than or equal to the compensation threshold, determining an adaptive weight value of each white channel value according to the crosstalk compensation value and each white channel value in the central block, and determining a compensation channel value of each white channel value in the central block according to each white channel value in the central block, the adaptive weight value of each white channel value, and the crosstalk compensation value; and if the crosstalk compensation value is larger than the compensation threshold value, not executing the compensation processing of each white channel value in the central block.
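Tying the pieces together, a hedged sketch of the selective compensation: the adaptive weights are assumed to be produced by the preceding adaptive-weight step, and no compensation is applied when the crosstalk compensation value exceeds the threshold.

```python
import numpy as np

def selectively_compensate(center_white, adaptive_weights, delta, threshold):
    """Compensate the central block's white values only when the crosstalk
    compensation value does not exceed the compensation threshold."""
    center_white = np.asarray(center_white, dtype=np.float64)
    if delta > threshold:
        return center_white               # compensation is not performed
    weights = np.asarray(adaptive_weights, dtype=np.float64)
    # formula 21: W_i_corr = W_i + weight3_i * delta
    return center_white + weights * delta
```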
The image crosstalk compensation apparatus provided in the embodiment of the present invention corresponds to the image crosstalk compensation method provided in the foregoing embodiments of the present invention; for other details, reference may be made to the description of the image crosstalk compensation method in those embodiments, which is not repeated here.
Another embodiment of the present invention provides an electronic device, including a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus.
Fig. 11 is a block diagram of an electronic device according to an exemplary embodiment of the present invention. As shown in the drawing, the electronic device 1100 of this embodiment may include a processor 1102, a communication interface 1104, and a memory 1106.
The processor 1102, communication interface 1104, and memory 1106 may communicate with one another via a communication bus 1108.
The communication interface 1104 is used for communication with other electronic devices such as a terminal device or a server.
The processor 1102 is configured to execute the computer program 1110, and may specifically perform the relevant steps of the method embodiments described above, that is, the steps of the image crosstalk compensation method described in the foregoing embodiments.
In particular, the computer program 1110 may comprise program code comprising computer operating instructions.
The processor 1102 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The electronic device may include one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 1106 is used for storing the computer program 1110. The memory 1106 may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk storage.
Another embodiment of the present invention provides a computer storage medium, on which a computer program is stored, which when executed by a processor, can implement the image crosstalk compensation method described in the above embodiments.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present invention may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present invention.
The above-described method according to an embodiment of the present invention may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, a RAM, a floppy disk, a hard disk, or a magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded through a network to be stored in a local recording medium, so that the method described herein can be processed by such software stored on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or an FPGA. It is understood that a computer, a processor, a microprocessor controller, or programmable hardware includes a memory component (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, the processor, or the hardware, implements the image crosstalk compensation method described herein. Further, when a general-purpose computer accesses code for implementing the image crosstalk compensation method shown herein, execution of the code converts the general-purpose computer into a special-purpose computer for performing that method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present invention.
Thus, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
It should be noted that, without conflict, the embodiments and/or technical features in the embodiments described in the present application may be arbitrarily combined with each other, and the technical solutions obtained after the combination also fall within the protection scope of the present application. All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (15)

1. An image crosstalk compensation method, comprising:
determining a central block and a plurality of peripheral blocks surrounding the central block in a region to be processed according to each grid unit in the region to be processed;
determining a crosstalk compensation value of the area to be processed according to the white channel values of the central block and the peripheral blocks, and determining a compensation threshold value of the area to be processed according to the non-white channel values of the central block and the peripheral blocks;
and selectively utilizing the crosstalk compensation value to compensate the white channel value of the central block according to the comparison result of the crosstalk compensation value and the compensation threshold value.
2. The method according to claim 1, characterized in that the area to be treated is obtained by:
identifying a minimum filter bank of an image capturing device for acquiring an image to be processed, and determining each grid unit in the image to be processed according to the minimum filter bank;
determining a second unit number of the grid unit covered by a sliding window and a sliding step length of the sliding window according to the first unit number of the grid unit covered by the minimum filter bank;
and repeatedly executing sliding processing meeting the sliding step length on each grid unit of the image to be processed by utilizing the sliding window to obtain each area to be processed corresponding to each sliding processing.
3. The method of claim 2,
the minimum filter bank and the sliding window are respectively a square array consisting of a plurality of grid units;
the second number of units is 2.25 times the first number of units;
the sliding step length of the sliding window is 1/3 of the side length of the sliding window.
4. The method according to any one of claims 1 to 3, wherein the determining a central block and a plurality of peripheral blocks surrounding the central block in the area to be processed according to each grid cell in the area to be processed comprises:
performing nine-equal-division processing on the area to be processed according to each grid unit in the area to be processed to obtain nine sub-blocks of the area to be processed;
one of the subblocks located at the center position is determined as the center block, and eight subblocks surrounding the center block are determined as peripheral blocks.
5. The method of claim 4, further comprising:
identifying the channel color of each grid unit covered by the central block;
and if it is identified that the central block covers a red or blue grid unit, determining the crosstalk compensation value of the area to be processed according to the white channel values of the central block and the peripheral blocks, and determining the compensation threshold value of the area to be processed according to the non-white channel values of the central block and the peripheral blocks.
6. The method according to claim 1 or 5, wherein the determining the crosstalk compensation value of the area to be processed according to the white channel values of the central block and each peripheral block comprises:
determining the white channel mean value of the central block and each peripheral block according to the white channel values of the central block and each peripheral block;
determining adjacent edge blocks adjacent to the side edges of the central block in the peripheral blocks, and determining similarity weighted values of the adjacent edge blocks according to the white channel mean values of the central block and the adjacent edge blocks;
determining the gradient weight value of the area to be processed according to the white channel mean value of each peripheral block;
and determining a crosstalk compensation value of the area to be processed according to the similarity weight value of each adjacent edge block and the gradient weight value of the area to be processed.
7. The method of claim 6, wherein determining the similarity weight value of each neighboring block according to the white channel mean of the center block and each neighboring block comprises:
determining a compensation reference value of each adjacent side block according to the difference value of the white channel mean value of the central block and the white channel mean value of each adjacent side block;
and determining the similarity weight value of each adjacent edge block according to the compensation reference value of each adjacent edge block and a given similarity weight coefficient.
8. The method of claim 7, wherein the determining the gradient weight value of the region to be processed according to the white channel mean of each peripheral block comprises:
determining a horizontal gradient value and a vertical gradient value of the area to be processed according to the position distribution of each peripheral block in the area to be processed and the white channel mean value of each peripheral block;
and determining the gradient weight value of the region to be processed according to the given gradient weight coefficient and the larger one of the horizontal gradient value and the vertical gradient value.
9. The method of claim 8, wherein determining the crosstalk compensation value of the to-be-processed area according to the similarity weight value of each neighboring edge block and the gradient weight value of the to-be-processed area comprises:
determining a first weighted value according to the similarity weighted value of each adjacent edge block, the gradient weighted value of the area to be processed and the compensation reference value of each adjacent edge block;
determining a second weighted value according to the similarity weighted value of each adjacent edge block and the gradient weighted value of the area to be processed;
and determining a crosstalk compensation value of the area to be processed according to the first weighted value and the second weighted value.
10. The method according to claim 1 or 5, wherein the determining the compensation threshold of the region to be processed according to the non-white channel values of the central block and each peripheral block comprises:
determining a non-white channel mean value of the central block and each peripheral block according to the non-white channel values of the central block and each peripheral block, wherein the non-white channel values comprise one of a red channel value, a green channel value and a blue channel value;
determining a red pixel mean value, a green pixel mean value and a blue pixel mean value of the area to be processed according to the non-white channel mean values of the central block and the peripheral blocks;
determining the saturation of the area to be processed according to the maximum value and the minimum value in the red pixel mean value, the green pixel mean value and the blue pixel mean value of the area to be processed;
and determining a compensation threshold value of the region to be processed according to the saturation of the region to be processed and the given maximum crosstalk intensity value.
11. The method of claim 10, wherein determining the red, green, and blue pixel averages of the area to be processed according to the non-white channel averages of the central and peripheral blocks comprises:
determining adjacent side blocks adjacent to the side edges of the central block in each peripheral block, and determining corner blocks adjacent to the corners of the central block in each peripheral block;
determining a channel color for each grid cell in the center block,
if the central block comprises a red grid unit, determining a red pixel mean value of the area to be processed according to the non-white channel mean value of the central block, determining a green pixel mean value of the area to be processed according to the non-white channel mean value of each adjacent side block, and determining a blue pixel mean value of the area to be processed according to the non-white channel mean value of each top corner block; or alternatively
If the central block comprises blue grid cells, determining a blue pixel mean value of the area to be processed according to the non-white channel mean value of the central block, determining a green pixel mean value of the area to be processed according to the non-white channel mean value of each adjacent side block, and determining a red pixel mean value of the area to be processed according to the non-white channel mean value of each top corner block.
12. The method of claim 1 or 5, wherein the selectively compensating the white channel value of the center block using the crosstalk compensation value according to the comparison of the crosstalk compensation value and the compensation threshold comprises:
comparing the crosstalk compensation value with the compensation threshold value,
if the crosstalk compensation value is smaller than or equal to the compensation threshold value, determining an adaptive weight value of each white channel value according to the crosstalk compensation value and each white channel value in the central block, and determining a compensation channel value of each white channel value in the central block according to each white channel value in the central block, the adaptive weight value of each white channel value and the crosstalk compensation value;
and if the crosstalk compensation value is larger than the compensation threshold value, not executing the compensation processing of each white channel value in the central block.
13. An image crosstalk compensation apparatus, comprising:
the block determining module is used for determining a central block and a plurality of peripheral blocks surrounding the central block in the area to be processed according to each grid unit in the area to be processed;
the computing module is used for determining a crosstalk compensation value of the area to be processed according to the white channel values of the central block and the peripheral blocks, and determining a compensation threshold value of the area to be processed according to the non-white channel values of the central block and the peripheral blocks;
and the compensation module is used for selectively utilizing the crosstalk compensation value to compensate the white channel value of the central block according to the comparison result of the crosstalk compensation value and the compensation threshold value.
14. An electronic device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the image crosstalk compensation method according to any one of claims 1 to 12.
15. A computer storage medium, having a computer program stored thereon, which, when executed by a processor, implements the image crosstalk compensation method according to any one of claims 1 to 12.
CN202211088046.4A 2022-09-07 2022-09-07 Image crosstalk compensation method and device, electronic equipment and storage medium Pending CN115643490A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211088046.4A CN115643490A (en) 2022-09-07 2022-09-07 Image crosstalk compensation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211088046.4A CN115643490A (en) 2022-09-07 2022-09-07 Image crosstalk compensation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115643490A true CN115643490A (en) 2023-01-24

Family

ID=84939377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211088046.4A Pending CN115643490A (en) 2022-09-07 2022-09-07 Image crosstalk compensation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115643490A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination