CN112950635A - Gray point detection method, gray point detection device, electronic device, and storage medium


Info

Publication number: CN112950635A
Application number: CN202110456056.8A
Authority: CN (China)
Prior art keywords: gray, processed, points, image, frames
Legal status: Withdrawn
Other languages: Chinese (zh)
Inventor: 吴晨
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/80 - Geometric correction
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a gray point detection method, a gray point detection device, an electronic device, and a storage medium. The gray point detection method comprises the following steps: determining gray points to be processed in at least two frames of original images, wherein the at least two frames of original images are generated by different image sensors and each image sensor can receive light rays in the same wavelength band; determining a calibration gray point of each image sensor under a preset light source; determining a to-be-processed confidence of the gray points to be processed in each frame of original image according to the calibration gray points; fusing the to-be-processed confidences of the at least two frames of original images to determine a target confidence of the gray points to be processed; and taking the gray points to be processed whose target confidence is greater than a preset confidence as target gray points. With the gray point detection method, the gray point detection device, the electronic device, and the storage medium, the target gray points can be determined relatively quickly and accurately by comparing the target confidence with the preset confidence.

Description

Gray point detection method, gray point detection device, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a gray point detection method, a gray point detection device, an electronic device, and a storage medium.
Background
In the related art, white balance correction is performed on an image based on gray points, and if a gray point detection result is incorrect, the effect of image correction is directly reduced.
Disclosure of Invention
The embodiment of the application provides a gray point detection method, a gray point detection device, electronic equipment and a storage medium.
The gray point detection method of the embodiment of the application comprises the following steps: determining gray points to be processed in at least two frames of original images, wherein the at least two frames of original images are respectively generated by different image sensors, and each image sensor can receive light rays with the same wave band; determining a calibration gray point of the image sensor under a preset light source; determining the confidence coefficient to be processed of the gray points to be processed in each frame of the original image according to the calibrated gray points; fusing the confidence coefficients to be processed of the at least two frames of original images to determine a target confidence coefficient of the gray points to be processed; and taking the gray points to be processed with the target confidence degrees larger than the preset confidence degrees as target gray points.
The gray point detection device comprises a first determination module, a calibration module, a second determination module, a fusion module and a screening module. The first determining module is used for determining gray points to be processed in at least two frames of original images, the at least two frames of original images are respectively generated by different image sensors, and each image sensor can receive light rays with the same wave band. The calibration module is used for determining a calibration gray point of the image sensor under a preset light source. And the second determining module is used for determining the confidence coefficient to be processed of the gray point to be processed in each frame of the original image according to the calibrated gray point. The fusion module is used for fusing the confidence coefficients to be processed of the at least two frames of original images to determine a target confidence coefficient of the gray points to be processed. And the screening module is used for taking the gray points to be processed with the target confidence degrees larger than the preset confidence degrees as target gray points.
The electronic device of embodiments of the present application includes one or more processors and memory. The memory stores a computer program. The computer program, when executed by the processor, implements the steps of the gray point detection method described in the above embodiments.
The computer-readable storage medium of the embodiments of the present application stores a computer program which, when executed by a processor, implements the steps of the gray point detection method of the above embodiments.
According to the gray point detection method, the gray point detection device, the electronic equipment and the storage medium, the calibration gray points of the image sensor under the preset light source are predetermined, the confidence coefficient to be processed of the detected gray points to be processed is determined by utilizing the calibration gray points, the target confidence coefficient is further determined according to the confidence coefficient to be processed, and the target gray points can be determined rapidly and accurately by comparing the target confidence coefficient with the preset confidence coefficient.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a gray point detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of a gray point detection device according to an embodiment of the present application;
FIG. 3 is a schematic view of an electronic device of an embodiment of the present application;
FIG. 4 is a schematic flow chart of a gray point detection method according to an embodiment of the present disclosure;
FIG. 5 is a schematic view of a gray point detection device according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a gray point detection method according to an embodiment of the present disclosure;
FIG. 7 is a schematic view of a gray point detection device according to an embodiment of the present application;
FIG. 8 is a schematic flow chart diagram of a gray point detection method according to an embodiment of the present application;
FIG. 9 is a schematic view of a gray point detection device according to an embodiment of the present application;
FIG. 10 is a schematic flow chart diagram of a gray point detection method according to an embodiment of the present application;
FIG. 11 is a schematic view of a gray point detection device according to an embodiment of the present application;
FIG. 12 is a schematic flow chart diagram of a gray point detection method according to an embodiment of the present application;
FIG. 13 is a schematic view of a gray point detection device according to an embodiment of the present application;
FIG. 14 is a schematic flow chart diagram of a gray point detection method according to an embodiment of the present application;
FIG. 15 is a schematic view of a gray point detection device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative and are only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In the description of the embodiments of the present application, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
Referring to fig. 1 to 3, a gray point detection method according to an embodiment of the present disclosure includes:
01: determining gray points to be processed in at least two frames of original images, wherein the at least two frames of original images are generated by different image sensors respectively, and each image sensor can receive light rays with the same wave band;
03: determining a calibration gray point of the image sensor under a preset light source;
05: determining the confidence coefficient to be processed of the gray points to be processed in each frame of original image according to the calibrated gray points;
07: fusing the confidence coefficients to be processed of at least two frames of original images to determine the target confidence coefficient of the gray points to be processed;
09: and taking the gray points to be processed with the target confidence degrees larger than the preset confidence degrees as target gray points.
The gray point detection method according to the embodiment of the present application can be realized by the gray point detection device 100 according to the embodiment of the present application. Specifically, the gray point detection device 100 includes a first determination module 30, a calibration module 10, a second determination module 50, a fusion module 70, and a screening module 90. The first determining module 30 is configured to determine gray points to be processed in at least two frames of raw images, where the at least two frames of raw images are generated by different image sensors respectively, and each image sensor can receive light rays in the same wavelength band. The calibration module 10 is used for determining a calibration gray point of the image sensor under a preset light source. The second determining module 50 is configured to determine, according to the calibrated gray points, a confidence level to be processed of the gray points to be processed in each frame of the original image. The fusion module 70 is used for fusing the confidence coefficients to be processed of at least two frames of original images to determine the target confidence coefficient of the gray points to be processed. The screening module 90 is configured to use the to-be-processed gray point with the target confidence degree greater than the preset confidence degree as a target gray point.
The gray point detection method according to the embodiments of the present application can be implemented by the electronic device 200 according to the embodiments of the present application. In particular, the electronic device 200 includes one or more processors 201 and a memory 202. The memory 202 stores a computer program. When executed by the processor 201, the computer program implements step 01, step 03, step 05, step 07, and step 09.
According to the gray point detection method, the gray point detection device 100, and the electronic device 200, the calibration gray points of the image sensors under the preset light sources are predetermined, the calibration gray points are used to determine the to-be-processed confidence of the detected gray points to be processed, the target confidence is then determined from the to-be-processed confidence, and the target gray points can be determined quickly and accurately by comparing the target confidence with the preset confidence. In addition, the gray point detection method does not require collecting data to train a network, involves a small amount of computation, and can meet the requirements of real-time computation.
In particular, in some embodiments, the electronic device may include at least two image sensors, the total number of channels of the at least two image sensors being greater than 3, and each of the at least two image sensors being capable of receiving light in the same wavelength band, for example, the wavelength band of 380 nm to 780 nm. In some embodiments, the at least two image sensors may include a main image sensor and a wide-angle image sensor; in some embodiments, the at least two image sensors may include a main image sensor and a telephoto image sensor; in some embodiments, the at least two image sensors may include a main image sensor capable of generating raw images in RGB format, a wide-angle image sensor capable of generating raw images with a wide-angle effect, and a telephoto image sensor capable of generating raw images with a telephoto effect, each of the main, wide-angle, and telephoto image sensors being capable of receiving light in the wavelength band of 380 nm to 780 nm. In other embodiments, the at least two image sensors may further include other image sensors whose receiving wavelength band is 380 nm to 780 nm, which is not limited herein. It can be understood that at least two image sensors with a total channel number greater than 3 can be regarded as a multispectral sensor, so that when at least two frames of original images are acquired by the at least two image sensors, at least two frames of multispectral original images can be acquired; that is, within the visible range of the human eye, the band information jointly received by the at least two image sensors is richer, so that the target gray points detected from the at least two frames of original images are more accurate. In the embodiment shown in fig. 3, the electronic device 200 is a mobile phone; in other embodiments, the electronic device 200 may also be a server, a tablet computer, a notebook computer, a smart wearable device, a smart home appliance, or any other device with a shooting function. In some embodiments, a first electronic device can perform data transmission with a second electronic device: after acquiring at least two frames of original images, the first electronic device transmits them to the second electronic device, and the second electronic device executes the gray point detection method according to the embodiments of the present application.
In step 01, the gray points to be processed are the pixel points in the original images that may be the target gray points.
In step 03, the calibration gray point is predetermined, and the calibration gray point of the image sensor under the preset light source is determined, that is, the data of the calibration gray point corresponding to the image sensor and the preset light source is read. The predetermined light sources may include a D65 light source, a U30 light source, a TL84 light source, a CWF light source, an A light source, and an H light source. Calibration gray points of different image sensors under different preset light sources are measured in advance, and data of the calibration gray points can be stored in the memory 202 of the electronic device 200, so that when the gray point detection method is executed, the data of the calibration gray points can be directly read from the memory 202 and used for subsequent processing, the operation speed is increased, and the operation time is saved.
In step 05, the to-be-processed confidence is the probability that a gray point to be processed in each frame of the original image is a target gray point.
In step 07, the target confidence is the further-calculated probability that a gray point to be processed is a target gray point. In some embodiments, the average of the to-be-processed confidences of the gray points to be processed at the same position in the at least two frames of pre-aligned original images is obtained, and this average is used as the target confidence of the gray points to be processed at that position, so as to further determine the possibility that the gray points to be processed are target gray points.
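As an illustration of step 07, a minimal sketch in Python follows (assuming the frames are already aligned and each per-frame to-be-processed confidence is stored as a NumPy array; the function name and the use of a plain average are illustrative choices consistent with the embodiment above, not a reference implementation):

```python
import numpy as np

def fuse_confidences(confidence_maps):
    """Fuse the per-frame to-be-processed confidence maps into one
    target confidence map by averaging the values at the same
    (pre-aligned) position, as described above."""
    stacked = np.stack(confidence_maps, axis=0)  # shape (num_frames, H, W)
    return stacked.mean(axis=0)                  # target confidence per pixel
```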
Further, in some embodiments, after the target gray points are determined, white balance correction is performed according to the target gray points. In this way, white balance correction can be realized better than with white balance algorithms in the related art, especially for skin-color-like scenes (such as brown scenes). Specifically, determining a target gray point comprises: determining the position of the target gray point in the original image. Performing white balance correction according to the target gray point comprises: determining the pixel values of the pixel points in the original image corresponding to that position according to the position of the target gray point in the original image, and performing white balance correction according to the pixel values.
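For concreteness, the following sketch shows one common way such a correction could be applied, deriving per-channel gains from the mean pixel value of the detected target gray points (the gain formula is a standard gray-world-style choice and an assumption here, not the patent's prescribed formula):

```python
import numpy as np

def white_balance_from_gray_points(image, gray_mask):
    """Apply channel gains computed from the target gray points.

    image: H x W x 3 float RGB array; gray_mask: H x W boolean array
    marking the positions of the detected target gray points."""
    r_mean = image[..., 0][gray_mask].mean()
    g_mean = image[..., 1][gray_mask].mean()
    b_mean = image[..., 2][gray_mask].mean()
    gains = np.array([g_mean / r_mean, 1.0, g_mean / b_mean])  # green as reference
    return np.clip(image * gains, 0.0, 1.0)
```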
Referring to fig. 4 and 5, in some embodiments, before step 01, the gray point detection method further includes:
02: preprocessing the at least two frames of original images, the preprocessing including at least one of black level (OB) correction, lens shading correction (LSC), size unification (resizing), and alignment processing.
The gray point detection method according to the above embodiment can be realized by the gray point detection device 100 according to the present embodiment. Specifically, the gray point detecting device 100 includes a processing module 20. The processing module 20 is configured to perform black level correction, lens shading correction, size unification, and alignment processing on at least two frames of original images in sequence.
The gray point detection method according to the above embodiment can be implemented by the electronic device 200 according to the embodiment of the present application. Specifically, the processor 201 is configured to perform black level correction, lens shading correction, resizing, and alignment processing on at least two frames of original images in sequence.
Thus, at least two frames of original images are preprocessed.
Specifically, the circuit of the image sensor itself has a dark current, so the image sensor has a certain output voltage even when no light falls on it; the influence of this dark current therefore needs to be subtracted, that is, black level correction is performed. The original image includes a black level portion, and removing it improves the dark-region performance of the picture. Lens shading correction addresses the shading that appears around the lens due to the optical characteristics of the lens of the image sensor, that is, the uneven optical refraction of the lens. Because the lens is a convex lens, its center inevitably receives more light than its periphery, and the light intensity received by the edge region of the image sensor is less than that received by the center, causing the brightness of the center and the periphery of the original image to be inconsistent. In lens shading correction, corresponding gains are applied according to the positions of the pixel points in the original image, so that the differences in brightness and color between the center and the periphery of the picture are reduced to an acceptable level. Unifying the size of the at least two frames of original images means adjusting them so that they have the same size; having the same size allows the at least two frames of original images to be aligned, fused, and so on subsequently. In some embodiments, the size of the at least two frames of original images is adjusted by zooming in and zooming out. In one example, the size of the at least two frames of original images is unified to 480 x 640. Alignment processing means determining the feature points in the at least two frames of original images and establishing the correspondence of the same feature points across the at least two frames of original images; aligning the at least two frames of original images facilitates their subsequent fusion. In one example, the preprocessing of the at least two frames of original images includes all of black level (OB) correction, lens shading correction (LSC), size unification (resizing), and alignment processing.
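A minimal sketch of this preprocessing chain (assuming a measured black level offset and lens shading gain map are available; the alignment step is only indicated, since feature-based alignment is its own topic):

```python
import numpy as np
import cv2  # OpenCV, used here for resizing

def preprocess(raw, black_level, lsc_gain, size=(640, 480)):
    """Preprocess one raw frame: black level (OB) correction, lens
    shading correction (LSC), and size unification; alignment of the
    frames to each other (e.g., via feature matching) would follow.

    raw: H x W (x3) float array; black_level: scalar or per-channel
    offset; lsc_gain: per-pixel gain map measured for this lens."""
    img = np.clip(raw - black_level, 0, None)  # remove the black level portion
    img = img * lsc_gain                       # compensate lens shading
    return cv2.resize(img, size)               # unify size, e.g. 480 x 640
```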
Referring to fig. 6 and 7, in some embodiments, after step 02, the gray point detection method further includes:
021: determining over-bright points and over-dark points in at least two frames of original images, wherein the over-bright points are pixel points with gray values larger than a first gray value, the over-dark points are pixel points with gray values smaller than a second gray value, and the first gray value is larger than the second gray value;
step 09 includes:
091: taking the gray points to be processed whose target confidence is greater than the preset confidence and which are neither over-bright points nor over-dark points as target gray points.
The gray point detection method according to the above embodiments can be realized by the gray point detection device 100 according to the embodiments of the present application. Specifically, the gray point detection device 100 further includes a third determining module 21. The third determining module 21 is configured to determine over-bright points and over-dark points in the at least two frames of original images, where the over-bright points are pixel points whose gray value is greater than the first gray value, the over-dark points are pixel points whose gray value is less than the second gray value, and the first gray value is greater than the second gray value. The screening module 90 is configured to take the gray points to be processed whose target confidence is greater than the preset confidence and which are neither over-bright points nor over-dark points as the target gray points.
The gray point detection method according to the above embodiments can be implemented by the electronic device 200 according to the embodiments of the present application. Specifically, the processor 201 is configured to determine over-bright points and over-dark points in the at least two frames of original images, where the over-bright points are pixel points whose gray value is greater than a first gray value, the over-dark points are pixel points whose gray value is less than a second gray value, and the first gray value is greater than the second gray value, and to take the gray points to be processed whose target confidence is greater than a preset confidence and which are neither over-bright points nor over-dark points as target gray points.
In this way, the gray points to be processed that fall among the over-bright points and over-dark points can be excluded, improving the accuracy of the target gray points. It can be understood that some over-bright or over-dark pixel points may coincide with gray points to be processed, but over-bright and over-dark points cannot be used for white balance correction: if a gray point to be processed that is an over-bright or over-dark point were taken as a target gray point and white balance correction were performed according to it, the correction effect would be poor or correction would even fail. Therefore, the gray points to be processed among the over-bright and over-dark points should be excluded.
Specifically, in step 021, in some embodiments, the first gray value is 250, that is, a pixel point in the original image with a gray value greater than 250 is determined to be an over-bright point. In some embodiments, the second gray value is 5, i.e., the pixel points in the original image with gray value less than 5 are determined to be too dark points. Further, in some embodiments, a mask (mask) initialized to 0 is created, which may cover the original image, the size of the mask being the same as the size of the original image after the unified size. After the over-bright points and the over-dark points are determined, the numerical values of the positions, corresponding to the over-bright points and the over-dark points, in the mask are updated to be 1 from 0, in the subsequent processing, the pixel points in the original image corresponding to the positions, of the positions, corresponding to the numerical values of 1, in the mask are set to be invalid points, the invalid points cannot be used as target gray points, and therefore the gray points to be processed in the over-bright points and the over-dark points are prevented from being used as the target gray points.
In step 091, in some embodiments, the preset confidence is 0.7; that is, the gray points to be processed whose target confidence is greater than 0.7 and which are neither over-bright points nor over-dark points are taken as target gray points, while the gray points to be processed whose target confidence is not greater than 0.7, or which belong to the over-bright points or the over-dark points, are not taken as target gray points.
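A minimal sketch of steps 021/091 (the array layout and function name are illustrative assumptions):

```python
import numpy as np

def invalid_brightness_mask(gray_image, first_gray=250, second_gray=5):
    """Build the 0/1 mask described above: positions of over-bright
    points (gray value > first gray value) and over-dark points
    (gray value < second gray value) are set to 1 and treated as
    invalid points that cannot become target gray points."""
    mask = np.zeros_like(gray_image, dtype=np.uint8)  # initialized to 0
    mask[gray_image > first_gray] = 1   # over-bright points
    mask[gray_image < second_gray] = 1  # over-dark points
    return mask
```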
Referring to fig. 8 and 9, in some embodiments, after step 02, the gray point detection method further includes:
023: denoising at least two frames of original images;
025: respectively carrying out gradient solving processing on the at least two frames of denoised original images so as to determine gradient values in the at least two frames of original images;
027: determining pixel points with gradient values smaller than a preset gradient value in at least two frames of original images as flat areas;
step 09 includes:
093: and taking the gray points to be processed of which the target confidence coefficient is greater than the preset confidence coefficient and which are in the non-flat area as target gray points.
The gray point detection method according to the above embodiment can be realized by the gray point detection device 100 according to the present embodiment. Specifically, the gray point detection apparatus 100 includes a denoising module 23, a gradient finding module 25, and a fourth determination module 27. The denoising module 23 is configured to denoise at least two frames of original images. The gradient solving module 25 is configured to perform gradient solving on the denoised at least two frames of original images respectively to determine gradient values in the at least two frames of original images. The fourth determining module 27 is configured to determine that pixel points in at least two frames of original images with gradient values smaller than the preset gradient value are flat areas. The screening module 90 is configured to use the gray points to be processed of which the target confidence is greater than the preset confidence and which are in the non-flat area as the target gray points.
The gray point detection method according to the above embodiments can be implemented by the electronic device 200 according to the embodiments of the present application. Specifically, the processor 201 is configured to denoise the at least two frames of original images, to perform gradient processing on the at least two frames of denoised original images to determine the gradient values in the at least two frames of original images, to determine the pixel points in the at least two frames of original images whose gradient values are smaller than a preset gradient value as flat areas, and to take the gray points to be processed whose target confidence is greater than a preset confidence and which lie in non-flat areas as target gray points.
In this way, the gray points to be processed that lie in flat areas can be excluded, improving the accuracy of the target gray points. It can be understood that, in some embodiments, since the gray value index of a flat area is constantly 0, a flat area may be erroneously determined as gray points to be processed, leading to errors in the obtained target gray points; therefore, the gray points to be processed in flat areas should be excluded.
Specifically, in step 023, in some embodiments, the denoising process includes a mean filtering process, i.e., at least two frames of original images are mean filtered for removing noise in the original images. In other embodiments, the denoising process may further include a median filtering process, a wiener filtering process, or other filtering processes for removing image noise, which is not limited herein.
In step 025, in some embodiments, the gradient processing includes a laplacian gaussian filtering process, that is, the de-noised at least two frames of original images are respectively subjected to the laplacian gaussian filtering process to determine gradient values in the at least two frames of original images. In other embodiments, the gradient calculation process may further include a laplacian filtering process or other filtering processes for calculating gradient values, which is not limited herein.
In step 027, in certain embodiments, the preset gradient value is 10⁻³; that is, the pixel points in the at least two frames of original images whose gradient values are smaller than 10⁻³ are determined as flat areas. Further, in some embodiments, a mask initialized to 0 is created, which can cover the original image; the size of the mask is the same as the size of the original images after size unification. After the flat areas are determined, the values at the positions in the mask corresponding to the flat areas are updated from 0 to 1. In subsequent processing, the pixel points in the original image corresponding to the positions whose value is 1 in the mask are set as invalid points, and invalid points cannot be taken as target gray points, thereby avoiding errors in the target gray points caused by flat areas being mistakenly determined as gray points to be processed.
In step 093, in some embodiments, the preset confidence is 0.7; that is, the gray points to be processed whose target confidence is greater than 0.7 and which lie in non-flat areas are taken as target gray points, while the gray points to be processed whose target confidence is not greater than 0.7 or which belong to flat areas are not taken as target gray points.
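A minimal sketch of steps 023 to 027 (mean filtering for denoising and a Laplacian-of-Gaussian response standing in for the gradient value, both being options the text names; the sigma and window size are assumed values):

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_laplace

def flat_area_mask(gray_image, preset_gradient=1e-3, sigma=1.0):
    """Mark flat areas: mean-filter to denoise, compute a
    Laplacian-of-Gaussian response as the gradient value, and flag
    pixels whose magnitude is below the preset gradient value."""
    denoised = uniform_filter(gray_image.astype(float), size=3)   # mean filter
    gradient = gaussian_laplace(denoised, sigma=sigma)            # gradient values
    return (np.abs(gradient) < preset_gradient).astype(np.uint8)  # 1 = flat area
```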
It should be noted that, in some embodiments, after step 021, that is, after determining the over-bright points and over-dark points in the at least two frames of original images (the over-bright points being pixel points whose gray value is greater than the first gray value, the over-dark points being pixel points whose gray value is less than the second gray value, and the first gray value being greater than the second gray value), the gray point detection method further includes: denoising the at least two frames of original images; respectively performing gradient processing on the at least two frames of denoised original images to determine the gradient values in the at least two frames of original images; and determining the pixel points in the at least two frames of original images whose gradient values are smaller than the preset gradient value as flat areas. Step 09 then further comprises taking the gray points to be processed which are neither over-bright points nor over-dark points, which lie in non-flat areas, and whose target confidence is greater than the preset confidence, as the target gray points. In this way, the gray points to be processed among the over-bright points, the over-dark points, and the flat areas can be excluded at the same time, greatly improving the accuracy of the target gray points.
Referring to fig. 10 and 11, in some embodiments, the at least two frames of original images include a first original image and a second original image, and step 01 includes:
011: calculating the gray value index of each pixel point in the original images, where the gray value index can be represented by the following formula:

GI(x, y) = ‖ C{log(I_R1) − log(‖I‖_1)} + C{log(I_R2) − log(‖I‖_1)} + C{log(I_G1) − log(‖I‖_1)} + C{log(I_G2) − log(‖I‖_1)} + C{log(I_B1) − log(‖I‖_1)} + C{log(I_B2) − log(‖I‖_1)} ‖_2

where GI(x, y) represents the gray value index of the pixel point with coordinates (x, y) in the original image, I_R1 represents the red channel image of the first original image, I_G1 represents the green channel image of the first original image, I_B1 represents the blue channel image of the first original image, I_R2 represents the red channel image of the second original image, I_G2 represents the green channel image of the second original image, I_B2 represents the blue channel image of the second original image, I represents the image formed by aligning and fusing the first original image and the second original image, and C{·} denotes a local contrast operator;
013: and determining the pixel points with the gray value indexes smaller than the preset threshold value as the gray points to be processed.
The gray point detection method in the above embodiments can be realized by the gray point detection device 100 according to the embodiments of the present application. Specifically, the at least two frames of original images include a first original image and a second original image. The first determination module 30 comprises a first calculation unit 31 and a comparison unit 33. The first calculation unit 31 is configured to calculate the gray value index of each pixel point in the original images, where the gray value index can be represented by the formula above, with GI(x, y) the gray value index of the pixel point with coordinates (x, y) in the original image, I_R1, I_G1, and I_B1 the red, green, and blue channel images of the first original image, I_R2, I_G2, and I_B2 the red, green, and blue channel images of the second original image, I the image formed by aligning and fusing the first original image and the second original image, and C{·} a local contrast operator. The comparison unit 33 is configured to determine the pixel points whose gray value index is smaller than a preset threshold as gray points to be processed.
The gray point detection method according to the above embodiments can be implemented by the electronic device 200 according to the embodiments of the present application. Specifically, the at least two frames of original images include a first original image and a second original image. The processor 201 is configured to calculate the gray value index of each pixel point in the original images according to the formula above, and to determine the pixel points whose gray value index is smaller than the preset threshold as gray points to be processed.
Therefore, the gray points to be processed can be determined more accurately.
Specifically, the design principle of the gray value index formula is described below. In the dichromatic reflection model of object imaging, the pixel value at (x, y) under one global light source can be modeled as:

I_i(x, y) = ∫ F_i(λ) L(λ) [γ_b(x, y) R_b(x, y, λ) + γ_s(x, y) R_s(x, y, λ)] dλ, i ∈ {R, G, B}   (Equation 1)

where I_i(x, y) is the pixel value of image I at point (x, y) for channel i ∈ {R, G, B}, F_i(λ) is the spectral response distribution of the image sensor for the channels {R, G, B}, L(λ) is the light intensity distribution of the light source, λ is the wavelength of the light source, R_b(x, y, λ) is the diffuse reflectance of the object surface, R_s(x, y, λ) is the specular reflectance of the object surface, γ_b(x, y) is the diffuse reflection intensity of the object, and γ_s(x, y) is the specular reflection intensity of the object.

When the wavelength of the light source is fixed to a certain value (e.g., 400 nm) within 380 nm to 780 nm, Equation 1 can be simplified as:

I_i = F_i L_i (γ_b R_b,i + γ_s R_s,i), i ∈ {R, G, B}   (Equation 2)

For the point (x, y) in the original image, taking the logarithm of the red channel image I_R and of the original image I, and applying the local contrast operator C{·}, gives:

C{log(I_R) − log(‖I‖_1)} = C{log(F_R L_R) + log(γ_b R_b,R + γ_s R_s,R)} − C{log(F_R L_R (γ_b R_b,R + γ_s R_s,R) + F_G L_G (γ_b R_b,G + γ_s R_s,G) + F_B L_B (γ_b R_b,B + γ_s R_s,B))}   (Equation 3)

According to the neutral interface reflection assumption, when the wavelength of the light source is within 380 nm to 780 nm, the diffuse reflectance and the specular reflectance of the light emitted by the light source onto a gray card (calibration gray point) are equal across the channels, so that:

R_b,R = R_b,G = R_b,B = R_b, R_s,R = R_s,G = R_s,B = R_s   (Equation 4)

Substituting Equation 4 into Equation 3, the term log(γ_b R_b + γ_s R_s) cancels, yielding:

C{log(I_R) − log(‖I‖_1)} = C{log(F_R L_R)} − C{log(F_R L_R + F_G L_G + F_B L_B)}   (Equation 5)

Since C{·} is a local contrast operator, assuming that the intensity distribution of the light source and the spectral response of the image sensor are the same within a small neighborhood of the point (x, y) in the original image, both C{log(F_R L_R)} = 0 and C{log(F_R L_R + F_G L_G + F_B L_B)} = 0; substituting into Equation 5 yields:

C{log(I_R) − log(‖I‖_1)} = 0   (Equation 6)
from the above derivation, it can be found that C { log (I) of the calibration gray pointR)-log(||I||1) The value is 0, and in the process of determining the gray point to be processed, the C { log (I) of the pixel point with the coordinate (x, y) in the original image can be judgedR)-log(||I||1) The value of the pixel is C { log (I) }, namely, C { log (I) } of the pixel with the coordinate (x, y) in the original image is judgedR)-log(||I||1) Whether the numerical value of the pixel is close to 0 or not is judged, the more the numerical value is close to 0, the higher the possibility that the pixel point with the coordinate (x, y) in the original image is a calibration gray point is shown, and therefore the pixel point with the coordinate (x, y) can be determined as a gray point to be processed, namely C { log (I) } { (I) }R)-log(||I||1) As part of the gray value index used to determine the gray points to be processed. For the same reason, C { log (I)G)-log(||I||1) And C { log (I) }B)-log(||I||1) It is also possible to use as part of the gray value index for determining the gray points to be processed. Therefore, the gradation value index can be expressed by the following formula: GI (x, y) | | C { log (I)R)-log(||I||1)}+C{log(IG)-log(||I||1)}+C{log(IB)-log(||I||1)}||2And determining the pixel points with the gray value index smaller than the preset threshold value as the gray points to be processed.
Further, in some embodiments, the original images include a first original image and a second original image, the first original image and the second original image are respectively generated by different image sensors, each image sensor can receive light in the same wavelength band, and the gray value index can be represented by the following formula:

GI(x, y) = ‖ C{log(I_R1) − log(‖I‖_1)} + C{log(I_R2) − log(‖I‖_1)} + C{log(I_G1) − log(‖I‖_1)} + C{log(I_G2) − log(‖I‖_1)} + C{log(I_B1) − log(‖I‖_1)} + C{log(I_B2) − log(‖I‖_1)} ‖_2

In this way, the gray points to be processed can be determined more accurately.
It should be noted that the gray value index can accurately determine whether the pixel points in non-flat areas of the original image are gray points to be processed. For the flat areas in the original image, since C{·} is a local contrast operator, C{log(I_R) − log(‖I‖_1)} of a flat area is constantly 0, which can cause flat areas in the original image to be wrongly determined as gray points to be processed. Therefore, when the gray value index is used to determine the gray points to be processed in the original image, excluding the pixel points corresponding to flat areas can improve the accuracy of the gray points to be processed.
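As a hedged sketch, the two-frame gray value index formula above could be computed as follows (a Laplacian-of-Gaussian stands in for C{·}, the fused image I is approximated as the average of the two aligned frames, and since the six C-terms are summed inside ‖·‖_2, the norm reduces to an absolute value; these are reading assumptions, not the patent's reference implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def grayness_index(frame1, frame2, sigma=1.0, eps=1e-6):
    """Per-pixel gray value index GI(x, y) over two aligned frames.

    frame1, frame2: H x W x 3 float RGB arrays, already aligned."""
    fused = 0.5 * (frame1 + frame2)              # stand-in for the fused image I
    log_norm = np.log(fused.sum(axis=-1) + eps)  # log(||I||_1) = log(R + G + B)

    terms = []
    for frame in (frame1, frame2):
        for ch in range(3):  # R, G, B channels of each frame
            diff = np.log(frame[..., ch] + eps) - log_norm
            terms.append(gaussian_laplace(diff, sigma=sigma))  # C{...}

    return np.abs(np.sum(terms, axis=0))  # ||sum of the six C-terms||_2

# Pixels with grayness_index(...) below a preset threshold (and outside
# flat areas) would be taken as gray points to be processed (step 013).
```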
In some embodiments, the local contrast operator includes at least one of a Laplacian-of-Gaussian operator, a Prewitt operator, a Sobel operator, and a Laplacian operator.
Therefore, the gradient of the original image can be calculated better, and the gray value index of each pixel point in the original image can be calculated conveniently.
Referring to fig. 12 and 13, in some embodiments, step 03 includes:
031: acquiring a standard image of the gray card under a preset light source, wherein the standard image is generated by an image sensor;
033: and taking a gray card in the standard image as a calibration gray point of the image sensor under a preset light source.
The gray point detection method according to the above embodiment can be realized by the gray point detection device 100 according to the present embodiment. Specifically, the calibration module 10 includes an acquisition unit 11 and a first determination unit 13. The acquisition unit 11 is used for acquiring a standard image of the gray card under a preset light source, and the standard image is generated by an image sensor. The first determining unit 13 is configured to use a gray card in the standard image as a calibration gray point of the image sensor under a preset light source.
The gray point detection method according to the above embodiment can be implemented by the electronic device 200 according to the embodiment of the present application. Specifically, the processor 201 is configured to obtain a standard image of the gray card under the preset light source, where the standard image is generated by the image sensor, and to use the gray card in the standard image as a calibration gray point of the image sensor under the preset light source.
In this way, the calibration gray point of the image sensor under the preset light source can be predetermined.
Specifically, the gray card is a neutral gray reference card. The preset light sources may include a D65 light source, a U30 light source, a TL84 light source, a CWF light source, an A light source, and an H light source. Taking the preset light source being a D65 light source and the image sensor being the main image sensor as an example, the main image sensor is used in advance to obtain a standard image of the gray card under the D65 light source; the obtained standard image includes the gray card. Further, the pixel value of the gray card in the image is determined and taken as the pixel value of the calibration gray point of the main image sensor under the D65 light source, so that the calibration gray point of the main image sensor under the D65 light source is determined. Furthermore, the calibration gray point of each image sensor under each preset light source is determined, and the data of all the calibration gray points are stored, so that the data of the calibration gray points can be read directly later, speeding up gray point detection.
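A minimal sketch of steps 031/033 (the rectangle describing where the gray card sits in the standard image is an assumed input; storing the ratios R1/G1 and B1/G1 anticipates the calibration coordinates used below):

```python
import numpy as np

def calibrate_gray_point(standard_image, card_region):
    """Average the pixel values of the gray card region in a standard
    image captured under one preset light source, for one image sensor.

    standard_image: H x W x 3 float RGB array;
    card_region: (y0, y1, x0, x1) bounds of the gray card."""
    y0, y1, x0, x1 = card_region
    patch = standard_image[y0:y1, x0:x1].reshape(-1, 3)
    r, g, b = patch.mean(axis=0)             # calibration R1, G1, B1
    return {"R1/G1": r / g, "B1/G1": b / g}  # stored for later lookup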
Referring to fig. 14 and 15, in some embodiments, the calibration gray point includes a calibration red component, a calibration green component, and a calibration blue component, the to-be-processed gray point includes a to-be-processed red component, a to-be-processed green component, and a to-be-processed blue component, and step 05 includes:
051: calculating a first red-green component ratio and a first blue-green component ratio of the calibrated gray point, and determining a calibration coordinate of the calibrated gray point according to the first red-green component ratio and the first blue-green component ratio, wherein the first red-green component ratio is the ratio of the calibrated red component to the calibrated green component, and the first blue-green component ratio is the ratio of the calibrated blue component to the calibrated green component;
053: calculating a second red-green component ratio and a second blue-green component ratio of the gray point to be processed, and determining a coordinate to be processed of the gray point to be processed according to the second red-green component ratio and the second blue-green component ratio, wherein the second red-green component ratio is the ratio of the red component to be processed to the green component to be processed, and the second blue-green component ratio is the ratio of the blue component to be processed to the green component to be processed;
055: and determining the confidence coefficient to be processed of the gray points to be processed in each frame of the original image according to the Euclidean distance between the calibration coordinates and the coordinates to be processed.
The gray point detection method according to the above embodiment can be realized by the gray point detection device 100 according to the present embodiment. Specifically, the second determination module 50 includes a second calculation unit 51, a third calculation unit 53, and a second determination unit 55. The second calculating unit 51 is configured to calculate a first red-green component ratio and a first blue-green component ratio of the calibrated gray point, and determine a calibration coordinate of the calibrated gray point according to the first red-green component ratio and the first blue-green component ratio, where the first red-green component ratio is a ratio of the calibrated red component to the calibrated green component, and the first blue-green component ratio is a ratio of the calibrated blue component to the calibrated green component. The third calculating unit 53 is configured to calculate a second red-green component ratio and a second blue-green component ratio of the to-be-processed gray point, and determine a to-be-processed coordinate of the to-be-processed gray point according to the second red-green component ratio and the second blue-green component ratio, where the second red-green component ratio is a ratio of the to-be-processed red component to the to-be-processed green component, and the second blue-green component ratio is a ratio of the to-be-processed blue component to the to-be-processed green component. The second determining unit 55 is configured to determine, according to the euclidean distance between the calibration coordinate and the to-be-processed coordinate, a to-be-processed confidence of the to-be-processed gray point in each frame of the original image.
The gray point detection method according to the above embodiment can be implemented by the electronic device 200 according to the embodiment of the present application. Specifically, the processor 201 is configured to calculate a first red-green component ratio and a first blue-green component ratio of the calibrated gray point, and determine a calibration coordinate of the calibrated gray point according to the first red-green component ratio and the first blue-green component ratio, the first red-green component ratio is a ratio of the calibrated red component to the calibrated green component, the first blue-green component ratio is a ratio of the calibrated blue component to the calibrated green component, and is configured to calculate a second red-green component ratio and a second blue-green component ratio of the gray point to be processed, and determine a coordinate to be processed of the gray point to be processed according to the second red-green component ratio and the second blue-green component ratio, the second red-green component ratio is a ratio of the red component to be processed to the green component to be processed, the second blue-green component ratio is a ratio of the blue component to be processed to the green component to be processed, and is configured to determine a euclidean distance between the calibrated coordinate and the coordinate to be processed, and determining the confidence coefficient to be processed of the gray points to be processed in each frame of original image.
Therefore, the confidence coefficient to be processed of the gray points to be processed in each frame of original image can be determined quickly and accurately.
Specifically, the calibration gray point and the gray point to be processed each comprise a red channel, a green channel, and a blue channel. The calibration red component R1 can be understood as the pixel value of the calibration gray point in the red channel, the calibration green component G1 as the pixel value of the calibration gray point in the green channel, and the calibration blue component B1 as the pixel value of the calibration gray point in the blue channel. Similarly, the to-be-processed red component R2 can be understood as the pixel value of the gray point to be processed in the red channel, the to-be-processed green component G2 as the pixel value of the gray point to be processed in the green channel, and the to-be-processed blue component B2 as the pixel value of the gray point to be processed in the blue channel.
Further, the first red-green component ratio is R1/G1, the first blue-green component ratio is B1/G1, and the calibration coordinate is (R1/G1, B1/G1). The second red-green component ratio is R2/G2, the second blue-green component ratio is B2/G2, and the to-be-processed coordinate is (R2/G2, B2/G2). The Euclidean distance between the calibration coordinate and the to-be-processed coordinate is:

D = √[ (R1/G1 − R2/G2)² + (B1/G1 − B2/G2)² ]
It can be understood that the Euclidean distance between the calibration coordinate and the to-be-processed coordinate has a corresponding relationship with the to-be-processed confidence of the gray point to be processed in each frame of the original image. The smaller the Euclidean distance between the calibration coordinate and the to-be-processed coordinate, that is, the closer the gray point to be processed is to the calibration gray point, the higher the possibility that the gray point to be processed is a target gray point, and correspondingly, the higher the to-be-processed confidence of the gray point to be processed. The larger the Euclidean distance between the calibration coordinate and the to-be-processed coordinate, that is, the farther the gray point to be processed is from the calibration gray point, the lower the possibility that the gray point to be processed is a target gray point, and correspondingly, the lower the to-be-processed confidence of the gray point to be processed. Therefore, after the Euclidean distance between the calibration coordinate and the to-be-processed coordinate is determined, the to-be-processed confidence of the gray point to be processed can be determined according to this corresponding relationship.
In some embodiments, the calibration gray point is predetermined, that is, the first red-green component ratio and the first blue-green component ratio are predetermined, so that after the gray point to be processed is determined, the euclidean distance between the calibration coordinates and the coordinates to be processed can be calculated more quickly, and the confidence of the gray point to be processed can be determined.
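A hedged sketch of steps 051 to 055 (the exponential mapping from distance to confidence is an illustrative assumption; the text only requires that a smaller Euclidean distance correspond to a higher to-be-processed confidence):

```python
import numpy as np

def to_be_processed_confidence(r2, g2, b2, calib, scale=1.0):
    """Compute the Euclidean distance between the to-be-processed
    coordinate (R2/G2, B2/G2) and the calibration coordinate
    (R1/G1, B1/G1), then map it to a confidence in (0, 1]."""
    d = np.hypot(r2 / g2 - calib["R1/G1"], b2 / g2 - calib["B1/G1"])
    return np.exp(-d / scale)  # smaller distance -> higher confidence
```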
In one example, referring to Table 1, Table 1 lists the first red-green component ratio and the first blue-green component ratio of the calibration gray points determined for the main image sensor under a D65 light source, a U30 light source, a TL84 light source, a CWF light source, an A light source, and an H light source, respectively. Table 1 may be stored in the memory 202 of the electronic device 200. After the second red-green component ratio and the second blue-green component ratio of a gray point to be processed are determined, the color of the current light source corresponding to the original image can be preliminarily determined from the second red-green component ratio and the second blue-green component ratio, and the first red-green component ratio and the first blue-green component ratio of the calibration gray point of the preset light source closest to the current light source color can be retrieved directly from the memory 202, so as to calculate the Euclidean distance between the calibration coordinate and the to-be-processed coordinate and determine the to-be-processed confidence of the gray point to be processed from the result.
TABLE 1
Light source R1/G1 B1/G1
D65 0.714594 0.67968
U30 0.990364 0.564486
TL84 0.847527 0.60783
CWF 0.633686 0.8125
A 0.876607 0.640531
H 0.731521 0.681113
It should be noted that the specific numerical values mentioned above are only intended to illustrate the implementation of the present application in detail and should not be construed as limiting the present application. In other embodiments or examples, other values may be selected according to the application, and no specific limitation is made here.
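As an illustrative sketch of the retrieval step described above (the distance metric used to pick the closest preset light source is an assumption; the description only states that the closest one is retrieved), the Table 1 entries could be queried as follows:

```python
import math

# First red-green and first blue-green component ratios from Table 1.
CALIBRATION_TABLE = {
    "D65": (0.714594, 0.67968),
    "U30": (0.990364, 0.564486),
    "TL84": (0.847527, 0.60783),
    "CWF": (0.633686, 0.8125),
    "A": (0.876607, 0.640531),
    "H": (0.731521, 0.681113),
}

def nearest_light_source(rg2, bg2):
    # Return the preset light source whose calibration coordinates lie
    # closest (in Euclidean distance) to the (R2/G2, B2/G2) coordinates
    # of the gray point to be processed.
    return min(CALIBRATION_TABLE.items(),
               key=lambda kv: math.hypot(kv[1][0] - rg2, kv[1][1] - bg2))

name, (rg1, bg1) = nearest_light_source(0.73, 0.68)  # -> "H" for this input
```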
The computer-readable storage medium of the present embodiment stores thereon a computer program, which, when executed by the processor 201, implements the steps of the gray point detection method of any of the above embodiments.
For example, in the case where the program is executed by the processor 201, the following steps of the gray point detection method are implemented:
01: determining gray points to be processed in at least two frames of original images, wherein the at least two frames of original images are generated by different image sensors respectively, and each image sensor can receive light rays with the same wave band;
03: determining a calibration gray point of the image sensor under a preset light source;
05: determining the confidence coefficient to be processed of the gray points to be processed in each frame of original image according to the calibrated gray points;
07: fusing the confidence coefficients to be processed of at least two frames of original images to determine the target confidence coefficient of the gray points to be processed;
09: and taking the gray points to be processed with the target confidence degrees larger than the preset confidence degrees as target gray points.
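A minimal sketch of steps 07 and 09 follows, assuming the fusion of the per-frame confidences is a plain average and the preset confidence is 0.5 (neither the fusion rule nor the threshold value is fixed by the description):

```python
def select_target_gray_points(per_point_confidences, preset_confidence=0.5):
    # per_point_confidences maps each candidate gray point, e.g. an (x, y)
    # tuple, to its to-be-processed confidences from the two or more frames.
    targets = []
    for point, confidences in per_point_confidences.items():
        target_confidence = sum(confidences) / len(confidences)  # step 07
        if target_confidence > preset_confidence:                # step 09
            targets.append(point)
    return targets

targets = select_target_gray_points({(10, 12): [0.9, 0.8],
                                     (40, 7): [0.3, 0.2]})
# -> [(10, 12)]
```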
It will be appreciated that the computer program comprises computer program code. The computer program code may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and the like. The processor 201 may be a central processing unit, or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (11)

1. A gray point detection method is characterized by comprising the following steps:
determining gray points to be processed in at least two frames of original images, wherein the at least two frames of original images are generated by different image sensors respectively, and each image sensor can receive light rays with the same wave band;
determining a calibration gray point of the image sensor under a preset light source;
determining the confidence coefficient to be processed of the gray points to be processed in each frame of the original image according to the calibrated gray points;
fusing the confidence coefficients to be processed of the at least two frames of original images to determine a target confidence coefficient of the gray points to be processed;
and taking the gray points to be processed with the target confidence degrees larger than the preset confidence degrees as target gray points.
2. A gray point detection method as claimed in claim 1, wherein before said determining gray points to be processed in at least two frames of original images, said gray point detection method further comprises:
and preprocessing the at least two frames of original images, wherein the preprocessing comprises at least one of black level correction, lens shading correction, unified size and alignment processing.
3. A gray point detection method as claimed in claim 2, wherein after said preprocessing of said at least two frames of raw images, said gray point detection method further comprises:
determining over-bright points and over-dark points in at least two frames of original images, wherein the over-bright points are pixel points with gray values larger than a first gray value, the over-dark points are pixel points with gray values smaller than a second gray value, and the first gray value is larger than the second gray value;
the step of taking the gray point to be processed with the target confidence degree greater than the preset confidence degree as a target gray point comprises the following steps:
and taking the gray points to be processed, which are not the over-bright points and the over-dark points and have the target confidence degree larger than a preset confidence degree, as target gray points.
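By way of a hedged sketch of the filtering in claim 3 (the two gray value thresholds below are illustrative choices; the claim only requires the first gray value to exceed the second), over-bright and over-dark points could be excluded with a mask:

```python
import numpy as np

def usable_brightness_mask(gray, first_gray_value=250, second_gray_value=5):
    # True where a pixel is neither over-bright (gray > first_gray_value)
    # nor over-dark (gray < second_gray_value).
    return (gray <= first_gray_value) & (gray >= second_gray_value)

gray = np.array([[3, 128], [255, 200]], dtype=np.uint8)
mask = usable_brightness_mask(gray)  # [[False, True], [False, True]]
```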
4. A gray point detection method as claimed in claim 2, wherein after said preprocessing of said at least two frames of raw images, said gray point detection method further comprises:
denoising the at least two frames of original images;
respectively carrying out gradient solving processing on the at least two frames of denoised original images so as to determine gradient values in the at least two frames of original images;
determining pixel points of which the gradient values are smaller than a preset gradient value in the at least two frames of original images as flat areas;
the step of taking the gray point to be processed with the target confidence degree greater than the preset confidence degree as a target gray point comprises the following steps:
and taking the gray points to be processed, which are not the flat area and have the target confidence degree larger than the preset confidence degree, as target gray points.
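As a sketch of the flat-area detection in claim 4 (Gaussian blur for the denoising, the Sobel operator for the gradient, and the preset gradient value are all assumptions; the claim does not fix these choices):

```python
import cv2
import numpy as np

def flat_area_mask(image, preset_gradient=4.0):
    # Denoise, compute a gradient magnitude, and mark as the flat area
    # every pixel whose gradient value falls below the preset value.
    denoised = cv2.GaussianBlur(image, (5, 5), 0)
    gx = cv2.Sobel(denoised, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(denoised, cv2.CV_32F, 0, 1)
    return np.sqrt(gx * gx + gy * gy) < preset_gradient
```

Candidate gray points falling inside this mask would then be excluded before the confidence thresholding, mirroring the last step of claim 4.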
5. A gray point detecting method as claimed in claim 1, wherein said at least two frames of original images comprise a first original image and a second original image, and said determining gray points to be processed in said at least two frames of original images comprises:
calculating a gray value index of each pixel point in the original image, wherein the gray value index can be represented by the following formula:

$$GI(x,y)=\left\|C\{\log(I_{R1})-\log(\|I\|_1)\}+C\{\log(I_{R2})-\log(\|I\|_1)\}+C\{\log(I_{G1})-\log(\|I\|_1)\}+C\{\log(I_{G2})-\log(\|I\|_1)\}+C\{\log(I_{B1})-\log(\|I\|_1)\}+C\{\log(I_{B2})-\log(\|I\|_1)\}\right\|_2$$

wherein GI(x, y) represents the gray value index of the pixel point with coordinates (x, y) in the original image, $I_{R1}$, $I_{G1}$ and $I_{B1}$ represent the red, green and blue channel images of the first original image, $I_{R2}$, $I_{G2}$ and $I_{B2}$ represent the red, green and blue channel images of the second original image, $I$ represents the image after the first original image and the second original image are aligned and fused, and $C\{\cdot\}$ denotes a local contrast operator;
and determining the pixel points with the gray value indexes smaller than a preset threshold value as the gray points to be processed.
6. A gray point detection method as claimed in claim 5, wherein said local contrast operator comprises at least one of a Laplacian of Gaussian operator, a Prewitt operator, a Sobel operator, and a Laplacian operator.
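To make the gray value index of claim 5 concrete, a sketch follows; it treats $\|I\|_1$ as the per-pixel L1 norm of the fused image across its color channels and uses the Laplacian as the local contrast operator $C\{\cdot\}$ (both readings are assumptions, although claim 6 lists the Laplacian among the permitted operators):

```python
import cv2
import numpy as np

def gray_value_index(channel_images, fused, eps=1e-6):
    # channel_images: the six channel images [I_R1, I_R2, I_G1, I_G2,
    # I_B1, I_B2], each HxW; fused: the HxWx3 image I obtained by
    # aligning and fusing the first and second original images.
    log_l1 = np.log(fused.astype(np.float32).sum(axis=2) + eps)  # log(||I||_1)
    gi = np.zeros(channel_images[0].shape, dtype=np.float32)
    for ch in channel_images:
        diff = np.log(ch.astype(np.float32) + eps) - log_l1
        gi += cv2.Laplacian(diff, cv2.CV_32F)  # local contrast operator C{ }
    return np.abs(gi)  # the 2-norm of a scalar sum reduces to its magnitude

# Pixels where gray_value_index(...) falls below the preset threshold
# become the gray points to be processed.
```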
7. A gray point detection method as claimed in claim 1, wherein said determining a calibration gray point of said image sensor under a preset light source comprises:
acquiring a standard image of a gray card under a preset light source, wherein the standard image is generated by the image sensor;
and taking the gray card in the standard image as the calibration gray point of the image sensor under a preset light source.
8. The gray point detection method of claim 1, wherein said calibrating gray points comprises a calibrating red component, a calibrating green component and a calibrating blue component, said gray points to be processed comprise a red component to be processed, a green component to be processed and a blue component to be processed, and said determining confidence to be processed of said gray points to be processed in each frame of said original image according to said calibrating gray points comprises:
calculating a first red-green component ratio and a first blue-green component ratio of the calibrated gray point, and determining a calibration coordinate of the calibrated gray point according to the first red-green component ratio and the first blue-green component ratio, wherein the first red-green component ratio is the ratio of the calibrated red component to the calibrated green component, and the first blue-green component ratio is the ratio of the calibrated blue component to the calibrated green component;
calculating a second red-green component ratio and a second blue-green component ratio of the gray point to be processed, and determining a coordinate to be processed of the gray point to be processed according to the second red-green component ratio and the second blue-green component ratio, wherein the second red-green component ratio is the ratio of the red component to be processed to the green component to be processed, and the second blue-green component ratio is the ratio of the blue component to be processed to the green component to be processed;
and determining the confidence coefficient to be processed of the gray point to be processed in each frame of the original image according to the Euclidean distance between the calibration coordinate and the coordinate to be processed.
9. A gray spot detecting device, comprising:
the device comprises a first determining module, a second determining module and a processing module, wherein the first determining module is used for determining gray points to be processed in at least two frames of original images, the at least two frames of original images are respectively generated by different image sensors, and each image sensor can receive light rays with the same wave band;
the calibration module is used for determining a calibration gray point of the image sensor under a preset light source;
the second determining module is used for determining the confidence coefficient to be processed of the gray point to be processed in each frame of the original image according to the calibrated gray point;
the fusion module is used for fusing the confidence coefficients to be processed of the at least two frames of original images to determine a target confidence coefficient of the gray points to be processed;
and the screening module is used for taking the gray points to be processed with the target confidence degrees larger than the preset confidence degrees as target gray points.
10. An electronic device, characterized in that the electronic device comprises one or more processors and a memory, the memory storing a computer program which, when executed by the processors, carries out the steps of the gray point detection method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the gray point detection method according to any one of claims 1 to 8.
CN202110456056.8A 2021-04-26 2021-04-26 Gray dot detection method, gray dot detection device, electronic device, and storage medium Withdrawn CN112950635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110456056.8A CN112950635A (en) 2021-04-26 2021-04-26 Gray dot detection method, gray dot detection device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110456056.8A CN112950635A (en) 2021-04-26 2021-04-26 Gray dot detection method, gray dot detection device, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN112950635A true CN112950635A (en) 2021-06-11

Family

ID=76233504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110456056.8A Withdrawn CN112950635A (en) 2021-04-26 2021-04-26 Gray dot detection method, gray dot detection device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN112950635A (en)

Citations (13)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040120575A1 (en) * 2002-12-20 2004-06-24 Cheng Nai-Sheng Automatic white balance correction method for image capturing apparatus
CN101911715A (en) * 2008-02-13 2010-12-08 高通股份有限公司 The white balance calibration that is used for digital camera device
CN103491357A (en) * 2013-10-14 2014-01-01 旗瀚科技有限公司 Auto white balance treatment method of image sensor
CN105025215A (en) * 2014-04-23 2015-11-04 中兴通讯股份有限公司 Method and apparatus for achieving group shooting through terminal on the basis of multiple pick-up heads
CN105828058A (en) * 2015-05-29 2016-08-03 维沃移动通信有限公司 Adjustment method and device of white balance
US20160366388A1 (en) * 2015-06-10 2016-12-15 Microsoft Technology Licensing, Llc Methods and devices for gray point estimation in digital images
CN108805103A (en) * 2018-06-29 2018-11-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110022469A (en) * 2019-04-09 2019-07-16 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111526351A (en) * 2020-04-27 2020-08-11 展讯半导体(南京)有限公司 White balance synchronization method, white balance synchronization system, electronic device, medium, and digital imaging device
CN112601063A (en) * 2020-12-07 2021-04-02 深圳市福日中诺电子科技有限公司 Mixed color temperature white balance method
CN112422942A (en) * 2020-12-09 2021-02-26 Oppo(重庆)智能科技有限公司 White balance synchronization method, lens module and electronic equipment
CN112598594A (en) * 2020-12-24 2021-04-02 Oppo(重庆)智能科技有限公司 Color consistency correction method and related device
CN112652027A (en) * 2020-12-30 2021-04-13 凌云光技术股份有限公司 Pseudo-color detection algorithm and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANLIN QIAN ET AL.: "On Finding Gray Pixels", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-9 *
WANG FEI: "图像颜色恒常性计算研究" (Research on Color Constancy Computation of Images), China Doctoral Dissertations Full-text Database, Information Science and Technology Section *

Similar Documents

Publication Publication Date Title
US12002233B2 (en) Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
US10997696B2 (en) Image processing method, apparatus and device
CN110021047B (en) Image processing method, image processing apparatus, and storage medium
CN107977940B (en) Background blurring processing method, device and equipment
Zheng et al. Single-image vignetting correction
US7444017B2 (en) Detecting irises and pupils in images of humans
US20200311981A1 (en) Image processing method, image processing apparatus, image processing system, and learnt model manufacturing method
Zheng et al. Single-image vignetting correction using radial gradient symmetry
WO2019105206A1 (en) Method and device for image processing
US7907786B2 (en) Red-eye detection and correction
WO2021057474A1 (en) Method and apparatus for focusing on subject, and electronic device, and storage medium
US11195055B2 (en) Image processing method, image processing apparatus, storage medium, image processing system, and manufacturing method of learnt model
CN108989699B (en) Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium
van Zwanenberg et al. Edge detection techniques for quantifying spatial imaging system performance and image quality
CN111091507A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN114037992A (en) Instrument reading identification method and device, electronic equipment and storage medium
CN114445314A (en) Image fusion method and device, electronic equipment and storage medium
CN117061868A (en) Automatic photographing device based on image recognition
CN114049549A (en) Underwater visual recognition method, system and computer readable storage medium
CN113673474A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113313645A (en) Image processing method, image processing apparatus, terminal, and readable storage medium
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN108876845B (en) Fresnel pattern center determining method and device
CN112950635A (en) Gray dot detection method, gray dot detection device, electronic device, and storage medium
CN113516595B (en) Image processing method, image processing apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210611