CN106296658B - Method for improving scene light source estimation accuracy based on camera response function - Google Patents

Method for improving scene light source estimation accuracy based on camera response function Download PDF

Info

Publication number
CN106296658B
CN106296658B CN201610606411.4A CN201610606411A CN 106296658 B
Authority
CN
China
Prior art keywords
image
camera
light source
calculated
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610606411.4A
Other languages
Chinese (zh)
Other versions
CN106296658A (en)
Inventor
李永杰
高绍兵
张明
罗福亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201610606411.4A priority Critical patent/CN106296658B/en
Publication of CN106296658A publication Critical patent/CN106296658A/en
Application granted granted Critical
Publication of CN106296658B publication Critical patent/CN106296658B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The invention discloses a method for improving scene light source estimation accuracy based on the camera response function. First, the camera conversion matrix between the training images and the test image is computed. This matrix is then used to convert the training images and their ground-truth light sources into the corresponding images and light sources under the test image's camera. Features are extracted from the converted images, and the regression matrix between the features and the ground-truth light sources is learned. Finally, this regression matrix is used to estimate the light source of the test image, achieving light source estimation across different image databases. The method has no free parameters: the conversion matrix between cameras and the regression matrix between training image features and light sources need only be computed once, after which they can be applied directly to estimate the light source of images shot by a camera different from the one used for the training images, effectively improving the accuracy of light source estimation across different cameras.

Description

Method for improving scene light source estimation accuracy based on camera response function
Technical field
The invention belongs to the technical fields of computer vision and image processing, and in particular relates to the design of a method for improving scene light source estimation accuracy based on the camera response function.
Background technology
Color constancy refers to the perceptual property by which people perceive the surface color of an object as roughly constant even when the color of the light source illuminating the surface changes. Computational color constancy refers to endowing an imaging device (such as a digital camera) with this ability through algorithms. Computational color constancy methods fall mainly into learning-based algorithms and static methods. Static methods have relatively large estimation errors, and their accuracy is often insufficient for engineering requirements; learning-based algorithms, by learning complex image features, can produce accurate light source estimates. However, learning-based algorithms are still not widely applicable. Besides their relatively high computational cost, an important reason is that different cameras have different color sensitivity (camera response) functions. As a result, algorithm parameters learned on an image set shot by one camera cannot be used effectively to estimate the scene light source of images shot by other cameras; that is, cross-camera light source estimation performance is poor.
Researchers in the field of computational color constancy have long studied the influence of the camera response function on chromatic adaptation. A typical example is the method proposed by Finlayson G D et al. in 1994; reference: Finlayson G D, Drew M S, Funt B V. Spectral sharpening: sensor transformations for improved color constancy [J]. Journal of the Optical Society of America A, 1994, 11(5): 1553-63. This method improves the performance of color constancy algorithms by simplifying the light source transformation characteristics, and can significantly improve the accuracy with which learning-based methods estimate the light source of images shot by the same camera; however, it still cannot improve the accuracy of scene light source estimation across different cameras. At present, no learning-based color constancy algorithm effectively solves this problem.
Summary of the invention
The purpose of the present invention is to solve the problem that prior-art learning-based methods cannot effectively improve light source estimation accuracy across different cameras, by proposing a method for improving scene light source estimation accuracy based on the camera response function.
The technical scheme of the present invention is a method for improving scene light source estimation accuracy based on the camera response function, comprising the following steps:
S1. Compute the camera conversion matrix: using least squares, compute the conversion matrix between the responses of the camera color sensitivity (response) function used for the training images and that used for the test image to the same given surface reflectances, obtaining the camera conversion matrix;
S2. Convert the training image group: using the camera conversion matrix computed in step S1, convert a group of training images with known ground-truth light sources, together with those light sources, into the corresponding images and ground-truth light sources under the test image's camera;
S3. Compute the regression matrix for the test image: estimate the light sources of the images converted in step S2 using static light source estimation methods, and take all the estimates corresponding to each image, together with cross terms, as that image's features to obtain the corresponding feature matrix; then compute, by a regression method, the regression matrix between the image features and the light sources;
S4. Estimate the test image's light source: extract from the test image the same image features as in step S3 and multiply them by the regression matrix obtained in step S3 to obtain the estimated light source of the test image.
Further, step S2 is specifically:
The value of each pixel in a training image is multiplied by the camera conversion matrix computed in step S1; the resulting value is the pixel value at the corresponding position of the converted image. Likewise, the ground-truth light source of the training image is multiplied by the camera conversion matrix computed in step S1; the resulting value is the ground-truth light source of the converted image.
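A minimal NumPy sketch of this conversion, assuming the 3×3 matrix C from step S1 acts on RGB column vectors (the orientation depends on how C was solved; the matrix values and the helper names below are hypothetical placeholders):

```python
import numpy as np

# Hypothetical stand-in for the 3x3 camera conversion matrix C of step S1.
C = np.array([[1.1, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.5]])

def convert_image(img, C):
    """Multiply every RGB pixel of an HxWx3 image by C (step S2)."""
    return img @ C.T  # row-vector pixel times C.T equals C times column-vector pixel

def convert_light_source(L, C):
    """Convert a ground-truth light source (length-3 vector) the same way."""
    return C @ L

img = np.full((4, 4, 3), [0.296, 0.326, 0.0948])  # toy training image, one repeated pixel
converted = convert_image(img, C)
print(converted.shape)  # (4, 4, 3): same layout, every pixel converted by the same matrix
```

The same matrix multiplies both the pixels and the ground-truth light source, which is what makes the converted pairs consistent under the test camera.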
Further, the static light source estimation methods in step S3 are the grey-world and grey-edge methods. Specifically, the features to be computed are the means of the R, G, B channels of the image and the means of the edges of the three channels; cross terms are then introduced, giving nine features in total: the means of the R, G, B channels, the means of the edges of the R, G, B channels, the square root of the product of the R and G channel means, the square root of the product of the R and B channel means, and the square root of the product of the G and B channel means.
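The nine features can be sketched as follows. The patent does not fix the edge operator or the normalization, so a simple finite-difference gradient magnitude and sum-to-one normalization of each estimate (which the embodiment's numbers follow) are assumed here, and `nine_features` is a hypothetical helper name:

```python
import numpy as np

def nine_features(img):
    """9 features of step S3 for an HxWx3 RGB image: grey-world channel
    means, grey-edge means, and three cross terms."""
    means = img.reshape(-1, 3).mean(axis=0)               # grey-world estimate
    gy, gx = np.gradient(img, axis=(0, 1))                # per-channel derivatives
    edges = np.hypot(gx, gy).reshape(-1, 3).mean(axis=0)  # grey-edge estimate
    means = means / means.sum()                           # normalize each estimate
    edges = edges / edges.sum()                           # to sum to 1 (as in the embodiment)
    r, g, b = means
    cross = np.sqrt([r * g, r * b, g * b])                # the three cross terms
    return np.concatenate([means, edges, cross])          # 9 features

img = np.random.default_rng(1).uniform(0, 1, size=(8, 8, 3))
f = nine_features(img)
print(f.shape)  # (9,)
```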
Further, the method for the recurrence in step S3 is nonlinear neural network, support vector machines or least square method.
The beneficial effects of the invention are as follows:The present invention calculates training image and converts square to the camera between test image first Then training image and its real light sources are converted to corresponding image and true light under test image camera by battle array using this matrix Source, then extracts the feature of converted images, and the regression matrix between learning characteristic and its real light sources finally utilizes this recurrence Matrix is estimated to complete the light source of test image, to realize the estimation of the light source between different images library.The present invention does not appoint What parameter, the transition matrix between camera and the regression matrix between training image feature and light source need to only calculate and once can It decides, can be directly used for the light source for the image for estimating that camera is shot used in different training images, Ke Yiyou Effect ground promotes the accuracy of light source estimation between different cameral.
Description of the drawings
Fig. 1 is the flow chart of the method for improving scene light source estimation accuracy based on the camera response function provided by the invention.
Fig. 2 is the camera response function curve of the SONY DXC 930 camera in embodiment one of the present invention.
Fig. 3 is the camera response function curve of the CANON 5D camera in embodiment one of the present invention.
Fig. 4 is the original image to be corrected in embodiment two of the present invention.
Fig. 5 is the image after correction using the light source color value in embodiment two of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are further described below with reference to the accompanying drawings.
Different cameras have different color sensitivity functions; this difference can be removed by learning a camera conversion matrix. Based on this, the present invention provides a method for improving scene light source estimation accuracy based on the camera response function, which, as shown in Fig. 1, comprises the following steps:
S1. Compute the camera conversion matrix: using least squares, compute the conversion matrix between the responses of the camera color sensitivity function used for the training images and that used for the test image to the same given surface reflectances, obtaining the camera conversion matrix.
S2. Convert the training image group: using the camera conversion matrix computed in step S1, convert a group of training images with known ground-truth light sources, together with those light sources, into the corresponding images and ground-truth light sources under the test image's camera.
The value of each pixel in a training image is multiplied by the camera conversion matrix computed in step S1; the resulting value is the pixel value at the corresponding position of the converted image. Likewise, the ground-truth light source of the training image is multiplied by the camera conversion matrix computed in step S1; the resulting value is the ground-truth light source of the converted image.
S3. Compute the regression matrix for the test image: estimate the light sources of the images converted in step S2 using static light source estimation methods, and take all the estimates corresponding to each image, together with cross terms, as that image's features to obtain the corresponding feature matrix; then compute, by a regression method, the regression matrix between the image features and the light sources.
In the embodiment of the present invention, the static light source estimation methods are the grey-world and grey-edge methods. Specifically, the features to be computed are the means of the R, G, B channels of the image and the means of the edges of the three channels; cross terms are then introduced, giving nine features in total: the means of the R, G, B channels, the means of the edges of the R, G, B channels, the square root of the product of the R and G channel means, the square root of the product of the R and B channel means, and the square root of the product of the G and B channel means.
S4. Estimate the test image's light source: extract from the test image the same image features as in step S3 and multiply them by the regression matrix obtained in step S3 to obtain the estimated light source of the test image.
The scene light source color estimated for an image after step S4 can be used directly in subsequent computer vision applications, for example by dividing each color component of the original input color image by the corresponding light source color value computed in step S4, so as to remove the light source color from the color image. In addition, image tone correction and white balance processing also require the scene light source color estimated in step S4.
A specific embodiment of the method for improving scene light source estimation accuracy based on the camera response function provided by the invention is described further below:
Embodiment one:
The synthetic surface reflectances S are taken from the internationally recognized image database website for scene light source color estimation, and the 321 color-cast images T shot with the SONY DXC 930 camera, together with their ground-truth light sources L, are downloaded from that database as the training set. Another image from the database, IMG_0332.png, shot with a CANON 5D camera and of size 1460*2193, is downloaded as the test image. Neither the training images nor the test image has undergone any in-camera preprocessing (such as white balance or gamma correction). The detailed steps of the invention are then as follows:
S1. Compute the camera conversion matrix: using least squares, compute the conversion matrix between the responses of the color sensitivity function of the training camera (SONY DXC 930) and that of the test camera (CANON 5D) to the same given surface reflectances, obtaining the camera conversion matrix.
The specific process of computing the camera conversion matrix from the SONY DXC 930 to the CANON 5D is as follows:
First, the response R1 of the SONY DXC 930 camera to the given surface reflectances S is computed according to formula (1):
R1=CSS1 × S (1)
where CSS1 denotes the camera color sensitivity function of the SONY DXC 930; its curve is shown in Fig. 2.
Then, the response R2 of the CANON 5D camera to the given surface reflectances S is computed according to formula (2):
R2 = CSS2 × S (2)
where CSS2 denotes the camera color sensitivity function of the CANON 5D; its curve is shown in Fig. 3.
Finally, through formula (3) combined with least squares, the camera conversion matrix C from the SONY DXC 930 to the CANON 5D is computed:
R2=R1 × C (3)
The final calculation result is:
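The least-squares solve of formula (3) can be sketched as follows. The reflectances and sensitivity curves below are synthetic stand-ins (not the actual SONY/CANON data); the test camera's sensitivities are built as an exact linear mix of the training camera's, so the recovered matrix C can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 100 surface reflectances sampled at 31 wavelengths,
# and a 3x31 color sensitivity function CSS1 for the training camera.
S = rng.uniform(0.0, 1.0, size=(100, 31))  # rows: surfaces
CSS1 = rng.uniform(0.0, 1.0, size=(3, 31))

# For this check, the test camera's sensitivities are an exact 3x3 mix A of
# the training camera's, so the conversion matrix is recoverable exactly.
A = np.array([[1.20, 0.10, 0.00],
              [0.05, 0.90, 0.10],
              [0.00, 0.20, 1.10]])
CSS2 = A @ CSS1

R1 = S @ CSS1.T  # formula (1): training-camera responses, one row per surface
R2 = S @ CSS2.T  # formula (2): test-camera responses

# Formula (3): R2 = R1 x C, solved for the 3x3 matrix C by least squares.
C, *_ = np.linalg.lstsq(R1, R2, rcond=None)
print(np.allclose(C, A.T))  # True: C recovers the mixing matrix (transposed)
```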
S2. Convert the training image group: using the camera conversion matrix C computed in step S1, convert the training images with known ground-truth light sources (the 321 color-cast images T), together with their ground-truth light sources L, into the corresponding images TC and ground-truth light sources LC under the test image's camera.
The specific process of converting the training set images into the corresponding images under the test image's camera is as follows:
Each pixel of a training set image is multiplied by the camera conversion matrix C computed in step S1. Taking the pixel (0.296, 0.326, 0.0948) of the training set image apples2_syl-cwf.GIF as an example, after conversion it becomes:
the value (1.2453, 0.5855, 0.0598), which is the pixel value at the corresponding position of the converted image.
The above process takes a single pixel as an illustrative example; the actual calculation is carried out over all pixels of the entire image.
The specific process of converting the ground-truth light sources L of the training set images into the corresponding light sources under the test image's camera is as follows:
The ground-truth light source of a training set image is multiplied by the camera conversion matrix C computed in step S1. Taking the ground-truth light source (0.4557, 0.4604, 0.7618) of the above training set image apples2_syl-cwf.GIF as an example, after conversion it becomes:
the value (1.7095, 1.2806, 0.4234), which is the ground-truth light source of the converted image.
S3. Compute the regression matrix for the test image: estimate the light sources of the images converted in step S2 using static light source estimation methods, and take all the estimates corresponding to each image, together with cross terms, as that image's features to obtain the corresponding feature matrix; then compute, by a regression method, the regression matrix between the image features and the light sources.
The static light source estimation methods include grey-world, grey-edge, white-patch, and other classical static light source estimation methods.
In the embodiment of the present invention, the two static methods grey-world and grey-edge are used to estimate the light source. Taking the above training set image apples2_syl-cwf.GIF as an example, the grey-world method gives R, G, B channel means of (0.1600, 0.7254, 0.1146), and the grey-edge method gives R, G, B channel edge means of (0.1572, 0.7295, 0.1133). After the cross terms are introduced, the feature vector f of the image apples2_syl-cwf.GIF is:
f = [0.1600 0.7254 0.1146 0.1572 0.7295 0.1133 0.3407 0.1354 0.2883]
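The three trailing cross terms of this vector follow directly from the grey-world channel means quoted above:

```python
import math

gw = (0.1600, 0.7254, 0.1146)        # grey-world channel means from the embodiment
cross = (math.sqrt(gw[0] * gw[1]),   # square root of the product of R and G means
         math.sqrt(gw[0] * gw[2]),   # square root of the product of R and B means
         math.sqrt(gw[1] * gw[2]))   # square root of the product of G and B means
print([round(c, 4) for c in cross])  # [0.3407, 0.1354, 0.2883], matching f
```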
The regression method in this step may be a linear or nonlinear regression method such as a nonlinear neural network, a support vector machine, or least squares. The embodiment of the present invention uses least squares as the regression method; the specific calculation process is:
Through formula (4) combined with least squares, the regression matrix R corresponding to the test image is computed:
L=F × R (4)
where F denotes the matrix composed of the feature vectors f of all the training set images. The final calculation result is:
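Formula (4) is solved the same way as formula (3). A synthetic sketch with placeholder data (321 feature vectors of length 9 in F, length-3 light sources in L; the values are random stand-ins, not the embodiment's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: 321 training images -> a 321x9 feature matrix F and a
# 321x3 matrix L of converted ground-truth light sources.
F = rng.uniform(0, 1, size=(321, 9))
R_true = rng.uniform(0, 1, size=(9, 3))
L = F @ R_true  # formula (4): L = F x R, noiseless by construction

# Least-squares estimate of the 9x3 regression matrix R.
R, *_ = np.linalg.lstsq(F, L, rcond=None)
print(np.allclose(R, R_true))  # True: R is recovered exactly in this synthetic case
```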
S4. Estimate the test image's light source: the same image features as in step S3 are extracted from the test image IMG_0332.png, giving the feature vector:
[0.2583 0.4140 0.3277 0.2464 0.4571 0.2965 0.3270 0.2909 0.3684]
This feature vector is multiplied by the regression matrix R obtained in step S3 to obtain the estimated light source of the test image:
The result (1.4601, 1.3322, 1.0858) becomes (0.3765, 0.3435, 0.2800) after normalization; (0.3765, 0.3435, 0.2800) is the finally estimated light source color of the test image.
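The patent does not spell out the normalization, but dividing each component by the component sum reproduces the quoted values:

```python
v = (1.4601, 1.3322, 1.0858)           # estimated light source before normalization
s = sum(v)
n = tuple(round(x / s, 4) for x in v)  # divide by the component sum (L1 normalization)
print(n)  # (0.3765, 0.3435, 0.28)
```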
Below, a specific embodiment gives a simple demonstration of a practical application of the finally estimated light source color of the test image, namely the tone correction of the image:
Embodiment two:
The light source color value of each color component computed in step S4 is used to correct the pixel value of the corresponding color component of the original input image. Taking the pixel value (0.459, 0.545, 0.472) of the test image input in step S4 as an example, the result after correction is (0.459/0.3765, 0.545/0.3435, 0.472/0.2800) = (1.2191, 1.5866, 1.6857); the corrected value is then multiplied by the standard white light coefficient 1/√3 to obtain (0.7038, 0.9160, 0.9732) as the pixel value of the corrected output image. The other pixel values of the original input image are calculated similarly, finally yielding the corrected color image.
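The per-pixel correction of embodiment two as a sketch (the coefficient 1/√3 is inferred from the quoted numbers; the patent's 0.7038 versus 0.7039 here is a last-digit rounding difference):

```python
import math

est = (0.3765, 0.3435, 0.2800)  # light source color estimated in step S4
pixel = (0.459, 0.545, 0.472)   # example pixel of the input test image

white = 1 / math.sqrt(3)        # standard white light coefficient
corrected = tuple(p / e * white for p, e in zip(pixel, est))
print([round(c, 4) for c in corrected])  # ≈ [0.7039, 0.916, 0.9732]
```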
Fig. 4 shows the original image to be corrected, and Fig. 5 shows the image after its colors are adjusted using the light source color value computed in step S4.
Those of ordinary skill in the art will understand that the embodiments described herein are intended to help the reader understand the principle of the invention, and it should be understood that the protection scope of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make, according to the technical teachings disclosed by the invention, various other specific variations and combinations that do not depart from the essence of the invention, and these variations and combinations remain within the protection scope of the invention.

Claims (4)

1. A method for improving scene light source estimation accuracy based on the camera response function, characterized by comprising the following steps:
S1. Compute the camera conversion matrix: using least squares, compute the conversion matrix between the responses of the camera color sensitivity function used for the training images and that used for the test image to the same given surface reflectances, obtaining the camera conversion matrix;
S2. Convert the training image group: using the camera conversion matrix computed in step S1, convert a group of training images with known ground-truth light sources, together with those light sources, into the corresponding images and ground-truth light sources under the test image's camera;
S3. Compute the regression matrix for the test image: estimate the light sources of the images converted in step S2 using static light source estimation methods, and take all the estimates corresponding to each image, together with cross terms, as that image's features to obtain the corresponding feature matrix; then compute, by a regression method, the regression matrix between the image features and the light sources;
S4. Estimate the test image's light source: extract from the test image the same image features as in step S3 to obtain the feature matrix corresponding to the test image, then multiply it by the regression matrix obtained in step S3 to obtain the estimated light source of the test image.
2. The method for improving scene light source estimation accuracy based on the camera response function according to claim 1, characterized in that step S2 is specifically:
the value of each pixel in a training image is multiplied by the camera conversion matrix computed in step S1, and the resulting value is the pixel value at the corresponding position of the converted image; the ground-truth light source of the training image is likewise multiplied by the camera conversion matrix computed in step S1, and the resulting value is the ground-truth light source of the converted image.
3. The method for improving scene light source estimation accuracy based on the camera response function according to claim 1, characterized in that the static light source estimation methods in step S3 are the grey-world and grey-edge methods, as follows: the features to be computed are the means of the R, G, B channels of the image and the means of the edges of the three channels, and cross terms are introduced, giving nine features in total: the means of the R, G, B channels, the means of the edges of the R, G, B channels, the square root of the product of the R and G channel means, the square root of the product of the R and B channel means, and the square root of the product of the G and B channel means.
4. The method for improving scene light source estimation accuracy based on the camera response function according to claim 1, characterized in that the regression method in step S3 is a nonlinear neural network, a support vector machine, or least squares.
CN201610606411.4A 2016-07-28 2016-07-28 Method for improving scene light source estimation accuracy based on camera response function Active CN106296658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610606411.4A CN106296658B (en) 2016-07-28 2016-07-28 Method for improving scene light source estimation accuracy based on camera response function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610606411.4A CN106296658B (en) 2016-07-28 2016-07-28 Method for improving scene light source estimation accuracy based on camera response function

Publications (2)

Publication Number Publication Date
CN106296658A CN106296658A (en) 2017-01-04
CN106296658B true CN106296658B (en) 2018-09-04

Family

ID=57662488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610606411.4A Active CN106296658B (en) 2016-07-28 2016-07-28 Method for improving scene light source estimation accuracy based on camera response function

Country Status (1)

Country Link
CN (1) CN106296658B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020097130A1 (en) * 2018-11-06 2020-05-14 Flir Commercial Systems, Inc. Response normalization for overlapped multi-image applications
EP3928503B1 (en) 2019-11-13 2024-04-17 Huawei Technologies Co., Ltd. Multi-hypothesis classification for color constancy
CN111372007B (en) * 2020-03-03 2021-11-12 荣耀终端有限公司 Ambient light illumination detection method and device and electronic equipment
CN112950662B (en) * 2021-03-24 2022-04-01 电子科技大学 Traffic scene space structure extraction method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1732696A (en) * 2002-11-12 2006-02-08 索尼株式会社 Light source estimating device, light source estimating method, and imaging device and image processing method
CN103258334B (en) * 2013-05-08 2015-11-18 University of Electronic Science and Technology of China Scene light source color estimation method for color images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130243085A1 (en) * 2012-03-15 2013-09-19 Samsung Electronics Co., Ltd. Method of multi-view video coding and decoding based on local illumination and contrast compensation of reference frames without extra bitrate overhead

Also Published As

Publication number Publication date
CN106296658A (en) 2017-01-04


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant