CN106296658A - Method for improving scene light source estimation accuracy based on camera response function - Google Patents

Method for improving scene light source estimation accuracy based on camera response function

Info

Publication number
CN106296658A
CN106296658A (application CN201610606411.4A)
Authority
CN
China
Prior art keywords
image
camera
light source
matrix
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610606411.4A
Other languages
Chinese (zh)
Other versions
CN106296658B (en)
Inventor
李永杰
高绍兵
张明
罗福亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610606411.4A
Publication of CN106296658A
Application granted
Publication of CN106296658B
Active legal status
Anticipated expiration legal status

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The invention discloses a method for improving scene light source estimation accuracy based on the camera response function. First, the camera transition matrix between the training images and the test image is calculated; this matrix is then used to convert the training images and their real light sources into the corresponding images and real light sources under the test-image camera; features are then extracted from the converted images, and a regression matrix between the features and their real light sources is learned; finally, this regression matrix is used to estimate the light source of the test image, thereby achieving light source estimation across different image libraries. The method has no free parameters: the transition matrix between cameras and the regression matrix between training-image features and light sources need only be calculated once, after which the method can be applied directly to estimate the light source of images captured by a camera different from the one used for the training images, effectively improving the accuracy of light source estimation between different cameras.

Description

Method for improving scene light source estimation accuracy based on camera response function
Technical field
The invention belongs to the technical fields of computer vision and image processing, and specifically relates to the design of a method for improving scene light source estimation accuracy based on the camera response function.
Background technology
Color constancy refers to the perceptual ability of humans to maintain a constant perception of an object's surface color when the color of the light source illuminating the object changes; computational color constancy refers to giving an imaging device (such as a digital camera) this ability through algorithms. Computational color constancy methods mainly fall into learning-based algorithms and static methods. Static methods have relatively large estimation errors, so their precision is difficult to meet engineering requirements; learning-based algorithms can obtain accurate light source estimates by learning complex image features. However, learning-based algorithms have not yet seen wide application. Besides their larger computational cost, an important reason is that different cameras have different color sensitivity response functions, so algorithm parameters learned on an image set captured by one camera cannot be effectively applied to scene light source estimation on images captured by another camera; that is, cross-camera light source estimation performs poorly.
Researchers in the field of computational color constancy have long studied the influence of the camera response function on chromatic adaptation. A typical method is the spectral sharpening proposed by Finlayson G D et al. in 1994; reference: Finlayson G D, Drew M S, Funt B V. Spectral sharpening: sensor transformations for improved color constancy [J]. Journal of the Optical Society of America A, 1994, 11(5): 1553-63. By simplifying the illuminant transform characteristics, this method improves the performance of color constancy algorithms, and can significantly improve the accuracy with which learning-based methods estimate the light source of images captured by the same camera, but it still cannot improve scene light source estimation accuracy across different cameras. At present, no learning-based color constancy algorithm effectively solves this problem.
Summary of the invention
The purpose of the invention is to solve the problem that learning-based methods in the prior art cannot effectively improve light source estimation accuracy across different cameras; to this end, a method for improving scene light source estimation accuracy based on the camera response function is proposed.
The technical scheme of the invention is a method for improving scene light source estimation accuracy based on the camera response function, comprising the following steps:
S1, calculating the camera transition matrix: using the least squares method, calculate the transition matrix from the color sensitivity response function of the camera used for the training images to the color sensitivity response function of the camera used for the test image, with respect to their response values to the same given surface reflectances, obtaining the camera transition matrix;
S2, converting the training image group: using the camera transition matrix calculated in step S1, convert a group of training images with known real light sources, together with their real light sources, into the corresponding images and real light sources under the test-image camera;
S3, calculating the test-image regression matrix: use static light source estimation methods to estimate the light source of the images converted in step S2; take all the estimation results for each image, together with their cross terms, as the features of that image, obtaining the corresponding feature matrix; then calculate the regression matrix between the test-image features and the light sources by a regression method;
S4, estimating the test image light source: extract from the test image the same image features as in step S3, multiply them by the regression matrix obtained in step S3, and obtain the estimated light source of the test image.
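Read together, steps S1-S4 form a linear pipeline, which can be sketched in NumPy. Everything below (sensitivities, images, illuminants) is synthetic, and plain normalized grey-world means stand in for the full nine-dimensional feature of step S3, so this is only a shape-level illustration of the data flow, not the patented embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# S1: transition matrix between two hypothetical cameras via least squares.
css1 = rng.random((3, 31))                  # sensitivities over 31 wavelengths
mix = rng.random((3, 3))
css2 = mix @ css1                           # second camera, exactly reachable
S = rng.random((31, 200))                   # synthetic surface reflectances
r1, r2 = css1 @ S, css2 @ S                 # response values of both cameras
C = np.linalg.lstsq(r1.T, r2.T, rcond=None)[0].T

# S2: convert synthetic training images and their true light sources with C.
train = rng.random((50, 8, 8, 3))           # 50 tiny RGB "training images"
illum = rng.random((50, 3))                 # their known real light sources
train_c = np.einsum('ij,nhwj->nhwi', C, train)
illum_c = illum @ C.T

# S3: features of the converted images, then a least-squares regression matrix.
F = train_c.mean(axis=(1, 2))               # grey-world means as stand-in features
F = F / F.sum(axis=1, keepdims=True)
R = np.linalg.lstsq(F, illum_c, rcond=None)[0]

# S4: estimate the light source of a test image from the same features.
test = rng.random((8, 8, 3))
f = test.mean(axis=(0, 1))
f = f / f.sum()
est = f @ R
print(est.shape)                            # (3,)
```

Because css2 is constructed as an exact linear mixture of css1, the least-squares solve in S1 recovers that mixture exactly, which is a convenient sanity check for the convention used (C applied from the left to response vectors).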
Further, step S2 is specifically:
Multiply the value of each pixel in the training images by the camera transition matrix calculated in step S1; the resulting value is the pixel value at the corresponding position of the converted image. Then multiply the real light source of the training images by the camera transition matrix calculated in step S1; the resulting value is the real light source of the converted image.
Further, the static light source estimation methods in step S3 are the grey world and grey edge methods. The specific process is as follows: the features to be calculated are the means of the R, G, B channels of the image and the means of the three channel edges; cross terms are then introduced, so the final features are the means of the R, G, B channels, the means of the R, G, B channel edges, the square root of the product of the means of the R and G channels, the square root of the product of the means of the R and B channels, and the square root of the product of the means of the G and B channels, 9 features in total.
Further, the regression method in step S3 is a nonlinear neural network, a support vector machine, or the least squares method.
The beneficial effects of the invention are: the invention first calculates the camera transition matrix from the training images to the test image, then uses this matrix to convert the training images and their real light sources into the corresponding images and real light sources under the test-image camera, then extracts features of the converted images and learns the regression matrix between the features and their real light sources, and finally uses this regression matrix to estimate the light source of the test image, thereby achieving light source estimation across different image libraries. The method has no free parameters: the transition matrix between cameras and the regression matrix between training-image features and light sources need only be calculated once, after which the method can be applied directly to estimate the light source of images captured by a camera different from the one used for the training images, effectively improving the accuracy of light source estimation between different cameras.
Brief description of the drawings
Fig. 1 is a flowchart of the method for improving scene light source estimation accuracy based on the camera response function provided by the invention.
Fig. 2 is the SONY DXC 930 camera response function curve of embodiment one of the invention.
Fig. 3 is the CANON 5D camera response function curve of embodiment one of the invention.
Fig. 4 is the original image to be corrected of embodiment two of the invention.
Fig. 5 is the image of embodiment two of the invention after correction using the estimated light source colour values.
Detailed description of the invention
The embodiments of the invention are further described below with reference to the accompanying drawings.
Different cameras have different color sensitivity response functions, and this difference can be removed by learning a camera transition matrix. Based on this, the invention provides a method for improving scene light source estimation accuracy based on the camera response function, which, as shown in Fig. 1, comprises the following steps:
S1, calculating the camera transition matrix: using the least squares method, calculate the transition matrix from the color sensitivity response function of the camera used for the training images to the color sensitivity response function of the camera used for the test image, with respect to their response values to the same given surface reflectances, obtaining the camera transition matrix.
S2, converting the training image group: using the camera transition matrix calculated in step S1, convert a group of training images with known real light sources, together with their real light sources, into the corresponding images and real light sources under the test-image camera.
Multiply the value of each pixel in the training images by the camera transition matrix calculated in step S1; the resulting value is the pixel value at the corresponding position of the converted image. Then multiply the real light source of the training images by the camera transition matrix calculated in step S1; the resulting value is the real light source of the converted image.
S3, calculating the test-image regression matrix: use static light source estimation methods to estimate the light source of the images converted in step S2; take all the estimation results for each image, together with their cross terms, as the features of that image, obtaining the corresponding feature matrix; then calculate the regression matrix between the test-image features and the light sources by a regression method.
In this embodiment of the invention, the static light source estimation methods are the grey world and grey edge methods. The specific process is as follows: the features to be calculated are the means of the R, G, B channels of the image and the means of the three channel edges; cross terms are then introduced, so the final features are the means of the R, G, B channels, the means of the R, G, B channel edges, the square root of the product of the means of the R and G channels, the square root of the product of the means of the R and B channels, and the square root of the product of the means of the G and B channels, 9 features in total.
S4, estimating the test image light source: extract from the test image the same image features as in step S3, multiply them by the regression matrix obtained in step S3, and obtain the estimated light source of the test image.
After step S4, the calculated scene light source colour estimate of the image can be used directly in subsequent computer vision applications, for example dividing each color component of the original input colour image by the corresponding light source colour value calculated in step S4, so as to remove the light source colour from the colour image. In addition, image tint correction and white balance processing also require the scene light source colour estimated in step S4.
The method for improving scene light source estimation accuracy based on the camera response function provided by the invention is further described below with a specific embodiment:
Embodiment one:
From an internationally recognized image library website for scene light source colour estimation, download the synthetic surface reflectances S, and download the 321 colour-cast images T captured by a SONY DXC 930 camera in this image library, together with their real light sources L, as the training set. At the same time, download from another image library an image IMG_0332.png captured by a CANON 5D camera as the test image; the image size is 1460*2193. Neither the training set images nor the test image have undergone any in-camera preprocessing (no white balance or gamma correction). The detailed steps of the invention are then as follows:
S1, calculating the camera transition matrix: using the least squares method, calculate the transition matrix from the color sensitivity response function of the camera used for the training images (SONY DXC 930) to the color sensitivity response function of the camera used for the test image (CANON 5D), with respect to their response values to the same given surface reflectances, obtaining the camera transition matrix.
The specific process of calculating the camera transition matrix from the SONY DXC 930 to the CANON 5D is as follows:
First, calculate the response values R1 of the SONY DXC 930 camera to the given surface reflectances S according to formula (1):
R1=CSS1 × S (1)
where CSS1 denotes the camera color sensitivity response function of the SONY DXC 930; its curve is shown in Fig. 2.
Then calculate the response values R2 of the CANON 5D camera to the same surface reflectances S according to formula (2):
R2=CSS2 × S (2)
where CSS2 denotes the camera color sensitivity response function of the CANON 5D; its curve is shown in Fig. 3.
Finally, using formula (3) in combination with the least squares method, the camera transition matrix C from the SONY DXC 930 to the CANON 5D is calculated (C acts from the left on the response values, consistent with the per-pixel conversion below):
R2 = C × R1 (3)
Final calculation result is:
C = [  3.7087   0.5401  -0.3009
      -0.0050   1.5903   0.7228
       0.0104   0.0171   0.5392 ]
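The least-squares solve of step S1 can be sketched and checked on synthetic data. The function below mirrors formula (3) with C applied from the left; the "true" matrix and the responses are invented so that the recovered C is known exactly.

```python
import numpy as np

def transition_matrix(r1, r2):
    """Least-squares 3x3 matrix C with C @ r1 ~= r2, where r1 and r2 are the
    3 x M response values of the two cameras to the same M reflectances (S1)."""
    c_t, *_ = np.linalg.lstsq(r1.T, r2.T, rcond=None)
    return c_t.T

# Hypothetical check: build responses whose relation is known exactly.
rng = np.random.default_rng(42)
true_c = np.array([[1.1, 0.2, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.2, 0.0, 1.3]])
r1 = rng.random((3, 100))        # training-camera responses (synthetic)
r2 = true_c @ r1                 # test-camera responses
print(np.allclose(transition_matrix(r1, r2), true_c))   # True
```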
S2, converting the training image group: using the camera transition matrix C calculated in step S1, convert the training images of known real light sources (the 321 colour-cast images T) together with their real light sources L into the corresponding images TC and real light sources LC under the test-image camera.
The specific process of converting the training set images into the corresponding images under the test-image camera is as follows:
Multiply each pixel of the training set images by the transition matrix C calculated in step S1. Taking a pixel (0.296, 0.326, 0.0948) of the training set image apples2_syl-cwf.GIF as an example, after conversion it becomes:

[  3.7087   0.5401  -0.3009 ]   [ 0.296  ]   [ 1.2453 ]
[ -0.0050   1.5903   0.7228 ] × [ 0.326  ] = [ 0.5855 ]
[  0.0104   0.0171   0.5392 ]   [ 0.0948 ]   [ 0.0598 ]

The resulting value (1.2453, 0.5855, 0.0598) is the pixel value at the corresponding position of the converted image.
The above process is illustrated for a single pixel; in the actual calculation it is carried out on every pixel of the entire image.
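Applying C to every pixel of an image is a single einsum in NumPy. The matrix and the sample pixel below are the ones from this embodiment, so the printed value can be checked against the worked example; the 2x2 image is a toy stand-in for a full training image.

```python
import numpy as np

# Transition matrix C from the embodiment (SONY DXC 930 -> CANON 5D)
C = np.array([[ 3.7087, 0.5401, -0.3009],
              [-0.0050, 1.5903,  0.7228],
              [ 0.0104, 0.0171,  0.5392]])

def convert_image(img, C):
    """Apply the 3x3 camera transition matrix to every RGB pixel (step S2)."""
    return np.einsum('ij,hwj->hwi', C, img)   # C @ p for each pixel p

# Toy 2x2 image made of the sample pixel from the text.
img = np.tile(np.array([0.296, 0.326, 0.0948]), (2, 2, 1))
out = convert_image(img, C)
print(np.round(out[0, 0], 4))   # [1.2453 0.5855 0.0598]
```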
The specific process of converting the training set real light sources L into the corresponding real light sources under the test-image camera is as follows:
Multiply the real light source of each training set image by the transition matrix C calculated in step S1. Taking the real light source (0.4557, 0.4604, 0.7618) of the above training set image apples2_syl-cwf.GIF as an example, after conversion it becomes:

[  3.7087   0.5401  -0.3009 ]   [ 0.4557 ]   [ 1.7095 ]
[ -0.0050   1.5903   0.7228 ] × [ 0.4604 ] = [ 1.2806 ]
[  0.0104   0.0171   0.5392 ]   [ 0.7618 ]   [ 0.4234 ]

The resulting value (1.7095, 1.2806, 0.4234) is the real light source corresponding to the converted image.
S3, calculating the test-image regression matrix: use static light source estimation methods to estimate the light source of the images converted in step S2; take all the estimation results for each image, together with their cross terms, as the features of that image, obtaining the corresponding feature matrix; then calculate the regression matrix between the test-image features and the light sources by a regression method.
The static light source estimation methods include classic static methods such as grey world, grey edge, and white patch.
In this embodiment of the invention, the two static methods grey world and grey edge are used to estimate the light source. Taking the above training set image apples2_syl-cwf.GIF as an example, the grey world method gives channel means (0.1600, 0.7254, 0.1146) for R, G, B, and the grey edge method gives edge means (0.1572, 0.7295, 0.1133) for the three channels. After introducing the cross terms, the feature vector of image apples2_syl-cwf.GIF is:
f = [0.1600 0.7254 0.1146 0.1572 0.7295 0.1133 0.3407 0.1354 0.2883]
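The nine-dimensional feature can be sketched as follows. The means, normalization, and cross terms follow the text directly (the cross terms are square roots of pairwise products of the grey-world means, matching 0.3407 = sqrt(0.1600 x 0.7254)); the grey-edge part uses np.gradient as a stand-in edge operator, since the patent does not fix a particular derivative filter, so that choice is an assumption here.

```python
import numpy as np

def nine_features(img):
    """9-D feature of step S3: normalized grey-world estimate, normalized
    grey-edge estimate, and square roots of the pairwise products of the
    grey-world channel means."""
    gw = img.mean(axis=(0, 1))
    gw = gw / gw.sum()                       # grey-world estimate, L1-normalized
    dy, dx = np.gradient(img, axis=(0, 1))
    ge = np.hypot(dy, dx).mean(axis=(0, 1))  # mean edge strength per channel
    ge = ge / ge.sum()                       # grey-edge estimate, L1-normalized
    r, g, b = gw
    cross = np.sqrt([r * g, r * b, g * b])   # the three cross terms
    return np.concatenate([gw, ge, cross])

f = nine_features(np.random.default_rng(3).random((16, 16, 3)))
print(f.shape)   # (9,)
```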
The regression method in this step can be a linear or nonlinear regression method such as a nonlinear neural network, a support vector machine, or the least squares method. In this embodiment of the invention the least squares method is used as the regression method; the specific calculation is:
Using formula (4) in combination with the least squares method, the regression matrix R corresponding to the test image is calculated:
L=F × R (4)
where F denotes the matrix composed of the feature vectors f of all training set images, and L the matrix of their converted real light sources. The final calculation result is:
R = [ -12.9915   -7.6758   -2.6435
        4.4385   -1.1675   -1.7950
       19.8971   12.8949    6.6503
        3.1322   -0.4298   -1.0170
        5.8480    2.7373   -0.0522
        2.3639    1.7440    3.2810
       25.5584   21.1764   10.6852
       -3.6503   -5.0820   -4.0196
      -40.6794  -20.4598   -7.3044 ]
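The regression solve of formula (4) is itself one least-squares call. The feature and illuminant matrices below are synthetic with a known exact solution, so the fitted R can be verified; only the shapes (321 training rows, 9 features, 3 illuminant channels) are taken from the embodiment.

```python
import numpy as np

def fit_regression(F, L):
    """Least-squares R with F @ R ~= L (formula (4), step S3):
    F is N x 9 (one feature row per training image), L is N x 3."""
    R, *_ = np.linalg.lstsq(F, L, rcond=None)
    return R

rng = np.random.default_rng(7)
F = rng.random((321, 9))        # as many rows as the 321 training images
R_true = rng.random((9, 3))
L = F @ R_true                  # synthetic "real light sources"
print(np.allclose(fit_regression(F, L), R_true))   # True
```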
S4, estimating the test image light source: extract from the test image IMG_0332.png the same image features as in step S3, obtaining the feature vector:
f = [0.2583 0.4140 0.3277 0.2464 0.4571 0.2965 0.3270 0.2909 0.3684]
Multiplying it by the regression matrix R obtained in step S3 gives the estimated light source of the test image:

f × R = (1.4601, 1.3322, 1.0858)
After normalization, (1.4601, 1.3322, 1.0858) becomes (0.3765, 0.3435, 0.2800), which is the finally estimated light source colour of the test image.
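Steps S3-S4 can be replayed with the matrices printed above: multiply the test-image feature row by R, then L1-normalize. Because the displayed entries of R are rounded to four decimals, the product comes out close to, rather than exactly equal to, the values in the text.

```python
import numpy as np

# Regression matrix R and test-image features as printed above (rounded).
R = np.array([[-12.9915,  -7.6758,  -2.6435],
              [  4.4385,  -1.1675,  -1.7950],
              [ 19.8971,  12.8949,   6.6503],
              [  3.1322,  -0.4298,  -1.0170],
              [  5.8480,   2.7373,  -0.0522],
              [  2.3639,   1.7440,   3.2810],
              [ 25.5584,  21.1764,  10.6852],
              [ -3.6503,  -5.0820,  -4.0196],
              [-40.6794, -20.4598,  -7.3044]])
f = np.array([0.2583, 0.4140, 0.3277, 0.2464, 0.4571,
              0.2965, 0.3270, 0.2909, 0.3684])

raw = f @ R                  # close to (1.4601, 1.3322, 1.0858)
est = raw / raw.sum()        # L1 normalization
print(np.round(est, 3))      # close to the patent's (0.3765, 0.3435, 0.2800)
```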
The light source colour finally estimated by the invention is briefly demonstrated below in a practical application, taking colour cast correction of the test image as an example:
Embodiment two:
Using the light source colour values of the color components calculated in step S4, correct the pixel values of the corresponding color components of the original input image. Taking a pixel value (0.459, 0.545, 0.472) of the input test image of step S4 as an example, the result after correction is (0.459/0.3765, 0.545/0.3435, 0.472/0.2800) = (1.2191, 1.5866, 1.6857); the corrected value is then multiplied by the standard white value 1/√3 ≈ 0.5774 to obtain (0.7038, 0.9160, 0.9732) as the pixel value of the finally output corrected image. The other pixel values of the original input image are calculated similarly, finally giving the corrected colour image.
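The per-pixel correction of embodiment two, dividing by the estimated light source and rescaling by the standard white 1/√3, is two array operations; the sample pixel and illuminant are the ones from the text.

```python
import numpy as np

def correct_pixel(p, illum):
    """Von-Kries-style correction (embodiment two): divide out the estimated
    light source, then rescale by the standard white 1/sqrt(3)."""
    return np.asarray(p) / np.asarray(illum) / np.sqrt(3)

p = [0.459, 0.545, 0.472]            # sample pixel of the input test image
illum = [0.3765, 0.3435, 0.2800]     # light source estimated in step S4
print(np.round(correct_pixel(p, illum), 3))   # [0.704 0.916 0.973]
```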
Fig. 4 shows the original image to be corrected; the image after colour correction using the light source colour values calculated in step S4 is shown in Fig. 5.
Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help the reader understand the principle of the invention, and it should be understood that the protection scope of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make various other specific variations and combinations according to the technical teachings disclosed by the invention without departing from the essence of the invention, and these variations and combinations remain within the protection scope of the invention.

Claims (4)

1. A method for improving scene light source estimation accuracy based on the camera response function, characterised by comprising the following steps:
S1, calculating the camera transition matrix: using the least squares method, calculate the transition matrix from the color sensitivity response function of the camera used for the training images to the color sensitivity response function of the camera used for the test image, with respect to their response values to the same given surface reflectances, obtaining the camera transition matrix;
S2, converting the training image group: using the camera transition matrix calculated in step S1, convert a group of training images with known real light sources, together with their real light sources, into the corresponding images and real light sources under the test-image camera;
S3, calculating the test-image regression matrix: use static light source estimation methods to estimate the light source of the images converted in step S2; take all the estimation results for each image, together with their cross terms, as the features of that image, obtaining the corresponding feature matrix; then calculate the regression matrix between the test-image features and the light sources by a regression method;
S4, estimating the test image light source: extract from the test image the same image features as in step S3, multiply them by the regression matrix obtained in step S3, and obtain the estimated light source of the test image.
2. The method for improving scene light source estimation accuracy based on the camera response function according to claim 1, characterised in that step S2 is specifically:
multiply the value of each pixel in the training images by the camera transition matrix calculated in step S1, the resulting value being the pixel value at the corresponding position of the converted image; then multiply the real light source of the training images by the camera transition matrix calculated in step S1, the resulting value being the real light source of the converted image.
3. The method for improving scene light source estimation accuracy based on the camera response function according to claim 1, characterised in that the static light source estimation methods in step S3 are the grey world and grey edge methods, the specific process being as follows: the features to be calculated are the means of the R, G, B channels of the image and the means of the three channel edges; cross terms are introduced, so the final features are the means of the R, G, B channels, the means of the R, G, B channel edges, the square root of the product of the means of the R and G channels, the square root of the product of the means of the R and B channels, and the square root of the product of the means of the G and B channels, 9 features in total.
4. The method for improving scene light source estimation accuracy based on the camera response function according to claim 1, characterised in that the regression method in step S3 is a nonlinear neural network, a support vector machine, or the least squares method.
CN201610606411.4A 2016-07-28 2016-07-28 Method for improving scene light source estimation accuracy based on camera response function Active CN106296658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610606411.4A CN106296658B (en) 2016-07-28 2016-07-28 Method for improving scene light source estimation accuracy based on camera response function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610606411.4A CN106296658B (en) 2016-07-28 2016-07-28 Method for improving scene light source estimation accuracy based on camera response function

Publications (2)

Publication Number Publication Date
CN106296658A true CN106296658A (en) 2017-01-04
CN106296658B CN106296658B (en) 2018-09-04

Family

ID=57662488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610606411.4A Active CN106296658B (en) 2016-07-28 2016-07-28 Method for improving scene light source estimation accuracy based on camera response function

Country Status (1)

Country Link
CN (1) CN106296658B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111372007A * 2020-03-03 2020-07-03 Huawei Technologies Co., Ltd. Ambient light illumination detection method and device and electronic equipment
CN112950662A * 2021-03-24 2021-06-11 University of Electronic Science and Technology of China Traffic scene space structure extraction method
CN113272855A * 2018-11-06 2021-08-17 FLIR Commercial Systems, Inc. Response normalization for overlapping multi-image applications
CN114586330A * 2019-11-13 2022-06-03 Huawei Technologies Co., Ltd. Multi-hypothesis classification of color constancy

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1732696A * 2002-11-12 2006-02-08 Sony Corporation Light source estimating device, light source estimating method, and imaging device and image processing method
US20130243085A1 (en) * 2012-03-15 2013-09-19 Samsung Electronics Co., Ltd. Method of multi-view video coding and decoding based on local illumination and contrast compensation of reference frames without extra bitrate overhead
CN103258334B (en) * 2013-05-08 2015-11-18 电子科技大学 The scene light source colour method of estimation of coloured image


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113272855A * 2018-11-06 2021-08-17 FLIR Commercial Systems, Inc. Response normalization for overlapping multi-image applications
CN114586330A * 2019-11-13 2022-06-03 Huawei Technologies Co., Ltd. Multi-hypothesis classification of color constancy
CN114586330B * 2019-11-13 2023-04-21 Huawei Technologies Co., Ltd. Multi-hypothesis classification of color constancy
US11949996B2 2019-11-13 2024-04-02 Huawei Technologies Co., Ltd. Automatic white balance correction for digital images using multi-hypothesis classification
CN111372007A * 2020-03-03 2020-07-03 Huawei Technologies Co., Ltd. Ambient light illumination detection method and device and electronic equipment
CN111372007B * 2020-03-03 2021-11-12 Honor Device Co., Ltd. Ambient light illumination detection method and device and electronic equipment
CN112950662A * 2021-03-24 2021-06-11 University of Electronic Science and Technology of China Traffic scene space structure extraction method
CN112950662B * 2021-03-24 2022-04-01 University of Electronic Science and Technology of China Traffic scene space structure extraction method

Also Published As

Publication number Publication date
CN106296658B (en) 2018-09-04

Similar Documents

Publication Publication Date Title
Galdran et al. Enhanced variational image dehazing
CN103905803B (en) A kind of color calibration method of image and device
Finlayson et al. Shades of gray and colour constancy
CN106296658A (en) A kind of scene light source based on camera response function estimates accuracy method for improving
CN102547063B (en) Natural sense color fusion method based on color contrast enhancement
CN103268499A (en) Human body skin detection method based on multi-spectral imaging
CN105359024B (en) Camera device and image capture method
US20160314361A1 (en) Method and system for classifying painted road markings in an automotive driver-vehicle-asistance device
Riffeser et al. WeCAPP-Wendelstein Calar Alto pixellensing project I-Tracing dark and bright matter in M 31
Milotta et al. Challenges in automatic Munsell color profiling for cultural heritage
CN104504722A (en) Method for correcting image colors through gray points
CN103091615A (en) Method to measure response curve of image sensor and device
CN114066857A (en) Infrared image quality evaluation method and device, electronic equipment and readable storage medium
CN103051922A (en) Establishment method of evaluation criterion parameters and method for evaluating image quality of display screen
CN110047059A (en) Image processing method, device, electronic equipment and readable storage medium storing program for executing
CN113962875A (en) Method and system for enhancing images using machine learning
CN115267180A (en) Chromatographic immunoassay method and device
CN108769505A (en) A kind of image procossing set method and electronic equipment
Thomas Illuminant estimation from uncalibrated multispectral images
Lee et al. Removal of specular reflections in tooth color image by perceptron neural nets
CN104050678A (en) Underwater monitoring color image quality measurement method
CN106204500A Method for keeping the image colour of the same scene constant across different cameras
Jiwani et al. Single image fog removal using depth estimation based on blur estimation
CN106295679B Colour image light source colour estimation method based on category correction
CN105049841A (en) Method for enhancing color displaying capability of color camera through single-channel pre-optical filter

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant