CN102938837A - Tone mapping method based on edge preservation total variation model - Google Patents

Tone mapping method based on edge preservation total variation model

Info

Publication number
CN102938837A
Authority
CN
China
Prior art keywords
image
input
output
intensity
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103508952A
Other languages
Chinese (zh)
Other versions
CN102938837B (en)
Inventor
周凡
苏卓
张宗伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201210350895.2A priority Critical patent/CN102938837B/en
Publication of CN102938837A publication Critical patent/CN102938837A/en
Application granted granted Critical
Publication of CN102938837B publication Critical patent/CN102938837B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a tone mapping method based on an edge-preserving total variation model. The method includes: inputting a high dynamic range image; reconstructing the light intensity of the input high dynamic range image; taking the logarithm of the reconstructed light intensity values; filtering the logarithmic light intensity with an edge-preserving total variation filter to obtain an image base layer; subtracting the image base layer from the logarithmic light intensity to obtain an image detail layer; calculating a compression scale factor and the output light intensity of the image; and dividing the compressed light intensity by the input image light intensity to obtain a new scale factor M. The scale factor M is applied to the red, green and blue channels respectively to obtain a color-compressed image, which is then gamma corrected to obtain a low dynamic range image. With the tone mapping method based on the edge-preserving total variation model, a high-quality low dynamic range image can be obtained without halo artifacts.

Description

Tone mapping method based on an edge-preserving total variation model
Technical field
The present invention relates to the technical field of image processing, and in particular to a tone mapping method based on an edge-preserving total variation model.
Background Art
A high dynamic range image (High Dynamic Range Image, HDR image) can represent a very large range of light intensities, can effectively store the intensity information of the real world, and presents images of superior quality. However, conventional display devices and printers cannot show these high dynamic range images, because the contrast they can display is far smaller than that of a high dynamic range image. To address this problem, the dynamic range of a high dynamic range image can be compressed so that it fits a low dynamic range display device. A method that converts a high dynamic range image into a low dynamic range image (Low Dynamic Range Image, LDR image) in this way is called a tone mapping method.
At present, a large number of tone mapping methods have been proposed. The simplest contrast compression multiplies the original image by a scale factor C, with C < 1, but this loses the details and textures of the shadow and high-brightness regions of the scene. A method is therefore needed that compresses contrast as much as possible while preserving the textures and details of the image. Many tone mapping methods use an image decomposition technique: an image is decomposed into a base layer and a detail layer. The base layer has high contrast and needs to be compressed, while the detail layer contains the clear textures that must be retained. Only the base layer is compressed, the detail layer is kept unchanged, and the detail layer is then recombined with the compressed base layer to obtain the low dynamic range image. The base layer can be obtained with an edge-preserving smoothing method, and the detail layer is obtained by subtracting the base layer from the input image. Durand and Dorsey, in the document "Fast bilateral filtering for the display of high-dynamic-range images" (2002, pp. 257-266), introduced a tone mapping method based on fast bilateral filtering, which uses a fast bilateral filter to perform a multi-scale decomposition of the image into a base layer and a detail layer.
The tone mapping method based on fast bilateral filtering smooths small details effectively while preserving strong edges. However, as the smoothing strength increases, the image edges become blurred, so the LDR images obtained by this method exhibit halo artifacts.
Summary of the invention
The present invention overcomes the halo artifacts produced by tone mapping methods based on fast bilateral filtering, and proposes a tone mapping method based on an edge-preserving total variation model.
The invention provides a tone mapping method based on an edge-preserving total variation model, comprising:
inputting a high dynamic range image;
reconstructing the light intensity of the input high dynamic range image;
taking the logarithm of the reconstructed light intensity values;
filtering the obtained logarithmic light intensity with an edge-preserving total variation filter to obtain an image base layer;
subtracting the image base layer from the logarithmic light intensity to obtain the detail layer of the image;
calculating a compression scale factor and the output light intensity of the image;
dividing the compressed light intensity by the input image light intensity to obtain a new scale factor M, and applying the scale factor M to each of the red, green and blue channels to obtain a color-compressed image;
performing gamma correction on the color-compressed image to obtain a low dynamic range image.
The reconstructing of the light intensity of the input high dynamic range image comprises:
computing the light intensity value intensity from the channel values of the red channel R, green channel G and blue channel B using the formula intensity = 0.299*R + 0.587*G + 0.114*B.
The calculating of the compression scale factor and the output light intensity of the image comprises:
calculating the compression scale factor using the formula compressfactor = log(outputrange) / (max(log_base) - min(log_base)) * 1.5, where outputrange is the dynamic range of the output image, max(log_base) is the maximum pixel value of the image base layer log_base, min(log_base) is the minimum pixel value of the image base layer log_base, and 1.5 is an empirical value;
calculating the output light intensity of the image using the formula output_intensity = exp(compressfactor * log_base + log_detail).
The dividing of the compressed light intensity by the input image light intensity to obtain a new scale factor M, and applying the scale factor M to each of the red, green and blue channels to obtain the color-compressed image, comprises:
using the formula
M = output_intensity / intensity
to obtain the compression factor for the three color channels R, G and B; the compressed R, G and B values are then easily calculated as
R_Output = M * R_Input,  G_Output = M * G_Input,  B_Output = M * B_Input,
where R_Output, G_Output and B_Output are respectively the R, G and B values of the compressed image, and R_Input, G_Input and B_Input are respectively the R, G and B values of the input image. All the calculations are performed on the logarithm of the image light intensity, so the differences between pixel values correspond to image contrast, and the whole image range is processed uniformly. As can be seen from the above, the tone mapping method based on the edge-preserving total variation model produces low dynamic range images of very high quality, with good color and contrast reproduction and without halo artifacts.
Description of drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a flow chart of the tone mapping method based on the edge-preserving total variation model in an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The present invention overcomes the halo artifacts produced by the tone mapping method based on fast bilateral filtering by proposing a tone mapping method based on an edge-preserving total variation model. The total variation (Total Variation) regularization model was proposed by Rudin, Osher and Fatemi in 1992 to remove noise from images. The model has two basic properties: first, it preserves the edges of the image, and its processing quality is very good under suitable conditions; second, the amount of variation it removes from the image light intensity is inversely proportional to the scale of the image detail. Put more simply, the model smooths texture while preserving edge information very effectively. This can be achieved by minimizing the following functional,
u = argmin_u { (1/2) ||f - Ku||^2 + λ |u|_TV },   (1)
where the first term is the fidelity term, which guarantees that the smoothed image u preserves the main features of the observed image f; the second term is the regularization term, which smooths the image by minimizing the total variation of u; λ > 0 is a scale factor that balances the fidelity term and the regularization term, and the larger the value of λ, the stronger the smoothing; K is a deterministic linear degradation operator. The Euler-Lagrange equation corresponding to Eq. (1) is
K*(f - Ku) + λ div(∇u / |∇u|) = 0,   (2)
where K* is the adjoint operator of K. Eq. (1) and Eq. (2) have the same solution, and the unique solution can be obtained.
Applying the steepest descent method to Eq. (1), with suitable initial and boundary conditions added, yields the following reaction-diffusion equation,
∂u/∂t (t; x) = K*( f(x) - Ku(t; x) ) + λ div(∇u / |∇u|)(t; x),   (t; x) ∈ (0, T] × Ω,
u(0, x) = f(x),   x ∈ Ω,
∂u/∂n (t, x) |_∂Ω = 0,   (t; x) ∈ (0, T] × ∂Ω,   (3)
where ∂Ω is the boundary of Ω and n is the unit outward normal on ∂Ω. Eq. (3) is solved with a finite difference method.
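For illustration only, the following minimal Python/NumPy sketch solves Eq. (3) by explicit gradient descent with finite differences, assuming the degradation operator K is the identity and regularizing |∇u| with a small constant eps; the parameter names and default values are assumptions made for the sketch, not values taken from the patent.

import numpy as np

def tv_filter(f, lam=0.1, step=0.2, iters=200, eps=1e-6):
    # Edge-preserving TV smoothing of a 2-D array f, i.e. Eq. (3) with K = I.
    u = f.astype(np.float64).copy()
    for _ in range(iters):
        # forward differences; replicating the last row/column gives a zero
        # difference at the border (Neumann-type boundary condition)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)   # regularized |grad u|
        px, py = ux / mag, uy / mag              # grad u / |grad u|
        # backward differences give the discrete divergence of (px, py)
        div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
        u = u + step * ((f - u) + lam * div)     # fidelity term + curvature term of Eq. (3)
    return u

The larger lam is, the stronger the smoothing, in line with the role of λ described above.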
Using the edge-preserving operator described above, a two-scale decomposition of the image is easily realized: the image is decomposed into a piecewise-smooth, high-contrast base layer and a low-contrast, small-scale-texture detail layer. The base layer is obtained with the edge-preserving total variation filter, and the detail layer is obtained by subtracting the base layer from the input image. Specifically, let g denote the input image to be decomposed, f_TV the edge-preserving total variation operator, b the base layer and d the detail layer; the process is expressed as
b = f_TV(g),   (4)
d = g - b.   (5)
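Using the tv_filter sketch above as the edge-preserving operator f_TV, the two-scale decomposition of Eqs. (4)-(5) reduces to one filtering call and one subtraction; the synthetic luminance image below is only a stand-in for a real HDR input.

rng = np.random.default_rng(0)
intensity = np.exp(rng.normal(scale=2.0, size=(64, 64)))  # stand-in HDR luminance
g = np.log(intensity + 1e-6)   # work on the logarithm of the light intensity
log_base = tv_filter(g)        # b = f_TV(g): piecewise-smooth, high-contrast base layer
log_detail = g - log_base      # d = g - b: small-scale texture detail layer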
The purpose of tone mapping is to convert a high dynamic range image into a low dynamic range image that can be shown on a conventional display device. In this process the contrast must be reduced to the dynamic range that the display device can reproduce; at the same time, to obtain a high-quality low dynamic range image, as much of the detail of the original high dynamic range image as possible must be preserved. The base layer contains the high-contrast information and the detail layer contains the detail texture information, so it suffices to compress the scale of the base layer only and keep the detail layer unchanged, which reduces contrast while preserving detail. Finally, the compressed base layer and the detail layer are recombined into a new image, which is the desired LDR image. In formula form,
u = C * b + d,   (6)
where C is the compression scale factor, C < 1, and u is the synthesized low dynamic range image. Substituting Eq. (5) into Eq. (6) gives
u = g - (1 - C) * b,   (7)
from which it can be seen more clearly that only the large-scale content of the image is compressed, while the small-scale details are retained. The degree of brightness compression can be adjusted by changing the scale factor C to obtain the desired visual effect. Substituting Eq. (4) into Eq. (7) yields the tone mapping method proposed in this patent,
u = g - (1 - C) * f_TV(g).   (8)
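On top of the decomposition sketched above, Eq. (8) is a single line of code; the value of C here is purely illustrative.

C = 0.4                        # compression scale factor, C < 1 (illustrative value)
u = g - (1.0 - C) * log_base   # Eq. (8): equivalent to C * log_base + log_detail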
For the processing of image color, the compressed light intensity is first calculated, and dividing it by the input image light intensity gives a new scale factor M. Applying M to each of the R, G and B channels gives the color-compressed image. Specifically, let I_Input be the input image light intensity and I_Output the compressed image light intensity; then
M = I_Output / I_Input,   (9)
which gives the compression factor for the three color channels R, G and B. The compressed R, G and B values are then easily calculated as follows,
R_Output = M * R_Input,  G_Output = M * G_Input,  B_Output = M * B_Input,   (10)
where R_Output, G_Output and B_Output are respectively the R, G and B values of the compressed image, and R_Input, G_Input and B_Input are respectively the R, G and B values of the input image. All the calculations are performed on the logarithm of the image light intensity, so the differences between pixel values correspond to image contrast, and the whole image range is processed uniformly.
Finally, gamma correction is applied to the image. The method produces low dynamic range images of very high quality, with good color and contrast reproduction and without halo artifacts.
The tone mapping method based on the edge-preserving total variation model in an embodiment of the present invention is illustrated below; the specific steps are as follows:
S101: input a high dynamic range image.
S102: reconstruct the light intensity (intensity). From the channel values of the red channel (R), green channel (G) and blue channel (B), compute the light intensity value using the formula intensity = 0.299*R + 0.587*G + 0.114*B.
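A sketch of step S102, assuming the HDR input is an H x W x 3 floating-point array named hdr with linear R, G and B channels (the array name and the synthetic test data are assumptions):

import numpy as np

rng = np.random.default_rng(0)
hdr = np.exp(rng.normal(scale=2.0, size=(64, 64, 3)))  # stand-in linear HDR image
# light intensity reconstruction with the weights quoted in S102
intensity = 0.299 * hdr[..., 0] + 0.587 * hdr[..., 1] + 0.114 * hdr[..., 2]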
S103: take the logarithm of the reconstructed light intensity values to obtain log_intensity.
S104: image filtering. Filter the logarithmic light intensity log_intensity obtained in step S103 with the edge-preserving total variation filter to obtain the image base layer (log_base).
S105: image decomposition. Subtract the image base layer log_base of step S104 from the logarithmic light intensity log_intensity of step S103 to obtain the image detail layer (log_detail). The high dynamic range image is thus decomposed into an image base layer and an image detail layer.
S106: calculate the compression scale factor compressfactor, using the formula compressfactor = log(outputrange) / (max(log_base) - min(log_base)) * 1.5, where outputrange is the dynamic range of the output image, max(log_base) is the maximum pixel value of the image base layer log_base, min(log_base) is the minimum pixel value of the image base layer log_base, and 1.5 is an empirical value.
S107: calculate the output light intensity output_intensity of the image, using the formula output_intensity = exp(compressfactor * log_base + log_detail).
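Continuing the sketch, steps S103-S107 take roughly the following form, reusing the tv_filter function sketched in the description above; output_range is an assumed example value for the dynamic range of the target display, and 1.5 is the empirical value named in S106.

log_intensity = np.log(intensity + 1e-6)                    # S103
log_base = tv_filter(log_intensity)                         # S104: edge-preserving TV filtering
log_detail = log_intensity - log_base                       # S105: detail layer
output_range = 100.0                                        # assumed target dynamic range
compress_factor = np.log(output_range) / (log_base.max() - log_base.min()) * 1.5   # S106
output_intensity = np.exp(compress_factor * log_base + log_detail)                 # S107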
S108: color processing. Using the compressed light intensity output_intensity calculated in step S107, divide it by the input image light intensity intensity to obtain a new scale factor M. Applying the scale factor M to each of the red (R), green (G) and blue (B) channels gives the color-compressed image, as follows:
M = output_intensity / intensity,
which gives the compression factor for the three color channels R, G and B. The compressed R, G and B values are then easily calculated as follows,
R_Output = M * R_Input,  G_Output = M * G_Input,  B_Output = M * B_Input,
where R_Output, G_Output and B_Output are respectively the R, G and B values of the compressed image, and R_Input, G_Input and B_Input are respectively the R, G and B values of the input image. All the calculations are performed on the logarithm of the image light intensity, so the differences between pixel values correspond to image contrast, and the whole image range is processed uniformly.
S109: gamma correction. The low dynamic range image is thereby obtained.
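Finally, a sketch of steps S108-S109 completing the pipeline; the normalization to [0, 1] and the gamma value 2.2 are common display conventions used here as assumptions, not values specified in the patent.

M = output_intensity / (intensity + 1e-6)  # S108: per-pixel scale factor M
ldr = hdr * M[..., None]                   # R_Output = M*R_Input, G_Output = M*G_Input, B_Output = M*B_Input
ldr = np.clip(ldr / ldr.max(), 0.0, 1.0)   # normalize for display (assumption)
ldr = ldr ** (1.0 / 2.2)                   # S109: gamma correction with assumed gamma 2.2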
In summary, the tone mapping method based on the edge-preserving total variation model produces low dynamic range images of very high quality, with good color and contrast reproduction and without halo artifacts.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the above embodiments can be implemented by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc, etc.
The tone mapping method based on the edge-preserving total variation model provided by the embodiments of the present invention has been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the above description of the embodiments is only intended to help in understanding the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementations and the scope of application according to the idea of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (4)

1. A tone mapping method based on an edge-preserving total variation model, characterized by comprising:
inputting a high dynamic range image;
reconstructing the light intensity of the input high dynamic range image;
taking the logarithm of the reconstructed light intensity values;
filtering the obtained logarithmic light intensity with an edge-preserving total variation filter to obtain an image base layer;
subtracting the image base layer from the logarithmic light intensity to obtain the detail layer of the image;
calculating a compression scale factor and the output light intensity of the image;
dividing the compressed light intensity by the input image light intensity to obtain a new scale factor M, and applying the scale factor M to each of the red, green and blue channels to obtain a color-compressed image;
performing gamma correction on the color-compressed image to obtain a low dynamic range image.
2. The tone mapping method based on an edge-preserving total variation model according to claim 1, characterized in that reconstructing the light intensity of the input high dynamic range image comprises:
computing the light intensity value intensity from the channel values of the red channel R, green channel G and blue channel B using the formula intensity = 0.299*R + 0.587*G + 0.114*B.
3. The tone mapping method based on an edge-preserving total variation model according to claim 2, characterized in that calculating the compression scale factor and the output light intensity of the image comprises:
calculating the compression scale factor using the formula compressfactor = log(outputrange) / (max(log_base) - min(log_base)) * 1.5, where outputrange is the dynamic range of the output image, max(log_base) is the maximum pixel value of the image base layer log_base, min(log_base) is the minimum pixel value of the image base layer log_base, and 1.5 is an empirical value;
calculating the output light intensity of the image using the formula output_intensity = exp(compressfactor * log_base + log_detail).
4. The tone mapping method based on an edge-preserving total variation model according to claim 3, characterized in that dividing the compressed light intensity by the input image light intensity to obtain a new scale factor M, and applying the scale factor M to each of the red, green and blue channels to obtain the color-compressed image, comprises:
using the formula
M = output_intensity / intensity
to obtain the compression factor for the three color channels R, G and B; the compressed R, G and B values are then calculated as
R_Output = M * R_Input,  G_Output = M * G_Input,  B_Output = M * B_Input,
where R_Output, G_Output and B_Output are respectively the R, G and B values of the compressed image, and R_Input, G_Input and B_Input are respectively the R, G and B values of the input image; all calculations are performed on the logarithm of the image light intensity, so that the differences between pixel values correspond to image contrast and the whole image range is processed uniformly.
CN201210350895.2A 2012-09-19 2012-09-19 Tone mapping method based on edge-preserving total variation model Active CN102938837B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210350895.2A CN102938837B (en) 2012-09-19 2012-09-19 Tone mapping method based on edge-preserving total variation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210350895.2A CN102938837B (en) 2012-09-19 2012-09-19 Tone mapping method based on edge-preserving total variation model

Publications (2)

Publication Number Publication Date
CN102938837A true CN102938837A (en) 2013-02-20
CN102938837B CN102938837B (en) 2016-06-01

Family

ID=47697703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210350895.2A Active CN102938837B (en) Tone mapping method based on edge-preserving total variation model

Country Status (1)

Country Link
CN (1) CN102938837B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102388612A (en) * 2009-03-13 2012-03-21 杜比实验室特许公司 Layered compression of high dynamic range, visual dynamic range, and wide color gamut video
US20120206470A1 (en) * 2011-02-16 2012-08-16 Apple Inc. Devices and methods for obtaining high-local-contrast image data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DURAND AND DORSEY: "Fast Bilateral Filtering for the Display of High-Dynamic-Range Images", ACM Transactions on Graphics, 31 December 2002 (2002-12-31), pages 257-266 *
宋明黎: "基于概率模型的高动态范围图像色调映射" [Tone mapping of high dynamic range images based on a probabilistic model], 软件学报 (Journal of Software), 31 March 2009 (2009-03-31), pages 734-742 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103702116A (en) * 2013-12-06 2014-04-02 浙江宇视科技有限公司 Wide dynamic compressing method and device for image
CN104065939A (en) * 2014-06-20 2014-09-24 深圳市大疆创新科技有限公司 HDRI generating method and device
WO2015192368A1 (en) * 2014-06-20 2015-12-23 深圳市大疆创新科技有限公司 Hdri generating method and apparatus
US10298896B2 (en) 2014-06-20 2019-05-21 SZ DJI Technology Co., Ltd. Method and apparatus for generating HDRI
US10750147B2 (en) 2014-06-20 2020-08-18 SZ DJI Technology Co., Ltd. Method and apparatus for generating HDRI
CN110992914A (en) * 2014-10-06 2020-04-10 三星电子株式会社 Display apparatus and method of controlling the same
CN113518221A (en) * 2016-10-14 2021-10-19 联发科技股份有限公司 Smoothing filtering method and device for removing ripple effect
CN113518221B (en) * 2016-10-14 2024-03-01 联发科技股份有限公司 Video encoding or decoding method and corresponding device
CN106847149A (en) * 2016-12-29 2017-06-13 武汉华星光电技术有限公司 A kind of tone mapping of high dynamic contrast image and display methods
CN106847149B (en) * 2016-12-29 2020-11-13 武汉华星光电技术有限公司 Tone mapping and displaying method for high dynamic contrast image
CN110493584A (en) * 2019-07-05 2019-11-22 湖北工程学院 A kind of high dynamic range environment Visualization method, apparatus and storage medium

Also Published As

Publication number Publication date
CN102938837B (en) 2016-06-01

Similar Documents

Publication Publication Date Title
CN102938837A (en) Tone mapping method based on edge preservation total variation model
JP7008621B2 (en) Systems and methods for real-time tone mapping
Gu et al. Local edge-preserving multiscale decomposition for high dynamic range image tone mapping
Duan et al. Tone-mapping high dynamic range images by novel histogram adjustment
Shen et al. Exposure fusion using boosting Laplacian pyramid.
Kovaleski et al. High-quality reverse tone mapping for a wide range of exposures
JP6961139B2 (en) An image processing system for reducing an image using a perceptual reduction method
JP2018531447A6 (en) System and method for real-time tone mapping
Talebi et al. Fast multilayer Laplacian enhancement
Lee et al. Noise reduction and adaptive contrast enhancement for local tone mapping
CN103020998A (en) Tone mapping method based on edge-preserving total variation model
Hou et al. Recovering over-/underexposed regions in photographs
CN106709504A (en) Detail-preserving high fidelity tone mapping method
El Mezeni et al. Enhanced local tone mapping for detail preserving reproduction of high dynamic range images
Liu et al. Color enhancement using global parameters and local features learning
Su et al. Explorable tone mapping operators
Lee et al. Local tone mapping using sub-band decomposed multi-scale retinex for high dynamic range images
US7945107B2 (en) System and method for providing gradient preservation for image processing
Zhang et al. A dynamic range adjustable inverse tone mapping operator based on human visual system
US8081838B2 (en) System and method for providing two-scale tone management of an image
Zhang et al. Lookup table meets local laplacian filter: pyramid reconstruction network for tone mapping
Duan et al. Local contrast stretch based tone mapping for high dynamic range images
Dizdaroğlu et al. An improved method for color image editing
JP2011059960A (en) Image processor, simulation device, image processing method, simulation method, and program
Bansal et al. Regularized tone mapping using edge preserving filters

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
OL01 Intention to license declared