CN101853499B - Clear picture synthesis method based on detail detection - Google Patents

Clear picture synthesis method based on detail detection

Info

Publication number
CN101853499B
CN101853499B
Authority
CN
China
Prior art keywords
pixel
value
detail
details
synthesized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010101621699A
Other languages
Chinese (zh)
Other versions
CN101853499A (en)
Inventor
石洗凡
刁常宇
鲁东明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2010101621699A priority Critical patent/CN101853499B/en
Publication of CN101853499A publication Critical patent/CN101853499A/en
Application granted granted Critical
Publication of CN101853499B publication Critical patent/CN101853499B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a clear picture synthesis method based on detail detection, which comprises the following steps: the detail degree of each pixel is calculated in two pictures to be synthesized whose sizes and positions correspond exactly; the detail degree of each pixel in the first picture is reduced by the detail degree of the pixel at the corresponding position in the second picture to obtain a discriminant value of the pixel detail degree at each position; the discriminant value at each position is compared with a set high threshold and a set low threshold, and the detail degree is compared with a set value, to obtain a detail type value for each position; the detail type values are low-pass filtered to remove possible misjudgments of detail type; and, according to the detail type value at each position after misjudgment removal, a corresponding method is selected for image synthesis at that position to obtain a clear picture. The method can synthesize clear pictures and is applicable to occasions that require a large depth of field or photographs that are sharp at both the center and the edges.

Description

Clear picture synthesis method based on detail detection
Technical field
The present invention relates to the technical field of image processing, and in particular to a clear-picture synthesis technique.
Background technology
At present, the photosensitive-element density of cameras keeps increasing. Sony recently developed a sensor of some 34.8 megapixels, which is expected to appear in its SLR products (Canon's 21-megapixel EOS-1Ds Mark III was released more than two years ago and, by its past three-year update cycle, is due for replacement this year). As a participant in the megapixel race, Canon is naturally unwilling to fall behind, and the successor to the EOS-1Ds Mark III will certainly exceed 30 megapixels.
According to MTF50 sharpness tests by a domestic photography website of the Canon EOS-1Ds Mark III paired with the EF 50mm F1.4 USM lens, the best apertures for the center and the edge are F4 and F5.6 respectively. However, the depth of field at F4 and F5.6 is shallow. In real shooting, especially of inclined murals or murals on irregular arched surfaces, it is difficult to make the optical axis exactly parallel to the photographed surface, which causes depth-of-field blur. A small aperture (F8, F11 or even F16) gives a large depth of field, but diffraction then blurs the whole image. The optical solution for such occasions is a tilt-shift lens, which renders a chosen plane sharply; yet for arched murals this does not actually solve the problem, because the mural itself is not painted on a plane. In addition, tilt-shift lenses have drawbacks such as inferior image quality and high price. It is therefore worth considering taking several photos, each of which is relatively sharp in some region, and finally synthesizing one digital image that is sharp in every region.
Similar problems arise in the photography of cultural relics and archaeological sites, where the usual solution is to choose a suitable aperture according to the depth of the object and the depth-of-field scale on the lens. The drawback is that a very small aperture must be used to guarantee the depth of field, so the entire image is slightly blurred; this method guarantees the sharpness of the whole image at the cost of the sharpness of the important area. In fact, such scenes generally contain a region of interest that needs higher sharpness. Its diffraction can be reduced by opening up the aperture, making that content sharp, while a small aperture is used for the background to guarantee the depth of field. The two images are then synthesized into one optimal photo.
Even for a planar subject, the center and the edge have different resolutions at different apertures. Every lens has an aperture of best center resolution and an aperture of best edge resolution, and unfortunately the two are often different; in the website test above, the sharpest apertures for the center and the edge were F4 and F5.6 respectively. In the film era one could only compromise between the two best apertures. In the digital era, image-processing techniques can detect the sharpness of different regions of photos taken at different apertures, combine the sharpest regions, and finally synthesize an image that is sharp at both the center and the edge.
Summary of the invention
The invention provides a clear picture synthesis method based on detail detection, which automatically identifies the sharp parts of several photos by an algorithm and finally synthesizes one photo in which every part is sharp. It can be applied to occasions that require a large depth of field or photographs that are sharp at both the center and the edges.
A clear picture synthesis method based on detail detection comprises:
(1) calculating the detail degree of each pixel in two photos to be synthesized whose sizes and positions correspond exactly;
It is assumed that the sizes and positions of the two photos to be synthesized correspond exactly (if not, a registration algorithm can first be used to align and adjust them), and that their brightness and color are essentially consistent (if not, they can first be adjusted to make it so).
Detail can be understood as the contrast at object edges (boundaries). Take the boundary between a branch and the sky: if detail is abundant, the pixels at the boundary keep their original colors; otherwise the colors of branch and sky blend in those pixels. Detail can therefore be characterized by the difference between the maximum and minimum values within a small region. The value used is a component, or a weighted mean of components, in some color space; the color space can be chosen flexibly as needed, including but not limited to HSV, HSI, RGB, CMYK, HSL, HSB, Ycc, XYZ, Lab and YUV. Components from different color spaces may even be weighted together, such as a weighted mean of M in CMYK and H in HSV. The small region can be an n × n neighborhood around the pixel (e.g. n = 3) or a neighborhood chosen in some other way (e.g. a circular neighborhood). The detail degree of a pixel is thus described by the difference between the maximum and minimum of some value in one or more color spaces (or the average of several such values) over all pixels in the neighborhood around that pixel. Consider a blurred image: in any color space, the difference between the maximum and minimum within a small neighborhood of a pixel is small. For a sharp image, at least in some color space, some component shows a large difference.
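As a non-authoritative sketch of step (1), the detail degree can be computed as a max-minus-min filter over an n × n neighborhood of a single color-space component; the function name and the use of one channel (e.g. R of RGB, as in the embodiment below) are illustrative assumptions:

```python
def detail_degree(channel, n=3):
    """Detail degree per pixel: maximum minus minimum over an n x n
    neighborhood of one color-space component, as described in step (1).
    Neighborhoods are clamped at the image border."""
    h, w = len(channel), len(channel[0])
    r = n // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            vals = [channel[j][i] for j in ys for i in xs]
            out[y][x] = max(vals) - min(vals)  # large at sharp edges, small in blur
    return out
```

On a perfectly flat region the detail degree is 0 everywhere; near a hard edge it jumps to the full contrast of the edge, matching the intuition in the paragraph above.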
(2) subtracting, from the detail degree of each pixel in the first photo to be synthesized, the detail degree of the pixel at the corresponding position in the second photo to be synthesized, to obtain a discriminant value of the pixel detail degree at each position;
(3) comparing the discriminant value of the pixel detail degree at each position with a set high threshold and a set low threshold to obtain a detail type value for each position;
If the discriminant value > the high threshold, then the type value pType = 1;
If the discriminant value < the low threshold, then the type value pType = 2;
If the low threshold ≤ the discriminant value ≤ the high threshold, and in either photo the detail degree of the pixel at the current position reaches a set value, then pType = 3. The set value of the pixel detail degree can be chosen as required: a high set value means that only clearly detailed content in a photo is detected; a low set value means the requirement on photo detail is looser.
All remaining pixels are classified into one class, with type value pType = 0, since pType is initialized to 0.
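A minimal sketch of the classification in steps (2) and (3), under assumed parameter names; the default thresholds (15, -11, 35) are merely the example values used later in the embodiment:

```python
def detail_type(d1, d2, high=15, low=-11, detail_min=35):
    """Per-pixel detail type value pType from step (3).
    d1, d2: detail-degree maps of the two photos (same shape)."""
    h, w = len(d1), len(d1[0])
    ptype = [[0] * w for _ in range(h)]  # pType initialized to 0
    for y in range(h):
        for x in range(w):
            disc = d1[y][x] - d2[y][x]   # discriminant value of step (2)
            if disc > high:
                ptype[y][x] = 1          # first photo clearly more detailed
            elif disc < low:
                ptype[y][x] = 2          # second photo clearly more detailed
            elif max(d1[y][x], d2[y][x]) >= detail_min:
                ptype[y][x] = 3          # comparable clarity, but detail present
            # otherwise stays 0: little or no detail in either photo
    return ptype
```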
(4) low-pass filtering the detail type values at each position to remove possible misjudgments of detail type;
If the number of pixels in the connected region (pixels with the same pType) containing a given pixel is less than a threshold, the pType of that connected region is considered a misjudgment and is replaced by the pType of the pixels surrounding the region.
Since pType is rather abstract, imagine a pType type image in which different pType values correspond to different colors, e.g. the four values above corresponding respectively to red, green, blue and black. The dyed pType map is divided into sub-regions of different colors; within each sub-region the color is uniform and any two pixels are connected. Connected means that there is at least one path between the two pixels along which every point (including the start and end points) has the same color (i.e. the same pType).
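One way to sketch the low-pass filtering of step (4) is a flood fill that measures each same-pType connected region and, when the region is smaller than the threshold, overwrites it with a surrounding region's pType. The 4-connectivity and the tie-break rule below are illustrative assumptions; the patent only specifies "the pType of the surrounding pixels":

```python
from collections import deque

def remove_small_regions(ptype, min_size=10):
    """Step (4): a connected region of identical pType smaller than
    min_size is treated as a misjudgment and overwritten with a
    surrounding region's pType (4-connectivity assumed)."""
    h, w = len(ptype), len(ptype[0])
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx]:
                continue
            val = ptype[sy][sx]
            region, border = [], set()
            q = deque([(sy, sx)])
            seen[sy][sx] = True
            while q:                      # flood-fill the region of equal pType
                y, x = q.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        if ptype[ny][nx] == val and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                        elif ptype[ny][nx] != val:
                            border.add(ptype[ny][nx])
            if len(region) < min_size and border:
                repl = min(border)        # assumed tie-break among surrounding values
                for y, x in region:
                    ptype[y][x] = repl
    return ptype
```

A production version would more likely use a connected-components routine from an image-processing library; the pure-Python flood fill is just to keep the sketch self-contained.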
(5) according to the detail type value at each position after misjudgment removal, selecting a corresponding method for image synthesis at that position to obtain a clear picture;
After the processing of step (4), each pixel falls into one of the following four classes, and the final synthesis method is as follows:
If the type value pType = 0, the pixel at this position has little or no detail in both images; the final composite adopts a weighted mean of the two images followed by noise reduction, the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively.
Preferably, after the weighted mean, a mean filter over the surrounding neighborhood pixels is applied to the synthesized result for noise reduction (weighted mean first, then noise reduction), so that the processed image shows less noise.
This is mainly because the two exposures have a just-noticeable difference. Even without correction, or after correction, some brightness inconsistency may remain. Imagine a white background where neither image has any detail: the detail degrees of these white-background positions in the two images can hardly be exactly identical, and since the weights come from the discriminant value, which in turn comes from the detail degree, the discriminant value may fluctuate. Further imagine that the white background of the first image is slightly bright and that of the second slightly dark, and that the discriminant values of two adjacent pixels differ somewhat; then those two neighbors differ in the synthesized image because their weights differ, i.e. the synthesis algorithm has "artificially" produced detail that did not originally exist. Such false detail (noise) is clearly undesirable and must be removed. Conversely, if some real detail existed at a position, the error caused by differing weights is not enough to affect it, because the real detail difference is greater, even much greater, than the error, so neighbors are dominated by true detail. This is why type values 0 and 3 must be treated differently: when the type value is 0, a slight noise reduction erases the false detail (noise), whereas when the type value is 3 the same noise reduction would weaken or even erase detail that really exists.
If the type value pType = 1, the first image clearly has more detail, and the final image takes the first.
If the type value pType = 2, the second image clearly has more detail, and the final image takes the second.
If the type value pType = 3, at least one of the images has detail and their clarities are essentially comparable; the final composite adopts a weighted mean of the two images, the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively.
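Putting the four cases of step (5) together, a hedged per-pixel synthesis sketch for a single channel (the optional mean-filter noise reduction of the pType = 0 case is omitted, and the clamping of the discriminant value is an added safeguard, not taken from the patent):

```python
def synthesize(img1, img2, disc, ptype, high=15, low=-11):
    """Step (5): per-pixel synthesis according to pType.
    img1, img2: single-channel images; disc: discriminant values;
    ptype: detail type values after misjudgment removal."""
    h, w = len(img1), len(img1[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            t = ptype[y][x]
            if t == 1:                    # first image clearly more detailed
                out[y][x] = img1[y][x]
            elif t == 2:                  # second image clearly more detailed
                out[y][x] = img2[y][x]
            else:                         # t == 0 or t == 3: weighted mean;
                d = min(max(disc[y][x], low), high)
                w1 = d - low              # distance to the low threshold
                w2 = high - d             # distance to the high threshold
                out[y][x] = (w1 * img1[y][x] + w2 * img2[y][x]) / (w1 + w2)
    return out
```

The weighting reads the clause "the two weights are respectively the distances between the discriminant value and the high and low thresholds" as: the closer the discriminant value is to the high threshold, the more the first image is favored, which degenerates smoothly into the pType = 1 and pType = 2 cases at the thresholds themselves.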
The method of the invention automatically identifies the sharp parts of several photos by an algorithm and finally synthesizes one photo in which every part is sharp. It can be applied to occasions that require a large depth of field or photographs that are sharp at both the center and the edges.
Description of drawings
Fig. 1 shows the result of shooting with a large aperture.
Fig. 2 shows the result of shooting with a small aperture.
Fig. 3 is a partially enlarged view of the large-aperture result.
Fig. 4 is a partially enlarged view of the small-aperture result.
Fig. 5 is a partially enlarged view of the composite of the large-aperture and small-aperture results.
Embodiment
A Canon EF 28-135 lens on a Canon EOS 400D body was used at the 135mm end to take two photos, at F5.6 and F16 respectively, of the content displayed on a notebook computer (ThinkPad W500). The optical axis of the lens was not perpendicular to the notebook screen but at an angle (45 degrees) to it, artificially producing a depth-of-field effect; the notebook screen was perpendicular to the desktop and the optical axis of the lens parallel to it. Apart from the aperture, everything else, including camera position, orientation, lens and focal length, was kept unchanged between the two shots, so that the position of each object is the same in both photos.
When shooting with the large aperture (F5.6), the part within the depth of field is very sharp, while the part outside it is blurred. Conversely, with the small aperture (F16) the depth of field is larger, but because of diffraction the whole image is slightly blurred, though still sharper than the part outside the depth of field of the large-aperture shot.
Because the resolution of the Canon EOS 400D is very high and page space is limited, the images were simplified for illustrating the algorithm. First, in the vertical direction, since the object distance is the same, a strip 120 pixels tall is representative enough and was chosen. Then, in the horizontal direction, the far right of the original photo is blurrier than the right side of Fig. 1 and was also removed. In addition, to make the blur obvious, a 15-inch high-resolution WUXGA display was used, and since the lens has a certain minimum focusing distance, the display could not fill the whole frame. Combining these two factors, a region of 2400 × 128 pixels was finally extracted from the photo taken with the large aperture (F5.6) to obtain Fig. 1, and the corresponding region of 2400 × 128 pixels from the photo taken with the small aperture (F16) to obtain Fig. 2. Since Figs. 1 and 2 may not be clear enough, partially enlarged views of them are shown in Figs. 3 and 4 respectively.
The detail degree of each pixel in Fig. 1 and Fig. 2 is calculated: the pixel detail degree is described by the difference between the maximum and minimum values in the 3 × 3 neighborhood around that pixel. The R-channel values of all pixels in the 3 × 3 neighborhood are taken, and the minimum R value is subtracted from the maximum R value to obtain the difference.
Subtracting, from the detail degree of each pixel in the first photo to be synthesized (Fig. 1), the detail degree of the pixel at the corresponding position in the second photo to be synthesized (Fig. 2), gives the discriminant value of the pixel detail degree at each position.
The discriminant value at each position is compared with the set high and low thresholds, and the detail degree is compared with the set value, to obtain the detail type value of each position:
If the discriminant value > the high threshold (set to 15), then the type value pType = 1;
If the discriminant value < the low threshold (set to -11), then the type value pType = 2;
If -11 ≤ the discriminant value ≤ 15, and in either photo the detail degree of the pixel at the current position reaches the set value 35, then pType = 3;
All remaining pixels are classified into one class, with type value pType = 0, since pType is initialized to zero.
The detail type values of each position are then low-pass filtered to remove possible misjudgments of detail type: if the number of pixels in the connected region containing a pixel is less than the threshold (set to 10), the pType of that connected region is considered a misjudgment and is replaced by the pType of the pixels surrounding the region.
After this noise removal, synthesis proceeds according to the detail type value at each position:
If the type value pType = 0, neither image has detail at this position; the final composite adopts a weighted mean of the two images followed by noise reduction (mean first, then noise reduction), the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively.
If the type value pType = 1, the first image clearly has more detail, and the final image takes the first.
If the type value pType = 2, the second image clearly has more detail, and the final image takes the second.
If the type value pType = 3, at least one of the images has detail and their clarities are essentially comparable; the final composite adopts a weighted mean of the two images, the two weights again being the distances from the discriminant value to the high threshold and to the low threshold respectively.
After the detail type value at each position of Fig. 1 and Fig. 2 has been judged and the pixels synthesized accordingly, a clear picture is obtained; its partially enlarged view is shown in Fig. 5. It can be seen that the synthesis has indeed absorbed the advantages of both photos, reaching a result sharper than either of the two.

Claims (8)

1. A clear picture synthesis method based on detail detection, characterized by comprising the steps of:
(1) calculating the detail degree of each pixel in two photos to be synthesized whose sizes and positions correspond exactly, the detail degree of each pixel being the difference between the maximum and minimum values in a neighborhood around that pixel;
(2) subtracting, from the detail degree of each pixel in the first photo to be synthesized, the detail degree of the pixel at the corresponding position in the second photo to be synthesized, to obtain a discriminant value of the pixel detail degree at each position;
(3) comparing the discriminant value of the pixel detail degree at each position with a set high threshold and a set low threshold, and comparing the detail degree with a set value, to obtain a detail type value for each position;
(4) low-pass filtering the detail type values at each position to remove possible misjudgments of detail type;
(5) according to the detail type value at each position after misjudgment removal, selecting a corresponding method for image synthesis to obtain a clear picture.
2. The clear picture synthesis method of claim 1, characterized in that the neighborhood around the pixel in step (1) is an n × n square neighborhood or a circular neighborhood.
3. The clear picture synthesis method of claim 1, characterized in that in step (3): if the discriminant value > the high threshold, the type value pType = 1;
if the discriminant value < the low threshold, the type value pType = 2;
if the low threshold ≤ the discriminant value ≤ the high threshold, and in either photo the detail degree of the pixel at the current position reaches the set value, pType = 3;
all remaining pixels are classified into one class, with type value pType = 0.
4. The clear picture synthesis method of claim 1, characterized in that in step (4), if the number of pixels with the same detail type value in a connected region is less than a preset threshold, that detail type value is considered a misjudgment.
5. The clear picture synthesis method of claim 4, characterized in that the misjudgment removal of step (4) replaces the detail type value considered a misjudgment with a surrounding detail type value that is not a misjudgment.
6. The clear picture synthesis method of claim 3, characterized in that in step (5) the pixel of the final composite image at each position adopts the following synthesis method:
if the detail type value at a position after misjudgment removal is 0 or 3, the weighted mean of the pixels of the two photos to be synthesized is used at that position, the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively;
if the detail type value at a position after misjudgment removal is 1, the pixel at the corresponding position of the first photo to be synthesized is chosen;
if the detail type value at a position after misjudgment removal is 2, the pixel at the corresponding position of the second photo to be synthesized is chosen.
7. The clear picture synthesis method of claim 6, characterized in that if the detail type value at a position after misjudgment removal is 0, then after the weighted mean of the pixels of the two photos to be synthesized, a mean filter over the surrounding neighborhood pixels is applied to the synthesized result for noise reduction.
8. The clear picture synthesis method of claim 1, characterized in that the difference between the maximum and minimum values in step (1) is a component, or a weighted mean of components, in one or several color spaces, the color spaces comprising HSV, HSI, RGB, CMYK, HSL, HSB, Ycc, XYZ, Lab or YUV.
CN2010101621699A 2010-04-30 2010-04-30 Clear picture synthesis method based on detail detection Expired - Fee Related CN101853499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101621699A CN101853499B (en) 2010-04-30 2010-04-30 Clear picture synthesis method based on detail detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101621699A CN101853499B (en) 2010-04-30 2010-04-30 Clear picture synthesis method based on detail detection

Publications (2)

Publication Number Publication Date
CN101853499A CN101853499A (en) 2010-10-06
CN101853499B true CN101853499B (en) 2012-01-25

Family

ID=42804964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101621699A Expired - Fee Related CN101853499B (en) 2010-04-30 2010-04-30 Clear picture synthesis method based on detail detection

Country Status (1)

Country Link
CN (1) CN101853499B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622736B (en) * 2011-01-28 2017-08-04 鸿富锦精密工业(深圳)有限公司 Image processing system and method
CN103795920B (en) * 2014-01-21 2017-06-20 宇龙计算机通信科技(深圳)有限公司 Photo processing method and device
CN104952048B (en) * 2015-06-09 2017-12-08 浙江大学 A kind of focus storehouse picture synthesis method based on as volume reconstruction
CN112381836B (en) * 2020-11-12 2023-03-31 贝壳技术有限公司 Image processing method and device, computer readable storage medium, and electronic device
CN115358951B (en) * 2022-10-19 2023-01-24 广东电网有限责任公司佛山供电局 Intelligent ring main unit monitoring system based on image recognition

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003225428A (en) * 2002-02-05 2003-08-12 Shinnichi Electronics Kk Picture display device for pachinko machine, and picture displaying method and picture displaying program for the picture display device
US7817160B2 (en) * 2005-06-30 2010-10-19 Microsoft Corporation Sub-pass correction using neighborhood matching
CN100515042C (en) * 2007-03-29 2009-07-15 上海交通大学 Multiple exposure image intensifying method
CN101394485B (en) * 2007-09-20 2011-05-04 华为技术有限公司 Image generating method, apparatus and image composition equipment

Also Published As

Publication number Publication date
CN101853499A (en) 2010-10-06

Similar Documents

Publication Publication Date Title
CN108377343B (en) Exposure selector for high dynamic range imaging and related method
CN103366352B (en) Apparatus and method for producing the image that background is blurred
CN101778203B (en) Image processing device
CN101889453B (en) Image processing device, imaging device, method, and program
TWI464706B (en) Dark portion exposure compensation method for simulating high dynamic range with single image and image processing device using the same
Phillips et al. Camera image quality benchmarking
CN108537155A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN101853499B (en) Clear picture synthesis method based on detail detection
US9361669B2 (en) Image processing apparatus, image processing method, and program for performing a blurring process on an image
WO2007095483A2 (en) Detection and removal of blemishes in digital images utilizing original images of defocused scenes
JP2010045613A (en) Image identifying method and imaging device
CN102480595B (en) Image processing apparatus and image processing method
CN109493283A (en) A kind of method that high dynamic range images ghost is eliminated
CN103563350A (en) Image processing device, image processing method, and digital camera
CN102737365B (en) Image processing apparatus, camera head and image processing method
CN103797782A (en) Image processing device and program
CN106296625A (en) Image processing apparatus and image processing method, camera head and image capture method
CN110352592A (en) Imaging device and imaging method and image processing equipment and image processing method
JP2013025650A (en) Image processing apparatus, image processing method, and program
CN107169973A (en) The background removal and synthetic method and device of a kind of image
JP5864936B2 (en) Image processing apparatus, image processing method, and program
CN117456371B (en) Group string hot spot detection method, device, equipment and medium
CN111711766B (en) Image processing method and device, terminal and computer readable storage medium
CN106791351A (en) Panoramic picture treating method and apparatus
CN109300186B (en) Image processing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120125

Termination date: 20140430