Embodiment
In the following description, a preferred embodiment of the present invention will be described in terms that would ordinarily be implemented as a software program. Those skilled in the art will readily recognize that the equivalent of such software can also be constructed in hardware. Because image processing algorithms and systems are well known, the present description will be directed in particular to algorithms and systems forming part of, or cooperating more directly with, the method and system in accordance with the present invention. Other aspects of such algorithms and systems, together with hardware and software for producing and otherwise processing the image signals involved therewith, not specifically shown or described herein, may be selected from such systems, algorithms, components, and elements known in the art. Given the system as described according to the invention in the following materials, software not specifically shown, suggested, or described herein that is useful for implementation of the invention is conventional and within the ordinary skill in such arts.
Further, as used herein, a computer program for performing the method of the present invention can be stored in a computer-readable storage medium, which can comprise, for example: magnetic storage media such as a magnetic disk (a hard drive or a floppy disk) or magnetic tape; optical storage media such as an optical disc, optical tape, or machine-readable bar code; solid-state electronic storage devices such as random access memory (RAM) or read-only memory (ROM); or any other device or medium employed to store a computer program.
Because digital cameras employing imaging devices and related circuitry for signal capture and correction and for exposure control are well known, the present description will be directed in particular to elements forming part of, or cooperating more directly with, the method and apparatus in accordance with the present invention. Elements not specifically shown or described herein are selected from those known in the art. Certain aspects of the embodiments to be described are provided in software. Given the system as described according to the invention in the following materials, software not specifically shown, suggested, or described herein that is useful for implementation of the invention is conventional and within the ordinary skill in such arts.
Turning now to Fig. 1, a block diagram of an image capture device embodying the present invention is shown. In this example, the image capture device is shown as a digital camera. However, although a digital camera will now be explained, the present invention is clearly applicable to other types of image capture devices as well. In the disclosed camera, light 10 from the subject scene is input to an imaging stage 11, where the light is focused by a lens 12 to form an image on a solid-state color filter array image sensor 20. The color filter array image sensor 20 converts the incident light to an electrical signal for each picture element (pixel). The color filter array image sensor 20 of the preferred embodiment is a charge-coupled device (CCD) type or an active pixel sensor (APS) type. (APS devices are often referred to as CMOS sensors because of the ability to fabricate them in a complementary metal-oxide-semiconductor process.) Other types of image sensors having two-dimensional arrays of pixels can also be used, provided that they employ the patterns of the present invention. The color filter array image sensor 20 for use in the present invention comprises a two-dimensional array of color pixels, as will become clearer later in this specification, after Fig. 1 is described. The amount of light reaching the color filter array image sensor 20 is regulated by an iris block 14 that varies the aperture and by a neutral density (ND) filter block 13 that includes one or more ND filters interposed in the optical path. Also regulating the overall light level is the time that a shutter 18 is open. An exposure controller 40 responds to the amount of light available in the scene, as metered by a brightness sensor block 16, and controls all three of these regulating functions.
This description of a particular camera configuration will be familiar to one skilled in the art, and it will be obvious that many variations and additional features are present. For example, an autofocus system could be added, or the lens could be detachable and interchangeable. It will be understood that the present invention can be applied to any type of digital camera in which similar functionality is provided by alternative components. For example, the digital camera can be a relatively simple point-and-shoot digital camera, in which the shutter 18 is a relatively simple movable blade shutter or the like, instead of a more complicated focal plane arrangement. The present invention can also be practiced using imaging components included in non-camera devices, for example in mobile phones and automotive vehicles.
The analog signal from the color filter array image sensor 20 is processed by an analog signal processor 22 and applied to an analog-to-digital (A/D) converter 24. A timing generator 26 produces various clocking signals to select rows and pixels and synchronizes the operation of the analog signal processor 22 and the A/D converter 24. An image sensor stage 28 includes the color filter array image sensor 20, the analog signal processor 22, the A/D converter 24, and the timing generator 26. The components of the image sensor stage 28 can be separately fabricated integrated circuits, or they can be fabricated as a single integrated circuit, as is commonly done with CMOS image sensors. The resulting stream of digital pixel values from the A/D converter 24 is stored in a digital signal processor (DSP) memory 32 associated with a digital signal processor (DSP) 36.
The DSP 36 is one of three processors or controllers in this embodiment, in addition to a system controller 50 and the exposure controller 40. Although this partitioning of camera functional control among multiple controllers and processors is typical, these controllers or processors can be combined in various ways without affecting the functional operation of the camera and the application of the present invention. These controllers or processors can include one or more digital signal processor devices, microcontrollers, programmable logic devices, or other digital logic circuits. Although a combination of such controllers or processors has been described, it should be apparent that one controller or processor can be designated to perform all of the needed functions. All of these variations can perform the same function and fall within the scope of this invention, and the term "processing stage" will be used as needed to encompass all of this functionality within one phrase, for example, as in processing stage 38 in Fig. 1.
In the illustrated embodiment, the DSP 36 manipulates the digital image data in the DSP memory 32 according to a software program permanently stored in a program memory 54 and copied to the DSP memory 32 for execution during image capture. The DSP 36 executes the software necessary for practicing the image processing described below. The DSP memory 32 can be any type of random access memory, such as SDRAM. A bus 30, comprising a pathway for address and data signals, connects the DSP 36 to its related DSP memory 32, the A/D converter 24, and other related components.
The system controller 50 controls the overall operation of the camera based on a software program stored in the program memory 54, which can include flash EEPROM or other nonvolatile memory. This memory can also be used to store image sensor calibration data, user setting selections, and other data that must be preserved when the camera is turned off. The system controller 50 controls the sequence of image capture by directing the exposure controller 40 to operate the lens 12, the ND filter block 13, the iris block 14, and the shutter 18 as previously described, directing the timing generator 26 to operate the color filter array image sensor 20 and associated elements, and directing the DSP 36 to process the captured image data. After an image is captured and processed, the final image file stored in the DSP memory 32 is transferred to a host computer via a host interface 57, stored on a removable memory card 64 or other storage device, and displayed for the user on an image display 88.
A system controller bus 52 includes a pathway for address, data, and control signals, and connects the system controller 50 to the DSP 36, the program memory 54, a system memory 56, the host interface 57, a memory card interface 60, and other related components. The host interface 57 provides a high-speed connection to a personal computer or other host computer for transfer of image data for display, storage, manipulation, or printing. This interface can be an IEEE 1394 or USB 2.0 serial interface or any other suitable digital interface. The memory card 64 is typically a Compact Flash (CF) card inserted into a memory card socket and connected to the system controller 50 via the memory card interface 60. Other types of storage that can be utilized include, without limitation, PC cards, multimedia cards (MMC), or secure digital (SD) cards.
Processed images are copied to a display buffer in the system memory 56 and continuously read out via a video encoder 80 to produce a video signal. This signal is output directly from the camera for display on an external monitor, or processed by a display controller 82 and presented on the image display 88. This display is typically an active matrix color liquid crystal display (LCD), although other types of displays can be used as well.
A user interface 68, including all or any combination of a viewfinder display 70, an exposure display 72, a status display 76, the image display 88, and user inputs 74, is controlled by a combination of software programs executed on the exposure controller 40 and the system controller 50. The user inputs 74 typically include some combination of buttons, rocker switches, joysticks, rotary dials, or touch screens. The exposure controller 40 operates light metering, exposure mode, autofocus, and other exposure functions. The system controller 50 manages a graphical user interface (GUI) presented on one or more of the displays, for example, on the image display 88. The GUI typically includes menus for making various option selections and review modes for examining captured images.
The exposure controller 40 accepts user inputs selecting exposure mode, lens aperture, exposure time (shutter speed), and exposure index or ISO speed rating, and directs the lens 12 and the shutter 18 accordingly for subsequent captures. The brightness sensor block 16 is employed to measure the brightness of the scene and provide an exposure meter function for the user to refer to when manually setting the ISO speed rating, aperture, and shutter speed. In this case, as the user changes one or more of the settings, the light meter indicator presented on the viewfinder display 70 tells the user to what degree the image will be over- or underexposed. In an automatic exposure mode, the user changes one setting and the exposure controller 40 automatically alters another setting to maintain correct exposure; for example, for a given ISO speed rating, when the user reduces the lens aperture, the exposure controller 40 automatically increases the exposure time to maintain the same overall exposure.
The ISO speed rating is an important attribute of a digital camera. The exposure time, the lens aperture, the lens transmittance, the level and spectral distribution of the scene illumination, and the scene reflectance determine the exposure level of a digital camera. When an image from a digital camera is obtained using an insufficient exposure, proper tone reproduction can generally be maintained by increasing the electronic or digital gain, but the resulting image will often contain an unacceptable amount of noise. As the exposure is increased, the gain is decreased, and therefore the image noise can normally be reduced to an acceptable level. If the exposure is increased excessively, the resulting signal in bright areas of the image can exceed the maximum signal level capacity of the image sensor or camera signal processing. This can cause image highlights to be clipped to form a uniformly bright area, or to "bloom" into surrounding areas of the image. It is therefore important to guide the user in setting proper exposures. An ISO speed rating is intended to serve as such a guide. In order to be easily understood by photographers, the ISO speed rating for a digital camera should directly relate to the ISO speed rating for photographic film cameras. For example, if a digital camera has an ISO speed rating of ISO 200, then the same exposure time and aperture should be appropriate for an ISO 200 rated film/process system.
The ISO speed ratings are intended to harmonize with film ISO speed ratings. However, there are differences between electronic and film-based imaging systems that preclude exact equivalency. Digital cameras can include variable gain and can provide digital processing after the image data have been captured, enabling tone reproduction to be achieved over a range of camera exposures. It is therefore possible for digital cameras to have a range of speed ratings. This range is defined as the ISO speed latitude. To prevent confusion, a single value is designated as the inherent ISO speed rating, with the upper and lower limits of the ISO speed latitude indicating the speed range, that is, a range including effective speed ratings that differ from the inherent ISO speed rating. With this in mind, the inherent ISO speed is a numerical value calculated from the exposure provided at the focal plane of a digital camera to produce specified camera output signal characteristics. The inherent speed is usually the exposure index value that produces peak image quality for a given camera system for normal scenes, where the exposure index is a numerical value that is inversely proportional to the exposure provided to the image sensor.
The foregoing description of a digital camera will be familiar to one skilled in the art. It will be obvious that there are many variations of this embodiment that are possible and that can be selected to reduce the cost, add features, or improve the performance of the camera. The following description will disclose in detail the operation of this camera for capturing images according to the present invention. Although this description is with reference to a digital camera, it will be understood that the present invention applies to any type of image capture device having an image sensor with pixels of a plurality of color channels.
The color filter array image sensor 20 shown in Fig. 1 typically includes a two-dimensional array of light-sensitive pixels fabricated on a silicon substrate in a way that provides a means of converting incoming light at each pixel into an electrical signal that is measured. As the color filter array image sensor 20 is exposed to light, free electrons are generated and captured within the electronic structure at each pixel. Capturing these free electrons for some period of time and then measuring the number of electrons captured, or measuring the rate at which free electrons are generated, can measure the light level at each pixel. In the former case, the accumulated charge is shifted out of the array of pixels to a charge-to-voltage measurement circuit, as in a charge-coupled device (CCD), or the area close to each pixel can contain elements of a charge-to-voltage measurement circuit, as in an active pixel sensor (APS or CMOS sensor).
Whenever general reference is made to an image sensor in the following description, it is understood to be representative of the color filter array image sensor 20 of Fig. 1. It is further understood that all examples and equivalents of image sensor architectures and pixel patterns of the present invention disclosed in this specification can be used for the color filter array image sensor 20.
In the context of an image sensor, a pixel (a contraction of "picture element") refers to a discrete light-sensing area and the charge-shifting or charge-measurement circuitry associated with that light-sensing area. In the context of a digital color image, the term pixel commonly refers to a particular location in the image having associated color values.
Fig. 2 is a flow chart providing a high-level view of a preferred embodiment of the present invention. A raw color filter array (CFA) image 100 is produced by the image sensor 20 (Fig. 1). A denoising block 102 produces a denoised (noise-reduced) CFA image 104 by processing the raw CFA image 100.
Fig. 3 is a detailed description of the denoising block 102 (Fig. 2) according to a preferred embodiment of the present invention. A denoise-single-color block 106 produces a first denoised CFA image 108 from the raw CFA image 100. Next, through further processing of the first denoised CFA image 108, a denoise-CFA-chroma block 110 produces a second denoised CFA image 112. The second denoised CFA image 112 becomes the denoised CFA image 104.
Fig. 4 is a detailed description of the denoise-single-color block 106 (Fig. 3). A compute-pixel-differences block 114 produces pixel differences 116 from the raw CFA image 100. A compute-local-edge-response-weight-values block 118 produces local edge-response weight values 120 from the pixel differences 116. A compute-weighted-pixel-differences block 122 produces weighted pixel differences 124 from the local edge-response weight values 120 and the pixel differences 116. Finally, a compute-first-denoised-pixel-values block 126 produces the first denoised CFA image 108 from the weighted pixel differences 124.
The compute-pixel-differences block 114 in Fig. 4 performs its computation in the following manner. Fig. 7 illustrates a pixel neighborhood from the raw CFA image 100. In the following discussion, it is assumed that the pixel value G_E is being denoised. Four pixel differences 116 (δ_N, δ_S, δ_E, δ_W) are computed by the compute-pixel-differences block 114, as given by the following equations:

δ_N = G_2 − G_E (1)
δ_S = G_R − G_E (2)
δ_E = G_G − G_E (3)
δ_W = G_C − G_E (4)
The pixel differences δ_N, δ_S, δ_E, and δ_W are the differences between the pixel value being denoised (G_E) and the four nearest pixel values of the same color in the up (N, "north"), down (S, "south"), right (E, "east"), and left (W, "west") directions (G_2, G_R, G_G, and G_C, respectively). Once the pixel differences have been computed, the local edge-response weight values are computed by the compute-local-edge-response-weight-values block 118. These values are given by the following equation.
In this equation, c is the local edge-response weight value 120, δ is the pixel difference 116, k_Y is a constant, and ‖·‖ is a vector norm operator. In a preferred embodiment, the vector norm operator is the absolute value of the pixel difference 116. Additionally, k_Y is set so that large absolute pixel differences, corresponding to strong visible edges in the raw CFA image 100, produce small local edge-response weight values, and small absolute pixel differences, corresponding to flat (smooth) regions of the raw CFA image 100, produce large local edge-response weight values. Continuing with the four pixel differences given above, the subsequent four local edge-response weight values are computed as follows:
The weighted pixel differences 124 are then computed from the pixel differences and the local edge-response weight values using the following equation:

w = δ·c (10)

In this equation, w is the weighted pixel difference 124, δ is the pixel difference 116, and c is the local edge-response weight value 120. Continuing with the four sets of values given above, the following four weighted pixel differences are computed:
The pixel values of the first denoised CFA image 108 are computed from the following equation:

X′ = X + λ Σ_i w_i (15)

In this equation, X′ is the pixel value of the first denoised CFA image 108 (Fig. 3), X is the original pixel value of the raw CFA image 100 (Fig. 2), λ is a rate control constant, and w_i is the weighted pixel difference 124 in direction i (N, S, E, or W). The summation is performed over the directions i. When the denoising computation described below is iterated multiple times, the rate control constant is typically set to a value less than one so that the results of the equation remain stable and well behaved. Continuing with the four sets of values given above, the pixel value G_E′ of the first denoised CFA image 108 is computed for the input pixel G_E as follows, where λ has been set to 1/16.
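As an illustration only, the single-color denoising step described above can be sketched in Python with NumPy. The specification does not reproduce the weight equation itself, so the form c = k_Y/(k_Y + |δ|) below is an assumed one that merely satisfies the stated behavior (large |δ| gives small c, small |δ| gives large c); the function name, the ±2 sampling offsets for a Bayer CFA, and the default constants are likewise assumptions made for the sketch.

```python
import numpy as np

def denoise_single_color_pixel(cfa, row, col, k_y=16.0, lam=1.0 / 16.0):
    """One pass of the single-color denoise for the pixel at (row, col).

    Same-color neighbors in a Bayer CFA lie two rows or columns away,
    hence the +/-2 offsets.  The weight formula is an assumed form; the
    specification states only its qualitative behavior.
    """
    x = cfa[row, col]
    # Pixel differences, equations (1)-(4): four nearest same-color neighbors.
    deltas = np.array([
        cfa[row - 2, col] - x,  # delta_N
        cfa[row + 2, col] - x,  # delta_S
        cfa[row, col + 2] - x,  # delta_E
        cfa[row, col - 2] - x,  # delta_W
    ])
    # Local edge-response weights: large |delta| (an edge) -> small c;
    # small |delta| (a flat region) -> large c.  (Assumed form.)
    c = k_y / (k_y + np.abs(deltas))
    # Weighted pixel differences, equation (10): w = delta * c.
    w = deltas * c
    # Denoised value, equation (15): X' = X + lambda * sum_i w_i.
    return x + lam * np.sum(w)
```

With λ = 1/16, a flat neighborhood is left unchanged, while a pixel differing from its same-color neighbors is pulled toward them in proportion to the edge-response weights.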
In the foregoing discussion, it was assumed that the pixel value G_E (Fig. 7) was being denoised. Similar computations are applied to compute each pixel of the first denoised CFA image 108. For each pixel, the pixel differences 116 and local edge-response weight values 120 are computed using the neighboring pixels of the same color as the center pixel being processed, in each of the several directions. For example, R_3, R_D, R_H, and R_S can be used to compute a denoised pixel value for pixel R_F, and B_8, B_I, B_M, and B_X can be used to compute a denoised pixel value for pixel B_K.
In an alternative embodiment of the present invention, the vector norm computed in the compute-local-edge-response-weight-values block 118 can use two or more colors. As an example, the following expression uses a sum of absolute values of adjacent pixel differences as the vector norm:

Other vector norms using two or more colors will be known to those skilled in the art.
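The multi-color vector norm expression referenced above is not reproduced in the text; a minimal sketch, assuming it is the stated sum of absolute values of adjacent pixel differences taken across channels, is:

```python
def multicolor_norm(deltas):
    """Vector norm over pixel differences from two or more color channels
    in one direction: the sum of their absolute values.  (Assumed form of
    the expression referenced, which the text does not reproduce.)"""
    return sum(abs(d) for d in deltas)
```

For example, a green difference of 3 and a red difference of -4 in the same direction give a norm of 7, so an edge visible in either channel suppresses the weight for that direction.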
In an embodiment of the present invention, the denoise-single-color block 106 is performed more than once in an iterative manner. As an example, the denoise-single-color block 106 can be performed three times iteratively to produce the first denoised CFA image 108. The rate control constant λ is adjusted based on the number of iterations to be performed, so that the resulting first denoised CFA image 108 is appropriately denoised.
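The iterative application just described can be sketched as a simple loop. Here `denoise_once` stands for a hypothetical function that applies the per-pixel step to the whole image; the three iterations and the rate constant below one follow the text, while all names are assumptions.

```python
def denoise_iteratively(denoise_once, cfa, iterations=3, lam=1.0 / 16.0):
    """Apply the single-color denoise pass `iterations` times.

    `denoise_once(image, lam)` is a hypothetical whole-image pass;
    lam < 1 keeps the repeated update stable.
    """
    out = cfa
    for _ in range(iterations):
        out = denoise_once(out, lam)
    return out
```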
Fig. 5 is a detailed description of the denoise-CFA-chroma block 110 (Fig. 3). A compute-chroma-values block 128 produces chroma values 130 from the first denoised CFA image 108. A compute-chroma-differences block 132 produces chroma differences 134 from the chroma values 130. A compute-local-chroma-edge-response-weight-values block 136 produces local chroma edge-response weight values 138 from the chroma differences 134. A compute-weighted-chroma-differences block 140 produces weighted chroma differences 142 from the local chroma edge-response weight values 138. Finally, a compute-second-denoised-pixel-values block 144 produces the second denoised CFA image 112 from the weighted chroma differences 142.
In Fig. 5, the compute-chroma-values block 128 performs its computation in the following manner. Referring to Fig. 6, each 2×2 block of pixels 146 that makes up the minimum repeating unit of the raw CFA image 100 is treated as though each of its pixel values occupies a single pixel 148 located at the center of the 2×2 block of pixels 146. Applying this idea to Fig. 7, the chroma value associated with G_E is computed with the following expression:

C_E = 2G_E − R_F − B_K (21)
G_E is a green pixel value on a row of green and red pixels. For green pixel values on rows of green and blue pixels, a similar expression is used. For example, the following expression is used to compute the chroma value for G_L:

C_L = 2G_L − R_S − B_M (22)
For chroma values associated with blue pixel values, for example B_K, the following expression is used:

C_K = B_K − R_F (23)
Finally, for chroma values associated with red pixel values, for example R_F, the previous expression is used with the red and blue values interchanged:

C_F = R_F − B_A (24)
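The chroma values of equations (21) through (24) can be sketched as follows. The helper names are assumptions, and which neighbors supply the red and blue values depends on the position of the pixel within the Bayer minimum repeating unit, as the text describes.

```python
def chroma_green(g, r, b):
    """Chroma for a green pixel, equations (21)-(22): C = 2G - R - B,
    where r and b come from the 2x2 minimum repeating unit (Fig. 6)."""
    return 2.0 * g - r - b

def chroma_blue(b, r):
    """Chroma for a blue pixel, equation (23): C = B - R."""
    return b - r

def chroma_red(r, b):
    """Chroma for a red pixel, equation (24): C = R - B
    (the blue-pixel expression with red and blue interchanged)."""
    return r - b
```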
The compute-chroma-differences block 132 produces the chroma differences 134 in the following manner. Referring again to Fig. 7, for G_E, the following chroma differences are computed:

δ_N = C_2 − C_E = (2G_2 − R_3 − B_8) − (2G_E − R_F − B_K) (25)
δ_S = C_R − C_E = (2G_R − R_S − B_X) − (2G_E − R_F − B_K) (26)
δ_E = C_G − C_E = (2G_G − R_H − B_M) − (2G_E − R_F − B_K) (27)
δ_W = C_C − C_E = (2G_C − R_D − B_I) − (2G_E − R_F − B_K) (28)
For B_K, the following chroma differences are computed:

δ_N = C_8 − C_K = (B_8 − R_3) − (B_K − R_F) (29)
δ_S = C_X − C_K = (B_X − R_S) − (B_K − R_F) (30)
δ_E = C_M − C_K = (B_M − R_H) − (B_K − R_F) (31)
δ_W = C_I − C_K = (B_I − R_D) − (B_K − R_F) (32)
As described previously, for R_F, the expressions used for B_K are applied with the red and blue pixel values interchanged.
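A sketch of the chroma differences for a blue pixel, per equations (29) through (32); the function and argument names are assumptions made for the sketch.

```python
def blue_chroma_differences(b_k, r_f, neighbors):
    """Chroma differences for a blue pixel B_K, equations (29)-(32).

    `neighbors` holds (B, R) pairs for the four nearest blue pixels and
    their associated red values -- (B_8, R_3), (B_X, R_S), (B_M, R_H),
    (B_I, R_D) in the N, S, E, W directions of Fig. 7.
    """
    c_k = b_k - r_f  # chroma of the center pixel, equation (23)
    return [(b - r) - c_k for (b, r) in neighbors]
```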
The compute-local-chroma-edge-response-weight-values block 136 produces the local chroma edge-response weight values 138 in the following manner:

In this equation, c is the local chroma edge-response weight value 138, δ is the chroma difference 134, k_C is a constant, and ‖·‖ is a vector norm operator. In a preferred embodiment, the vector norm operator is the absolute value of the chroma difference 134. Additionally, k_C is set so that large absolute chroma differences, corresponding to strong visible edges in the first denoised CFA image 108, produce small local chroma edge-response weight values, and small absolute chroma differences, corresponding to flat (smooth) regions of the first denoised CFA image 108, produce large local chroma edge-response weight values.
Continuing with the four chroma differences given above for G_E, the subsequent four local chroma edge-response weight values are computed as follows:
For B_K, the local chroma edge-response weight values are computed in the following manner:

The local chroma edge-response weight values for R_F are computed in a manner similar to that used for B_K.
The compute-weighted-chroma-differences block 140 produces the weighted chroma differences 142 from the following equation:

w = δ·c (42)

In this equation, w is the weighted chroma difference 142, δ is the chroma difference 134, and c is the local chroma edge-response weight value 138. Continuing with the four sets of values given above for G_E, the following four weighted chroma differences are computed:
For B_K, the weighted chroma differences are computed in the following manner:

The weighted chroma differences for R_F are computed in a manner similar to that used for B_K.
The compute-second-denoised-pixel-values block 144 produces the second denoised CFA image 112 from the first denoised CFA image 108 and the weighted chroma differences 142 using the following equation for green values:

X′ = X + λ Σ_i w_i (51)
In this equation, X′ is the pixel value of the second denoised CFA image 112, X is the pixel value of the first denoised CFA image 108, λ is a rate control constant, and w_i is the weighted chroma difference 142 in direction i (N, S, E, or W). The summation is performed over the directions i. When the denoising computation described below is iterated multiple times, the rate control constant is typically set to a value less than one so that the results of the equation remain stable and well behaved.
Continuing with the four sets of values given above for G_E, the pixel value of the second denoised CFA image 112 for pixel G_E is computed as follows.
In the expression above, λ has been set to 1/32. The pixel value of the second denoised CFA image 112 for pixel B_K is computed as follows:
In the expression above, λ has been set to 1/16. The pixel values of the second denoised CFA image 112 for pixel R_F are computed in a manner similar to that used for B_K.
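The chroma update of equations (42) and (51) can be sketched as a single function. The name and the explicit weight list are assumptions, and λ is 1/32 for green pixels and 1/16 for blue (and red) pixels, as stated above.

```python
def denoise_chroma_pixel(x, chroma_diffs, c_weights, lam):
    """Second denoised value for one pixel.

    w_i = delta_i * c_i (equation (42)); X' = X + lam * sum_i w_i
    (equation (51)).  Use lam = 1/32 for green, 1/16 for blue or red.
    """
    w = [d * c for d, c in zip(chroma_diffs, c_weights)]
    return x + lam * sum(w)
```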
In an alternative embodiment of the present invention, the vector norm computed in the compute-local-chroma-edge-response-weight-values block 136 uses two or more colors. As an example, the following expression uses a sum of absolute values of adjacent chroma differences as the vector norm:

These same local chroma edge-response weight values can be used for computing the second denoised CFA pixel values of more than one color channel (for example, G_E and B_K). Other vector norms using two or more chroma differences will be known to those skilled in the art.
In the foregoing discussion, a single application of the denoise-CFA-chroma block 110 has been described. In an alternative embodiment, the denoise-CFA-chroma block 110 is performed more than once in an iterative manner. As an example, the denoise-CFA-chroma block 110 can be performed three times iteratively to produce the second denoised CFA image 112. The rate control constant λ is adjusted based on the number of iterations to be performed, so that the resulting second denoised CFA image 112 is appropriately denoised.
Fig. 8 illustrates an alternative embodiment of the denoising block 102 (Fig. 2) according to the present invention. Compared with the preferred embodiment shown in Fig. 3, the order of a denoise-CFA-chroma block 150 and a denoise-single-color block 154 is reversed. The denoise-CFA-chroma block 150 produces a first denoised CFA image 152 from the raw CFA image 100. The denoise-single-color block 154 produces a second denoised CFA image 156 from the first denoised CFA image 152. The second denoised CFA image 156 becomes the denoised CFA image. The denoise-CFA-chroma block 150 performs the same computations as the denoise-CFA-chroma block 110 (Fig. 3). The denoise-single-color block 154 performs the same computations as the denoise-single-color block 106 (Fig. 3).
Fig. 9 illustrates another alternative embodiment of the denoising block 102 (Fig. 2) according to the present invention. Compared with the embodiments of Fig. 3 and Fig. 8, the denoising block 102 does not include two separate noise reduction operations, but instead includes a single denoising operation, a denoise-using-first-and-second-pixel-differences block 158, that reduces single-channel noise and chroma noise simultaneously.
Figure 10 is a flow chart providing a detailed view of the denoise-using-first-and-second-pixel-differences block 158 (Fig. 9). For each pixel of a first color channel, a compute-first-pixel-differences block 160 produces first pixel differences 162 from the raw CFA image 100. A compute-local-edge-response-weight-values block 164 produces local edge-response weight values 166 from the first pixel differences 162. A compute-first-weighted-pixel-differences block 168 produces first weighted pixel differences 170 from the local edge-response weight values 166 and the first pixel differences 162.
For each pixel of a second color channel, a compute-second-pixel-differences block 172 produces second pixel differences 174 from the raw CFA image 100. A compute-second-weighted-pixel-differences block 176 produces second weighted pixel differences 178 from the local edge-response weight values 166 and the second pixel differences 174. Finally, a compute-denoised-pixel-values block 180 produces the first denoised CFA image 108 from the first weighted pixel differences 170 and the second weighted pixel differences 178.
In Figure 10, the compute-first-pixel-differences block 160 performs its computation in the following manner. Fig. 7 illustrates a pixel neighborhood from the raw CFA image 100 (Fig. 2). In the following discussion, it is assumed that the pixel value G_E is being denoised. Four first pixel differences are computed by the compute-first-pixel-differences block 160 through the equations shown below:

δ_N = G_2 − G_E (58)
δ_S = G_R − G_E (59)
δ_E = G_G − G_E (60)
δ_W = G_C − G_E (61)
The first pixel differences are the differences between the pixel value being denoised (G_E) and the four nearest pixel values of the same color channel (G_2, G_R, G_G, G_C).
Once the first pixel differences are computed, the compute local border response weighting values block 164 computes the local border response weighting values 166. Each weighting value c is computed from the vector norm ||δ|| of the corresponding first pixel difference 162 and a constant kY, where ||·|| denotes a vector norm operator; in one embodiment, the vector norm is simply the absolute value of the first pixel difference 162. The constant kY is set so that large absolute pixel differences, which correspond to strong visible edges in the coarse CFA image 100, produce small local border response weighting values, and small absolute pixel differences, which correspond to flat (smooth) regions in the coarse CFA image 100, produce large local border response weighting values.
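The exact weighting equation is not spelled out above, so the exponential form below is purely an illustrative stand-in, chosen only because it exhibits the stated behavior: a weight near 1 in flat regions and a weight near 0 at strong edges, with the transition controlled by the constant kY.

```python
import math

def border_response_weight(delta, k_y=10.0):
    # Illustrative assumed form (NOT the patent's own equation):
    # large |delta| (strong edge)  -> weight near 0
    # small |delta| (flat region)  -> weight near 1
    return math.exp(-abs(delta) / k_y)
```

Any monotonically decreasing function of ||δ|| with this end-point behavior would serve the same role in the weighting step.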
Continuing with the four pixel differences given above, four corresponding local border response weighting values, cN, cS, cE, and cW, are computed, one for each direction.
The first weighted pixel differences 170 are then computed from the first pixel differences 162 and the local border response weighting values 166 using the following equation:

w = δ·c    (67)

In this equation, w is the first weighted pixel difference 170, δ is the first pixel difference 162, and c is the local border response weighting value 166.
Continuing with the four sets of values given above, the four first weighted pixel differences are computed as follows:

wN = δN·cN    (68)
wS = δS·cS    (69)
wE = δE·cE    (70)
wW = δW·cW    (71)
The compute second pixel differences block 172 computes four second pixel differences 174 using the equations shown below:

DN = B8 - BK    (72)
DS = BX - BK    (73)
DE = RH - RF    (74)
DW = RD - RF    (75)
The second pixel differences are differences involving color channels other than that of the pixel being denoised: for each non-green pixel in the 2×2 pixel block 146 of the minimal repeating unit of the CFA (Fig. 6) nearest the pixel being denoised (BK and RF, near GE), the difference is taken between that pixel value and its four nearest pixel values of the same color channel (blue pixels B8 and BX near BK, and red pixels RH and RD near RF). The second weighted pixel differences 178 are then computed from the second pixel differences 174 and the local border response weighting values 166 using the following equation:
u=D·c (76)
In this equation, u is the second weighted pixel difference 178, D is the second pixel difference 174, and c is the local border response weighting value 166.
Continuing with the four sets of values given above, the four second weighted pixel differences are computed as follows:

uN = DN·cN    (77)
uS = DS·cS    (78)
uE = DE·cE    (79)
uW = DW·cW    (80)
Finally, the compute denoised pixel values block 180 performs the following computation:

X′ = X + λ Σi [wi + (wi - ui)]    (81)
In this equation, X′ is the denoised pixel value of the denoised CFA image 104, X is the original pixel value of the coarse CFA image 100, λ is a rate control constant, wi is the first weighted pixel difference 170 in the i-th direction (N, S, E, or W), and ui is the second weighted pixel difference 178 in the i-th direction (N, S, E, or W). The summation runs over the directions i. The first term, wi, enables single-channel noise reduction; since it carries no information from the other channels, it cannot reduce chrominance noise by itself. The added term, wi - ui, is equivalent to constraining the solution X′ so that the first and second pixel differences of the denoised CFA image 104 are equal. When the first and second pixel differences of a CFA image are approximately equal, the chrominance noise of the CFA image is reduced. The added term wi - ui therefore makes simultaneous denoising of the single-color and chrominance components possible.
In the preceding example computation, the pixel value GE was assumed to be the one being denoised. Every other pixel value, regardless of its color channel, is processed in the same manner.
In an alternative embodiment of the present invention, the vector norm used in the compute local border response weighting values block 164 can span two or more color channels.
In the preceding discussion, a single application of the denoise using first and second pixel differences block 158 was described. In an alternative embodiment of the present invention, the denoise using first and second pixel differences block 158 is applied more than once, in an iterative manner. The rate control constant λ is adjusted according to the number of iterations to be performed, so that the resulting first denoised CFA image is suitably denoised.
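A toy sketch of the repeated application, restricted to the single-channel term for simplicity, splits an overall rate λ across the passes. The neighbor values, the constant kY, and the exponential weight form are illustrative assumptions, not from the patent text:

```python
import math

def iterative_single_channel(x, neighbors, k_y=10.0, lam_total=0.4, passes=4):
    """Repeatedly apply the single-channel part of the update, scaling the
    rate constant down by the number of passes so the total rate is fixed."""
    lam = lam_total / passes
    for _ in range(passes):
        total = 0.0
        for n in neighbors:
            d = n - x                    # pixel difference to a same-channel neighbor
            c = math.exp(-abs(d) / k_y)  # assumed border response weight
            total += d * c               # weighted pixel difference
        x = x + lam * total
    return x
```

Because the weight shrinks as |d| grows, the update moves the pixel only partway toward strongly differing neighbors, which is what preserves edges across repeated passes.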
The algorithm for computing a denoised CFA image disclosed in the preferred embodiments of the present invention can be employed in a variety of user contexts and environments. Exemplary contexts and environments include, without limitation: in-camera processing (reading the sensor image, digital processing, storing the processed image on digital media); wholesale digital photofinishing (which involves exemplary process steps or stages such as submitting digital images for wholesale fulfillment, digital processing, and digital printing); retail digital photofinishing (submitting digital images for retail fulfillment, digital processing, and digital printing); home printing (inputting home digital images, digital processing, and printing on a home printer); desktop software (software that applies algorithms to digital images to improve, or even merely change, them); digital fulfillment (inputting digital images from media or over a network, digital processing, and outputting digital images on media or in digital form over the internet); kiosks (inputting digital images, digital processing, and digital printing or output on digital media); mobile devices (e.g., a PDA or cell phone that can serve as a processing unit, a display unit, or a unit for giving processing instructions); and as a service offered via the World Wide Web.
In each case, the algorithm for computing a denoised CFA image can stand alone or can be a component of a larger system solution. Furthermore, the interfaces to the algorithm (e.g., the input, the digital processing, the display to the user (if needed), the input of user requests or processing instructions (if needed), and the output) can each be on the same or different devices and physical locations, and communication between the devices and locations can be via public or private network connections, or via media-based communication. Consistent with the foregoing disclosure of the present invention, the algorithm itself can be fully automatic, can accept user input (i.e., be fully or partially manual), can include user or operator review to accept or reject the result, or can be assisted by metadata (metadata that can be user-supplied, supplied by a measuring device (e.g., in a camera), or determined by an algorithm). Moreover, the algorithm can interface with a variety of workflow user interface schemes.
The example discussed above is based on the CFA pattern with the minimal repeating unit shown in Fig. 6. In that case, the minimal repeating unit contains 2 green (G) pixels, 1 red (R) pixel, and 1 blue (B) pixel. It will be apparent to those skilled in the art that the method can equally be applied to any type of CFA pattern. For example, it can be applied to CFA patterns in which RGB pixels are arranged differently. Likewise, it can be applied to CFA patterns that use other types of pixels, for example cyan (C), magenta (M), or yellow (Y) pixels, or panchromatic (P) pixels. Panchromatic pixels are sensitive to all visible wavelengths and are typically made with a clear filter, or no filter at all, over the sensing element. Many such CFA pattern examples with different spectral sensitivities and different spatial arrangements are known to those skilled in the art. For example, U.S. Patent Application 2007/0268533 to Kijima et al., entitled "Image sensor with improved light sensitivity," describes various CFA patterns that include panchromatic pixels. The method of the present invention can also be applied to CFA patterns that use pixels sensitive to non-visible radiation, for example infrared (IR) or ultraviolet (UV) radiation.
The algorithm for computing a denoised CFA image disclosed herein according to the present invention can have internal components that employ various data detection and reduction techniques (e.g., face detection, eye detection, skin detection, flash detection).
Component list
10 light
11 imaging stage
12 lens
13 neutral density filter block
14 iris block
16 brightness sensor block
18 shutter
20 color filter array sensor block
22 analog signal processor
24 A/D converter
26 timing generator
28 image sensor stage
30 bus
32 DSP memory
36 digital signal processor
38 processing stage
40 exposure controller
50 system controller
52 system controller bus
54 program memory
56 system memory
57 host interface
60 memory card interface
62 memory card socket
64 memory card
68 user interface
70 viewfinder display
72 exposure display
74 user inputs
76 status display
80 video encoder
82 display controller
88 image display
100 coarse CFA image
102 denoising block
104 denoised CFA image
106 denoise single color block
108 first denoised CFA image
110 denoise CFA chrominance block
112 second denoised CFA image
114 compute pixel differences block
116 pixel differences
118 compute local border response weighting values block
120 local border response weighting values
122 compute weighted pixel differences block
124 weighted pixel differences
126 compute first denoised pixel values block
128 compute chrominance values block
130 chrominance values
132 compute chrominance differences block
134 chrominance differences
136 compute local chrominance border response weighting values block
138 local chrominance border response weighting values
140 compute weighted chrominance differences block
142 weighted chrominance differences
144 compute second denoised pixel values block
146 2×2 pixel value block
148 center of 2×2 pixel value block
150 denoise CFA chrominance block
152 first denoised CFA image
154 denoise single color block
156 second denoised CFA image
158 denoise using first and second pixel differences block
160 compute first pixel differences block
162 first pixel differences
164 compute local border response weighting values block
166 local border response weighting values
168 compute first weighted pixel differences block
170 first weighted pixel differences
172 compute second pixel differences block
174 second pixel differences
176 compute second weighted pixel differences block
178 second weighted pixel differences
180 compute denoised pixel values block