EP1737217A2 - Imaging device, image recording medium, image processing device, image processing program, and recording medium thereof - Google Patents


Info

Publication number
EP1737217A2
EP1737217A2 (application EP05734244A)
Authority
EP
European Patent Office
Prior art keywords
noise
signal
image
block
recording medium
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05734244A
Other languages
German (de)
English (en)
French (fr)
Inventor
Gen Horie
Current Assignee
Olympus Corp
Original Assignee
Olympus Corp
Priority date
Filing date
Publication date
Application filed by Olympus Corp filed Critical Olympus Corp
Publication of EP1737217A2 publication Critical patent/EP1737217A2/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/63Noise processing, e.g. detecting, correcting, reducing or removing noise applied to dark current
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/907Television signal recording using static stores, e.g. storage tubes or semiconductor memories
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/92Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N5/9201Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving the multiplexing of an additional signal and the video signal

Definitions

  • the present invention relates generally to processing for reducing random noise ascribable to an imaging device system, and more particularly to an imaging system and an image recording medium as well as an image processor, an image processing program and a recording medium thereof, wherein the quantity of noise generated is dynamically estimated so that only noise components can be reduced with high precision, without being substantially affected by taking conditions.
  • JP(A) 2003-189236 teaches that images taken by an imaging system are recorded in a recording medium and, at the same time, taking conditions at a time of taking them are recorded there in association with image data, and that in reducing noise from the image data, a noise reduction level is changed depending on the recorded taking conditions for noise reduction processing so that noise reduction processing depending on the taking conditions can be implemented.
  • JP(A) 2001-157057 sets forth a technique wherein the quantity of noise is converted into a function form with respect to a signal level, the quantity of noise with respect to the signal level is estimated from that function, and the frequency characteristics of filtering are controlled on the basis of the noise quantity. This enables adaptive noise reduction processing to be implemented with respect to the signal level.
  • JP(A) 2003-189236 shows that the noise reduction level is changed depending on taking conditions such as ISO sensitivity and shutter speed.
  • changes in the quantity of noise due to changes in taking conditions depend on the characteristics of the imaging system including the imaging device; even with a direct change in the noise reduction level depending on taking conditions, no sufficient noise reduction effect is obtained. That is, it is impossible to precisely estimate the quantity of noise generated depending on taking conditions, and so it is impossible to implement adaptive noise reduction processing.
  • a, b and c are statically given constant terms.
  • the quantity of noise changes dynamically with such factors as temperature, exposure time and gain at the taking time. That is, a statically given function cannot match the quantity of noise at the taking time, ending up with poor estimation precision of noise quantities.
  • the present invention is characterized in that the quantity of noise is modeled in such a way as to be adaptive to not only signal levels but also dynamically changing factors such as temperature, exposure time and gain at the taking time, and this model is recorded along with taken images.
  • the present invention has for its object the provision of an imaging system and an image recording medium as well as an image processor, an image processing program and a recording medium thereof, wherein, even upon post-processing of data recorded in the recording medium, it is possible to implement noise reduction processing optimized for any taking situations.
  • the embodiments of the present invention are each directed to an imaging system supposed to be used with a digital camera, wherein images taken by an imaging device are A/D converted into digital data for storage in a recording medium.
  • the converted digital data are further subjected to signal processing such as noise reduction, gray level transformation, edge enhancement, etc.
  • no processing is applied to the digital data at all; data just after digitization (raw data) are recorded, and signal processing is applied to them by another system different from the imaging system.
  • processing with the aforesaid other system ensures higher performance than could be achieved by processing within the imaging system, and enables the user to control the extent of processing in an easier manner.
  • Fig. 1 is representative of the architecture of the first embodiment according to the present invention.
  • Images (gray-scale images) taken via a lens system 100, a stop 101, a low-pass filter 102 and a black-and-white CCD 103 are sampled by a CDS (correlated double sampling) 104.
  • the sampled signals are amplified at a gain control (Gain) 105, and converted by A/D 106 into digital signals.
  • Signals from A/D 106 are forwarded to an output block 114 by way of an image buffer 107.
  • the image buffer 107 is connected to a metering estimation block 108 and a focusing detection block 109, too.
  • the metering estimation block 108 is connected to the stop 101, CCD 103 and Gain 105, and the focusing detection block 109 is connected to an AF motor 110.
  • the image buffer 107 is connected to a noise estimation block 112 that is connected to an output block 114. Further, signals from the image buffer 107 are connected to a processed image output block 119 such as a memory card by way of a signal processing block 113.
  • a control block 115 such as a microcomputer is bidirectionally connected to CDS 104, Gain 105, A/D 106, the metering estimation block 108, the focusing detection block 109, the noise estimation block 112, the signal processing block 113, the output block 114 and the processed image output block 119.
  • an external I/F block 116, comprising a power source switch, a shutter button and a mode select interface used at the taking time, is also bidirectionally connected to the control block 115.
  • the signal processing block 113 connected to the image buffer 107 and the processed image output block 119 are only illustrative of the general structure of the aforesaid digital camera, and are not essential components of the first embodiment.
  • the flow of signals is explained. After taking conditions such as ISO sensitivity are set via the external I/F block 116, the shutter button is half pressed down to place the camera in a pre-image pickup mode.
  • Video signals taken via the lens system 100, the stop 101, the low-pass filter 102 and CCD 103 are read as analog signals by known correlated double sampling at CDS 104.
  • the analog signals are amplified by a given amount at gain control (Gain) 105, and converted by A/D 106 into digital signals that are then forwarded to the image buffer 107.
  • Video signals within the image buffer 107 are forwarded to the metering estimation block 108 and the focusing detection block 109.
  • the stop 101, the electronic shutter speed of CCD 103, the rate of amplification at the gain control (Gain) 105, etc. are controlled such that, in consideration of the set ISO sensitivity, a camera-shake limit shutter speed, etc., the luminance level in the image is set for proper exposure.
  • at the focusing detection block 109, the edge intensity in the image is detected, and the AF motor 110 is controlled such that the maximum edge intensity is obtained, thereby obtaining a focused image.
  • the shutter button is fully pressed down by way of the external I/F block 116 so that the image can be taken, and video signals are forwarded to the image buffer 107, as in the pre-image pickup mode.
  • the image is taken under exposure conditions determined at the metering estimation block 108 and focusing conditions found at the focusing detection block 109, and these taking conditions are forwarded to the control block 115.
  • the exposure conditions found at the metering estimation block 108 and the taking conditions such as ISO sensitivity determined at the external I/F block 116 are forwarded to the noise estimation block 112 by way of the control block 115.
  • the noise estimation block 112 works out a noise quantity calculation coefficient adaptive to the aforesaid conditions.
  • That noise quantity calculation coefficient is forwarded to the output block 114.
  • the image buffer 107 forwards the video signals to the output block 114.
  • the output block 114 records and stores the video signals transmitted from the image buffer 107 and the noise quantity calculation coefficient transmitted from the noise estimation block 112 in a memory card or the like.
  • the signal processing block 113 applies known enhancement and compression processing, etc. to the video signals to forward them to the processed image output block 119.
  • the transmitted image signals are recorded and stored in an image recording medium such as a memory card.
  • Fig. 2 is representative of one example of the architecture of the noise estimation block 112.
  • the noise estimation block 112 is made up of an OB area extraction block 200, a buffer 1 201, a variance calculation block 202, a temperature estimation block 203, a temperature estimating ROM 204, a gain calculation block 208, a standard value application block 209, a coefficient calculation block 210, a parameter ROM 211 and an upper limit setting block 213.
  • the image buffer 107 is connected to the OB area extraction block 200.
  • the OB area extraction block 200 is connected to the coefficient calculation block 210 by way of the buffer 1 201, the variance calculation block 202 and the temperature estimation block 203, and there is the temperature estimating ROM 204 connected to the temperature estimation block 203.
  • the control block 115 is bidirectionally connected to the OB area extraction block 200, the variance calculation block 202, the temperature estimation block 203, the gain calculation block 208, the standard value application block 209, the coefficient calculation block 210 and the upper limit setting block 213.
  • the gain calculation block 208, the standard value application block 209 and the parameter ROM 211 are connected to the coefficient calculation block 210.
  • the coefficient calculation block 210 is connected to the output block 114.
  • the upper limit setting block 213 is connected to the output block 114.
  • the OB area extraction block 200 extracts an OB area of the device from the image buffer 107 to forward it to the buffer 1 201.
  • Fig. 3(a) is illustrative of where the OB area is located. In this embodiment, there is the OB area found at the right end of the device.
  • the variance calculation block 202 reads the OB area from the buffer 1 201 to work out a variance value. Further, on the basis of information about the exposure conditions transmitted from the control block 115, the aforesaid variance value is corrected for the amount of amplification at the Gain 105, and the corrected value is forwarded to the temperature estimation block 203.
  • the temperature estimation block 203 finds the temperature of the device to forward it to the coefficient calculation block 210.
  • Fig. 4 is a characteristic view illustrative of the relation between the variance of the OB area and the temperature of the device, showing that the larger the variance, the higher the temperature.
  • Random noises at the OB area with no incident light are predominantly dark-current noises that have relations to the temperature of the device. Therefore, if the random noises at the OB area are worked out as a variance value and the variance value for changes in the temperature of the device is previously measured, it is then possible to estimate the temperature of the device from the variance value.
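One way to realize such an estimate is a calibration table of (variance, temperature) pairs measured on the device in advance, interpolated at run time. The following is a sketch; the calibration values are hypothetical, not from the patent:

```python
import bisect

def estimate_temperature(variance, calib):
    """Estimate the device temperature from the OB-area variance by linear
    interpolation over pre-measured (variance, temperature) pairs sorted by
    variance, clamping outside the measured range."""
    xs = [v for v, _ in calib]
    ys = [t for _, t in calib]
    if variance <= xs[0]:
        return ys[0]
    if variance >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_left(xs, variance)
    f = (variance - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + f * (ys[i] - ys[i - 1])

# Hypothetical calibration: variance grows with temperature (cf. Fig. 4).
CALIB = [(10.0, 20.0), (20.0, 40.0), (40.0, 60.0)]
```

For instance, a measured variance of 15.0 would map to a temperature of 30.0 with this table.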
  • one temperature is found on condition that the temperature of the device is regarded as being constant; however, the present invention is not necessarily limited to that. For instance, with OB areas located at four sites as depicted in Fig., a variance value for a specific block in the image may be found by linear interpolation from the variance values of the OB areas at the corresponding upper end, lower end, left end and right end. It is thus possible to estimate the temperature of the device with high accuracy even when that temperature is not uniform.
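One plausible reading of this interpolation scheme, sketched for a block centred at (x, y) in a w × h image; averaging the horizontal and vertical estimates is an assumption, since the text does not spell out how the four edge values are combined:

```python
def block_variance(x, y, w, h, v_top, v_bottom, v_left, v_right):
    """Interpolate an OB variance for a block at (x, y) from the variances of
    the four edge OB areas: linearly between left/right along x and between
    top/bottom along y, then average the two one-dimensional estimates."""
    vh = v_left + (v_right - v_left) * (x / w)
    vv = v_top + (v_bottom - v_top) * (y / h)
    return (vh + vv) / 2.0
```

The per-block variance would then be fed to the temperature estimation, giving a position-dependent temperature.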
  • the gain calculation block 208 finds out the quantity of amplification at Gain 105 to forward it to the coefficient calculation block 210. Information regarding the shutter speed set by the metering estimation block 108 is forwarded from the control block 115 to the coefficient calculation block 210.
  • the coefficient calculation block 210 works out a coefficient for a function that gives the quantity of noise with respect to a signal value. This function is obtained from formulation on the basis of noise generation principles.
  • Fig. 5 is illustrative of how the quantity of noise is formulated.
  • Fig. 5(a) is a noise quantity vs. signal value level plot, showing that a constant term is added to a power function.
  • Let L be the signal value level and N the quantity of noise. The noise quantity is then given by N = A·L^B + C, where A, B and C are each a constant term.
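As a minimal sketch, the model above can be evaluated directly; the coefficient values in the example below are purely illustrative, not taken from the patent:

```python
def noise_quantity(L, A, B, C):
    """Noise quantity N for signal value level L: N = A * L**B + C."""
    return A * L ** B + C
```

For instance, with A = 1.0, B = 0.5 and C = 2.0, a signal level of 4 yields N = 1.0 · 4^0.5 + 2.0 = 4.0.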
  • Fig. 6 is illustrative of the characteristics of this embodiment:
  • Fig. 6(a) is illustrative of the characteristics of the aforesaid three functions a(), b() and c().
  • Three such functions have the temperature T and the gain G as input parameters, producing the respective constant terms A, B and C as outputs. These functions may be easily obtained by measuring the characteristics of the device system in advance.
  • Fig. 6(b) is a characteristic representation of shutter speed and noise increments. Random noises tend to increase as exposure time becomes long. Even with the same exposure quantity, therefore, a different shutter speed/stop combination would result in a difference in the quantity of noise. This difference must be corrected.
  • That correction may be formulated as formula (3) by multiplying formula (2) by a correction coefficient provided that S is the shutter speed.
  • N = {a(T,G)·L^b(T,G) + c(T,G)} · d(S) ... (3)
  • d() is a function with the shutter speed S as a parameter.
  • the function d() may be obtained by measuring the characteristics of the device system in advance. Four such functions a(), b(), c() and d() are recorded in the parameter ROM 211. Note here that for correction of the shutter speed, a function need not necessarily be kept in store; a noise increase grows sharply from a certain threshold value STH, as can be seen from the characteristic representation of Fig.
  • the coefficient calculation block 210 has the temperature T, gain G and shutter speed S as input parameters. According to the four functions a(), b(), c() and d() recorded in the parameter ROM 211, four function values a(T,G), b(T,G), c(T,G) and d(S) at the temperature T, gain G and shutter speed S are found, and each is going to become a noise calculation coefficient A, B, C, D at the temperature T, gain G, and shutter speed S. These coefficients define noise characteristics at the temperature T, gain G and shutter speed S.
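In code form, the calculation at the coefficient calculation block 210 might look as follows. The characteristic functions a(), b(), c() and d() below are hypothetical stand-ins for those measured in advance and stored in the parameter ROM 211:

```python
def noise_coefficients(T, G, S, a, b, c, d):
    """Evaluate the stored characteristic functions at temperature T,
    gain G and shutter speed S to obtain coefficients A, B, C and D."""
    return a(T, G), b(T, G), c(T, G), d(S)

def estimate_noise(L, coeffs):
    """Formula (3): N = {A * L**B + C} * D."""
    A, B, C, D = coeffs
    return (A * L ** B + C) * D

# Hypothetical characteristic functions (illustrative shapes only).
a = lambda T, G: 0.01 * G * (1.0 + 0.02 * T)
b = lambda T, G: 0.5
c = lambda T, G: 1.0 + 0.05 * T
# Hypothetical shutter correction: boost at/above a threshold exposure time STH.
d = lambda S: 1.5 if S >= 0.5 else 1.0
```

With T = 0, G = 1.0 and S = 0.01, the coefficients come out as (0.01, 0.5, 1.0, 1.0), and a signal level of 100 gives N = (0.01 · 10 + 1.0) · 1.0 = 1.1.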
  • the parameters for the temperature T, gain G and shutter speed S are not necessarily found for each taking.
  • the temperature T is kept stable after the lapse of a given time from the power source being turned on, and if the once-calculated temperature information is recorded in the standard value application block 209, it is then possible to make do without the subsequent calculation steps. By such an arrangement, faster processing, power savings, etc. are achievable.
  • the noise calculation coefficient worked out at the coefficient calculation block 210 is forwarded to the output block 114.
  • the upper limit setting block 213 operates to limit the quantity of noise such that it does not exceed a given threshold value. This is to factor in cases where reduction processing based on the theoretical quantity of noise would be excessive.
  • the value set at the upper limit setting block 213 is forwarded to the output block 114. If necessary, the function of the upper limit setting block 213 may be stopped via the control block 115 by operation of the external I/F block 116.
  • Fig. 11 is illustrative of the architecture of data recorded by the output block 114 in the image recording medium.
  • Fig. 12 is illustrative of the architecture of one of those image files.
  • the image file 311 is made up of image data 330 and incidental information 320.
  • the incidental information 320 is further composed of taking information 321 and noise characteristics 322.
  • Fig. 13 is illustrative of the architecture of the noise characteristics 322.
  • the noise characteristics 322 are composed of coefficients A, B, C and D corresponding to the functions a(), b(), c() and d() in the aforesaid formula (3), and the upper limit value of the quantity of noise. These values are worked out by the coefficient calculation block 210 at the time when the image data 330 stored in the same file are taken and recorded, and provide information for describing noise generation characteristics adaptive to the image data 330.
  • Fig. 14 is illustrative of the architecture of an image file according to one modification to the first embodiment of the present invention.
  • approximate noise characteristics 325 are recorded in place of the noise characteristics 322 of Fig. 13, as depicted in the characteristic representation of Fig. 18.
  • the approximate noise characteristics 325 are defined by each interval and the value of a coefficient corresponding to it at a time when noise generation characteristics are approximated by a piecewise straight line.
  • Given in Fig. 18 are data indicative of each interval of the approximate noise characteristics 325, wherein for the upper limit signal values L1-L4 of the intervals, the corresponding coefficients are represented by slopes A1-A4 and intercepts C1-C4 on the N-axis.
  • Fig. 33 is representative of the architecture of the noise estimation block in the instant modification.
  • the values of L1-L4, A1-A4 and C1-C4 are stored in the parameter ROM 211 for each temperature and gain, and on the basis of the temperature T from the temperature calculation block 223 and the gain G from the gain calculation block, proper interval data and coefficients are selected by the coefficient selection block 212, and forwarded to and recorded in the output block 114.
  • the temperature calculation block 223 is comprised of the OB area extraction block 200, the variance calculation block 202, the temperature estimation block 203 and the temperature estimating ROM 204 indicated in Fig. 2.
  • the contents of the parameter ROM 211 are determined by measuring the characteristics of the imaging system in advance. As illustrated in the characteristic representation of Fig., signal value L-to-noise quantity N relations are measured under various temperatures T and gains G.
  • One example is the characteristics indicated by a broken line in Fig. 18.
  • the intervals (L1-L4) are set depending on the level of the signal value L, and an approximate straight line for each interval is determined from the noise quantity data contained in that interval, thereby obtaining the slopes (A1-A4) of the approximate straight lines and their intercepts (C1-C4) on the N-axis. These values are found for each measured temperature T and gain G, and recorded in the parameter ROM 211.
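Evaluating such a piecewise-linear approximation can be sketched as follows; the interval limits and coefficients are hypothetical, standing in for the (Li, Ai, Ci) values read from the parameter ROM 211:

```python
def approx_noise(L, intervals):
    """intervals: (upper_limit, slope, intercept) triples, i.e. (Li, Ai, Ci),
    sorted by upper limit. Returns N = A * L + C for the interval holding L,
    clamping to the last interval above the top limit."""
    for upper, A, C in intervals:
        if L <= upper:
            return A * L + C
    upper, A, C = intervals[-1]
    return A * L + C

# Hypothetical piecewise approximation of a noise curve (cf. Fig. 18).
INTERVALS = [(64, 0.5, 1.0), (128, 0.25, 17.0), (255, 0.1, 36.2)]
```

A signal value of 100 falls in the second interval, giving N = 0.25 · 100 + 17.0 = 42.0.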
  • Fig. 15 is illustrative of the structure of a recording medium in the second modification to the instant embodiment of the present invention.
  • a memory card 301 is built up of a plurality of image files 311-314 and a noise characteristics file 340.
  • Fig. 16 is a representation of the architecture of one of the image files shown in Fig. 15. In place of the noise characteristics 322 themselves, noise identification information 323 is recorded in the image file 311.
  • a noise characteristics file 340 is retrieved to read a coefficient corresponding to the noise identification information 323 so that a coefficient corresponding to image data 330 can be obtained.
  • Fig. 17 is a representation of the architecture of the noise characteristics file 340.
  • the noise characteristics file 340 has a plurality of noise characteristics recorded in it, each comprising noise characteristic identification information, a corresponding coefficient, and an upper limit value. According to the arrangement described above, it is possible to make effective use of the recording medium, because there is no need to record the same noise characteristics plural times.
  • Fig. 20 is a representation of the architecture of a noise characteristics file 340 in the third modification to the instant embodiment of the present invention.
  • the noise characteristics file 340 here relies on a lookup table (LUT) in lieu of the function coefficients A, B, C and D so as to describe noise characteristics.
  • Fig. 21 is illustrative of the contents of one of those LUTs.
  • a noise quantity N is stored with a signal value L as an address so that the noise quantity N corresponding to the signal value L can be produced as output.
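A minimal sketch of such a LUT; the stored curve below is hypothetical:

```python
# Hypothetical 8-bit LUT: noise quantity N addressed by signal value L.
noise_lut = [2 + L // 16 for L in range(256)]

def lut_noise(L):
    """Produce the noise quantity N for signal value L used as the address."""
    return noise_lut[L]
```

A LUT trades memory for generality: it can describe noise characteristics that do not fit any convenient functional form.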
  • Fig. 22 is a representation of the architecture of a noise characteristics file 340 in the fourth modification to the instant embodiment of the present invention.
  • the noise characteristics file 340 here is comprised of basic functions a(L), b(L), c(L) and d(L) for describing noise characteristics and coefficients A, B, C and D for each piece of identification information.
  • the basic functions a(L), b(L), c(L) and d(L) are determined as follows.
  • Possible values are set for parameters such as the temperature T, gain G and shutter speed S of an imaging device.
  • signal value L-to-noise quantity N relations for each of all those conditions are determined by experimentation. Then, these relations are subjected to principal component analysis to extract components as basic functions in descending order of contribution. Even when there is difficulty in formulating a noise model, that model can be described within a limited memory capacity.
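The resulting description reduces to a weighted sum of basic functions; the basis and weights below are hypothetical stand-ins for the principal components extracted from measurements:

```python
def noise_from_basis(L, basis, weights):
    """Noise quantity N as a linear combination of basic functions of L."""
    return sum(w * f(L) for f, w in zip(basis, weights))

# Hypothetical basis functions, listed in descending order of contribution.
BASIS = [lambda L: 1.0, lambda L: L ** 0.5, lambda L: L]
```

Only the per-condition weight vectors (A, B, C, D) need be stored alongside the shared basis, which is what keeps the memory footprint small.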
  • Fig. 19 is a representation of the architecture of an image file in the fifth modification to the first embodiment of the present invention.
  • an algorithm type 326 and a control range 327 are added to the incidental information 320 of the image file depicted in Fig. 12.
  • the algorithm type 326 designates the algorithm to be used for noise reduction processing.
  • the control range 327 is a value that is used to control the quantity of noise to be reduced on user's request.
  • the control range 327 comprises an upper limit value 802 used at a time when the quantity of reduction is largest, a lower limit value 803 used at a time when the quantity of reduction is smallest, and a standard value 804 used at a time when there is no user's request.
  • control range 327 may have values halfway between the upper and the lower limit value as candidates of choice for the quantity of noise reduction. In this case, restrictions depending on taking conditions can be imposed on the intensity of the noise reduction processing opted for by the user; this prevents problems such as too weak or too strong an effect, which would end up with poor resolution.
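A sketch of how a processing side might honour the recorded control range 327; the function and parameter names are hypothetical:

```python
def effective_reduction(requested, lower, upper, standard):
    """Clamp a user-requested noise reduction quantity to the recorded
    control range (lower limit 803, upper limit 802); fall back to the
    standard value 804 when no request is made."""
    if requested is None:
        return standard
    return max(lower, min(upper, requested))
```

A request outside the range is thus silently pulled back to the nearest limit, which is what keeps user control within taking-condition-dependent bounds.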
  • Fig. 8 is a representation of the architecture of the second embodiment of the present invention.
  • a color filter 400, a temperature sensor 401, a PreWB block 402 and a color signal separation block 403 are added to the aforesaid first embodiment of the present invention.
  • the second embodiment is basically equivalent to the first embodiment; like reference numerals stand for like components. Only parts of the second embodiment different from the first embodiment are now explained.
  • the color filter 400 is located at the front of CCD 103, and the temperature sensor 401 is positioned near it.
  • the image buffer 107 is connected to the PreWB block 402 and the color signal separation block 403, and the color signal separation block 403 is connected to the output block 114.
  • the PreWB block 402 is connected to Gain 105.
  • the control block 115 is bidirectionally connected to the PreWB block 402 and the color signal separation block 403.
  • the action of the second embodiment is basically equivalent to that of the first embodiment; only different portions are explained. Referring here to Fig. 8, the flow of signals is explained.
  • the shutter button is half pressed down to place the camera in a pre-image pickup mode.
  • Video signals (color image) taken via the lens system 100, stop 101, low-pass filter 102, color filter 400 and CCD 103 are forwarded to the image buffer 107 by way of CDS 104, Gain 105 and A/D 106.
  • Video signals within the image buffer 107 are forwarded to the metering estimation block 108, the focusing detection block 109 and the PreWB block 402.
  • the stop 101, the electronic shutter speed of CCD 103, the rate of amplification at Gain 105, etc. are controlled.
  • the edge intensity in the image is detected, and the AF motor 110 is controlled such that the maximum edge intensity is obtained, thereby obtaining a focused image.
  • signals having a given luminance level are taken out of the video signals, and summed for each color signal so that a simple white balance coefficient is worked out.
  • the white balance coefficient is forwarded to Gain 105, where each color signal is multiplied by a different gain to implement white balance.
  • the shutter button is fully pressed down by way of the external I/F block 116 so that the image can be taken, and video signals are forwarded to the image buffer 107, as in the pre-image pickup mode.
  • the image is taken on the basis of the exposure condition determined at the metering estimation block 108, the focusing condition determined at the focusing detection block 109, and the white balance coefficient found at the PreWB block 402, and those taking conditions are forwarded to the control block 115.
  • the video signals within the image buffer 107 are forwarded to the color signal separation block 403, where they are separated for each filter, and then forwarded to the noise estimation block 112 and the output block 114.
  • the color filter 400 located at the front of CCD 103 is of a primary-colors Bayer type.
  • Fig. 9 is a representation of the architecture of the color filter of the primary-colors Bayer type.
  • the primary-colors Bayer type has a basic unit of 2 × 2 pixels, in which a red (R) filter, green (G1, G2) filters and a blue (B) filter are located. Note here that while the green filters are the same, they are designated as G1 and G2 for the convenience of processing.
  • the video signals within the image buffer 107 are separated depending on the four color filters of R, G1, G2 and B. Color separations take place on the basis of control at the control block 115, and occur in sync with processing at the signal processing block 113, the noise estimation block 112 and the output block 114.
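The separation itself can be sketched by subsampling the mosaic; an RGGB arrangement (R at the top-left of the 2 × 2 unit) is assumed here, since the exact layout of Fig. 9 is not reproduced in the text:

```python
def separate_bayer(img):
    """Split a Bayer-mosaic image (2D list, assumed RGGB layout) into
    R, G1, G2 and B sub-images by taking every other pixel in each axis."""
    r  = [row[0::2] for row in img[0::2]]
    g1 = [row[1::2] for row in img[0::2]]
    g2 = [row[0::2] for row in img[1::2]]
    b  = [row[1::2] for row in img[1::2]]
    return r, g1, g2, b
```

Each sub-image can then be given its own noise estimation, as described for the noise estimation block 112 below.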
  • the video signals already subjected to color separations are recorded and stored in a memory card or the like.
  • interpolation processing, enhancement processing, compression processing, all known in the art, etc. are applied to the video signals under control at the control block 115, and the processed signals are forwarded to the processed image output block 119, at which the processed signals are recorded and stored in a memory card or the like.
  • at the noise estimation block 112, the four kinds of signals R, G1, G2 and B generated at the color signal separation block 403 are subjected to respective adaptive noise estimations. That is, coefficients Ai, Bi, Ci and Di are worked out for the respective four kinds of signals R, G1, G2 and B, where i is any one of R, G1, G2 and B.
  • the architecture of the noise estimation block 112 is the same as shown in Fig. 2.
  • noise calculation coefficients Ai, Bi, Ci and Di are worked out according to the four functions ai(), bi(), ci() and di() recorded for each of the four kinds of signals in the parameter ROM 211, with the temperature T, gain G and shutter speed S as parameters.
  • i is any one of R, G1, G2 and B.
  • in the parameter ROM 211, the four functions ai(), bi(), ci() and di() are held for each of the four kinds of signals R, G1, G2 and B.
  • those functions are changed over to the ones adaptive to the respective signals to work out the noise calculation coefficients Ai, Bi, Ci and Di corresponding to the respective four kinds of signals.
  • Fig. 10 is illustrative of color-by-color noise characteristics 361 of the architecture of data recorded at the output block 114 in the recording medium.
  • Fig. 10 corresponds to the noise characteristics 322 contained in the image file of Fig. 12 explained in connection with the first embodiment of the present invention.
  • the noise calculation coefficients A, B, C and D and the noise upper limit values are recorded, one set for each of the four kinds of signals. Proper noise estimation can thus be made for each signal, so that high-precision noise reduction processing is feasible.
  • Fig. 27 is illustrative of the architecture of the color-by-color noise characteristics 361 in the first modification of the second embodiment of the present invention.
  • the modification of Fig. 27 comprises four coefficients A, B, C and D for a signal G1 that become basic noise characteristics, a correction term for working out the quantities of noise of the other signals G2, R and B from the quantity of noise found from the basic noise characteristics, and a scene correction term for correcting the quantity of noise depending on the scene to be taken.
  • the scene correction term is determined on the basis of known scene identification techniques or taking mode information set from the external I/F block 116; for instance, it is determined in such a way as to get strong for portrait shots or weak for close-up shots.
  • for the noise characteristics of the signals G2, R and B, only the correction term needs to be kept in store, so that memory capacity can be cut down.
  • a scene-adaptive noise reduction can be implemented so that subjective image quality can be enhanced.
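The compacted record of Fig. 27 can be sketched as follows, taking formula (5) as N = A·L^B + C·D (an assumed reading of the garbled formula). All coefficient values, correction terms and scene terms below are illustrative assumptions, not values from the patent.

```python
# Sketch of the compacted noise-characteristics record of Fig. 27:
# only the G1 channel keeps full coefficients; the other channels keep a
# single multiplicative correction term. All numbers are illustrative.

BASIC = {"A": 1.0, "B": 0.5, "C": 0.1, "D": 1.0}      # G1 basic characteristics
CORRECTION = {"G2": 1.0, "R": 1.2, "B": 1.4}          # per-channel correction terms
SCENE_CORRECTION = {"portrait": 1.3, "close-up": 0.7, "default": 1.0}

def basic_noise(L):
    """Noise of the basic (G1) characteristics for signal level L, per formula (5)."""
    return BASIC["A"] * L ** BASIC["B"] + BASIC["C"] * BASIC["D"]

def channel_noise(channel, L, scene="default"):
    """Noise of G2, R or B derived from the G1 quantity via the correction terms."""
    n = basic_noise(L)
    if channel != "G1":
        n *= CORRECTION[channel]       # per-channel correction term
    return n * SCENE_CORRECTION[scene] # scene correction term
```

Only three scalars per extra channel (plus the scene table) are stored instead of three more full coefficient sets, which is the memory saving the modification aims at.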
  • the aforesaid arrangement ensures that, for signals from the color CCD with the color filter located at its front, the information for estimating the quantity of noise in pixel unit that matches dynamically changing factors such as signal value levels as well as temperature, shutter speed and gain at the time of taking can be recorded for each signal divided by the color filter.
  • while the above embodiment has been described typically with reference to the primary-colors Bayer type single CCD, the present invention is by no means limited thereto. For instance, this arrangement may be equally applied to complementary color filters, 2-chip CCDs, and 3-chip CCDs as well.
  • Fig. 7 is illustrative of one example of the structure of an image processor based on the third embodiment of the present invention, which is made up of an input block 510, an image buffer 107, a horizontal line extraction block 500, a smoothing block 1 501, a buffer 502, a vertical line extraction block 503, a smoothing block 2 504, a threshold value setting block 505, a signal processing block 113, a display block 514, an external I/F block 116 and a control block 115.
  • the input block 510 is connected to the signal processing block 113 by way of the image buffer 107, the horizontal line extraction block 500, the smoothing block 1 501, the buffer 502, the vertical line extraction block 503 and the smoothing block 2 504.
  • the signal processing block 113 is further connected to the display block 514. Furthermore, the input block 510 is connected to the threshold value setting block 505 that is in turn connected to the smoothing blocks 1 501 and 2 504. A standard value application block 509 is connected to the threshold value setting block 505. The control block 115 is bidirectionally connected to the horizontal line extraction block 500, the vertical line extraction block 503, the threshold value setting block 505 and the standard value application block 509.
  • the third embodiment operates as follows.
  • the input block 510 reads from the image file 311 image data as well as a coefficient indicative of noise characteristics and its upper limit value, forwarding the image data to the image buffer 107 and that coefficient and its upper limit value to the threshold value setting block 505.
  • the horizontal line extraction block 500 sequentially extracts video signals from the image data in horizontal line unit, forwarding them to the smoothing block 1 501.
  • the threshold value setting block 505 obtains the quantity of noise in pixel unit from the video signals in horizontal line unit, extracted by the horizontal line extraction block 500, and then forwards that to the smoothing block 1 501 as a threshold value.
  • noise quantity N is worked out from the following formula (5) on the basis of a signal value L for a pixel.
  • N = A·L^B + C·D (5)
  • if the noise quantity N exceeds the upper limit value transmitted from the input block 510, it is forwarded to the smoothing block 1 501 after being replaced by that upper limit value.
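The threshold calculation above can be sketched in a few lines, taking formula (5) as N = A·L^B + C·D (an assumed reading) together with the upper-limit replacement:

```python
def noise_threshold(L, A, B, C, D, upper_limit):
    """Per-pixel noise quantity N = A * L**B + C * D (formula (5)),
    replaced by the upper limit value when it exceeds it."""
    N = A * L ** B + C * D
    return min(N, upper_limit)  # clamp keeps smoothing from going too far
```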
  • a standard coefficient and its upper limit value are produced as outputs from the standard value application block 509 under control at the control block 115 for forwarding to the threshold value setting block 505.
  • video signals from the horizontal line extraction block 500 are scanned in pixel unit for implementing known hysteresis smoothing wherein the threshold value from the threshold value setting block 505 is used as a noise quantity. The results are sequentially produced as outputs into the buffer 502.
  • the hysteresis smoothing at the smoothing block 1 501 occurs in sync with the operation of the threshold value setting block 505 under control at the control block 115. Once all signals at the image buffer 107 have been processed at the smoothing block 1 501, the vertical line extraction block 503 sequentially extracts the video signals in vertical line unit from the buffer 502 under control at the control block 115 and forwards them to the smoothing block 2 504. On the basis of control at the control block 115, the threshold value setting block 505 obtains the quantity of noise in pixel unit from the video signals in vertical line unit, extracted by the vertical line extraction block 503, and then forwards that to the smoothing block 2 504 as a threshold value.
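The patent only cites hysteresis smoothing as a known technique; one common variant, assumed here for illustration, lets the output follow the input only when it leaves a dead band whose half-width is the per-pixel noise threshold:

```python
def hysteresis_smooth(line, thresholds):
    """Hysteresis smoothing along one line (one common variant): the output
    level changes only when the input leaves a +/- threshold dead band, so
    fluctuations smaller than the per-pixel noise quantity are flattened."""
    out = []
    level = line[0]
    for x, t in zip(line, thresholds):
        if x > level + t:
            level = x - t       # input broke out above the band: follow it
        elif x < level - t:
            level = x + t       # input broke out below the band: follow it
        # inside the dead band: keep the current level (noise suppressed)
        out.append(level)
    return out
```

Run once per horizontal line and once per vertical line, as blocks 1 501 and 2 504 do, this suppresses sub-threshold fluctuations in both directions.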
  • Fig. 28 is illustrative of the image processor in the first modification to the third embodiment of the present invention.
  • This modification is characterized in that a noise reduction block A582, a noise reduction block B583, a selector A581 and a selector B584 are provided in lieu of the noise reduction block 512.
  • the algorithm type is read from the image file 311 through the input block 510.
  • the selector A581 and the selector B584 are changed over in response to the type of algorithm, thereby switching between the noise reduction algorithms.
  • because the selection is made between the noise reduction algorithms, effective noise reduction processing is enabled.
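The selector arrangement of Fig. 28 can be sketched as a simple dispatch on the algorithm type read from the image file; the two stand-in routines below are illustrative placeholders, not the actual noise reduction blocks A582 and B583:

```python
def smooth_a(signal):
    """Illustrative stand-in for noise reduction block A582."""
    return [round(x) for x in signal]       # e.g. quantize away tiny fluctuations

def smooth_b(signal):
    """Illustrative stand-in for noise reduction block B583."""
    m = sum(signal) / len(signal)
    return [m] * len(signal)                # e.g. flatten to the mean

ALGORITHMS = {"A": smooth_a, "B": smooth_b}

def reduce_noise_by_type(signal, algorithm_type):
    """Selectors A581 / B584: route the signal through the noise reduction
    routine named by the algorithm type read from the image file 311."""
    return ALGORITHMS[algorithm_type](signal)
```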
  • Fig. 25 is illustrative of the architecture of the threshold value setting block 505 in the second modification to the third embodiment of the present invention.
  • This modification is comprised of a cutoff quantity setting block 801, a noise quantity calculation block 601 and a noise quantity correction block 602.
  • Values in a control range are read from the image file 311 through the input block 510 for forwarding to the threshold value setting block 505.
  • the cutoff quantity setting block 801 presents such a user interface as depicted in the representation of Fig. 24, obtaining the noise reduction level desired by the user depending on the position of a slide bar set by the user.
  • a value halfway between the upper and lower limit values of the control range is worked out as a noise correction quantity for forwarding to the noise quantity correction block 602.
  • the position of the slide bar and the noise correction quantity are in proportional relations, as depicted in Fig. 29.
  • the position of the slide bar and the noise correction quantity may be in non-linear relations.
  • the user would subjectively perceive the changes in noise reduction effect as natural ones.
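The slide-bar mapping can be sketched as follows; the proportional case corresponds to Fig. 29, while the gamma parameter is an assumed stand-in for the unspecified non-linear relation:

```python
def correction_quantity(pos, lower, upper, gamma=1.0):
    """Map a slide-bar position pos in [0, 1] onto the control range
    [lower, upper]. gamma=1.0 gives the proportional relation of Fig. 29;
    gamma != 1.0 sketches a non-linear relation that may feel more natural."""
    assert 0.0 <= pos <= 1.0
    return lower + (upper - lower) * pos ** gamma
```

With the bar at mid position and gamma=1.0, this yields the value halfway between the upper and lower limit values of the control range, as described above.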
  • the noise quantity calculation block 601 uses the signal value of the pixel to be processed and four coefficients A, B, C and D to work out the noise quantity from formula (5) and forward it to the noise quantity correction block 602.
  • the noise quantity transmitted from the noise quantity calculation block 601 is corrected depending on the noise correction quantity transmitted from the cutoff quantity setting block 801.
  • the noise quantity transmitted from the noise quantity calculation block 601 is multiplied by the noise correction quantity, thereby working out the corrected noise quantity.
  • the corrected noise quantity is forwarded to the smoothing blocks 1 501 and 2 504, where it is used as a threshold value at the time of smoothing.
  • the aforesaid arrangement makes it possible to estimate the quantity of noise in pixel unit, depending on dynamically changing factors such as signal value levels as well as temperatures, shutter speeds and gains at the time of taking. On the basis of such estimations, only signals less than the noise quantity are smoothed, so that high-precision noise reduction processing is enabled. With the upper limit value set on the estimated noise quantity, processing can be implemented with good preservation of the original signals. Note here that while the noise quantity is estimated in pixel unit in the aforesaid third embodiment, the present invention is by no means limited to that. For instance, the noise quantity may be estimated in any desired unit area such as 2 × 2 pixels or 4 × 4 pixels, in which case the precision of noise estimation goes down somewhat, but much faster processing is achievable.
  • An image processor according to the fourth embodiment of the present invention is practically the same as shown in the representation of Fig. 7, and applied as a display system directed to the image file produced from the second embodiment of the present invention.
  • the noise reduction processing described in connection with the second embodiment is implemented for the separated four signals R, G1, G2 and B.
  • Fig. 26 is illustrative of the architecture of the threshold value setting block 505 in the first modification to the fourth embodiment of the present invention.
  • the threshold value setting block 505 is built up of a noise quantity calculation block 601, a noise quantity correction block 602 and an upper limit setting block 603.
  • a signal value of each of G2, R and B for the pixel to be processed and four coefficients A, B, C and D for the signal G1 that provide the basic noise characteristics are used to find out the noise quantity of each of G2, R and B from formula (5), and that noise quantity is forwarded to the noise quantity correction block 602.
  • the noise quantity of each of G2, R and B transmitted from the noise quantity calculation block 601 is corrected by the correction term and scene correction term for each of G2, R and B, and the noise quantity corresponding to each of G2, R and B is worked out.
  • with the noise quantity correction block 602 made up of a multiplier, the correction term and the scene correction term for each of G2, R and B are multiplied by the corresponding noise quantity transmitted from the noise quantity calculation block 601, thereby working out the noise quantity for each of G2, R and B.
  • at the upper limit setting block 603, when a noise quantity corrected at the noise quantity correction block 602 exceeds the upper limit value, it is replaced by that upper limit value, thereby keeping the noise reduction processing from going too far.
  • the noise quantity produced out of the upper limit setting block 603 is forwarded to the smoothing blocks 1 501 and 2 504, where it is used as a threshold value for smoothing processing.
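The three-stage threshold value setting of Fig. 26 can be sketched as one function; formula (5) is taken as N = A·L^B + C·D (an assumed reading), and all numeric inputs are illustrative:

```python
def threshold_for(channel, L, coeffs_g1, correction, scene_term, upper_limit):
    """Threshold value setting of Fig. 26: block 601 applies formula (5)
    with the G1 basic coefficients, block 602 multiplies in the per-channel
    and scene correction terms, block 603 clamps at the upper limit."""
    A, B, C, D = coeffs_g1
    n = A * L ** B + C * D                   # noise quantity calculation block 601
    n *= correction[channel] * scene_term    # noise quantity correction block 602
    return min(n, upper_limit)               # upper limit setting block 603
```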
  • Fig. 34 is a flowchart for noise reduction processing in software. Such software processing is achievable by letting a computer run a program. If the program is recorded in a recording medium and that recording medium is moved as desired, then the program can be run wherever a computer is installed.
  • at Step 1, a full-color signal is read in.
  • at Step 2, header information is read in, and the noise calculation coefficients corresponding to the full-color signal are read in.
  • at Step 3, the signal is separated into the individual colors.
  • at Step 4, a noise calculation coefficient matching the separated color is selected for forwarding to Step 7.
  • at Step 5, each color signal is individually scanned.
  • at Step 6, an area of a given size, for instance a 4 × 4 pixel unit, around a pixel of interest is extracted to work out an average of the signal value level for forwarding to Step 7.
  • at Step 7, on the basis of the aforesaid formula (5), the signal value level in the local area around the pixel of interest and the noise calculation coefficient are used to work out the noise quantity matching that signal value level for forwarding to Step 8.
  • at Step 8, smoothing processing is applied to the signal of the pixel of interest using as a threshold value the noise quantity matching the signal value level in the local area.
  • at Step 9, the signals subjected to the smoothing processing are sequentially produced as outputs.
  • at Step 10, whether or not screen scanning is over is determined; if not, Step 5 is resumed for individual scanning of the aforesaid respective color signals, and the loop processing of Step 5 to Step 10 is repeated. If screen scanning is over, Step 11 is taken over.
  • at Step 11, whether or not all the color signals have been subjected to the aforesaid respective processing steps is determined; if not, Step 3 is resumed where the aforesaid color signals are individually scanned, after which the loop processing of Step 3 to Step 11 is repeated. If the processing steps for the full-color signal are over, the job has been done.
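The flowchart steps above can be sketched end-to-end; a simple "replace by the local mean when the deviation is within the noise quantity" rule stands in for the smoothing step, formula (5) is taken as N = A·L^B + C·D (an assumed reading), and the window size and coefficients are illustrative:

```python
def local_mean(plane, y, x, size=2):
    """Step 6: average of the signal value level in a window around the pixel."""
    h, w = len(plane), len(plane[0])
    ys = range(max(0, y - size), min(h, y + size))
    xs = range(max(0, x - size), min(w, x + size))
    vals = [plane[yy][xx] for yy in ys for xx in xs]
    return sum(vals) / len(vals)

def reduce_noise(planes, coeffs):
    """Sketch of the Fig. 34 flow for a full-color signal, given per-color
    coefficients {color: (A, B, C, D)}. A pixel deviating from the local
    mean by less than the noise quantity N of formula (5) is replaced by
    that mean; larger deviations are kept as signal."""
    out = {}
    for color, plane in planes.items():        # per-color scanning
        A, B, C, D = coeffs[color]             # matching coefficients
        result = [row[:] for row in plane]
        for y in range(len(plane)):
            for x in range(len(plane[0])):
                m = local_mean(plane, y, x)    # local signal value level
                N = A * m ** B + C * D         # formula (5)
                if abs(plane[y][x] - m) < N:   # smooth only within N
                    result[y][x] = m
        out[color] = result
    return out                                 # loops over screen and colors done
```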
  • the aforesaid arrangement ensures that, for signals from the color CCD with the color filters located on its front, it is possible to estimate, per signal divided by the color filter, the quantity of noise in pixel unit that depends on dynamically changing factors such as signal value levels as well as temperatures, shutter speeds and gains at the time of taking. A smoothing operation based on the quantity of noise estimated for each color signal makes high-precision noise reduction processing possible. Note here that while the above embodiment has been described with reference to a primary-colors Bayer type single CCD, the present invention is by no means limited to that. For instance, the present invention may be equally applied to complementary color filters, and 2-chip or 3-chip CCDs as well.
  • Fig. 34 is illustrative of one exemplary program for color image processing, but this program may be modified in such a way as to fit in with gray scale image processing.
  • in that case, Step 3 of separating the read-in signal into the individual color signals, Step 5 of individually scanning the color signals, and Step 11 of judging whether or not the processing of the full-color signal is over are omitted.
  • Fig. 35 is a flowchart of one exemplary program for gray scale image processing. In the example of Fig. 35, the steps involved are the same as in the processing of Fig. 34 save for the color image processing steps, but the step numbering here is partially modified.
  • the image processing program for color images, shown in Fig. 34, and the image processing program for gray scale images may be recorded in a recording medium. If that recording medium is installed in a computer, then noise reduction processing of high precision can be applied to color images and gray scale images wherever the computer operates yet without restrictions on place and time.
  • the present invention can provide an imaging system and an image recording medium as well as an image processor, an image processing program and its recording medium, wherein, even upon post-processing of data recorded in the recording medium, noise reduction processing optimized for taking conditions can be implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Picture Signal Circuits (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Television Signal Processing For Recording (AREA)
  • Color Television Image Signal Generators (AREA)
  • Image Processing (AREA)
EP05734244A 2004-04-14 2005-04-13 Imaging device, image recording medium, image processing device, image processing program, and recording medium thereof Withdrawn EP1737217A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004119052A JP2005303803A (ja) Imaging device, image recording medium, image processing device, image processing program, and recording medium thereof
PCT/JP2005/007490 WO2005099356A2 (ja) Imaging device

Publications (1)

Publication Number Publication Date
EP1737217A2 true EP1737217A2 (en) 2006-12-27

Family

ID=35150403

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05734244A Withdrawn EP1737217A2 (en) 2004-04-14 2005-04-13 Imaging device, image recording medium, image processing device, image processing program, and recording medium thereof

Country Status (5)

Country Link
US (1) US20070242320A1 (ja)
EP (1) EP1737217A2 (ja)
JP (1) JP2005303803A (ja)
CN (1) CN1961569A (ja)
WO (1) WO2005099356A2 (ja)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4466584B2 (ja) 2006-02-20 2010-05-26 Seiko Epson Corp Illuminance acquisition device, illuminance acquisition method, and illuminance acquisition program
JP4998009B2 (ja) * 2007-02-26 2012-08-15 Nikon Corp Image processing device and imaging device
US8711249B2 (en) 2007-03-29 2014-04-29 Sony Corporation Method of and apparatus for image denoising
US8108211B2 (en) 2007-03-29 2012-01-31 Sony Corporation Method of and apparatus for analyzing noise in a signal processing system
JP5464541B2 (ja) * 2009-01-20 2014-04-09 Hitachi Kokusai Electric Inc Noise reduction circuit
JP5460173B2 (ja) * 2009-08-13 2014-04-02 Fujifilm Corp Image processing method, image processing device, image processing program, and imaging device
JP2012004703A (ja) * 2010-06-15 2012-01-05 Sony Corp Imaging device, method, and program
US10880531B2 (en) * 2018-01-31 2020-12-29 Nvidia Corporation Transfer of video signals using variable segmented lookup tables
CN113286142B (zh) * 2021-05-20 2023-01-24 Zhongxin Hanchuang (Beijing) Technology Co., Ltd. Artificial-intelligence-based image imaging sensitivity prediction method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003204459A (ja) * 2001-10-23 2003-07-18 Konica Corp Digital camera and image reproducing device
JP3762725B2 (ja) * 2002-08-22 2006-04-05 Olympus Corp Imaging system and image processing program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2005099356A3 *

Also Published As

Publication number Publication date
JP2005303803A (ja) 2005-10-27
WO2005099356A2 (ja) 2005-10-27
WO2005099356A3 (ja) 2005-12-15
CN1961569A (zh) 2007-05-09
US20070242320A1 (en) 2007-10-18

Similar Documents

Publication Publication Date Title
EP1737217A2 (en) Imaging device, image recording medium, image processing device, image processing program, and recording medium thereof
CN1675919B (zh) Imaging system and image processing method
US7570287B2 (en) Image pickup system for noise estimating and noise reduction
US7856174B2 (en) Apparatus and method for image pickup
JP3899118B2 (ja) Imaging system and image processing program
JP3934597B2 (ja) Imaging system and image processing program
JP4465002B2 (ja) Noise reduction system, noise reduction program, and imaging system
JP4768448B2 (ja) Imaging device
EP1601185B1 (en) Noise reduction device and method using a shaded image in an electronic camera
JP4427001B2 (ja) Image processing device and image processing program
US20050083419A1 (en) Image sensing apparatus and image sensor for use in image sensing apparatus
US20070165282A1 (en) Image processing apparatus, image processing program, and image recording medium
US7916187B2 (en) Image processing apparatus, image processing method, and program
US20080043115A1 (en) Image Processing Device and Image Processing Program
EP1947840A1 (en) Image processing system and image processing program
US20070206885A1 (en) Imaging System And Image Processing Program
JP2006023959A (ja) Signal processing system and signal processing program
EP1998552B1 (en) Imaging apparatus and image processing program
JP4349380B2 (ja) Imaging device and method for acquiring images
JP2006087030A (ja) Noise reduction device
KR101612853B1 (ko) Photographing apparatus, control method of the photographing apparatus, and recording medium storing a program for executing the control method
JP2008219230A (ja) Imaging device and image processing method
JP4479429B2 (ja) Imaging device
KR101595888B1 (ko) Photographing apparatus, control method of the photographing apparatus, and recording medium storing a program for executing the control method
GB2610018A (en) Information processing apparatus that notifies subject blur, image capturing apparatus, information processing method, and control method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20061002

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): DE FR GB

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20100210